WorldWideScience

Sample records for improves prediction accuracy

  1. Audiovisual biofeedback improves motion prediction accuracy.

    Science.gov (United States)

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-04-01

    The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients' respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. An AV biofeedback system combined with real-time respiratory data acquisition and MR imaging was implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then used in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions in total). Prediction error was quantified as the root mean square error (RMSE), calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by Student's t-test. Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26%, a statistically significant improvement. These results demonstrate that AV biofeedback improves prediction accuracy, which would increase the efficiency of motion management techniques affected by system latencies in radiotherapy.
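
    A concrete illustration of the error metric used above: the sketch below computes the RMSE between a measured respiratory trace and a prediction of it (illustrative Python only; the toy signal and hold-last-value predictor are not from the study):

        import numpy as np

        def rmse(measured, predicted):
            """Root mean square error between measured and predicted signals."""
            measured = np.asarray(measured, dtype=float)
            predicted = np.asarray(predicted, dtype=float)
            return np.sqrt(np.mean((measured - predicted) ** 2))

        # Toy example: a 30 Hz respiratory trace and a 400 ms system latency.
        t = np.arange(0, 30, 1.0 / 30.0)              # 30 s sampled at 30 Hz
        signal = np.sin(2 * np.pi * t / 4.0)          # ~4 s breathing period
        lag = int(0.4 * 30)                           # 400 ms latency in samples
        naive_prediction = signal[:-lag]              # hold-last-value predictor
        print(rmse(signal[lag:], naive_prediction))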

  2. Improving orbit prediction accuracy through supervised machine learning

    Science.gov (United States)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are grounded solely on physics-based models may fail to achieve the accuracy required for collision avoidance, and have already led to satellite collisions. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than current methods. Inspired by machine learning (ML) theory, through which models are learned from large amounts of observed data and prediction is conducted without explicitly modeling space objects and the space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on one RSO can be applied to other RSOs that share some common features.
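
    The abstract describes augmenting a physics-based propagator with a learned error model. A minimal sketch of that residual-learning pattern follows (the toy propagator, features and choice of gradient boosting are illustrative assumptions, not the authors' implementation):

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        def propagate(state, dt):
            """Stand-in physics propagator: toy along-track position (km)."""
            return state + 7.5 * dt

        rng = np.random.default_rng(0)
        states = rng.uniform(6800, 7200, size=500)     # toy initial conditions
        dts = rng.uniform(0, 86400, size=500)          # prediction horizons (s)
        truth = propagate(states, dts) + 0.002 * dts   # unmodelled drag-like error

        # Learn the physics model's error from (state, horizon) features.
        X = np.column_stack([states, dts])
        residuals = truth - propagate(states, dts)
        model = GradientBoostingRegressor().fit(X, residuals)

        # Corrected prediction = physics prediction + learned error.
        corrected = propagate(states, dts) + model.predict(X)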

  3. Improving the accuracy of protein secondary structure prediction using structural alignment

    Directory of Open Access Journals (Sweden)

    Gallin Warren J

    2006-06-01

    Background: The accuracy of protein secondary structure prediction has steadily improved over the past 30 years. Many secondary structure prediction methods now routinely achieve an accuracy (Q3) of about 75%. We believe this accuracy could be further improved by including structure (as opposed to sequence) database comparisons as part of the prediction process. Indeed, given the large size of the Protein Data Bank (>35,000 sequences), the probability of a newly identified sequence having a structural homologue is actually quite high. Results: We have developed a method that performs structure-based sequence alignments as part of the secondary structure prediction process. By mapping the structure of a known homologue (sequence ID >25%) onto the query protein's sequence, it is possible to predict at least a portion of that query protein's secondary structure. By integrating this structural alignment approach with conventional (sequence-based) secondary structure methods and then combining it with a "jury-of-experts" system to generate a consensus result, it is possible to attain very high prediction accuracy. Using a sequence-unique test set of 1644 proteins from EVA, this new method achieves an average Q3 score of 81.3%. Extensive testing indicates this is approximately 4-5% better than any other method currently available. Assessments using non-sequence-unique test sets (typical of those used in proteome annotation or structural genomics) indicate that this new method can achieve a Q3 score approaching 88%. Conclusion: By using both sequence and structure databases and by exploiting the latest techniques in machine learning it is possible to routinely predict protein secondary structure with an accuracy well above 80%. A program and web server, called PROTEUS, that performs these secondary structure predictions is accessible at http://wishart.biology.ualberta.ca/proteus. For high-throughput or batch sequence analyses, the PROTEUS programs can also be downloaded and run locally.
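
    The jury-of-experts step can be illustrated as a per-residue majority vote over 3-state (H/E/C) predictions, scored with Q3 (a sketch only; PROTEUS's actual expert weighting is more elaborate):

        from collections import Counter

        def consensus(predictions):
            """Per-residue majority vote over several 3-state predictions."""
            return "".join(Counter(col).most_common(1)[0][0]
                           for col in zip(*predictions))

        def q3(predicted, observed):
            """Fraction of residues whose H/E/C state is predicted correctly."""
            return sum(p == o for p, o in zip(predicted, observed)) / len(observed)

        experts = ["HHHHCCEEEECC", "HHHCCCEEEECC", "HHHHCCEEECCC"]
        print(consensus(experts))                      # consensus string
        print(q3(consensus(experts), "HHHHCCEEEECC"))  # Q3 against observed states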

  4. A novel method for improved accuracy of transcription factor binding site prediction

    KAUST Repository

    Khamis, Abdullah M.; Motwalli, Olaa Amin; Oliva, Romina; Jankovic, Boris R.; Medvedeva, Yulia; Ashoor, Haitham; Essack, Magbubah; Gao, Xin; Bajic, Vladimir B.

    2018-01-01

    Identifying transcription factor (TF) binding sites (TFBSs) is important in the computational inference of gene regulation. Widely used computational methods of TFBS prediction based on position weight matrices (PWMs) usually have high false positive rates. Moreover, computational studies of transcription regulation in eukaryotes frequently require numerous PWM models of TFBSs due to the large number of TFs involved. To overcome these problems we developed DRAF, a novel method for TFBS prediction that requires only 14 prediction models for 232 human TFs while significantly improving prediction accuracy. DRAF models use more features than PWM models, as they combine information from TFBS sequences and physicochemical properties of TF DNA-binding domains into machine learning models. Evaluation of DRAF on 98 human ChIP-seq datasets shows an average 1.54-, 1.96- and 5.19-fold reduction of false positives at the same sensitivities compared to models from HOCOMOCO, TRANSFAC and DeepBind, respectively. This observation suggests that the PWM models used for TFBS prediction can be efficiently replaced by a small number of DRAF models that significantly improve prediction accuracy. The DRAF method is implemented in a web tool and in stand-alone software freely available at http://cbrc.kaust.edu.sa/DRAF.
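
    For context, the PWM baseline that DRAF improves upon scans a sequence with summed log-odds scores; the matrix values and threshold below are invented for illustration (real PWMs come from databases such as HOCOMOCO or TRANSFAC):

        import numpy as np

        pwm = np.array([[ 1.2, -1.1,  0.1, -0.9],   # position 1: scores for A,C,G,T
                        [-0.8,  1.3, -1.2,  0.5],   # position 2
                        [ 1.0, -0.9,  1.4, -1.3],   # position 3
                        [-1.5,  0.2, -0.7,  1.1]])  # position 4
        base_index = {"A": 0, "C": 1, "G": 2, "T": 3}

        def scan(sequence, pwm, threshold):
            """Report windows whose summed log-odds score exceeds the threshold."""
            width = pwm.shape[0]
            hits = []
            for i in range(len(sequence) - width + 1):
                window = sequence[i:i + width]
                score = sum(pwm[j, base_index[b]] for j, b in enumerate(window))
                if score >= threshold:
                    hits.append((i, window, round(score, 2)))
            return hits

        print(scan("ACGTAGGTACCA", pwm, threshold=2.0))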

  5. The contribution of educational class in improving accuracy of cardiovascular risk prediction across European regions

    DEFF Research Database (Denmark)

    Ferrario, Marco M; Veronesi, Giovanni; Chambless, Lloyd E

    2014-01-01

    OBJECTIVE: To assess whether educational class, an index of socioeconomic position, improves the accuracy of the SCORE cardiovascular disease (CVD) risk prediction equation. METHODS: In a pooled analysis of 68 455 men and women aged 40-64 years, free from coronary heart disease at baseline, drawn from 47 cohorts.

  6. A New Approach to Improve Accuracy of Grey Model GMC(1,n) in Time Series Prediction

    Directory of Open Access Journals (Sweden)

    Sompop Moonchai

    2015-01-01

    This paper presents a modified grey model GMC(1,n) for use in systems that involve one dependent system behavior and n-1 relative factors. The proposed model was developed from the conventional GMC(1,n) model in order to improve its prediction accuracy by modifying the formula for calculating the background value, the system of parameter estimation, and the model prediction equation. The modified GMC(1,n) model was verified with two case studies: forecasting CO2 emission in Thailand and forecasting electricity consumption in Thailand. The results demonstrate that the modified GMC(1,n) model achieves higher fitting and prediction accuracy than the conventional GMC(1,n) and D-GMC(1,n) models.
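
    The paper's modifications target the grey-model machinery (background value, parameter estimation, prediction equation); as background, here is the single-variable ancestor GM(1,1) in its standard textbook form (a sketch, not the authors' modified GMC(1,n)):

        import numpy as np

        def gm11_forecast(x0, steps):
            """Classic GM(1,1) grey forecast of a short positive time series."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                                # accumulated series
            z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters
            k = np.arange(len(x0) + steps)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.empty_like(x1_hat)
            x0_hat[0] = x1_hat[0]
            x0_hat[1:] = np.diff(x1_hat)                      # back to original scale
            return x0_hat[-steps:]

        print(gm11_forecast([278, 284, 293, 301, 312], steps=3))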

  7. Model training across multiple breeding cycles significantly improves genomic prediction accuracy in rye (Secale cereale L.).

    Science.gov (United States)

    Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin

    2016-11-01

    Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were on the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
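
    A compact sketch of the GBLUP predictor compared in the study, written as kernel ridge regression on a genomic relationship matrix (simulated genotypes; the G-matrix scaling and the variance ratio lambda are simplified assumptions):

        import numpy as np

        rng = np.random.default_rng(1)
        n_lines, n_snps = 300, 1000
        M = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)  # 0/1/2 genotypes
        Z = M - M.mean(axis=0)                       # centred marker covariates
        G = Z @ Z.T / n_snps                         # simplified genomic relationship matrix

        beta = rng.normal(0, 0.1, n_snps)
        y = Z @ beta + rng.normal(0, 1.0, n_lines)   # toy phenotypes

        # GBLUP as kernel ridge: g_hat = G (G + lambda I)^(-1) (y - mean).
        lam = 1.0                                    # residual-to-genetic variance ratio
        alpha = np.linalg.solve(G + lam * np.eye(n_lines), y - y.mean())
        g_hat = G @ alpha                            # genomic estimated breeding values
        print(np.corrcoef(g_hat, Z @ beta)[0, 1])    # accuracy vs true genetic values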

  8. THE ACCURACY AND BIAS EVALUATION OF THE USA UNEMPLOYMENT RATE FORECASTS. METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

    MIHAELA BRATU (SIMIONESCU)

    2012-12-01

    In this study some alternative forecasts for the unemployment rate of the USA made by four institutions (the International Monetary Fund (IMF), the Organisation for Economic Co-operation and Development (OECD), the Congressional Budget Office (CBO) and Blue Chip (BC)) are evaluated with regard to accuracy and bias. The most accurate predictions over a forecasting horizon ending in 2011 were provided by the IMF, followed by the OECD, CBO and BC. These results were obtained using Theil's U1 statistic and a method not previously used in the literature in this context: multi-criteria ranking was applied to build a hierarchy of the institutions with respect to accuracy, taking five important accuracy measures into account at the same time: mean error, mean squared error, root mean squared error, and Theil's U1 and U2 statistics. The IMF, OECD and CBO predictions are unbiased. Combining the institutions' predictions is a suitable strategy for improving the accuracy of the IMF and OECD forecasts under all combination schemes, the INV scheme being the best. Filtering and smoothing the original predictions, using the Hodrick-Prescott filter and the Holt-Winters technique respectively, is a good strategy for improving only the BC expectations. The proposed strategies to improve accuracy do not solve the problem of bias. The assessment and improvement of forecast accuracy contribute importantly to the quality of the decision-making process.
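
    One common formulation of the two Theil statistics used in the ranking (U2 compares the forecast against the naive no-change forecast; the numbers are illustrative):

        import numpy as np

        def theil_u1(actual, forecast):
            """Theil's U1 inequality coefficient (0 = perfect forecast)."""
            a, p = np.asarray(actual, float), np.asarray(forecast, float)
            return np.sqrt(np.mean((a - p) ** 2)) / (
                np.sqrt(np.mean(a ** 2)) + np.sqrt(np.mean(p ** 2)))

        def theil_u2(actual, forecast):
            """Theil's U2: values below 1 beat the naive no-change forecast."""
            a, p = np.asarray(actual, float), np.asarray(forecast, float)
            num = np.sum(((p[1:] - a[1:]) / a[:-1]) ** 2)
            den = np.sum(((a[1:] - a[:-1]) / a[:-1]) ** 2)
            return np.sqrt(num / den)

        unemployment = [9.6, 9.0, 8.1, 7.4]   # illustrative actual rates (%)
        forecast = [9.8, 9.1, 8.3, 7.2]       # illustrative predictions
        print(theil_u1(unemployment, forecast), theil_u2(unemployment, forecast))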

  9. Physiologically-based, predictive analytics using the heart-rate-to-systolic-ratio significantly improves the timeliness and accuracy of sepsis prediction compared to SIRS.

    Science.gov (United States)

    Danner, Omar K; Hendren, Sandra; Santiago, Ethel; Nye, Brittany; Abraham, Prasad

    2017-04-01

    Enhancing the efficiency of diagnosis and treatment of severe sepsis by using physiologically based, predictive analytical strategies has not been fully explored. We hypothesized that assessment of the heart-rate-to-systolic ratio significantly increases the timeliness and accuracy of sepsis prediction after emergency department (ED) presentation. We evaluated the records of 53,313 ED patients from a large, urban teaching hospital between January and June 2015. The HR-to-systolic ratio was compared to SIRS criteria for sepsis prediction. There were 884 patients with discharge diagnoses of sepsis, severe sepsis, and/or septic shock. Variations in three presenting variables, heart rate, systolic BP and temperature, were determined to be primary early predictors of sepsis with a 74% (654/884) accuracy compared to 34% (304/884) using SIRS criteria (p < 0.0001) in confirmed septic patients. Physiologically based predictive analytics improved the accuracy and expediency of sepsis identification via detection of variations in the HR-to-systolic ratio. This approach may lead to earlier sepsis workup and life-saving interventions. Copyright © 2017 Elsevier Inc. All rights reserved.
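
    A toy version of the comparison: a SIRS screen versus a heart-rate-to-systolic-BP ratio flag. The 0.9 cutoff below is an illustrative assumption, not the study's threshold:

        def sirs_count(temp_c, heart_rate, resp_rate, wbc_k):
            """Number of SIRS criteria met (screen positive when >= 2)."""
            return sum([
                temp_c > 38.0 or temp_c < 36.0,
                heart_rate > 90,
                resp_rate > 20,
                wbc_k > 12.0 or wbc_k < 4.0,
            ])

        def hr_to_systolic_flag(heart_rate, systolic_bp, cutoff=0.9):
            """Physiology-based flag; the cutoff is illustrative only."""
            return heart_rate / systolic_bp > cutoff

        # A tachycardic, borderline-hypotensive presentation:
        print(sirs_count(37.2, 118, 18, 9.5) >= 2)   # False: SIRS screen misses it
        print(hr_to_systolic_flag(118, 104))         # True: the ratio flags it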

  10. Theoretical study on new bias factor methods to effectively use critical experiments for improvement of prediction accuracy of neutronic characteristics

    International Nuclear Information System (INIS)

    Kugo, Teruhiko; Mori, Takamasa; Takeda, Toshikazu

    2007-01-01

    Extended bias factor methods are proposed with two new concepts, the LC method and the PE method, in order to use critical experiments effectively and to enhance the applicability of the bias factor method for improving the prediction accuracy of neutronic characteristics of a target core. Both methods utilize a number of critical experimental results and produce a semifictitious experimental value from them. The LC and PE methods define the semifictitious experimental value as a linear combination of experimental values and as a product of exponentiated experimental values, respectively, with the corresponding semifictitious calculation values defined analogously from the calculated values. In both methods, a bias factor is defined as the ratio of the semifictitious experimental value to the semifictitious calculation value. We formulate how to determine the weights for the LC method and the exponents for the PE method so as to minimize the variance of the design prediction value obtained by multiplying the design calculation value by the bias factor. From a theoretical comparison of these new methods with the conventional method, which utilizes a single experimental result, and with the generalized bias factor method, which was previously proposed to utilize a number of experimental results, it is concluded that the PE method is the most useful for improving the prediction accuracy. The main advantages of the PE method are as follows. The prediction accuracy is necessarily improved relative to the design calculation value even when the experimental results include large experimental errors; this is a special feature the other methods do not have. Moreover, the prediction accuracy is most effectively improved by utilizing all the experimental results. It can therefore be said that the PE method effectively utilizes all the experimental results and may make a full-scale mockup experiment unnecessary through the use of existing and future benchmark experiments.
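
    From the abstract's definition, the PE-method bias factor is a product of experiment-to-calculation ratios raised to exponents; a sketch with fixed illustrative exponents (the paper derives them by variance minimization):

        import numpy as np

        def pe_bias_factor(experiments, calculations, exponents):
            """PE-method bias factor: product of (E_i / C_i) ** w_i."""
            e = np.asarray(experiments, float)
            c = np.asarray(calculations, float)
            w = np.asarray(exponents, float)
            return np.prod((e / c) ** w)

        # Toy k-eff values from three critical experiments and their analyses.
        E = [1.0012, 0.9987, 1.0004]   # measured
        C = [1.0048, 1.0021, 1.0037]   # calculated with the same code and data
        w = [0.5, 0.3, 0.2]            # illustrative exponents
        design_calculation = 1.0150
        print(design_calculation * pe_bias_factor(E, C, w))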

  11. EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.

    Science.gov (United States)

    Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin

    2018-04-24

    The availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression-weighted cosine method (EWCos) to minimize the influence of uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and then experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple-negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. Contact: bin.zhang@mssm.edu or zhangb@hotmail.com. Supplementary data are available at Bioinformatics online.
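
    The expression-weighted cosine idea can be sketched as cosine similarity with per-gene weights (the weighting by effect size below is an illustrative stand-in for the published EWCos weighting; a strongly negative score indicates the signature reversal sought in repositioning):

        import numpy as np

        def weighted_cosine(x, y, w):
            """Cosine similarity with per-gene weights."""
            x, y, w = (np.asarray(v, float) for v in (x, y, w))
            num = np.sum(w * x * y)
            den = np.sqrt(np.sum(w * x * x)) * np.sqrt(np.sum(w * y * y))
            return num / den

        rng = np.random.default_rng(2)
        disease = rng.normal(size=978)              # e.g. landmark-gene logFCs
        drug = -disease + rng.normal(0.5, 1, 978)   # a partially reversing drug
        weights = np.abs(disease)                   # toy weight: effect size
        print(weighted_cosine(disease, drug, weights))   # negative => reversal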

  12. Improving prediction accuracy of cooling load using EMD, PSR and RBFNN

    Science.gov (United States)

    Shen, Limin; Wen, Yuanmei; Li, Xiaohong

    2017-08-01

    To increase the accuracy of cooling load demand prediction, this work presents an EMD (empirical mode decomposition)- and PSR (phase space reconstruction)-based RBFNN (radial basis function neural network) method. First, the chaotic nature of the real cooling load demand is analyzed, and the non-stationary historical cooling load data are decomposed into several stationary intrinsic mode functions (IMFs) using EMD. Second, the RBFNN prediction accuracies of the individual IMFs are compared, and an IMF combining scheme is proposed: the lower-frequency components (IMF4-IMF6) are combined, while the higher-frequency components (IMF1, IMF2, IMF3) and the residual are kept unchanged. Third, phase space is reconstructed for each combined component separately; the highest-frequency component (IMF1) is processed by a differencing method, and predictions are made with RBFNNs in the reconstructed phase spaces. Real cooling load data from a centralized ice-storage cooling system in Guangzhou are used for simulation. The results show that the proposed hybrid method outperforms traditional methods.
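
    The phase space reconstruction step is a delay embedding; the sketch below embeds a toy stand-in for one IMF and fits an RBF-kernel SVR as a stand-in for the RBF network (the EMD step and the paper's IMF-combining scheme are omitted):

        import numpy as np
        from sklearn.svm import SVR

        def delay_embed(series, dim, tau):
            """Rows stack dim samples spaced tau apart (reconstructed states)."""
            n = len(series) - (dim - 1) * tau
            return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

        t = np.arange(2000)
        imf = np.sin(0.05 * t) + 0.1 * np.random.default_rng(3).normal(size=2000)

        dim, tau = 4, 5
        X = delay_embed(imf, dim, tau)[:-1]     # states up to time t
        y = imf[(dim - 1) * tau + 1:]           # value at time t + 1
        model = SVR(kernel="rbf").fit(X[:1500], y[:1500])
        print(model.score(X[1500:], y[1500:]))  # one-step prediction quality (R^2)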

  13. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium

    Directory of Open Access Journals (Sweden)

    Guillaume P. Ramstein

    2016-04-01

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass and to meet the goal of substantially displacing petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families' parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, accounting for linkage disequilibrium among markers offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.

  14. Improving protein fold recognition and structural class prediction accuracies using physicochemical properties of amino acids.

    Science.gov (United States)

    Raicar, Gaurav; Saini, Harsh; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2016-08-07

    Predicting the three-dimensional (3-D) structure of a protein is an important task in the field of bioinformatics and biological sciences. However, directly predicting the 3-D structure from the primary structure is hard to achieve; therefore, predicting the fold or structural class of a protein sequence is generally used as an intermediate step in determining the protein's 3-D structure. Protein fold recognition (PFR) and structural class prediction (SCP) require two steps: feature extraction and classification. Feature extraction techniques generally utilize syntactical-based, evolutionary-based and physicochemical-based information to extract features. In this study, we explore the importance of utilizing the physicochemical properties of amino acids for improving PFR and SCP accuracies. For this, we propose a Forward Consecutive Search (FCS) scheme which aims to strategically select physicochemical attributes that supplement the existing feature extraction techniques for PFR and SCP. An exhaustive search is conducted on all 544 existing physicochemical attributes using the proposed FCS scheme, and a subset of physicochemical attributes is identified. Features extracted from these selected attributes are then combined with existing syntactical-based and evolutionary-based features, showing an improvement in recognition and prediction performance on benchmark datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
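
    One of the catalogued physicochemical attributes is the Kyte-Doolittle hydropathy scale; the sketch turns such an attribute into simple per-sequence features (the summary statistics are illustrative, not the paper's FCS-selected encoding):

        import statistics

        # Kyte-Doolittle hydropathy values for the 20 standard amino acids.
        HYDROPATHY = {
            "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
            "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
            "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
            "Y": -1.3, "V": 4.2,
        }

        def attribute_features(sequence, scale):
            """Mean and spread of a physicochemical attribute along a sequence."""
            values = [scale[aa] for aa in sequence if aa in scale]
            return statistics.mean(values), statistics.pstdev(values)

        print(attribute_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", HYDROPATHY))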

  15. Improving accuracy of protein-protein interaction prediction by considering the converse problem for sequence representation

    Directory of Open Access Journals (Sweden)

    Wang Yong

    2011-10-01

    Background: With the development of genome-sequencing technologies, protein sequences are readily obtained by translating the measured mRNAs, so predicting protein-protein interactions from sequence is in great demand. The reason lies in the fact that identifying protein-protein interactions is becoming a bottleneck for eventually understanding the functions of proteins, especially for those organisms that are barely characterized. Although a few methods have been proposed, the converse problem, namely whether the features used extract sufficient and unbiased information from protein sequences, is almost untouched. Results: In this study, we interrogate this problem theoretically by an optimization scheme. Motivated by the theoretical investigation, we find novel encoding methods for both protein sequences and protein pairs. Our new methods exploit the information of protein sequences sufficiently and reduce artificial bias and computational cost. The resulting predictor significantly outperforms the available methods with respect to sensitivity, specificity, precision, and recall under cross-validation evaluation, and reaches ~80% and ~90% accuracy in Escherichia coli and Saccharomyces cerevisiae, respectively. Our findings hold important implications for other sequence-based prediction tasks, because representation of biological sequences is always the first step in computational biology. Conclusions: By considering the converse problem, we propose new representation methods for both protein sequences and protein pairs. The results show that our method significantly improves the accuracy of protein-protein interaction predictions.

  16. Measuring Personality in Context: Improving Predictive Accuracy in Selection Decision Making

    OpenAIRE

    Hoffner, Rebecca Ann

    2009-01-01

    This study examines the accuracy of a context-sensitive (i.e., goal dimensions) measure of personality compared to a traditional measure of personality (NEO-PI-R) and generalized self-efficacy (GSE) to predict variance in task performance. The goal dimensions measure takes a unique perspective in the conceptualization of personality. While traditional measures differentiate within person and collapse across context (e.g., Big Five), the goal dimensions measure employs a hierarchical structure...

  17. Final Technical Report: Increasing Prediction Accuracy.

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.
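
    A minimal flavor of such a performance model using pvlib, an open-source PV modeling library developed in part at Sandia (only the PVWatts DC step is shown; the irradiance, cell temperatures and module parameters are illustrative):

        import numpy as np
        from pvlib.pvsystem import pvwatts_dc

        g_poa = np.array([0, 200, 600, 950, 600, 200, 0])   # plane-of-array W/m^2
        t_cell = np.array([15, 20, 35, 45, 40, 25, 15])     # cell temperature, deg C
        pdc0 = 5000.0     # nameplate DC rating, W
        gamma = -0.004    # power temperature coefficient, 1/deg C

        power = pvwatts_dc(g_poa, t_cell, pdc0, gamma)      # DC power per time step
        print(power.sum() / 1000.0, "kWh if the steps are hourly")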

  18. Prediction of lung tumour position based on spirometry and on abdominal displacement: Accuracy and reproducibility

    International Nuclear Information System (INIS)

    Hoisak, Jeremy D.P.; Sixel, Katharina E.; Tirona, Romeo; Cheung, Patrick C.F.; Pignol, Jean-Philippe

    2006-01-01

    Background and purpose: A simulation investigating the accuracy and reproducibility of a tumour motion prediction model over clinical time frames is presented. The model is formed from surrogate and tumour motion measurements, and used to predict the future position of the tumour from surrogate measurements alone. Patients and methods: Data were acquired from five non-small cell lung cancer patients, on three separate days. Measurements of respiratory volume by spirometry and of abdominal displacement by a real-time position tracking system were acquired simultaneously with X-ray fluoroscopy measurements of superior-inferior tumour displacement. A model of tumour motion was established and used to predict future tumour position, based on surrogate input data. The calculated position was compared against true tumour motion as seen on fluoroscopy. Three different imaging strategies, pre-treatment, pre-fraction and intrafractional imaging, were employed in establishing the fitting parameters of the prediction model. The impact of each imaging strategy upon accuracy and reproducibility was quantified. Results: When the predictive model was established using pre-treatment imaging, four of five patients exhibited poor interfractional reproducibility for either surrogate in subsequent sessions. Simulating the formulation of the predictive model prior to each fraction resulted in improved interfractional reproducibility. The accuracy of the prediction model was improved in only one of five patients when intrafractional imaging was used. Conclusions: Employing a prediction model established from measurements acquired at planning resulted in localization errors. Pre-fractional imaging improved the accuracy and reproducibility of the prediction model. Intrafractional imaging was of less value, suggesting that the accuracy limit of a surrogate-based prediction model is reached with once-daily imaging.
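
    The abstract does not give the model form; assuming a simple linear surrogate-to-tumour mapping for illustration, the pre-fraction imaging strategy amounts to refitting the two coefficients each day:

        import numpy as np

        # Fit the surrogate model at planning/imaging time ...
        abdominal = np.array([2.1, 4.8, 7.9, 11.2, 8.3, 5.0, 2.4])  # mm, surrogate
        tumour_si = np.array([1.0, 2.6, 4.3, 6.1, 4.5, 2.7, 1.2])   # mm, fluoroscopy
        slope, intercept = np.polyfit(abdominal, tumour_si, 1)

        # ... then predict tumour position from the surrogate alone at treatment.
        def predict_tumour(abdominal_displacement):
            return slope * abdominal_displacement + intercept

        print(predict_tumour(9.0))   # predicted superior-inferior position, mm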

  19. Small angle X-ray scattering and cross-linking for data assisted protein structure prediction in CASP 12 with prospects for improved accuracy

    KAUST Repository

    Ogorzalek, Tadeusz L.; Hura, Greg L.; Belsom, Adam; Burnett, Kathryn H.; Kryshtafovych, Andriy; Tainer, John A.; Rappsilber, Juri; Tsutakawa, Susan E.; Fidelis, Krzysztof

    2018-01-04

    Experimental data offer empowering constraints for structure prediction. These constraints can be used to filter equivalently scored models or, more powerfully, within optimization functions toward prediction. In CASP12, Small Angle X-ray Scattering (SAXS) and Cross-Linking Mass Spectrometry (CLMS) data, measured on an exemplary set of novel fold targets, were provided to the CASP community of protein structure predictors. As high-throughput (HT), solution-based techniques, SAXS and CLMS can efficiently measure states of the full-length sequence in its native solution conformation and assembly. However, this experimental data did not substantially improve prediction accuracy as judged by fits to crystallographic models. One issue, beyond intrinsic limitations of the algorithms, was a disconnect between crystal structures and solution-based measurements. Our analyses show that many targets had substantial percentages of disordered regions (up to 40%), or were multimeric, or both. Thus, solution measurements of flexibility and assembly support variations that may confound prediction algorithms trained on crystallographic data and expecting globular, fully folded, monomeric proteins. Here, we consider the CLMS and SAXS data collected, the information in these solution measurements, and the challenges in incorporating them into computational prediction. As improvement opportunities were only partly realized in CASP12, we provide guidance on how data from the full-length biological unit and the solution state can better aid prediction of the folded monomer or subunit. We furthermore describe strategic integrations of solution measurements with computational prediction programs, with the aim of substantially improving foundational knowledge and the accuracy of computational algorithms for biologically relevant structure predictions for proteins in solution. This article is protected by copyright. All rights reserved.

  20. Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.

    Science.gov (United States)

    Olsen, Thomas

    2007-02-01

    This study aimed to demonstrate how the accuracy of intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound, and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other offset errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method, a statistically significant difference; corresponding results were obtained with the Haigis 2-variable method for PCI and ultrasound, respectively. In conclusion, the accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest-generation ACD prediction algorithms.
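
    For orientation, the classic SRK regression formula shows how axial length and corneal power enter IOL power calculation (this is the historical regression formula, not the PCI/ACD-based method evaluated in the study):

        def srk_iol_power(a_constant, axial_length_mm, mean_k_diopters):
            """Classic SRK regression formula: P = A - 2.5 L - 0.9 K."""
            return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

        # Typical values: A-constant 118.4, axial length 23.5 mm, mean K 43.5 D.
        print(srk_iol_power(118.4, 23.5, 43.5))   # IOL power in dioptres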

  21. Improvement of prediction accuracy of large eddy simulation on colocated grids; Colocation koshi wo mochiita LES no keisan seido kaizen ni kansuru ichikosatsu

    Energy Technology Data Exchange (ETDEWEB)

    Inagaki, M.; Abe, K. [Toyota Central Research and Development Labs., Inc., Aichi (Japan)

    1998-07-25

    With the recent advances in computers, large eddy simulation (LES) has become applicable to engineering prediction. However, most engineering applications require nonorthogonal curvilinear coordinate systems. The staggered grids usually used for LES in orthogonal coordinates do not retain their conservation properties in nonorthogonal curvilinear coordinates. On the other hand, colocated grids can be applied in nonorthogonal curvilinear coordinates without losing their conservation properties, although their prediction accuracy is not as high as that of staggered grids in orthogonal coordinates, especially on coarse grids. In this research, the discretization method of the colocated grids is modified to improve its prediction accuracy. Plane channel flows are simulated on four grids of different resolution using the modified colocated grids and the original colocated grids. The results show that the modified colocated grids have higher accuracy than the original colocated grids. 17 refs., 13 figs., 1 tab.

  22. Accuracy of depolarization and delay spread predictions using advanced ray-based modeling in indoor scenarios

    Directory of Open Access Journals (Sweden)

    Mani Francesco

    2011-01-01

    This article investigates the prediction accuracy of an advanced deterministic propagation model in terms of channel depolarization and frequency selectivity for indoor wireless propagation. In addition to specular reflection and diffraction, the developed ray tracing (RT) tool considers penetration through dielectric blocks and/or diffuse scattering mechanisms. The sensitivity and prediction accuracy analysis is based on two measurement campaigns carried out in a warehouse and an office building. It is shown that the implementation of diffuse scattering into RT significantly increases the accuracy of the cross-polar discrimination prediction, whereas the delay-spread prediction is only marginally improved.

  23. Improving accuracy of genomic prediction in Brangus cattle by adding animals with imputed low-density SNP genotypes.

    Science.gov (United States)

    Lopes, F B; Wu, X-L; Li, H; Xu, J; Perkins, T; Genho, J; Ferretti, R; Tait, R G; Bauck, S; Rosa, G J M

    2018-02-01

    Reliable genomic prediction of breeding values for quantitative traits requires a sufficient number of animals with genotypes and phenotypes in the training set. As of 31 October 2016, there were 3,797 Brangus animals with genotypes and phenotypes. These Brangus animals were genotyped using different commercial SNP chips; the largest group consisted of 1,535 animals genotyped with the GGP-LDV4 SNP chip. The remaining 2,262 genotypes were imputed to the SNP content of the GGP-LDV4 chip, so that the number of animals available for training the genomic prediction models was more than doubled. The present study showed that pooling animals with original or imputed 40K SNP genotypes substantially increased genomic prediction accuracies for the ten traits. By supplementing imputed genotypes, the relative gains in genomic prediction accuracy on estimated breeding values (EBV) were from 12.60% to 31.27%, and the relative gains in genomic prediction accuracy on de-regressed EBV were slightly smaller (0.87%-18.75%). The present study also compared the performance of five genomic prediction models and two cross-validation methods. The five genomic models predicted EBV and de-regressed EBV of the ten traits similarly well. Of the two cross-validation methods, leave-one-out cross-validation maximized the number of animals available at the training stage. Genomic prediction accuracy (GPA) on the ten quantitative traits was validated in 1,106 newly genotyped Brangus animals based on the SNP effects estimated in the previous set of 3,797 Brangus animals, and the accuracies were slightly lower than the GPA in the original data. The present study was the first to leverage currently available genotype and phenotype resources in order to harness genomic prediction in Brangus beef cattle. © 2018 Blackwell Verlag GmbH.

  24. Meditation experience predicts introspective accuracy.

    Directory of Open Access Journals (Sweden)

    Kieran C R Fox

    The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1-15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a 'body-scanning' meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices.

  25. Global discriminative learning for higher-accuracy computational gene prediction.

    Directory of Open Access Journals (Sweden)

    Axel Bernal

    2007-03-01

    Most ab initio gene predictors use a probabilistic sequence model, typically a hidden Markov model, to combine separately trained models of genomic signals and content. By combining separate models of relevant genomic features, such gene predictors can exploit small training sets and incomplete annotations, and can be trained fairly efficiently. However, that type of piecewise training does not optimize prediction accuracy and has difficulty in accounting for statistical dependencies among different parts of the gene model. With genomic information being created at an ever-increasing rate, it is worth investigating alternative approaches in which many different types of genomic evidence, with complex statistical dependencies, can be integrated by discriminative learning to maximize annotation accuracy. Among discriminative learning methods, large-margin classifiers have become prominent because of the success of support vector machines (SVMs) in many classification tasks. We describe CRAIG, a new program for ab initio gene prediction based on a conditional random field model with semi-Markov structure that is trained with an online large-margin algorithm related to multiclass SVMs. Our experiments on benchmark vertebrate datasets and on regions from the ENCODE project show significant improvements in prediction accuracy over published gene predictors that use intrinsic features only, particularly at the gene level and on genes with long introns.

  26. Improve accuracy and sensibility in glycan structure prediction by matching glycan isotope abundance

    International Nuclear Information System (INIS)

    Xu Guang; Liu Xin; Liu Qingyan; Zhou Yanhong; Li Jianjun

    2012-01-01

    Highlights: ► A glycan isotope pattern recognition strategy for glycomics. ► A new data preprocessing procedure to detect ion peaks in a given MS spectrum. ► A linear soft-margin SVM classification for isotope pattern recognition. - Abstract: Mass spectrometry (MS) is a powerful technique for the determination of glycan structures and is capable of providing qualitative and quantitative information. Recent developments in computational methods offer an opportunity to use glycan structure databases and de novo algorithms for extracting valuable information from MS or MS/MS data. However, detecting low-intensity peaks that are buried in noisy data sets is still a challenge, and an algorithm for accurate prediction and annotation of glycan structures from MS data is highly desirable. The present study describes a novel algorithm for glycan structure prediction by matching glycan isotope abundance (mGIA), which takes isotope masses, abundances, and spacing into account. We constructed a comprehensive database containing 808 glycan compositions and their corresponding isotope abundances. Unlike most previously reported methods, we considered not only the m/z values of the peaks but also the logarithmic Euclidean distance between the calculated and detected isotope vectors. A linear classifier, obtained by training the mGIA algorithm on datasets of three different human tissue samples from the Consortium for Functional Glycomics (CFG) in association with a Support Vector Machine (SVM), was proposed to improve the accuracy of automatic glycan structure annotation. In addition, an effective data preprocessing procedure, including baseline subtraction, smoothing, peak centroiding and composition matching, was incorporated for extracting correct isotope profiles from MS data. The algorithm was validated by analyzing mouse kidney MS data from the CFG, resulting in the identification of 6 more glycan compositions than the previous annotation.
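
    The isotope-pattern matching criterion can be sketched as the Euclidean distance between log-scaled isotope abundance vectors (toy numbers; mGIA additionally accounts for isotope masses and peak spacing):

        import numpy as np

        def log_euclidean_distance(detected, calculated):
            """Distance between log-scaled isotope abundance vectors."""
            d = np.log(np.asarray(detected, float))
            c = np.log(np.asarray(calculated, float))
            return np.linalg.norm(d - c)

        detected = [1.00, 0.62, 0.28, 0.09]      # normalized isotope peak heights
        candidates = {"composition A": [1.00, 0.60, 0.26, 0.08],
                      "composition B": [1.00, 0.45, 0.15, 0.04]}
        for name, calc in candidates.items():
            print(name, log_euclidean_distance(detected, calc))  # smaller = better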

  27. Lessons learned from accuracy assessment of IAEA-SPE-4 experiment predictions

    International Nuclear Information System (INIS)

    Prosek, A.

    2002-01-01

    The use of methods for code accuracy assessment has increased strongly in recent years. Methods suitable for providing a quantitative comparison between thermalhydraulic code predictions and experimental measurements have been proposed, e.g. the fast Fourier transform based method (FFTBM), the stochastic approximation ratio based method (SARBM) and a few methods used in the frame of the recently developed automated code assessment program (ACAP). Further, in the frame of the FFTBM a procedure was also proposed to quantify the whole calculation by averaging the results. The problem is that averaging may hide discrepancies highlighted in the qualitative analysis when only quantitative results are published. The purpose of this study was therefore to propose additional accuracy measures. The new proposed measures were tested with IAEA-SPE-4 pre- and post-test predictions. The obtained results showed that the proposed measures improve the whole picture of code accuracy, which is important when the reader is not provided with the accompanying qualitative analysis. The study shows that the proposed accuracy measures efficiently increase confidence in the quantitative results. (author)
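
    One common formulation of the FFTBM figure of merit, the average amplitude, relates the spectrum of the prediction error to that of the measurement (a sketch under that formulation; the full method also applies frequency weighting):

        import numpy as np

        def fftbm_average_amplitude(experimental, calculated):
            """Spectral error amplitude relative to the experimental signal."""
            err = np.fft.rfft(np.asarray(calculated, float) -
                              np.asarray(experimental, float))
            exp = np.fft.rfft(np.asarray(experimental, float))
            return np.abs(err).sum() / np.abs(exp).sum()

        t = np.linspace(0, 100, 512)
        measured = 7.0 * np.exp(-t / 40.0)       # e.g. a pressure transient
        prediction = 7.2 * np.exp(-t / 36.0)     # a code calculation of it
        print(fftbm_average_amplitude(measured, prediction))  # lower = more accurate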

  28. Cadastral Database Positional Accuracy Improvement

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. This actual position relates to the absolute position in a specific coordinate system and to the relation with neighbourhood features. With the growth of spatially based technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integration of a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will distort the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is highly feasible for the positional accuracy improvement of legacy spatial datasets.

  29. Combining gene prediction methods to improve metagenomic gene annotation

    Directory of Open Access Journals (Sweden)

    Rosen Gail L

    2011-01-01

    Background: Traditional gene annotation methods rely on characteristics that may not be available in short reads generated from next-generation technology, resulting in suboptimal performance for metagenomic (environmental) samples. Therefore, in recent years, new programs have been developed that optimize performance on short reads. In this work, we benchmark three metagenomic gene prediction programs and combine their predictions to improve metagenomic read gene annotation. Results: We not only analyze the programs' performance at different read-lengths like similar studies, but also separate different types of reads, including intra- and intergenic regions, for analysis. The main deficiencies are in the algorithms' ability to predict non-coding regions and gene edges, resulting in more false positives and false negatives than desired. In fact, the specificities of the algorithms are notably worse than the sensitivities. By combining the programs' predictions, we show significant improvement in specificity at minimal cost to sensitivity, resulting in a 4% improvement in accuracy for 100 bp reads and ~1% improvement in accuracy for 200 bp reads and above. To correctly annotate the start and stop of the genes, we find that a consensus of all the predictors performs best for shorter read lengths, while unanimous agreement is better for longer read lengths, boosting annotation accuracy by 1-8%. We also demonstrate use of the classifier combinations on a real dataset. Conclusions: To optimize the performance for both prediction and annotation accuracies, we conclude that the consensus of all methods (or a majority vote) is best for reads 400 bp and shorter, while using the intersection of GeneMark and Orphelia predictions is best for reads 500 bp and longer. We demonstrate that most methods predict over 80% coding (including partially coding) reads on a real human gut sample sequenced by Illumina technology.

  30. Exploring the genetic architecture and improving genomic prediction accuracy for mastitis and milk production traits in dairy cattle by mapping variants to hepatic transcriptomic regions responsive to intra-mammary infection.

    Science.gov (United States)

    Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter

    2017-05-12

    A better understanding of the genetic architecture of complex traits can contribute to improved genomic prediction. We hypothesized that genomic variants associated with mastitis and milk production traits in dairy cattle are enriched in hepatic transcriptomic regions that are responsive to intra-mammary infection (IMI). Genomic markers [e.g. single nucleotide polymorphisms (SNPs)] from those regions, if included, may improve the predictive ability of a genomic model. We applied a genomic feature best linear unbiased prediction model (GFBLUP) to implement the above strategy by considering the hepatic transcriptomic regions responsive to IMI as genomic features. GFBLUP, an extension of GBLUP, includes a separate genomic effect of SNPs within a genomic feature, and allows differential weighting of the individual marker relationships in the prediction equation. Since GFBLUP is computationally intensive, we investigated whether a SNP set test could be a computationally fast way to preselect predictive genomic features. The SNP set test assesses the association between a genomic feature and a trait based on single-SNP genome-wide association studies. We applied these two approaches to mastitis and milk production traits (milk, fat and protein yield) in Holstein (HOL, n = 5056) and Jersey (JER, n = 1231) cattle. We observed that a majority of genomic features were enriched in genomic variants associated with mastitis and milk production traits. Compared to GBLUP, the accuracy of genomic prediction with GFBLUP was marginally improved (3.2 to 3.9%) in within-breed prediction. The highest increase (164.4%) in prediction accuracy was observed in across-breed prediction. The significance of genomic features based on the SNP set test correlated with the changes in prediction accuracy obtained with GFBLUP. Our approach makes it possible to integrate multiple layers of biological knowledge to provide novel insights into the biological basis of complex traits, and to improve the accuracy of genomic prediction. The SNP set test can serve as a computationally fast approach to preselect predictive genomic features for GFBLUP.

  31. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    Science.gov (United States)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually treated as a nuisance fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can also be treated as a constructive parameter that provides detailed chemical information when systematically varied during the measurement. Our group has previously investigated the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on the random sampling method and the proposed methods. The results of experimental studies show that prediction performance is improved by the proposed methods. Therefore, the MTCS and DTCS methods are alternative methods for improving prediction accuracy in near-infrared spectral measurement.
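
    The calibration-set idea can be sketched by making the calibration samples span the temperature range before fitting a PLS model (synthetic spectra; the alternating split below is a crude stand-in for the MTCS selection rule):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(4)
        n_samples, n_wavelengths = 120, 200
        temperature = rng.uniform(20, 40, n_samples)        # deg C per sample
        concentration = rng.uniform(0, 1, n_samples)
        spectra = (np.outer(concentration, rng.normal(size=n_wavelengths))
                   + 0.01 * np.outer(temperature, rng.normal(size=n_wavelengths))
                   + 0.01 * rng.normal(size=(n_samples, n_wavelengths)))

        # Make the calibration set span the observed temperatures.
        order = np.argsort(temperature)
        calibration, validation = order[::2], order[1::2]

        pls = PLSRegression(n_components=5)
        pls.fit(spectra[calibration], concentration[calibration])
        print(pls.score(spectra[validation], concentration[validation]))  # R^2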

  32. Estimation of genomic prediction accuracy from reference populations with varying degrees of relationship.

    Directory of Open Access Journals (Sweden)

    S Hong Lee

    Genomic prediction is emerging in a wide range of fields including animal and plant breeding, risk prediction in human precision medicine, and forensics. It is desirable to establish a theoretical framework for genomic prediction accuracy when the reference data consist of information sources with varying degrees of relationship to the target individuals. A reference set can contain both close and distant relatives as well as 'unrelated' individuals from the wider population. The various sources of information were modeled as different populations with different effective population sizes (Ne). Both the effective number of chromosome segments (Me) and Ne are considered to be functions of the data used for prediction. We validate our theory with analyses of simulated as well as real data, and illustrate that the variation in genomic relationships with the target is a predictor of the information content of the reference set. With a similar amount of data available for each source, we show that close relatives can have a substantially larger effect on genomic prediction accuracy than less related individuals. We also illustrate that when prediction relies on closer relatives, there is less improvement in prediction accuracy with an increase in training data or marker panel density. We release software that can estimate the expected prediction accuracy and power when combining different reference sources with various degrees of relationship to the target, which is useful when planning genomic prediction (before or after collecting data) in animal, plant and human genetics.
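
    A deterministic approximation widely used in this literature (often attributed to Daetwyler and colleagues) makes the role of Me concrete; the sketch treats the reference set as a single source for illustration:

        import numpy as np

        def expected_accuracy(n_reference, heritability, m_e):
            """Approximation: r = sqrt(N h^2 / (N h^2 + Me))."""
            nh2 = n_reference * heritability
            return np.sqrt(nh2 / (nh2 + m_e))

        # Fewer effective chromosome segments (closer relatives) => higher accuracy.
        for m_e in [500, 5000, 50000]:
            print(m_e, expected_accuracy(n_reference=10000, heritability=0.3, m_e=m_e))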

  33. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Directory of Open Access Journals (Sweden)

    Clarissa Ann Thompson

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children's representations of magnitude. To test this, kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and the ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.

  16. Accuracy of algorithms to predict accessory pathway location in children with Wolff-Parkinson-White syndrome.

    Science.gov (United States)

    Wren, Christopher; Vogel, Melanie; Lord, Stephen; Abrams, Dominic; Bourke, John; Rees, Philip; Rosenthal, Eric

    2012-02-01

    The aim of this study was to examine the accuracy in predicting pathway location in children with Wolff-Parkinson-White syndrome for each of seven published algorithms. ECGs from 100 consecutive children with Wolff-Parkinson-White syndrome undergoing electrophysiological study were analysed by six investigators using seven published algorithms, six of which had been developed in adult patients. Accuracy and concordance of predictions were adjusted for the number of pathway locations. Accessory pathways were left-sided in 49, septal in 20 and right-sided in 31 children. Overall accuracy of prediction was 30-49% for the exact location and 61-68% including adjacent locations. Concordance between investigators varied between 41% and 86%. No algorithm was better at predicting septal pathways (accuracy 5-35%, improving to 40-78% including adjacent locations), but one was significantly worse. Predictive accuracy was 24-53% for the exact location of right-sided pathways (50-71% including adjacent locations) and 32-55% for the exact location of left-sided pathways (58-73% including adjacent locations). All algorithms were less accurate in our hands than in other authors' own assessment. None performed well in identifying midseptal or right anteroseptal accessory pathway locations.

  17. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    Directory of Open Access Journals (Sweden)

    Peng Lu

    2018-01-01

    Full Text Available Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and the variance of predictions from shallow neural networks is large. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined independently, and unsupervised training and supervised optimization are combined. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets from the UCI repository. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively.

  18. Prediction of earth rotation parameters based on improved weighted least squares and autoregressive model

    Directory of Open Access Journals (Sweden)

    Sun Zhangzhen

    2012-08-01

    Full Text Available In this paper, an improved weighted least squares (WLS) method, together with an autoregressive (AR) model, is proposed to improve the prediction accuracy of earth rotation parameters (ERP). Four weighting schemes are developed, and the optimal power e for determining the weight elements is studied. The results show that the improved WLS-AR model can improve ERP prediction accuracy effectively, and that different weighting schemes should be chosen for different ERP prediction intervals.
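    As a rough illustration of the WLS+AR idea — a weighted deterministic fit whose residuals are then forecast with an AR model — here is a hedged sketch. The recency weight w_t = ((t+1)/n)^power and the single annual harmonic are placeholders for the paper's four weighting schemes and its actual ERP basis functions:

```python
import numpy as np

def wls_ar_forecast(y, power=2.0, p=4, horizon=10):
    """Weighted LS fit (bias + trend + one annual harmonic) with recency
    weights, plus an AR(p) model on the residuals; illustrative only."""
    n = len(y)
    t = np.arange(n)
    period = 365.25                                  # days, for a daily ERP series
    X = np.column_stack([np.ones(n), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    sw = np.sqrt(((t + 1) / n) ** power)             # newer data weighted more
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    resid = y - X @ beta
    # AR(p) on the LS residuals, fitted by ordinary least squares
    lags = np.column_stack([resid[p - k - 1:n - k - 1] for k in range(p)])
    phi, *_ = np.linalg.lstsq(lags, resid[p:], rcond=None)
    history = list(resid[-p:])
    forecasts = []
    for h in range(1, horizon + 1):
        r_next = float(np.dot(phi, history[::-1][:p]))  # AR recursion forward
        history.append(r_next)
        tf = n + h - 1
        xf = np.array([1.0, tf, np.sin(2 * np.pi * tf / period),
                       np.cos(2 * np.pi * tf / period)])
        forecasts.append(xf @ beta + r_next)
    return np.array(forecasts)
```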

  19. The Accuracy and Bias of Single-Step Genomic Prediction for Populations Under Selection

    Directory of Open Access Journals (Sweden)

    Wan-Ling Hsu

    2017-08-01

    Full Text Available In single-step analyses, missing genotypes are explicitly or implicitly imputed, and this requires centering the observed genotypes using the means of the unselected founders. If genotypes are only available for selected individuals, centering on the unselected founder mean is not straightforward. Here, computer simulation is used to study an alternative analysis that does not require centering genotypes but fits the mean μg of unselected individuals as a fixed effect. Starting with observed diplotypes from 721 cattle, a five-generation population was simulated with sire selection to produce 40,000 individuals with phenotypes, of which the 1000 sires had genotypes. The next generation of 8000 genotyped individuals was used for validation. Evaluations were undertaken with (J) or without (N) μg when marker covariates were not centered, and with (JC) or without (C) μg when all observed and imputed marker covariates were centered. Centering did not influence the accuracy of genomic prediction, but fitting μg did. Accuracies were improved when the panel comprised only quantitative trait loci (QTL): models JC and J had accuracies of 99.4%, whereas models C and N had accuracies of 90.2%. When only markers were in the panel, the four models had accuracies of 80.4%. In panels that included QTL, fitting μg in the model improved accuracy, but it had little impact when the panel contained only markers. In populations undergoing selection, fitting μg in the model is recommended to avoid bias and reduced prediction accuracy due to selection.

  20. Systematic bias of correlation coefficient may explain negative accuracy of genomic prediction.

    Science.gov (United States)

    Zhou, Yao; Vales, M Isabel; Wang, Aoxue; Zhang, Zhiwu

    2017-09-01

    Accuracy of genomic prediction is commonly calculated as the Pearson correlation coefficient between the predicted and observed phenotypes in the inference population by using cross-validation analysis. More frequently than expected, significant negative accuracies of genomic prediction have been reported in genomic selection studies. These negative values are surprising, given that prediction accuracy should hover around zero when randomly permuted data sets are analyzed. We reviewed the two common approaches for calculating the Pearson correlation and hypothesized that these negative accuracy values reflect potential bias owing to artifacts caused by the mathematical formulas used to calculate prediction accuracy. The first approach, Instant accuracy, calculates correlations for each fold and reports prediction accuracy as the mean of the correlations across folds. The other approach, Hold accuracy, predicts all phenotypes across all folds and calculates the correlation between the observed and predicted phenotypes at the end of the cross-validation process. Using simulated and real data, we demonstrated that our hypothesis is true. Both approaches are biased downward under certain conditions. The biases become larger when more folds are employed and when the expected accuracy is low. The bias of Instant accuracy can be corrected using a modified formula.
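    The two formulas under discussion are easy to state in code. A small sketch contrasting them on permuted (no-signal) data, where both should sit near zero:

```python
import numpy as np

def instant_and_hold_accuracy(y_obs, y_pred, fold_ids):
    """Two common ways to summarize cross-validated prediction accuracy.
    Instant: mean of per-fold Pearson correlations.
    Hold: one correlation over all held-out predictions pooled together.
    The paper shows both can be biased downward, Instant especially so
    with many folds."""
    folds = np.unique(fold_ids)
    per_fold = [np.corrcoef(y_obs[fold_ids == f], y_pred[fold_ids == f])[0, 1]
                for f in folds]
    instant = np.mean(per_fold)
    hold = np.corrcoef(y_obs, y_pred)[0, 1]
    return instant, hold

# toy illustration with random (no-signal) data
rng = np.random.default_rng(0)
y = rng.normal(size=200)
yhat = rng.normal(size=200)             # stand-in for CV predictions
folds = np.repeat(np.arange(10), 20)
print(instant_and_hold_accuracy(y, yhat, folds))
```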

  1. Effect of genetic architecture on the prediction accuracy of quantitative traits in samples of unrelated individuals.

    Science.gov (United States)

    Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C

    2018-06-01

    Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.

  2. Improving Accuracy of Processing Through Active Control

    Directory of Open Access Journals (Sweden)

    N. N. Barbashov

    2016-01-01

    Full Text Available An important task of modern mathematical statistics, with its methods based on probability theory, is the scientific estimation of measurement results. Control always carries certain costs, and ineffective control is considerably more expensive because a customer who receives defective products triggers parts recalls. When machining parts, errors shift the scatter of part dimensions towards the tolerance limit. Improving processing accuracy and avoiding defective products therefore involves reducing the error components in machining, i.e. improving the accuracy of the machine and tool, the tool life, the rigidity of the system and the accuracy of the adjustment; the machine must also be readjusted at the appropriate time. To improve accuracy and machining rate, various in-process gaging devices and controlled machining based on adaptive control systems for process monitoring are currently becoming widespread. Improving accuracy in this case means compensating for the majority of technological errors. In-cycle measuring sensors (sensors of active control) allow processing accuracy to be improved by one or two quality grades and make it possible to operate several machines simultaneously. Efficient use of in-cycle measuring sensors requires methods to control accuracy by providing the appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they incorporate data on the changes in the last few measured values of the controlled parameter.

  3. Explicit Modeling of Ancestry Improves Polygenic Risk Scores and BLUP Prediction.

    Science.gov (United States)

    Chen, Chia-Yen; Han, Jiali; Hunter, David J; Kraft, Peter; Price, Alkes L

    2015-09-01

    Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate the question of how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color (HC), tanning ability (TA), and basal cell carcinoma (BCC) in European Americans (sample sizes from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRSs) and best linear unbiased prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R2 for HC increased by 66% (0.0456 to 0.0755) when modeling ancestry, which prevents ancestry effects from entering into each SNP effect and being overweighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction.
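    The "ancestry as a separate component" idea amounts to fitting the score and ancestry covariates jointly rather than letting ancestry leak into the SNP-based score. A minimal sketch with hypothetical inputs; the study's actual pipeline is more involved:

```python
import numpy as np

def prs_with_ancestry(prs, ancestry_pcs, phenotype):
    """Fit phenotype ~ intercept + PRS + ancestry PCs jointly, so ancestry
    enters as its own component instead of being absorbed into the score."""
    X = np.column_stack([np.ones(len(prs)), prs, ancestry_pcs])
    beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
    fitted = X @ beta
    return beta, np.corrcoef(fitted, phenotype)[0, 1] ** 2  # coefficients, R^2

# toy data: 500 individuals, 4 ancestry principal components
rng = np.random.default_rng(0)
pcs = rng.normal(size=(500, 4))
score = rng.normal(size=500)
pheno = 0.3 * score + pcs @ np.array([0.2, -0.1, 0.05, 0.0]) + rng.normal(size=500)
beta, r2 = prs_with_ancestry(score, pcs, pheno)
```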

  4. Assessing Genomic Selection Prediction Accuracy in a Dynamic Barley Breeding Population

    Directory of Open Access Journals (Sweden)

    A. H. Sallam

    2015-03-01

    Full Text Available Prediction accuracy of genomic selection (GS) has previously been evaluated through simulation and cross-validation; however, validation based on progeny performance in a plant breeding program has not been investigated thoroughly. We evaluated several prediction models in a dynamic barley breeding population comprising 647 six-row lines, using four traits differing in genetic architecture and 1536 single nucleotide polymorphism (SNP) markers. The breeding lines were divided into six sets, designated as one parent set and five consecutive progeny sets comprising representative samples of breeding lines over a 5-yr period. We used these data sets to investigate the effect of model and training population composition on prediction accuracy over time. We found little difference in prediction accuracy among the models, confirming prior studies that found the simplest model, random regression best linear unbiased prediction (RR-BLUP), to be accurate across a range of situations. In general, we found that using the parent set was sufficient to predict progeny sets, with little to no gain in accuracy from generating larger training populations by combining the parent set with subsequent progeny sets. The prediction accuracy ranged from 0.03 to 0.99 across the four traits and five progeny sets. We explored characteristics of the training and validation populations (marker allele frequency, population structure, and linkage disequilibrium (LD)) as well as characteristics of the trait (genetic architecture and heritability). Fixation of markers associated with a trait over time was most clearly associated with reduced prediction accuracy for the mycotoxin trait DON. Higher trait heritability in the training population and simpler trait architecture were associated with greater prediction accuracy.

  5. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved selecting the study area, data acquisition, data processing, model development and data analysis. The models are based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four different models, which consider different parameter combinations, are developed by the authors. Results obtained are compared to the landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques for predicting landslide hazard zones.

  6. Improving the accuracy of energy baseline models for commercial buildings with occupancy data

    International Nuclear Information System (INIS)

    Liang, Xin; Hong, Tianzhen; Shen, Geoffrey Qiping

    2016-01-01

    Highlights: • We evaluated several baseline models predicting energy use in buildings. • Including occupancy data improved accuracy of baseline model prediction. • Occupancy is highly correlated with energy use in buildings. • This simple approach can be used in decision making for energy retrofit projects. - Abstract: More than 80% of energy is consumed during the operation phase of a building’s life cycle, so energy efficiency retrofit for existing buildings is considered a promising way to reduce energy use in buildings. The investment strategies of retrofit depend on the ability to quantify energy savings by “measurement and verification” (M&V), which compares actual energy consumption to how much energy would have been used without the retrofit (called the “baseline” of energy use). Although numerous models exist for predicting the baseline of energy use, a critical limitation is that occupancy has not been included as a variable. However, occupancy rate strongly influences energy consumption, as emphasized by previous studies. This study develops a new baseline model which builds upon the Lawrence Berkeley National Laboratory (LBNL) model but includes building occupancy data. The study also proposes metrics to quantify the accuracy of prediction and the impacts of variables. However, the results show that including occupancy data does not significantly improve the accuracy of the baseline model, especially for the HVAC load; the reasons are discussed further. In addition, a sensitivity analysis is conducted to show the influence of parameters in baseline models. The results from this study can help us understand the influence of occupancy on energy use, improve energy baseline prediction by including the occupancy factor, reduce the risks of M&V and facilitate investment strategies for energy efficiency retrofit.
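    The modeling change being tested is simply an extra occupancy regressor in the baseline. A hedged sketch, using a plain linear baseline as a stand-in for the LBNL model the paper builds on; comparing the returned error metric with and without the occupancy column reproduces the paper's test of whether occupancy helps:

```python
import numpy as np

def fit_baseline(energy, outdoor_temp, occupancy=None):
    """Baseline regression for M&V; pass occupancy to test whether adding it
    improves the fit. Illustrative only -- the LBNL model the paper extends
    is more elaborate than a single linear term."""
    cols = [np.ones(len(energy)), np.asarray(outdoor_temp)]
    if occupancy is not None:
        cols.append(np.asarray(occupancy))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, energy, rcond=None)
    resid = energy - X @ beta
    cv_rmse = np.sqrt(np.mean(resid ** 2)) / np.mean(energy)  # common M&V metric
    return beta, cv_rmse
```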

  7. Combined Scintigraphy and Tumor Marker Analysis Predicts Unfavorable Histopathology of Neuroblastic Tumors with High Accuracy.

    Directory of Open Access Journals (Sweden)

    Wolfgang Peter Fendler

    Full Text Available Our aim was to improve the prediction of unfavorable histopathology (UH) in neuroblastic tumors through combined imaging and biochemical parameters. 123I-MIBG SPECT and MRI were performed before surgical resection or biopsy in 47 consecutive pediatric patients with neuroblastic tumor. The semi-quantitative tumor-to-liver count-rate ratio (TLCRR), MRI tumor size and margins, urine catecholamines and blood levels of neuron-specific enolase (NSE) were recorded. The accuracy of single and combined variables for prediction of UH was tested by ROC analysis with Bonferroni correction. 34 of 47 patients had UH based on the International Neuroblastoma Pathology Classification (INPC). TLCRR and serum NSE both predicted UH with moderate accuracy. The optimal cut-off for TLCRR was 2.0, resulting in 68% sensitivity and 100% specificity (AUC-ROC 0.86, p < 0.001). The optimal cut-off for NSE was 25.8 ng/ml, resulting in 74% sensitivity and 85% specificity (AUC-ROC 0.81, p = 0.001). Combining the TLCRR/NSE criteria reduced false negative findings from 11 and 9, respectively, to only five, with improved sensitivity and specificity of 85% (AUC-ROC 0.85, p < 0.001). Strong 123I-MIBG uptake and a high serum level of NSE were each predictive of UH. Combined analysis of both parameters improved the prediction of UH in patients with neuroblastic tumor. MRI parameters and urine catecholamine levels did not predict UH.

  8. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.

    Science.gov (United States)

    Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter

    2013-12-06

    In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four of which (1 to 4) use an estimate of heritability to divide predictive ability computed by cross-validation. Between them, the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)), with simulated true predictive accuracy as the benchmark. The size of the estimated genetic variance, and hence heritability, exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation, Methods 4 and 6 were often the best.
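    The indirect estimate shared by Methods 1-4 is the cross-validated predictive ability scaled by the square root of heritability. In code:

```python
import numpy as np

def estimated_predictive_accuracy(y_obs, y_pred_cv, h2):
    """Indirect estimate used by the 'divide by sqrt(h2)' family of methods:
    cross-validated predictive ability corr(y, y_hat) scaled by sqrt(h2).
    In practice h2 is itself estimated from the data."""
    predictive_ability = np.corrcoef(y_obs, y_pred_cv)[0, 1]
    return predictive_ability / np.sqrt(h2)

# e.g. a predictive ability of 0.45 with h2 = 0.64 gives 0.45 / 0.8 ≈ 0.56
```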

  9. High accuracy prediction of beta-turns and their types using propensities and multiple alignments.

    Science.gov (United States)

    Fuchs, Patrick F J; Alix, Alain J P

    2005-06-01

    We have developed a method that predicts both the presence and the type of beta-turns, using a straightforward approach based on propensities and multiple alignments. The propensities were calculated classically, but the way they are used for prediction is completely new: starting from a tetrapeptide sequence on which one wants to evaluate the presence of a beta-turn, the propensity for a given residue is modified by taking into account all the residues present in the multiple alignment at that position. A score is then evaluated by weighting these propensities using position-specific score matrices generated by PSI-BLAST. Introducing secondary structure information predicted by PSIPRED or SSPRO2, as well as taking into account the residues flanking the tetrapeptide, improved the accuracy greatly. The accuracy, evaluated on a database of 426 reference proteins (previously used in other studies) by a sevenfold cross-validation, gave very good results with a Matthews Correlation Coefficient (MCC) of 0.42 and an overall prediction accuracy of 74.8%; this places our method among the best ones. A jackknife test was also done, which gave results within the same range. This shows that it is possible to reach neural-network accuracy with considerably less computational cost and complexity. Furthermore, propensities remain excellent descriptors of amino acid tendencies to belong to beta-turns, which can be useful for peptide or protein engineering and design. For beta-turn type prediction, we reached the best accuracy ever published in terms of MCC (except for the irregular type IV), in the range of 0.25-0.30 for types I, II, and I' and 0.13-0.15 for types VIII, II', and IV. To our knowledge, our method is the only one available on the Web that predicts types I' and II'. The accuracy evaluated on two larger databases of 547 and 823 proteins was not improved significantly. All of this was implemented into a Web server called COUDES (a French acronym).

  10. Can machine-learning improve cardiovascular risk prediction using routine clinical data?

    Science.gov (United States)

    Kai, Joe; Garibaldi, Jonathan M.; Qureshi, Nadeem

    2017-01-01

    Background Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers an opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction. Methods Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at the outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict the first cardiovascular event over 10 years. Predictive accuracy was assessed by the area under the ‘receiver operating curve’ (AUC), and by sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) at a 7.5% cardiovascular risk threshold (the threshold for initiating statins). Findings 24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723–0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739–0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755–0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755–0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759–0.769). The best-performing algorithm (neural networks) predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease than the established algorithm. Conclusions Machine-learning significantly improves the accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others.
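    The evaluation design — several learners benchmarked by AUC on the same cohort — is straightforward to reproduce on synthetic data (the study's clinical data are not public; the neural network is omitted for brevity):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced cohort standing in for the real registry data.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.93],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```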

  11. Improving the accuracy of the structure prediction of the third hypervariable loop of the heavy chains of antibodies.

    KAUST Repository

    Messih, Mario Abdel; Lepore, Rosalba; Marcatili, Paolo; Tramontano, Anna

    2014-01-01

    MOTIVATION: Antibodies are able to recognize a wide range of antigens through their complementarity-determining regions, formed by six hypervariable loops. Predicting the 3D structure of these loops is essential for the analysis and reengineering of novel antibodies with enhanced affinity and specificity. The canonical structure model allows high-accuracy prediction for five of the loops. The third loop of the heavy chain, H3, is the hardest to predict because of its diversity in structure, length and sequence composition. RESULTS: We describe a method, based on the Random Forest automatic learning technique, to select structural templates for H3 loops from a dataset of candidates. These can be used to predict the structure of the loop with a higher accuracy than that achieved by any of the presently available methods. The method also has the advantage of being extremely fast and returning a reliable estimate of the model quality. AVAILABILITY AND IMPLEMENTATION: The source code is freely available at http://www.biocomputing.it/H3Loopred/.

  12. Improving the accuracy of the structure prediction of the third hypervariable loop of the heavy chains of antibodies.

    KAUST Repository

    Messih, Mario Abdel

    2014-06-13

    MOTIVATION: Antibodies are able to recognize a wide range of antigens through their complementarity-determining regions, formed by six hypervariable loops. Predicting the 3D structure of these loops is essential for the analysis and reengineering of novel antibodies with enhanced affinity and specificity. The canonical structure model allows high-accuracy prediction for five of the loops. The third loop of the heavy chain, H3, is the hardest to predict because of its diversity in structure, length and sequence composition. RESULTS: We describe a method, based on the Random Forest automatic learning technique, to select structural templates for H3 loops from a dataset of candidates. These can be used to predict the structure of the loop with a higher accuracy than that achieved by any of the presently available methods. The method also has the advantage of being extremely fast and returning a reliable estimate of the model quality. AVAILABILITY AND IMPLEMENTATION: The source code is freely available at http://www.biocomputing.it/H3Loopred/.

  13. A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction

    Science.gov (United States)

    Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.

    2017-03-01

    There are two problems with the LS (Least Squares)+AR (AutoRegressive) model in polar motion forecasting: the residuals of the LS fit are reasonable within the fitting interval but poor under LS extrapolation; and the LS fitting residual sequence is non-linear, so it is unsuitable to establish an AR model for the residual sequence to be forecast based only on the residual sequence before the forecast epoch. In this paper, we solve these two problems in two steps. First, constraints are added at the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values next to the two endpoints are very close to the observed values. Second, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence, as the modeling object of the AR residual forecast. Calculation examples show that this solution can effectively improve the short-term polar motion prediction accuracy of the LS+AR model. In addition, comparisons with the RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) forecast models confirm the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for polar motion forecasts at 1-10 days, show that the forecast accuracy of the proposed model can reach an internationally competitive level.
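    The first step, pinning the LS fit to its two endpoints, is an equality-constrained least-squares problem, solvable via the KKT system. A sketch with a polynomial basis standing in for whatever basis the authors actually use:

```python
import numpy as np

def constrained_ls_fit(t, y, degree=3):
    """Polynomial LS fit constrained to pass exactly through the first and
    last observations (KKT system for equality-constrained least squares):
    minimize ||A c - y||^2 subject to C c = d."""
    A = np.vander(t, degree + 1)
    C = np.vander(np.array([t[0], t[-1]]), degree + 1)  # constraint rows
    d = np.array([y[0], y[-1]])
    n = A.shape[1]
    K = np.block([[2 * A.T @ A, C.T], [C, np.zeros((2, 2))]])
    rhs = np.concatenate([2 * A.T @ y, d])
    coeffs = np.linalg.solve(K, rhs)[:n]
    return coeffs                         # np.polyval(coeffs, t) gives the fit

t = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(50)
c = constrained_ls_fit(t, y)
```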

  14. Accuracy improvement of irradiation data by combining ground and satellite measurements

    Energy Technology Data Exchange (ETDEWEB)

    Betcke, J. [Energy and Semiconductor Research Laboratory, Carl von Ossietzky University, Oldenburg (Germany); Beyer, H.G. [Department of Electrical Engineering, University of Applied Science (F.H.) Magdeburg-Stendal, Magdeburg (Germany)

    2004-07-01

    Accurate and site-specific irradiation data are essential input for optimal planning, monitoring and operation of solar energy technologies. A concrete example is the performance check of grid-connected PV systems with the PVSAT-2 procedure. This procedure detects system faults at an early stage by a daily comparison of an individual reference yield with the actual yield. Calculation of the reference yield requires hourly irradiation data with a known accuracy. A field test of the preceding PVSAT-1 procedure showed that the accuracy of the irradiation input is the determining factor for the overall accuracy of the yield calculation. In this paper we investigate whether it is possible to improve the accuracy of site-specific irradiation data by combining accurate localized pyranometer data with semi-continuous satellite data. We therefore introduce the ''Kriging of Differences'' data fusion method, which also offers the possibility to estimate its own accuracy. The obtainable accuracy gain and the effectiveness of the accuracy prediction are investigated by validation on monthly and daily irradiation datasets. Results are compared with the Heliosat method and with interpolation of ground data.

  15. An Information Theory Account of Preference Prediction Accuracy

    NARCIS (Netherlands)

    Pollmann, Monique; Scheibehenne, Benjamin

    2015-01-01

    Knowledge about other people's preferences is essential for successful social interactions, but what exactly are the driving factors that determine how well we can predict the likes and dislikes of people around us? To investigate the accuracy of couples’ preference predictions we outline and

  16. Application of FFTBM with signal mirroring to improve accuracy assessment of MELCOR code

    International Nuclear Information System (INIS)

    Saghafi, Mahdi; Ghofrani, Mohammad Bagher; D’Auria, Francesco

    2016-01-01

    Highlights: • FFTBM-SM is an improved Fast Fourier Transform Base Method by signal mirroring. • FFTBM-SM has been applied to accuracy assessment of MELCOR code predictions. • The case studied was the Station Black-Out accident in the PSB-VVER integral test facility. • FFTBM-SM eliminates fluctuations of accuracy indices when signals sharply change. • Accuracy assessment is performed in a more realistic and consistent way by FFTBM-SM. - Abstract: This paper deals with the application of the Fast Fourier Transform Base Method (FFTBM) with signal mirroring (FFTBM-SM) to assess the accuracy of the MELCOR code. This provides deeper insights into how the accuracy of MELCOR code predictions of thermal-hydraulic parameters varies during transients. The case studied was the modeling of the Station Black-Out (SBO) accident in the PSB-VVER integral test facility with the MELCOR code. The accuracy of this thermal-hydraulic modeling was previously quantified using the original FFTBM in a small number of time-intervals, based on the phenomenological windows of the SBO accident. Accuracy indices calculated by the original FFTBM in a series of time-intervals fluctuate unreasonably when the investigated signals sharply increase or decrease. In the current study, the accuracy of the MELCOR code is quantified using FFTBM-SM in a series of increasing time-intervals, and the results are compared to those with the original FFTBM. Also, differences between the accuracy indices of the original FFTBM and FFTBM-SM are investigated, and correction factors are calculated to eliminate unphysical effects in the original FFTBM. The main findings are: (1) replacing a limited number of phenomena-based time-intervals by a series of increasing time-intervals provides deeper insights into the accuracy variation of the MELCOR calculations, and (2) application of FFTBM-SM for accuracy evaluation of the MELCOR predictions provides more reliable results than the original FFTBM by eliminating the fluctuations of accuracy indices when experimental signals sharply increase or decrease.
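    A sketch of the core FFTBM quantity and of the mirroring trick, up to the published method's frequency cut-offs and normalization details: the average-amplitude index is the ratio of summed spectral magnitudes of the error signal to those of the experimental signal, and mirroring appends the reversed signal so the FFT does not see an artificial jump between the endpoints.

```python
import numpy as np

def fftbm_average_amplitude(exp, calc, mirror=True):
    """Average-amplitude (AA) index of FFTBM: summed spectral magnitudes of
    the error signal over those of the experimental signal (sketch only;
    the published method adds cut-offs and normalization not shown here)."""
    exp = np.asarray(exp, dtype=float)
    err = np.asarray(calc, dtype=float) - exp
    if mirror:                   # FFTBM-SM: extend both signals symmetrically
        err = np.concatenate([err, err[::-1]])
        exp = np.concatenate([exp, exp[::-1]])
    return np.abs(np.fft.rfft(err)).sum() / np.abs(np.fft.rfft(exp)).sum()

t = np.linspace(0.0, 100.0, 501)
experiment = 10.0 * np.exp(-t / 50.0)       # synthetic measured trend
calculation = 10.5 * np.exp(-t / 48.0)      # synthetic code prediction
print(fftbm_average_amplitude(experiment, calculation))
```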

  17. Application of FFTBM with signal mirroring to improve accuracy assessment of MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Saghafi, Mahdi [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); Ghofrani, Mohammad Bagher, E-mail: ghofrani@sharif.edu [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); D’Auria, Francesco [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, San Piero a Grado, Pisa (Italy)

    2016-11-15

    Highlights: • FFTBM-SM is an improved Fast Fourier Transform Base Method by signal mirroring. • FFTBM-SM has been applied to accuracy assessment of MELCOR code predictions. • The case studied was the Station Black-Out accident in the PSB-VVER integral test facility. • FFTBM-SM eliminates fluctuations of accuracy indices when signals sharply change. • Accuracy assessment is performed in a more realistic and consistent way by FFTBM-SM. - Abstract: This paper deals with the application of the Fast Fourier Transform Base Method (FFTBM) with signal mirroring (FFTBM-SM) to assess the accuracy of the MELCOR code. This provides deeper insights into how the accuracy of MELCOR code predictions of thermal-hydraulic parameters varies during transients. The case studied was the modeling of the Station Black-Out (SBO) accident in the PSB-VVER integral test facility with the MELCOR code. The accuracy of this thermal-hydraulic modeling was previously quantified using the original FFTBM in a small number of time-intervals, based on the phenomenological windows of the SBO accident. Accuracy indices calculated by the original FFTBM in a series of time-intervals fluctuate unreasonably when the investigated signals sharply increase or decrease. In the current study, the accuracy of the MELCOR code is quantified using FFTBM-SM in a series of increasing time-intervals, and the results are compared to those with the original FFTBM. Also, differences between the accuracy indices of the original FFTBM and FFTBM-SM are investigated, and correction factors are calculated to eliminate unphysical effects in the original FFTBM. The main findings are: (1) replacing a limited number of phenomena-based time-intervals by a series of increasing time-intervals provides deeper insights into the accuracy variation of the MELCOR calculations, and (2) application of FFTBM-SM for accuracy evaluation of the MELCOR predictions provides more reliable results than the original FFTBM by eliminating the fluctuations of accuracy indices when experimental signals sharply increase or decrease.

  18. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    Science.gov (United States)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurement of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.

  19. The accuracy of new wheelchair users' predictions about their future wheelchair use.

    Science.gov (United States)

    Hoenig, Helen; Griffiths, Patricia; Ganesh, Shanti; Caves, Kevin; Harris, Frances

    2012-06-01

    This study examined the accuracy of new wheelchair users' predictions about their future wheelchair use. This was a prospective cohort study of 84 community-dwelling veterans who were provided with a new manual wheelchair. The association between predicted and actual wheelchair use was strong at 3 mos (ϕ coefficient = 0.56), with 90% of those who anticipated using the wheelchair at 3 mos still using it (positive predictive value = 0.96) and 60% of those who anticipated not using it indeed no longer using the wheelchair (negative predictive value = 0.60; overall accuracy = 0.92). Predictive accuracy diminished over time, with overall accuracy declining from 0.92 at 3 mos to 0.66 at 6 mos. At all time points, and for all types of use, patients better predicted use as opposed to disuse, with correspondingly higher positive than negative predictive values. Accuracy of prediction of use in specific indoor and outdoor locations varied according to location. This study demonstrates the importance of better understanding the potential mismatch between anticipated and actual patterns of wheelchair use. The findings suggest that users can be relied upon to accurately predict their basic wheelchair-related needs in the short term. Further exploration is needed to identify characteristics that will aid users and their providers in more accurately predicting mobility needs for the long term.

  20. Increasing imputation and prediction accuracy for Chinese Holsteins using joint Chinese-Nordic reference population

    DEFF Research Database (Denmark)

    Ma, Peipei; Lund, Mogens Sandø; Ding, X

    2015-01-01

    This study investigated the effect of including Nordic Holsteins in the reference population on the imputation accuracy and prediction accuracy for Chinese Holsteins. The data used in this study include 85 Chinese Holstein bulls genotyped with both 54K chip and 777K (HD) chip, 2862 Chinese cows...... was improved slightly when using the marker data imputed based on the combined HD reference data, compared with using the marker data imputed based on the Chinese HD reference data only. On the other hand, when using the combined reference population including 4398 Nordic Holstein bulls, the accuracy...... to increase reference population rather than increasing marker density...

  1. Improving shuffler assay accuracy

    International Nuclear Information System (INIS)

    Rinard, P.M.

    1995-01-01

    Drums of uranium waste should be disposed of in an economical and environmentally sound manner. The most accurate possible assays of the uranium masses in the drums are required for proper disposal. The accuracy of assays from a shuffler is affected by the type of matrix material in the drums. Non-hydrogenous matrices have little effect on neutron transport, and accuracies are very good. If self-shielding is known to be a minor problem, good accuracies are also obtained with hydrogenous matrices when a polyethylene sleeve is placed around the drums. But in cases where self-shielding may be a problem, matrices are hydrogenous, and uranium distributions are non-uniform throughout the drums, the accuracies are degraded. They can be greatly improved by determining the distributions of the uranium and then applying correction factors based on those distributions. This paper describes a technique for determining uranium distributions by using the neutron count rates in detector banks around the waste drum and solving a set of overdetermined linear equations. Other approaches to determining the distributions were studied and are described briefly. Implementation of this correction on an existing shuffler is anticipated next year.
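    The reconstruction step described here — recovering a spatial uranium distribution from detector-bank count rates via an overdetermined linear system — can be sketched as follows; the response matrix and rates are made-up numbers, not calibration data from the paper:

```python
import numpy as np

# Count rates in the detector banks are modeled as a response matrix R
# (rate per unit mass in each spatial zone, from calibration) times the
# unknown uranium distribution m. With more banks than zones the system
# is overdetermined and is solved in the least-squares sense.
R = np.array([[1.0, 0.4, 0.1],
              [0.5, 1.0, 0.5],
              [0.1, 0.4, 1.0],
              [0.6, 0.6, 0.2]])           # 4 detector banks x 3 zones
rates = np.array([2.1, 2.9, 1.8, 2.4])    # measured count rates
m, *_ = np.linalg.lstsq(R, rates, rcond=None)
m = np.clip(m, 0.0, None)                 # masses cannot be negative
```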

  2. Genomic Prediction Accuracy for Resistance Against Piscirickettsia salmonis in Farmed Rainbow Trout

    Directory of Open Access Journals (Sweden)

    Grazyella M. Yoshida

    2018-02-01

    Full Text Available Salmonid rickettsial syndrome (SRS), caused by the intracellular bacterium Piscirickettsia salmonis, is one of the main diseases affecting rainbow trout (Oncorhynchus mykiss) farming. To accelerate genetic progress, genomic selection methods can be used as an effective approach to control the disease. The aims of this study were: (i) to compare the accuracy of estimated breeding values using pedigree-based best linear unbiased prediction (PBLUP) with genomic BLUP (GBLUP), single-step GBLUP (ssGBLUP), Bayes C, and Bayesian Lasso (LASSO); and (ii) to test the accuracy of genomic prediction and PBLUP using different marker densities (0.5, 3, 10, 20, and 27 K) for resistance against P. salmonis in rainbow trout. Phenotypes were recorded as number of days to death (DD) and binary survival (BS) from 2416 fish challenged with P. salmonis. A total of 1934 fish were genotyped using a 57 K single-nucleotide polymorphism (SNP) array. All genomic prediction methods achieved higher accuracies than PBLUP. The relative increase in accuracy for the different genomic models ranged from 28 to 41% for both DD and BS at 27 K SNP. Among the genomic models, the highest relative increase in accuracy was obtained with Bayes C (∼40%), for which 3 K SNP was enough to achieve an accuracy similar to that of the 27 K SNP panel for both traits. For resistance against P. salmonis in rainbow trout, we showed that genomic predictions using GBLUP, ssGBLUP, Bayes C, and LASSO can increase accuracy compared with PBLUP. Moreover, it is possible to use relatively low-density SNP panels for genomic prediction without compromising prediction accuracy.

  3. Improving the Accuracy of the Hyperspectral Model for Apple Canopy Water Content Prediction using the Equidistant Sampling Method.

    Science.gov (United States)

    Zhao, Huan-San; Zhu, Xi-Cun; Li, Cheng; Wei, Yu; Zhao, Geng-Xing; Jiang, Yuan-Mao

    2017-09-11

    The influence of the equidistant sampling method was explored in a hyperspectral model for the accurate prediction of the water content of the apple tree canopy. The relationship between spectral reflectance and water content was explored using both equidistant sampling and random sampling to partition the samples, and a stepwise regression model of the apple canopy water content was established. The random sampling model was Y = 0.4797 - 721787.3883 × Z3 - 766567.1103 × Z5 - 771392.9030 × Z6; the equidistant sampling model was Y = 0.4613 - 480610.4213 × Z2 - 552189.0450 × Z5 - 1006181.8358 × Z6. After verification, the equidistant sampling method offered superior prediction ability. Its calibration set coefficient of determination of 0.6599 and validation set coefficient of determination of 0.8221 were higher than those of the random sampling model by 9.20% and 10.90%, respectively. The root mean square error (RMSE) of 0.0365 and relative error (RE) of 0.0626 were lower than those of the random sampling model by 17.23% and 17.09%, respectively. Dividing the calibration set and validation set by the equidistant sampling method can thus improve the prediction accuracy of the hyperspectral model of apple canopy water content.
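    The equidistant partition itself is simple: rank the samples by the response and take every k-th one for validation, so both subsets span the full range of water contents. A sketch, assuming a hypothetical array of measured water contents:

```python
import numpy as np

def equidistant_split(y, n_validation):
    """Rank samples by the response (here, canopy water content) and take
    every k-th one for validation -- the equidistant alternative to a
    purely random calibration/validation split."""
    order = np.argsort(y)
    step = max(1, len(y) // n_validation)
    val_idx = order[::step][:n_validation]
    cal_idx = np.setdiff1d(order, val_idx)
    return cal_idx, val_idx

y = np.random.uniform(0.4, 0.6, size=120)   # synthetic water contents
cal, val = equidistant_split(y, n_validation=30)
```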

  4. Improving coding accuracy in an academic practice.

    Science.gov (United States)

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

    Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study population: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the two intervention periods, both in aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M) = 26.4%, SD = 10%) to accuracy rates after all educational interventions were complete (M = 26.8%, SD = 12%); t(24) = -0.127, P = .90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.

  5. Empirical and deterministic accuracies of across-population genomic prediction

    NARCIS (Netherlands)

    Wientjes, Y.C.J.; Veerkamp, R.F.; Bijma, P.; Bovenhuis, H.; Schrooten, C.; Calus, M.P.L.

    2015-01-01

    Background: Differences in linkage disequilibrium and in allele substitution effects of QTL (quantitative trait loci) may hinder genomic prediction across populations. Our objective was to develop a deterministic formula to estimate the accuracy of across-population genomic prediction, for which

  6. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit

    2015-04-16

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84 % and eliminates non-fixation patches with an accuracy of 84 % demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus, reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  7. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit; Dave, Akshat; Ghanem, Bernard

    2015-01-01

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84 % and eliminates non-fixation patches with an accuracy of 84 % demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus, reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  8. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background: The small overlap between independently developed gene signatures and the poor inter-study applicability of gene signatures are two major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results: A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform, and this data set was used to investigate the effects of varying sample sizes on several performance measures of a prognostic gene signature. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only estrogen receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that subtype-specific analysis can lead to the development of better prognostic gene signatures. Conclusion: Increasing sample sizes generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement with increased sample size differed between the overlap and the concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  9. RAPID COMMUNICATION: Improving prediction accuracy of GPS satellite clocks with periodic variation behaviour

    Science.gov (United States)

    Heo, Youn Jeong; Cho, Jeongho; Heo, Moon Beom

    2010-07-01

    The broadcast ephemeris and IGS ultra-rapid predicted (IGU-P) products are the main products available for use in real-time GPS applications. The IGU orbit precision has improved remarkably since late 2007, but the IGU clock products have not shown acceptably high-quality prediction performance. One reason is that satellite atomic clocks in space are easily influenced by factors such as temperature and environment, which leads to complicated behaviour such as periodic variations that are not sufficiently described by conventional models. A more reliable prediction model is thus proposed in this paper, designed particularly to describe the periodic variation behaviour satisfactorily. The proposed prediction model for satellite clocks adds cyclic terms to capture the periodic effects and adopts delay coordinate embedding, which offers the possibility of capturing linear or nonlinear coupling characteristics of the satellite's behaviour. Simulation results show that the proposed prediction model outperforms the IGU-P solutions at least on a daily basis.

  10. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    International Nuclear Information System (INIS)

    Ko, P; Kurosawa, S

    2014-01-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important for design work aimed at enhancing turbine performance, including extending the operational life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with the volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the occurrence of cavitation on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.

  11. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    Science.gov (United States)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important for design work aimed at enhancing turbine performance, including extending the operational life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with the volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the occurrence of cavitation on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.

  12. BAYESIAN FORECASTS COMBINATION TO IMPROVE THE ROMANIAN INFLATION PREDICTIONS BASED ON ECONOMETRIC MODELS

    Directory of Open Access Journals (Sweden)

    Mihaela Simionescu

    2014-12-01

    Full Text Available There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts' forecasts are utilized as prior information; for Romania these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis, and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed-effects model compared to other types of econometric models such as VAR, Bayesian VAR, a simultaneous equations model, a dynamic model, and a log-linear model. The Bayesian combinations that used experts' predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, also outperforming zero-weight and equal-weight predictions and naïve forecasts.
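    A minimal sketch of the shrinkage-combination idea, assuming a simple convex combination in which the shrinkage parameter phi controls how strongly the forecast is pulled toward the expert prior: phi = 0 returns the econometric model forecast, and phi -> infinity returns the prior, matching the limiting case discussed above. The function name and the numbers are illustrative assumptions, not the paper's estimator.

```python
# Hypothetical sketch of a Bayesian shrinkage combination; the simple
# convex-combination form is an assumption, not the paper's specification.

def shrinkage_combination(model_forecast, expert_prior, phi):
    """phi = 0 keeps the model forecast; phi -> infinity keeps the prior."""
    w = phi / (1.0 + phi)
    return (1.0 - w) * model_forecast + w * expert_prior

model_fc = 4.2    # e.g. a fixed-effects model inflation forecast (%)
expert_fc = 3.8   # e.g. an institutional expert forecast (%)
for phi in (0.0, 1.0, 100.0):
    print(phi, shrinkage_combination(model_fc, expert_fc, phi))
```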

  13. Quantitative accuracy assessment of thermalhydraulic code predictions with SARBM

    International Nuclear Information System (INIS)

    Prosek, A.

    2001-01-01

    In recent years, the nuclear reactor industry has focused significant attention on nuclear reactor systems code accuracy and uncertainty issues. A few methods suitable for quantifying the accuracy of thermalhydraulic code calculations were proposed and applied in the past. In this study a Stochastic Approximation Ratio Based Method (SARBM) was adapted and proposed for accuracy quantification. The objective of the study was to qualify the SARBM. The study compares the accuracy obtained by the SARBM with the results obtained by the widely used Fast Fourier Transform Based Method (FFTBM). The methods were applied to RELAP5/MOD3.2 code calculations of various BETHSY experiments. The obtained results showed that the SARBM was able to satisfactorily predict the accuracy of the calculated trends, both when visually comparing plots and when comparing the results with the qualified FFTBM. The analysis also showed that the new figure of merit, called the accuracy factor (AF), is more convenient than the stochastic approximation ratio for combining single-variable accuracies into a total accuracy. The accuracy results obtained for the selected tests suggest that the acceptability factors for the SAR method were reasonably defined. The results also indicate that the AF is a useful quantitative measure of accuracy. (author)
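    For context, the sketch below computes the figure of merit used by the FFTBM benchmark as it is commonly defined in the literature: the average amplitude AA, the ratio of the spectral magnitude of the calculation error to that of the experimental signal (smaller means more accurate). The SARBM accuracy factor itself is not reproduced, and the signals are mock data.

```python
import numpy as np

def fftbm_average_amplitude(exp_data, calc_data):
    # AA = sum|F(calc - exp)| / sum|F(exp)|, the usual FFTBM figure of merit.
    err_spec = np.abs(np.fft.rfft(calc_data - exp_data))
    exp_spec = np.abs(np.fft.rfft(exp_data))
    return err_spec.sum() / exp_spec.sum()

t = np.linspace(0.0, 100.0, 1024)
measured = np.exp(-t / 40.0)                      # mock experimental trend
computed = np.exp(-t / 38.0) + 0.01 * np.sin(t)   # mock code calculation
print(f"AA = {fftbm_average_amplitude(measured, computed):.3f}")
```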

  14. Boosted classification trees result in minor to modest improvement in the accuracy in classifying cardiovascular outcomes compared to conventional classification trees

    Science.gov (United States)

    Austin, Peter C; Lee, Douglas S

    2011-01-01

    Purpose: Classification trees are increasingly being used to classify patients according to the presence or absence of a disease or health outcome. A limitation of classification trees is their limited predictive accuracy. In the data-mining and machine learning literature, boosting has been developed to improve classification. Boosting with classification trees iteratively grows classification trees in a sequence of reweighted datasets. In a given iteration, subjects that were misclassified in the previous iteration are weighted more highly than subjects that were correctly classified. Classifications from each of the classification trees in the sequence are combined through a weighted majority vote to produce a final classification. The authors' objective was to examine whether boosting improved the accuracy of classification trees for predicting outcomes in cardiovascular patients. Methods: We examined the utility of boosting classification trees for classifying 30-day mortality outcomes in patients hospitalized with either acute myocardial infarction or congestive heart failure. Results: Improvements in the misclassification rate using boosted classification trees were at best minor compared to when conventional classification trees were used. Minor to modest improvements in sensitivity were observed, with only a negligible reduction in specificity. For predicting cardiovascular mortality, boosted classification trees had high specificity but low sensitivity. Conclusions: Gains in predictive accuracy for predicting cardiovascular outcomes were less impressive than the gains in performance observed in the data-mining literature. PMID:22254181
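    The scheme the abstract describes is the classical boosting-with-trees setup; a minimal sketch with scikit-learn's AdaBoost is shown below. Mock data stand in for the clinical cohorts, and the configuration is illustrative, not the study's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Mock imbalanced data standing in for 30-day mortality outcomes.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

single_tree = DecisionTreeClassifier(max_depth=3, random_state=0)
boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=200, random_state=0)

for name, clf in [("single tree", single_tree), ("boosted trees", boosted)]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: accuracy = {acc:.3f}")
```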

  15. Accuracy of ultrasound for the prediction of placenta accreta.

    Science.gov (United States)

    Bowman, Zachary S; Eller, Alexandra G; Kennedy, Anne M; Richards, Douglas S; Winter, Thomas C; Woodward, Paula J; Silver, Robert M

    2014-08-01

    Ultrasound has been reported to be greater than 90% sensitive for the diagnosis of accreta. Prior studies may be subject to bias because of single expert observers, suspicion for accreta, and knowledge of risk factors. We aimed to assess the accuracy of ultrasound for the prediction of accreta. Patients with accreta at a single academic center were matched to patients with placenta previa, but no accreta, by year of delivery. Ultrasound studies with views of the placenta were collected, deidentified, blinded to clinical history, and placed in random sequence. Six investigators prospectively interpreted each study for the presence of accreta and findings reported to be associated with its diagnosis. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were calculated. Characteristics of accurate findings were compared using univariate and multivariate analyses. Six investigators examined 229 ultrasound studies from 55 patients with accreta and 56 controls, for 1374 independent observations. 1205/1374 (87.7% overall; 90% controls, 84.9% cases) studies were given a diagnosis. There were 371 (27.0%) true positives; 81 (5.9%) false positives; 533 (38.8%) true negatives; 220 (16.0%) false negatives; and 169 (12.3%) with uncertain diagnosis. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 53.5%, 88.0%, 82.1%, 64.8%, and 64.8%, respectively. In multivariate analysis, true positives were more likely to have placental lacunae (odds ratio [OR], 1.5; 95% confidence interval [CI], 1.4-1.6), loss of the retroplacental clear space (OR, 2.4; 95% CI, 1.1-4.9), or abnormalities on color Doppler (OR, 2.1; 95% CI, 1.8-2.4). Ultrasound for the prediction of placenta accreta may not be as sensitive as previously described. Copyright © 2014 Mosby, Inc. All rights reserved.

  16. Typing speed, spelling accuracy, and the use of word-prediction

    Directory of Open Access Journals (Sweden)

    Marina Herold

    2008-02-01

    Full Text Available Children with spelling difficulties are limited in their participation in all written school activities. We aimed to investigate the influence of word-prediction as a tool on spelling accuracy and typing speed. To this end, we selected 80 Grade 4 - 6 children with spelling difficulties in a school for special needs to participate in a research project involving a cross-over within-subject design. The research task took the form of entering 30 words through an on-screen keyboard, with and without the use of word-prediction software. The Graded Word Spelling Test served to investigate whether there was a relationship between the children's current spelling knowledge and word-prediction efficacy. The results indicated an increase in spelling accuracy with the use of word-prediction, but at the cost of time and the tendency to use word approximations, and no significant relationship between spelling knowledge and word-prediction efficacy.

  17. How social information can improve estimation accuracy in human groups.

    Science.gov (United States)

    Jayles, Bertrand; Kim, Hye-Rin; Escobedo, Ramón; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-11-21

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. Copyright © 2017 the Author(s). Published by PNAS.

  18. Improving Accuracy of Processing by Adaptive Control Techniques

    Directory of Open Access Journals (Sweden)

    N. N. Barbashov

    2016-01-01

    Full Text Available When machining work-pieces, the scatter range of the work-piece dimensions shifts toward the tolerance limit in response to errors. To improve machining accuracy and prevent defective products, it is necessary to reduce the components of machining error, i.e., to improve the accuracy of the machine tool, tool life, rigidity of the system, and accuracy of adjustment. It is also necessary to provide on-machine adjustment after a certain time. However, an increasing number of readjustments reduces performance, and demanding machine and tool requirements lead to a significant increase in machining cost. To improve accuracy and machining rate, various devices of active control (in-process gaging devices), as well as controlled machining through adaptive systems for technological process control, are now widely used. The accuracy improvement in this case is achieved by compensating for the majority of technological errors. Sensors of active control can improve the accuracy of processing by one or two quality classes and allow simultaneous operation of several machines. For efficient use of sensors of active control it is necessary to develop accuracy control methods that introduce the appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they contain information on the change in the last several measured values of the parameter under control. When using the proposed method, the first three members of the sequence of deviations remain unchanged: x'_1 = x_1, x'_2 = x_2, x'_3 = x_3. Then, for each i-th member of the sequence, the corrected value is calculated as x'_i = x_i - k*x̄_i, where x̄_i is the average of the three previous corrected members: x̄_i = (x'_{i-1} + x'_{i-2} + x'_{i-3})/3. As a criterion for the estimate of the control
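    A short sketch of the moving-average adjustment as reconstructed above: the first three deviations are kept unchanged, and each subsequent deviation is corrected using the mean of the three preceding corrected values. The gain k and the sample data are assumptions, and the reconstruction itself is a best effort from a garbled source.

```python
import numpy as np

def moving_average_adjust(x, k=0.5):
    # First three members of the deviation sequence remain unchanged;
    # later members are corrected with the mean of the three previous
    # corrected values (the reconstruction given in the text above).
    x = np.asarray(x, dtype=float)
    x_adj = x.copy()
    for i in range(3, len(x)):
        x_bar = x_adj[i - 3:i].mean()
        x_adj[i] = x[i] - k * x_bar
    return x_adj

deviations = [0.2, 0.1, 0.3, 0.4, 0.5, 0.3, 0.6]
print(moving_average_adjust(deviations))
```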

  19. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Directory of Open Access Journals (Sweden)

    Michael F Sloma

    2017-11-01

    Full Text Available Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  20. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Science.gov (United States)

    Sloma, Michael F; Mathews, David H

    2017-11-01

    Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  1. Breeding Jatropha curcas by genomic selection: A pilot assessment of the accuracy of predictive models.

    Science.gov (United States)

    Azevedo Peixoto, Leonardo de; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes

    2017-01-01

    Genome-wide selection (GWS) is a promising approach for improving selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods in predicting GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact of marker density on predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between the evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, the maximum prediction ability for GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits.
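    A sketch of the general experiment shape described above: a simple whole-genome regression (ridge regression, akin to RR-BLUP, standing in for the eight GWS models compared in the paper) is evaluated by cross-validation over an increasing number of markers. Genotypes and phenotypes are mock data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_lines, n_markers = 200, 1248
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 genotypes
true_effects = rng.normal(0.0, 0.1, n_markers)
y = X @ true_effects + rng.normal(0.0, 1.0, n_lines)             # mock grain yield

for m in (2, 100, 400, 800, 1000, 1248):
    pred = cross_val_predict(Ridge(alpha=1.0), X[:, :m], y, cv=5)
    r = np.corrcoef(y, pred)[0, 1]   # predictive ability
    print(f"{m:5d} markers: r = {r:.2f}")
```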

  2. Breeding Jatropha curcas by genomic selection: A pilot assessment of the accuracy of predictive models.

    Directory of Open Access Journals (Sweden)

    Leonardo de Azevedo Peixoto

    Full Text Available Genome-wide selection (GWS) is a promising approach for improving selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods in predicting GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact of marker density on predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between the evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, the maximum prediction ability for GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits.

  3. Combining sequence-based prediction methods and circular dichroism and infrared spectroscopic data to improve protein secondary structure determinations

    Directory of Open Access Journals (Sweden)

    Lees Jonathan G

    2008-01-01

    Full Text Available Abstract Background A number of sequence-based methods exist for protein secondary structure prediction. Protein secondary structures can also be determined experimentally from circular dichroism and infrared spectroscopic data using empirical analysis methods. It has been proposed that comparable accuracy can be obtained from sequence-based predictions as from these biophysical measurements. Here we have compared the secondary structure determination accuracies of sequence prediction methods with the empirically determined values from spectroscopic data, on datasets of proteins for which both crystal structures and spectroscopic data are available. Results In this study we show that the sequence prediction methods have accuracies nearly comparable to those of spectroscopic methods. However, we also demonstrate that combining the spectroscopic and sequence-based techniques produces significant overall improvements in secondary structure determinations. In addition, combining the extra information content available from synchrotron radiation circular dichroism data with sequence methods also shows improvements. Conclusion Combining sequence prediction with experimentally determined spectroscopic methods for protein secondary structure content significantly enhances the accuracy of the overall results obtained.

  4. Genomic Selection Accuracy using Multifamily Prediction Models in a Wheat Breeding Program

    Directory of Open Access Journals (Sweden)

    Elliot L. Heffner

    2011-03-01

    Full Text Available Genomic selection (GS) uses genome-wide molecular marker data to predict the genetic value of selection candidates in breeding programs. In plant breeding, the ability to produce large numbers of progeny per cross allows GS to be conducted within each family. However, this approach requires phenotypes of lines from each cross before conducting GS. This will prolong the selection cycle and may result in lower gains per year than approaches that estimate marker effects with multiple families from previous selection cycles. In this study, phenotypic selection (PS), conventional marker-assisted selection (MAS), and GS prediction accuracy were compared for 13 agronomic traits in a population of 374 winter wheat (Triticum aestivum L.) advanced-cycle breeding lines. A cross-validation approach that trained and validated prediction accuracy across years was used to evaluate the effects of model selection, training population size, and marker density in the presence of genotype × environment interactions (G×E). The average prediction accuracies using GS were 28% greater than with MAS and were 95% as accurate as PS. For net merit, the average accuracy across six selection indices for GS was 14% greater than for PS. These results provide empirical evidence that multifamily GS could increase genetic gain per unit time and cost in plant breeding.

  5. Data Prediction for Public Events in Professional Domains Based on Improved RNN-LSTM

    Science.gov (United States)

    Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan

    2018-02-01

    The traditional data services predicting emergency or non-periodic events usually cannot generate satisfying results or fulfill the intended prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved LSTM (Long Short-Term Memory) dynamic prediction and a priori information sequence generation model, combining RNN-LSTM with a priori information about public events. In prediction tasks, the model is capable of determining trends, and its accuracy is validated. The model generates better performance and prediction results than the previous one. Using a priori information can increase the accuracy of prediction; LSTM can better adapt to changes in a time sequence; and LSTM can be widely applied to the same type of prediction task, as well as to other prediction tasks related to time sequences.
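    A minimal sketch of the kind of LSTM regressor the abstract builds on: the input at each time step concatenates the series value with a priori event features. The architecture, dimensions, and mock data are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class EventLSTM(nn.Module):
    """LSTM over (series value + a priori features); predicts the next value."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # use the last hidden state

model = EventLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(64, 20, 4)   # mock: series value + 3 a priori features
y = torch.randn(64, 1)
for _ in range(5):           # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```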

  6. The use of imprecise processing to improve accuracy in weather and climate prediction

    Energy Technology Data Exchange (ETDEWEB)

    Düben, Peter D., E-mail: dueben@atm.ox.ac.uk [University of Oxford, Atmospheric, Oceanic and Planetary Physics (United Kingdom); McNamara, Hugh [University of Oxford, Mathematical Institute (United Kingdom); Palmer, T.N. [University of Oxford, Atmospheric, Oceanic and Planetary Physics (United Kingdom)

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce
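    A sketch of the bit-flip fault model used to emulate stochastic processors: each of the low-order mantissa bits of a float64 value is flipped independently with a given probability. Restricting the flips to low-order bits is an assumption that mimics confining hardware faults to the small scales.

```python
import numpy as np

def flip_bits(values, p_fault, n_low_bits=16, seed=0):
    # Flip each of the n_low_bits lowest mantissa bits with probability
    # p_fault (an emulated hardware fault model, not real hardware).
    rng = np.random.default_rng(seed)
    bits = values.view(np.uint64).copy()
    for b in range(n_low_bits):
        mask = rng.random(bits.shape) < p_fault
        bits[mask] ^= np.uint64(1) << np.uint64(b)
    return bits.view(np.float64)

x = np.linspace(1.0, 2.0, 5)
print(flip_bits(x, p_fault=0.01))
```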

  7. The use of imprecise processing to improve accuracy in weather and climate prediction

    International Nuclear Information System (INIS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T.N.

    2014-01-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and

  8. Improving the Accuracy of a Heliocentric Potential (HCP) Prediction Model for the Aviation Radiation Dose

    Directory of Open Access Journals (Sweden)

    Junga Hwang

    2016-12-01

    Full Text Available The space radiation dose over air routes, including polar routes, should be carefully considered, especially when space weather shows sudden disturbances such as coronal mass ejections (CMEs), flares, and accompanying solar energetic particle events. We recently established a heliocentric potential (HCP) prediction model for real-time operation of the CARI-6 and CARI-6M programs. Specifically, the HCP value is used as a critical input value in the CARI-6/6M programs, which estimate the aviation route dose based on the effective dose rate. The CARI-6/6M approach is the most widely used technique, and the programs can be obtained from the U.S. Federal Aviation Administration (FAA). However, HCP values are published with a one-month delay on the official FAA webpage, which makes it difficult to obtain real-time information on the aviation route dose. In order to overcome this critical limitation for space weather customers, we developed an HCP prediction model based on sunspot number variations (Hwang et al. 2015). In this paper, we focus on improvements to our HCP prediction model and update it with neutron monitoring data. We found that the most accurate method of deriving the HCP value involves (1) real-time daily sunspot assessments, (2) predictions of the daily HCP by our prediction algorithm, and (3) calculations of the resultant daily effective dose rate. Additionally, we also derived an HCP prediction algorithm using ground neutron counts. With the compensation provided by the ground neutron count data, the newly developed HCP prediction model was improved.

  9. Improved Wind Speed Prediction Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2018-05-01

    Full Text Available The wind power industry plays an important role in promoting the development of the low-carbon economy and energy transformation in the world. However, the randomness and volatility of wind speed series restrict the healthy development of the wind power industry. Accurate wind speed prediction is the key to realizing stable wind power integration and to guaranteeing the safe operation of the power system. In this paper, combining Empirical Mode Decomposition (EMD), the Radial Basis Function Neural Network (RBF), and the Least Square Support Vector Machine (LS-SVM), an improved wind speed prediction model based on Empirical Mode Decomposition (EMD-RBF-LS-SVM) is proposed. The prediction results indicate that, compared with the traditional prediction models (RBF, LS-SVM), the EMD-RBF-LS-SVM model can weaken random fluctuation to a certain extent and improve the short-term accuracy of wind speed prediction significantly. In a word, this research will significantly reduce the impact of wind power instability on the power grid, ensure the power grid supply and demand balance, reduce the operating costs of grid-connected systems, and enhance the market competitiveness of wind power.
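    The decompose-predict-recombine scheme can be sketched as follows: EMD splits the wind speed series into intrinsic mode functions (IMFs), each IMF is forecast separately from its own lagged values, and the component forecasts are summed. The PyEMD package and the use of a single SVR per component are assumptions standing in for the paper's RBF/LS-SVM pairing.

```python
import numpy as np
from PyEMD import EMD          # assumed dependency: the PyEMD package
from sklearn.svm import SVR

def lagged(series, n_lags=4):
    # Build (window -> next value) training pairs from one component.
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    return X, series[n_lags:]

def emd_forecast(series, n_lags=4):
    imfs = EMD().emd(series)               # IMFs plus residual
    forecast = 0.0
    for imf in imfs:
        X, y = lagged(imf, n_lags)
        model = SVR(kernel="rbf").fit(X, y)
        forecast += model.predict(imf[-n_lags:].reshape(1, -1))[0]
    return forecast

t = np.linspace(0.0, 50.0, 500)
wind = 8 + 2 * np.sin(t) + np.random.default_rng(0).normal(0, 0.5, t.size)
print(f"next-step wind speed: {emd_forecast(wind):.2f} m/s")
```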

  10. Improving consensus contact prediction via server correlation reduction.

    Science.gov (United States)

    Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming

    2009-05-06

    Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find out that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method assuming that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction use.

  11. Improving consensus contact prediction via server correlation reduction

    Directory of Open Access Journals (Sweden)

    Xu Jinbo

    2009-05-01

    Full Text Available Abstract Background Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find out that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. Results In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method assuming that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Conclusion Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction use.

  12. Improvement of gas entrainment prediction method. Introduction of surface tension effect

    International Nuclear Information System (INIS)

    Ito, Kei; Sakai, Takaaki; Ohshima, Hiroyuki; Uchibori, Akihiro; Eguchi, Yuzuru; Monji, Hideaki; Xu, Yongze

    2010-01-01

    A gas entrainment (GE) prediction method has been developed to establish design criteria for the large-scale sodium-cooled fast reactor (JSFR) systems. The prototype of the GE prediction method was already confirmed to give reasonable gas core lengths by simple calculation procedures. However, for simplification, the surface tension effects were neglected. In this paper, the evaluation accuracy of gas core lengths is improved by introducing the surface tension effects into the prototype GE prediction method. First, the mechanical balance between gravitational, centrifugal, and surface tension forces is considered. Then, the shape of a gas core tip is approximated by a quadratic function. Finally, using the approximated gas core shape, the authors determine the gas core length satisfying the mechanical balance. This improved GE prediction method is validated by analyzing the gas core lengths observed in simple experiments. Results show that the analytical gas core lengths calculated by the improved GE prediction method become shorter in comparison to the prototype GE prediction method, and are in good agreement with the experimental data. In addition, the experimental data under different temperature and surfactant concentration conditions are reproduced by the improved GE prediction method. (author)

  13. Accuracy Improvement of Neutron Nuclear Data on Minor Actinides

    Science.gov (United States)

    Harada, Hideo; Iwamoto, Osamu; Iwamoto, Nobuyuki; Kimura, Atsushi; Terada, Kazushi; Nakao, Taro; Nakamura, Shoji; Mizuyama, Kazuhito; Igashira, Masayuki; Katabuchi, Tatsuya; Sano, Tadafumi; Takahashi, Yoshiyuki; Takamiya, Koichi; Pyeon, Cheol Ho; Fukutani, Satoshi; Fujii, Toshiyuki; Hori, Jun-ichi; Yagi, Takahiro; Yashima, Hiroshi

    2015-05-01

    Improvement of the accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems that transmute these nuclei. In order to meet this requirement, the project entitled "Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)" was started as one of the "Innovative Nuclear Research and Development Program" projects in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating all of the forefront knowledge and techniques in these fields, the team aims at improving the accuracy of the data. The background and research plan of the AIMAC project are presented.

  14. Predictive Accuracy of Exercise Stress Testing the Healthy Adult.

    Science.gov (United States)

    Lamont, Linda S.

    1981-01-01

    Exercise stress testing provides information on the aerobic capacity, heart rate, and blood pressure responses to graded exercises of a healthy adult. The reliability of exercise tests as a diagnostic procedure is discussed in relation to sensitivity and specificity and predictive accuracy. (JN)

  15. Adjusting the Stems Regional Forest Growth Model to Improve Local Predictions

    Science.gov (United States)

    W. Brad Smith

    1983-01-01

    A simple procedure using double sampling is described for adjusting growth in the STEMS regional forest growth model to compensate for subregional variations. Predictive accuracy of the STEMS model (a distance-independent, individual tree growth model for Lake States forests) was improved by using this procedure

  16. Accuracy of taxonomy prediction for 16S rRNA and fungal ITS sequences

    Directory of Open Access Journals (Sweden)

    Robert C. Edgar

    2018-04-01

    Full Text Available Prediction of taxonomy for marker gene sequences such as 16S ribosomal RNA (rRNA) is a fundamental task in microbiology. Most experimentally observed sequences diverge from the reference sequences of authoritatively named organisms, creating a challenge for prediction methods. I assessed the accuracy of several algorithms using cross-validation by identity, a new benchmark strategy which explicitly models the variation in distances between query sequences and the closest entry in a reference database. When the accuracy of genus predictions was averaged over a representative range of identities with the reference database (100%, 99%, 97%, 95% and 90%), all tested methods had ≤50% accuracy on the currently popular V4 region of 16S rRNA. Accuracy was found to fall rapidly with decreasing identity; for example, the better methods were found to have V4 genus prediction accuracy of ∼100% at 100% identity but ∼50% at 97% identity. The relationship between identity and taxonomy was quantified as the probability that a rank is the lowest shared by a pair of sequences with a given pair-wise identity. With the V4 region, 95% identity was found to be a twilight zone where taxonomy is highly ambiguous, because the probabilities that the lowest shared rank between pairs of sequences is genus, family, order or class are approximately equal.

  17. Parametric Bayesian priors and better choice of negative examples improve protein function prediction.

    Science.gov (United States)

    Youngs, Noah; Penfold-Brown, Duncan; Drew, Kevin; Shasha, Dennis; Bonneau, Richard

    2013-05-01

    Computational biologists have demonstrated the utility of using machine learning methods to predict protein function from an integration of multiple genome-wide data types. Yet, even the best performing function prediction algorithms rely on heuristics for important components of the algorithm, such as choosing negative examples (proteins without a given function) or determining key parameters. The improper choice of negative examples, in particular, can hamper the accuracy of protein function prediction. We present a novel approach for choosing negative examples, using a parameterizable Bayesian prior computed from all observed annotation data, which also generates priors used during function prediction. We incorporate this new method into the GeneMANIA function prediction algorithm and demonstrate improved accuracy of our algorithm over current top-performing function prediction methods on the yeast and mouse proteomes across all metrics tested. Code and Data are available at: http://bonneaulab.bio.nyu.edu/funcprop.html

  18. Improving the Accuracy of Cloud Detection Using Machine Learning

    Science.gov (United States)

    Craddock, M. E.; Alliss, R. J.; Mason, M.

    2017-12-01

    Cloud detection from geostationary satellite imagery has long been accomplished through multi-spectral channel differencing in comparison to the Earth's surface. The distinction between clear and cloud is then determined by comparing these differences to empirical thresholds. Using this methodology, the probability of detecting clouds exceeds 90%, but performance varies seasonally, regionally, and temporally. The Cloud Mask Generator (CMG) database developed under this effort consists of 20 years of 4 km, 15-minute clear/cloud images based on GOES data over CONUS and Hawaii. The algorithms for determining cloudy pixels in the imagery are based on well-known multi-spectral techniques and defined thresholds. These thresholds were produced by manually studying thousands of images, over thousands of man-hours, to determine the success and failure of the algorithms and fine-tune the thresholds. This study aims to investigate the potential for improving cloud detection by using Random Forest (RF) ensemble classification. RF is an ideal methodology to employ for cloud detection, as it runs efficiently on large datasets, is robust to outliers and noise, and is able to deal with highly correlated predictors such as multi-spectral satellite imagery. The RF code was developed using Python in about 4 weeks. The region of focus selected was Hawaii, and the predictors include visible and infrared imagery, topography, and multi-spectral image products. The development of the cloud detection technique proceeds in three steps. First, the RF models are tuned to identify the optimal values of the number of trees and the number of predictors to employ for both day and night scenes. Second, the RF models are trained using the optimal number of trees and a select number of random predictors identified during the tuning phase. Lastly, the model is used to predict clouds for a time period independent of the training data and is compared to truth, the CMG cloud mask. Initial results
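    The tuning step described above can be sketched with a standard grid search over the two RF hyperparameters named in the text: the number of trees and the number of predictors sampled per split. Mock per-pixel features stand in for the GOES channels, topography, and derived products, and the grid values are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))   # 8 mock predictors per pixel
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 5000) > 0).astype(int)

grid = {"n_estimators": [100, 300, 500], "max_features": [2, 4, 8]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=3)
search.fit(X, y)
print(search.best_params_, f"accuracy = {search.best_score_:.3f}")
```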

  19. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
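    As a generic illustration of the model class under discussion (not the specific model proposed in the paper), a conventional empirical stomatal model such as the Ball-Berry form can be scaled by a water-potential limitation factor; g_0, g_1, \psi_{50} and a are illustrative parameters:

```latex
g_s \;=\; \left( g_0 + g_1\,\frac{A\,h_s}{c_s} \right)
          \left[ 1 + \left( \frac{\psi_{\mathrm{leaf}}}{\psi_{50}} \right)^{a} \right]^{-1}
```

    Here g_s is stomatal conductance, A net assimilation, h_s relative humidity, and c_s the CO2 concentration at the leaf surface; the bracketed factor declines as the leaf water potential \psi_{\mathrm{leaf}} falls past \psi_{50}.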

  20. Probability of criminal acts of violence: a test of jury predictive accuracy.

    Science.gov (United States)

    Reidy, Thomas J; Sorensen, Jon R; Cunningham, Mark D

    2013-01-01

    The ability of capital juries to accurately predict future prison violence at the sentencing phase of aggravated murder trials was examined through retrospective review of the disciplinary records of 115 male inmates sentenced to either life (n = 65) or death (n = 50) in Oregon from 1985 through 2008, with a mean post-conviction time at risk of 15.3 years. Violent prison behavior was completely unrelated to predictions made by capital jurors, with bidirectional accuracy simply reflecting the base rate of assaultive misconduct in the group. Rejection of the special issue predicting future violence enjoyed 90% accuracy. Conversely, predictions that future violence was probable had 90% error rates. More than 90% of the assaultive rule violations committed by these offenders resulted in no harm or only minor injuries. Copyright © 2013 John Wiley & Sons, Ltd.

  1. Utilizing multiple scale models to improve predictions of extra-axial hemorrhage in the immature piglet.

    Science.gov (United States)

    Scott, Gregory G; Margulies, Susan S; Coats, Brittany

    2016-10-01

    Traumatic brain injury (TBI) is a leading cause of death and disability in the USA. To help understand and better predict TBI, researchers have developed complex finite element (FE) models of the head which incorporate many biological structures such as scalp, skull, meninges, brain (with gray/white matter differentiation), and vasculature. However, most models drastically simplify the membranes and substructures between the pia and arachnoid membranes. We hypothesize that substructures in the pia-arachnoid complex (PAC) contribute substantially to brain deformation following head rotation, and that including them in FE models improves the accuracy of extra-axial hemorrhage prediction. To test these hypotheses, microscale FE models of the PAC were developed to span the variability of PAC substructure anatomy and regional density. The constitutive responses of these models were then integrated into an existing macroscale FE model of the immature piglet brain to identify changes in cortical stress distribution and predictions of extra-axial hemorrhage (EAH). Incorporating regional variability of PAC substructures substantially altered the distribution of principal stress on the cortical surface of the brain compared to a uniform representation of the PAC. Simulations of 24 non-impact rapid head rotations in an immature piglet animal model resulted in improved accuracy of EAH prediction (to 94% sensitivity, 100% specificity), as well as high accuracy in regional hemorrhage prediction (to 82-100% sensitivity, 100% specificity). We conclude that including biofidelic PAC substructure variability in FE models of the head is essential for improved predictions of hemorrhage at the brain/skull interface.

  2. Gene network inherent in genomic big data improves the accuracy of prognostic prediction for cancer patients.

    Science.gov (United States)

    Kim, Yun Hak; Jeong, Dae Cheon; Pak, Kyoungjune; Goh, Tae Sik; Lee, Chi-Seung; Han, Myoung-Eun; Kim, Ji-Young; Liangwen, Liu; Kim, Chi Dae; Jang, Jeon Yeob; Cha, Wonjae; Oh, Sae-Ock

    2017-09-29

    Accurate prediction of prognosis is critical for therapeutic decisions regarding cancer patients. Many previously developed prognostic scoring systems have limitations in reflecting recent progress in the field of cancer biology, such as microarrays, next-generation sequencing, and signaling pathways. To develop a new prognostic scoring system for cancer patients, we used mRNA expression and clinical data in various independent breast cancer cohorts (n=1214) from the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) and Gene Expression Omnibus (GEO). A new prognostic score that reflects the gene network inherent in genomic big data was calculated using Network-Regularized high-dimensional Cox-regression (Net-score). We compared its discriminatory power with those of two previously used statistical methods: stepwise variable selection via univariate Cox regression (Uni-score) and Cox regression via Elastic net (Enet-score). The Net scoring system showed better discriminatory power in the prediction of disease-specific survival (DSS) than the other statistical methods (p=0 in the METABRIC training cohort; p=0.000331 and 4.58e-06 in two METABRIC validation cohorts) when accuracy was examined by the log-rank test. Notably, comparison of C-index and AUC values in receiver operating characteristic analysis at 5 years showed fewer differences between training and validation cohorts with the Net scoring system than with the other statistical methods, suggesting minimal overfitting. The Net-based scoring system also successfully predicted prognosis in various independent GEO cohorts with high discriminatory power. In conclusion, the Net-based scoring system showed better discriminative power than previous statistical methods in prognostic prediction for breast cancer patients. This new system will mark a new era in prognosis prediction for cancer patients.
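    For orientation, network-regularized Cox regression is commonly written as penalized partial-likelihood estimation of the following form, where \ell(\beta) is the Cox partial log-likelihood and L is the Laplacian of the gene network; this generic form is an assumption and may differ in detail from the paper's estimator:

```latex
\hat{\beta} \;=\; \arg\min_{\beta} \left\{ -\ell(\beta)
  \;+\; \lambda \left[ \alpha \,\lVert \beta \rVert_{1}
  \;+\; (1-\alpha)\, \beta^{\top} L\, \beta \right] \right\}
```

    The Laplacian term smooths the coefficients of genes that are neighbours in the network, which is how the gene network enters the prognostic score.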

  3. Accuracy Improvement of Real-Time Location Tracking for Construction Workers

    Directory of Open Access Journals (Sweden)

    Hyunsoo Kim

    2018-05-01

    Full Text Available Extensive research has been conducted on real-time locating systems (RTLS) for tracking construction components, including workers, equipment, and materials, in order to improve construction performance (e.g., productivity improvement or accident prevention). In order to prevent safety accidents and make construction job sites more sustainable, higher RTLS accuracy is required. To improve the accuracy of RTLS in construction projects, this paper presents an RTLS using radio frequency identification (RFID). To this end, this paper develops a location tracking error mitigation algorithm and presents the concept of using assistant tags. The applicability and effectiveness of the developed RTLS are tested under eight different construction environments, and the test results confirm the system's strong potential for improving the accuracy of real-time location tracking in construction projects, thus enhancing construction performance.

  4. Accuracy of predicting milk yield from alternative recording schemes

    NARCIS (Netherlands)

    Berry, D.P.; Olori, V.E.; Cromie, A.R.; Rath, M.; Veerkamp, R.F.; Dilon, P.

    2005-01-01

    The effect of reducing the frequency of official milk recording and the number of recorded samples per test-day on the accuracy of predicting daily yield and cumulative 305-day yield was investigated. A control data set consisting of 58 210 primiparous cows with milk test-day records every 4 weeks

  5. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    Science.gov (United States)

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  6. Joint modeling of genetically correlated diseases and functional annotations increases accuracy of polygenic risk prediction.

    Directory of Open Access Journals (Sweden)

    Yiming Hu

    2017-06-01

    Full Text Available Accurate prediction of disease risk based on genetic factors is an important goal in human genetics research and precision medicine. Advanced prediction models will lead to more effective disease prevention and treatment strategies. Despite the identification of thousands of disease-associated genetic variants through genome-wide association studies (GWAS) in the past decade, the accuracy of genetic risk prediction remains moderate for most diseases, which is largely due to the challenges in both identifying all the functionally relevant variants and accurately estimating their effect sizes. In this work, we introduce PleioPred, a principled framework that leverages pleiotropy and functional annotations in genetic risk prediction for complex diseases. PleioPred uses GWAS summary statistics as its input, and jointly models multiple genetically correlated diseases and a variety of external information, including linkage disequilibrium and diverse functional annotations, to increase the accuracy of risk prediction. Through comprehensive simulations and real data analyses on Crohn's disease, celiac disease and type-II diabetes, we demonstrate that our approach can substantially increase the accuracy of polygenic risk prediction and risk population stratification; i.e., PleioPred can significantly better separate type-II diabetes patients with early and late onset ages, illustrating its potential clinical application. Furthermore, we show that the increment in prediction accuracy is significantly correlated with the genetic correlation between the predicted and jointly modeled diseases.

  7. Protein docking prediction using predicted protein-protein interface

    Directory of Open Access Journals (Sweden)

    Li Bin

    2012-01-01

    Full Text Available Abstract Background Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations. Results We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we have developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, which is followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves the docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering. Conclusion We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy over alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.

  8. Protein docking prediction using predicted protein-protein interface.

    Science.gov (United States)

    Li, Bin; Kihara, Daisuke

    2012-01-10

    Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we have developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, which is followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves the docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering. We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy over alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.

  9. Analysis of spatial distribution of land cover maps accuracy

    Science.gov (United States)

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

    Land cover maps have become one of the most important products of remote sensing science. However, classification errors exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map; these variations affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed, based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to use the spectral domain as an explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain
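
    A minimal sketch of the per-pixel idea, assuming toy data: correctness values (1 = correctly classified) observed at test-sample pixels are interpolated with a Gaussian kernel in a feature space (spatial coordinates or spectral bands) to predict accuracy at unvisited pixels. The bandwidth and features are illustrative, not the paper's settings.

```python
import numpy as np

def predict_accuracy(test_feats, test_correct, target_feats, bandwidth=0.5):
    """Gaussian-kernel interpolation of per-pixel correctness (0/1) values
    from a test sample to arbitrary target pixels."""
    d2 = ((target_feats[:, None, :] - test_feats[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return (w * test_correct).sum(1) / w.sum(1)   # weighted mean correctness

rng = np.random.default_rng(0)
test_feats = rng.normal(size=(200, 3))               # e.g. spectral bands of test pixels
test_correct = (test_feats[:, 0] > 0).astype(float)  # toy correctness labels
targets = rng.normal(size=(5, 3))
print(predict_accuracy(test_feats, test_correct, targets))
```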

  10. Enhancing Predictive Accuracy of Cardiac Autonomic Neuropathy Using Blood Biochemistry Features and Iterative Multitier Ensembles.

    Science.gov (United States)

    Abawajy, Jemal; Kelarev, Andrei; Chowdhury, Morshed U; Jelinek, Herbert F

    2016-01-01

    Blood biochemistry attributes form an important class of tests, routinely collected several times per year for many patients with diabetes. The objective of this study is to investigate the role of blood biochemistry in improving the predictive accuracy of the diagnosis of cardiac autonomic neuropathy (CAN) progression. Blood biochemistry contributes to CAN, and so it is a causative factor that can provide additional power for the diagnosis of CAN, especially in the absence of a complete set of Ewing tests. We introduce automated iterative multitier ensembles (AIME) and investigate their performance in comparison to base classifiers and standard ensemble classifiers for blood biochemistry attributes. AIME incorporate diverse ensembles into several tiers simultaneously and combine them into one automatically generated integrated system, so that one ensemble acts as an integral part of another. We carried out extensive experimental analysis using large datasets from the diabetes screening research initiative (DiScRi) project. The results of our experiments show that several blood biochemistry attributes can be used to supplement the Ewing battery for the detection of CAN in situations where one or more of the Ewing tests cannot be completed because of the individual difficulties faced by each patient in performing the tests. The results show that AIME provide higher accuracy as a multitier CAN classification paradigm. The best predictive accuracy, 99.57%, was obtained by the AIME combining Decorate on the top tier with bagging on the middle tier, based on random forest. Practitioners can use these findings to increase the accuracy of CAN diagnosis.
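
    AIME itself is not publicly specified in this record, but the flavor of a multitier ensemble, one ensemble acting as a building block of another, can be sketched in scikit-learn by bagging whole random forests (the dataset and tier sizes below are toy placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# Tier 1: random forests; tier 2: bagging over whole forests.
# (scikit-learn >= 1.2; older versions name the argument `base_estimator`.)
two_tier = BaggingClassifier(
    estimator=RandomForestClassifier(n_estimators=50, random_state=0),
    n_estimators=10,
    random_state=0,
)
print(cross_val_score(two_tier, X, y, cv=5).mean())
```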

  11. Statewide Quality Improvement Initiative to Reduce Early Elective Deliveries and Improve Birth Registry Accuracy.

    Science.gov (United States)

    Kaplan, Heather C; King, Eileen; White, Beth E; Ford, Susan E; Fuller, Sandra; Krew, Michael A; Marcotte, Michael P; Iams, Jay D; Bailit, Jennifer L; Bouchard, Jo M; Friar, Kelly; Lannon, Carole M

    2018-04-01

    To evaluate the success of a quality improvement initiative to reduce early elective deliveries at less than 39 weeks of gestation and improve birth registry data accuracy rapidly and at scale in Ohio. Between February 2013 and March 2014, participating hospitals were involved in a quality improvement initiative to reduce early elective deliveries at less than 39 weeks of gestation and improve birth registry data. This initiative was designed as a learning collaborative model (group webinars and a single face-to-face meeting) and included individual quality improvement coaching. It was implemented using a stepped wedge design with hospitals divided into three balanced groups (waves) participating in the initiative sequentially. Birth registry data were used to assess hospital rates of nonmedically indicated inductions at less than 39 weeks of gestation. Comparisons were made between groups participating and those not participating in the initiative at two time points. To measure birth registry accuracy, hospitals conducted monthly audits comparing birth registry data with the medical record. Associations were assessed using generalized linear repeated measures models accounting for time effects. Seventy of 72 (97%) eligible hospitals participated. Based on birth registry data, nonmedically indicated inductions at less than 39 weeks of gestation declined in all groups with implementation (wave 1: 6.2-3.2%). While participating in the initiative, waves 1 and 2 saw significant decreases in rates of early elective deliveries as compared with wave 3 (control; P=.018). All waves had significant improvement in birth registry accuracy (wave 1: 80-90%, P=.017; wave 2: 80-100%, P=.002; wave 3: 75-100%). The initiative enabled statewide spread of change strategies to decrease early elective deliveries and improve birth registry accuracy over 14 months and could be used for rapid dissemination of other evidence-based obstetric care practices across states or hospital systems.

  12. Selecting fillers on emotional appearance improves lineup identification accuracy.

    Science.gov (United States)

    Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F

    2014-12-01

    Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  13. Bagging Approach for Increasing Classification Accuracy of CART on Family Participation Prediction in Implementation of Elderly Family Development Program

    Directory of Open Access Journals (Sweden)

    Wisoedhanie Widi Anugrahanti

    2017-06-01

    Classification and Regression Tree (CART) is a machine-learning method in which data exploration is performed with a decision-tree technique. CART is a classification technique with a binary recursive partitioning algorithm, in which a group of data gathered in a space called a node is split into two child nodes (Lewis, 2000). The aim of this study was to predict family participation in the Elderly Family Development program based on family behavior in providing physical, mental, and social care for the elderly. The accuracy of the family participation prediction using the Bagging CART method was assessed with the 1-APER value, sensitivity, specificity, and G-means. With the CART method, a classification accuracy of 97.41% was obtained, with an Apparent Error Rate (APER) of 2.59%. The most important family-behavior determinants used as splitters were society participation (100.00000), medical examination (98.95988), providing nutritious food (68.60476), establishing communication (67.19877), and worship (57.36587). To improve the stability and accuracy of the CART prediction, Bootstrap Aggregating (Bagging) CART was used, yielding 100% accuracy. Bagging CART classified a total of 590 families (84.77%) into the class that implements the Elderly Family Development program.
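
    A minimal sketch of bootstrap aggregating over CART trees, assuming scikit-learn's DecisionTreeClassifier as the base learner and synthetic 0/1 labels; each tree is trained on a bootstrap resample and test predictions are combined by majority vote:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def bagged_cart_predict(X_train, y_train, X_test, n_trees=100, seed=0):
    """Bootstrap-aggregated CART: each tree sees a bootstrap resample of the
    training data; test predictions are combined by majority vote."""
    rng = np.random.default_rng(seed)
    votes = np.zeros((n_trees, len(X_test)))
    for t in range(n_trees):
        idx = rng.integers(0, len(X_train), size=len(X_train))  # bootstrap sample
        tree = DecisionTreeClassifier(random_state=t).fit(X_train[idx], y_train[idx])
        votes[t] = tree.predict(X_test)
    return (votes.mean(axis=0) >= 0.5).astype(int)              # majority vote

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
print(bagged_cart_predict(X[:250], y[:250], X[250:])[:10])
```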

  14. Accuracy improvement of SPACE code using the optimization for CHF subroutine

    International Nuclear Information System (INIS)

    Yang, Chang Keun; Kim, Yo Han; Park, Jong Eun; Ha, Sang Jun

    2010-01-01

    Typically, a subroutine to calculate the critical heat flux (CHF) is included in codes for the safety analysis of nuclear power plants. A CHF subroutine evaluates the CHF phenomenon for arbitrary conditions (temperature, pressure, flow rate, power, etc.). When a safety analysis for a nuclear power plant is performed, the CHF is one of the most important parameters. However, the subroutines used in most codes, such as the Biasi method, estimate values that deviate somewhat from experimental data. Most CHF subroutines can predict reliably only within their range of applicability in pressure, mass flux, void fraction, etc. Even when the most accurate CHF subroutine is used in a high-quality nuclear safety analysis code, there is no assurance that values predicted outside that application range are acceptable. To overcome this difficulty, various approaches to estimating the CHF were examined during the development of the SPACE code, and the six sigma technique was adopted for the examination, as described in this study. The objective of this study is to improve the CHF prediction accuracy of a nuclear power plant safety analysis code using a CHF database and the six sigma technique. The study concluded that the six sigma technique is useful for quantifying the deviation of predicted values from experimental data, and that the CHF prediction method implemented in the SPACE code predicts well compared with other methods.

  15. Accuracy of High-Resolution MRI with Lumen Distention in Rectal Cancer Staging and Circumferential Margin Involvement Prediction

    International Nuclear Information System (INIS)

    Iannicelli, Elsa; Di Renzo, Sara; Ferri, Mario; Pilozzi, Emanuela; Di Girolamo, Marco; Sapori, Alessandra; Ziparo, Vincenzo; David, Vincenzo

    2014-01-01

    To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil, performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. The accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage, for CRM involvement prediction and for N staging. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5% and 96.3%, respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement

  16. Accuracy of High-Resolution MRI with Lumen Distention in Rectal Cancer Staging and Circumferential Margin Involvement Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Iannicelli, Elsa; Di Renzo, Sara [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ferri, Mario [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Pilozzi, Emanuela [Department of Clinical and Molecular Sciences, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Di Girolamo, Marco; Sapori, Alessandra [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ziparo, Vincenzo [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); David, Vincenzo [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy)

    2014-07-01

    To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil, performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. The accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage, for CRM involvement prediction and for N staging. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5% and 96.3%, respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.
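
    The reported metrics follow directly from 2x2 confusion-matrix counts; a small helper (with invented toy counts, not the study's data) makes the definitions explicit:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard accuracy metrics from a 2x2 confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Toy counts for one T stage (illustrative only):
print(diagnostic_metrics(tp=33, fp=3, fn=4, tn=33))
```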

  17. Functional knowledge transfer for high-accuracy prediction of under-studied biological processes.

    Directory of Open Access Journals (Sweden)

    Christopher Y Park

    A key challenge in genetics is identifying the functional roles of genes in pathways. Numerous functional genomics techniques (e.g., machine learning) that predict protein function have been developed to address this question. These methods generally build from existing annotations of genes to pathways and thus are often unable to identify additional genes participating in processes that are not already well studied. Many of these processes are well studied in some organism, but not necessarily in an investigator's organism of interest. Sequence-based search methods (e.g., BLAST) have been used to transfer such annotation information between organisms. We demonstrate that functional genomics can complement traditional sequence similarity to improve the transfer of gene annotations between organisms. Our method transfers annotations only when functionally appropriate, as determined by genomic data, and can be used with any prediction algorithm to combine transferred gene function knowledge with organism-specific high-throughput data to enable accurate function prediction. We show that diverse state-of-the-art machine learning algorithms leveraging functional knowledge transfer (FKT) dramatically improve their accuracy in predicting gene-pathway membership, particularly for processes with little experimental knowledge in an organism. We also show that our method compares favorably to annotation transfer by sequence similarity. Next, we deploy FKT with a state-of-the-art SVM classifier to predict novel genes for 11,000 biological processes across six diverse organisms and expand the coverage of accurate function predictions to processes that are often ignored because of a dearth of annotated genes in an organism. Finally, we perform in vivo experimental investigation in Danio rerio and confirm the regulatory role of our top predicted novel gene, wnt5b, in leftward cell migration during heart development. FKT is immediately applicable to many bioinformatics

  18. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    Anderson, Ryan B.; Bell, James F.; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-01-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼ 3 wt.%. The statistical significance of these improvements was ∼ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically
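
    Method (2) above, clustering all spectra and fitting one PLS2 model per cluster, can be sketched with scikit-learn on stand-in data (random matrices in place of real LIBS spectra and compositions; cluster count and component count are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(120, 40))                 # stand-in LIBS spectra
Y_train = X_train[:, :3] @ rng.normal(size=(3, 2))   # stand-in compositions
X_test = rng.normal(size=(10, 40))

# Cluster all training spectra, then fit one PLS2 model per cluster.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_train)
models = {c: PLSRegression(n_components=5).fit(X_train[km.labels_ == c],
                                               Y_train[km.labels_ == c])
          for c in range(3)}

# Each test spectrum is predicted by the model of its nearest cluster.
pred = np.vstack([models[c].predict(x[None, :])
                  for c, x in zip(km.predict(X_test), X_test)])
print(pred.shape)
```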

  19. Link Prediction Methods and Their Accuracy for Different Social Networks and Network Metrics

    Directory of Open Access Journals (Sweden)

    Fei Gao

    2015-01-01

    Currently, we are experiencing rapid growth in the number of social-based online systems. The availability of the vast amounts of data gathered in those systems brings new challenges when trying to analyse it. One of the intensively researched topics is the prediction of social connections between users. Although a lot of effort has been made to develop new prediction approaches, the existing methods have not been comprehensively analysed. In this paper we investigate the correlation between network metrics and the accuracy of different prediction methods. We selected six time-stamped real-world social networks and the ten most widely used link prediction methods. The results of the experiments show that the performance of some methods has a strong correlation with certain network metrics. We managed to distinguish “prediction friendly” networks, for which most of the prediction methods give good performance, as well as “prediction unfriendly” networks, for which most of the methods result in high prediction error. Correlation analysis between network metrics and the prediction accuracy of prediction methods may form the basis of a meta-learning system that, based on network characteristics, recommends the right prediction method for a given network.
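
    As a small, self-contained example of measuring link prediction performance on a network (here NetworkX's built-in karate club graph, not one of the paper's six networks), one can hide some edges, score hidden and never-present pairs with a standard predictor such as Adamic-Adar, and compute the AUC:

```python
import random
import networkx as nx
from sklearn.metrics import roc_auc_score

random.seed(0)
G = nx.karate_club_graph()
test_neg = random.sample(list(nx.non_edges(G)), 10)   # true non-links
test_pos = random.sample(list(G.edges()), 10)         # links to hide
G.remove_edges_from(test_pos)                         # hide them from the predictor

# Adamic-Adar scores for hidden links and non-links, then ranking quality (AUC).
scores = [p for _, _, p in nx.adamic_adar_index(G, test_pos + test_neg)]
labels = [1] * len(test_pos) + [0] * len(test_neg)
print("Adamic-Adar AUC:", roc_auc_score(labels, scores))
```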

  20. The accuracy of prediction of genomic selection in elite hybrid rye populations surpasses the accuracy of marker-assisted selection and is equally augmented by multiple field evaluation locations and test years.

    Science.gov (United States)

    Wang, Yu; Mette, Michael Florian; Miedaner, Thomas; Gottwald, Marlen; Wilde, Peer; Reif, Jochen C; Zhao, Yusheng

    2014-07-04

    Marker-assisted selection (MAS) and genomic selection (GS) based on genome-wide marker data provide powerful tools to predict the genotypic value of selection material in plant breeding. However, case-to-case optimization of these approaches is required to achieve maximum prediction accuracy with reasonable input. Based on extended field evaluation data for grain yield, plant height, starch content and total pentosan content of elite hybrid rye, derived from testcrosses involving two bi-parental populations genotyped with 1048 molecular markers, we compared the prediction accuracy of MAS and GS in a cross-validation approach. MAS generally delivered lower, and in addition potentially over-estimated, prediction accuracies than GS by ridge regression best linear unbiased prediction (RR-BLUP). The degree of relatedness of the plant material included in the estimation and test sets clearly affected the prediction accuracy of GS. Within each of the two bi-parental populations, accuracies differed depending on the relatedness of the respective parental lines. Across populations, accuracy increased when both populations contributed to the estimation and test sets. In contrast, the accuracy of prediction from an estimation set based on one population to a test set from the other population was low, even though the two bi-parental segregating populations under scrutiny shared one parental line. Limiting the number of locations or years in field testing reduced the prediction accuracy of GS equally, supporting the view that to establish robust GS calibration models, a sufficient number of test locations is of similar importance as extended testing for more than one year. In hybrid rye, genomic selection is superior to marker-assisted selection. However, it achieves high prediction accuracies only for selection candidates closely related to the plant material evaluated in field trials, resulting in a rather pessimistic prognosis for distantly related material.
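
    RR-BLUP is mathematically equivalent to ridge regression on the marker matrix, with the penalty fixed by the variance components; a toy sketch (simulated markers and phenotypes, arbitrary penalty) looks like this:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
M = rng.integers(0, 3, size=(300, 1048)).astype(float)  # marker matrix (0/1/2)
u = rng.normal(0, 0.05, size=1048)                      # many small marker effects
y = M @ u + rng.normal(0, 1.0, size=300)                # simulated phenotype

# RR-BLUP ~ ridge regression on the markers; in practice the penalty is set
# from the residual-to-marker variance ratio rather than fixed by hand.
model = Ridge(alpha=100.0)
print("cross-validated R2:", cross_val_score(model, M, y, cv=5).mean())
```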

  1. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    Science.gov (United States)

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured, as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application to the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples.
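
    A compressed sketch of the reduce-then-classify pipeline, using synthetic "spectra" in place of real hyperspectral pixels: LDA first projects the bands down to a few descriptive components, and a Random Forest classifies the result (the texture features from integral images are omitted here):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in for pixel spectra (200 bands) with 3 infection-severity classes.
X, y = make_classification(n_samples=600, n_features=200, n_informative=20,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# LDA compresses the spectra to (n_classes - 1) descriptive bands before
# the Random Forest, mirroring the reduce-then-classify pipeline.
pipe = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                     RandomForestClassifier(n_estimators=200, random_state=0))
print(cross_val_score(pipe, X, y, cv=5).mean())
```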

  2. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    Science.gov (United States)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
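
    The blending step can be made concrete with a small helper; the ranges, example predictions, and linear weighting below are illustrative of the described scheme rather than the actual ChemCam calibration code:

```python
import numpy as np

def blend_submodels(full_pred, sub_preds, ranges):
    """Choose or blend submodel predictions based on the full-model estimate.
    ranges, e.g.: {"low": (0, 50), "mid": (30, 70), "high": (60, 100)}."""
    names = list(ranges)
    out = np.empty_like(np.asarray(full_pred, dtype=float))
    for i, f in enumerate(full_pred):
        f = min(max(float(f), 0.0), 100.0)           # keep inside the model span
        hits = [n for n in names if ranges[n][0] <= f <= ranges[n][1]]
        if len(hits) == 1:
            out[i] = sub_preds[hits[0]][i]
        else:                                        # overlap region: linear blend
            a, b = hits[0], hits[1]
            lo, hi = ranges[b][0], ranges[a][1]      # bounds of the overlap
            w = (f - lo) / (hi - lo)
            out[i] = (1 - w) * sub_preds[a][i] + w * sub_preds[b][i]
    return out

full = np.array([12.0, 42.0, 88.0])                  # "full" (0-100 wt.%) estimates
subs = {"low": np.array([11.0, 40.0, 70.0]),
        "mid": np.array([15.0, 43.0, 80.0]),
        "high": np.array([30.0, 55.0, 90.0])}
print(blend_submodels(full, subs,
                      {"low": (0, 50), "mid": (30, 70), "high": (60, 100)}))
```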

  3. Does a Structured Data Collection Form Improve The Accuracy of ...

    African Journals Online (AJOL)

    and multiple etiologies for similar presentation. Standardized forms may harmonize the initial assessment, improve accuracy of diagnosis and enhance outcomes. Objectives: To determine the extent to which use of a structured data collection form (SDCF) affected the diagnostic accuracy of AAP. Methodology: A before and ...

  4. Genomic selection: genome-wide prediction in plant improvement.

    Science.gov (United States)

    Desta, Zeratsion Abera; Ortiz, Rodomiro

    2014-09-01

    Association analysis is used to measure relations between markers and quantitative trait loci (QTL). Such estimates ignore genes with small effects that underpin quantitative traits. By contrast, genome-wide selection estimates marker effects across the whole genome of the target population, based on a prediction model developed in the training population (TP). Whole-genome prediction models estimate all marker effects at all loci and capture small QTL effects. Here, we review several genomic selection (GS) models with respect to both prediction accuracy and genetic gain from selection. Phenotypic selection or marker-assisted breeding protocols can be replaced by selection based on whole-genome predictions, in which phenotyping updates the model to build up the prediction accuracy. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Can assimilation of crowdsourced data in hydrological modelling improve flood prediction?

    Science.gov (United States)

    Mazzoleni, Maurizio; Verlaan, Martin; Alfonso, Leonardo; Monego, Martina; Norbiato, Daniele; Ferri, Miche; Solomatine, Dimitri P.

    2017-02-01

    Monitoring stations have been used for decades to properly measure hydrological variables and better predict floods. To this end, methods to incorporate these observations into mathematical water models have also been developed. Moreover, in recent years, continued technological advances, combined with the growing inclusion of citizens in participatory processes related to water resources management, have encouraged the growth of citizen science projects around the globe. In turn, this has stimulated the spread of low-cost sensors that allow citizens to participate in the collection of hydrological data in a more distributed way than classic static physical sensors do. However, two main disadvantages of such crowdsourced data are their irregular availability and their variable accuracy from sensor to sensor, which make them challenging to use in hydrological modelling. This study aims to demonstrate that streamflow data derived from crowdsourced water level observations can improve flood prediction if integrated in hydrological models. Two different hydrological models, applied to four case studies, are considered. Realistic (albeit synthetic) time series are used to represent crowdsourced data in all case studies. We find that the accuracy of the data has much more influence on the model results than the irregular frequency at which the streamflow data are assimilated. This study demonstrates that data collected by citizens, characterized by being asynchronous and inaccurate, can still complement traditional networks formed by few accurate, static sensors and improve the accuracy of flood forecasts.
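
    The core of weighting observations by their accuracy can be illustrated with a one-line Kalman-style update, where an observation's influence scales inversely with its error variance (the numbers are invented; real assimilation schemes update full model states):

```python
def assimilate(model_state, model_var, obs, obs_var):
    """Variance-weighted (Kalman-style) update of a modelled streamflow
    value with one crowdsourced observation of known uncertainty."""
    gain = model_var / (model_var + obs_var)     # trust grows as obs_var shrinks
    new_state = model_state + gain * (obs - model_state)
    new_var = (1 - gain) * model_var
    return new_state, new_var

# An accurate observation pulls the state strongly; a noisy one barely moves it.
print(assimilate(100.0, 25.0, obs=120.0, obs_var=4.0))    # trusted sensor
print(assimilate(100.0, 25.0, obs=120.0, obs_var=400.0))  # noisy citizen report
```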

  6. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    Science.gov (United States)

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  7. A two-stage approach for improved prediction of residue contact maps

    Directory of Open Access Journals (Sweden)

    Pollastri Gianluca

    2006-03-01

    Abstract Background Protein topology representations such as residue contact maps are an important intermediate step towards ab initio prediction of protein structure. Although improvements have occurred over the last years, the problem of accurately predicting residue contact maps from primary sequences is still largely unsolved. Among the reasons for this are the unbalanced nature of the problem (with far fewer examples of contacts than non-contacts), the formidable challenge of capturing long-range interactions in the maps, and the intrinsic difficulty of mapping one-dimensional input sequences into two-dimensional output maps. In order to alleviate these problems and achieve improved contact map predictions, in this paper we split the task into two stages: the prediction of a map's principal eigenvector (PE) from the primary sequence, and the reconstruction of the contact map from the PE and primary sequence. Predicting the PE from the primary sequence amounts to mapping a vector onto a vector. This task is less complex than mapping vectors directly into two-dimensional matrices, since the size of the problem is drastically reduced and so is the length scale of the interactions that need to be learned. Results We develop architectures composed of ensembles of two-layered bidirectional recurrent neural networks to classify the components of the PE into 2, 3 and 4 classes from the protein primary sequence, predicted secondary structure, and hydrophobicity interaction scales. Our predictor, tested on a non-redundant set of 2171 proteins, achieves classification performances of up to 72.6%, 16% above a baseline statistical predictor. We design a system for the prediction of contact maps from the predicted PE. Our results show that predicting maps through the PE yields sizeable gains, especially for long-range contacts, which are particularly critical for accurate protein 3D reconstruction. The final predictor's accuracy on a non-redundant set of 327 targets is 35
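
    The PE itself is straightforward to compute from a known contact map; a toy example with numpy (the 6-residue map is invented):

```python
import numpy as np

def principal_eigenvector(contact_map):
    """Principal eigenvector of a symmetric residue contact map."""
    vals, vecs = np.linalg.eigh(contact_map)   # eigh: for symmetric matrices
    pe = vecs[:, np.argmax(vals)]
    return pe if pe.sum() >= 0 else -pe        # fix the arbitrary sign

# Toy 6-residue map with contacts near the diagonal and one long-range pair.
C = np.eye(6, k=1) + np.eye(6, k=-1)
C[0, 5] = C[5, 0] = 1.0
print(np.round(principal_eigenvector(C), 3))
```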

  8. Can biomechanical variables predict improvement in crouch gait?

    Science.gov (United States)

    Hicks, Jennifer L.; Delp, Scott L.; Schwartz, Michael H.

    2011-01-01

    Many patients respond positively to treatments for crouch gait, yet surgical outcomes are inconsistent and unpredictable. In this study, we developed a multivariable regression model to determine if biomechanical variables and other subject characteristics measured during a physical exam and gait analysis can predict which subjects with crouch gait will demonstrate improved knee kinematics on a follow-up gait analysis. We formulated the model and tested its performance by retrospectively analyzing 353 limbs of subjects who walked with crouch gait. The regression model was able to predict which subjects would demonstrate ‘improved’ and ‘unimproved’ knee kinematics with over 70% accuracy, and was able to explain approximately 49% of the variance in subjects’ change in knee flexion between gait analyses. We found that improvement in stance phase knee flexion was positively associated with three variables that were drawn from knowledge about the biomechanical contributors to crouch gait: i) adequate hamstrings lengths and velocities, possibly achieved via hamstrings lengthening surgery, ii) normal tibial torsion, possibly achieved via tibial derotation osteotomy, and iii) sufficient muscle strength. PMID:21616666

  9. Quantifying and estimating the predictive accuracy for censored time-to-event data with competing risks.

    Science.gov (United States)

    Wu, Cai; Li, Liang

    2018-05-15

    This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.

  10. Typing speed, spelling accuracy, and the use of word-prediction ...

    African Journals Online (AJOL)

    Children with spelling difficulties are limited in their participation in all written school activities. We aimed to investigate the influence of word-prediction as a tool on spelling accuracy and typing speed. To this end, we selected 80 Grade 4 – 6 children with spelling difficulties in a school for special needs to participate

  11. The importance of the accuracy of the experimental data for the prediction of solubility

    Directory of Open Access Journals (Sweden)

    SLAVICA ERIĆ

    2010-04-01

    Aqueous solubility is an important factor influencing several aspects of the pharmacokinetic profile of a drug. Numerous publications present different methodologies for the development of reliable computational models for the prediction of solubility from structure. The quality of such models can be significantly affected by the accuracy of the employed experimental solubility data. In this work, the importance of the accuracy of the experimental solubility data used for model training was investigated. Three data sets were used as training sets: data set 1, containing solubility data collected from various literature sources using a few criteria (n = 319); data set 2, created by substituting 28 values from data set 1 with uniformly determined experimental data from one laboratory (n = 319); and data set 3, created by adding to data set 2 a further 56 compounds whose solubility was also determined under uniform conditions in the same laboratory (n = 375). The selection of the most significant descriptors was performed by the heuristic method, using one-parameter and multi-parameter analysis. The correlations between the most significant descriptors and solubility were established using multi-linear regression analysis (MLR) for all three investigated data sets. Notable differences were observed between the equations corresponding to different data sets, suggesting that models updated with new experimental data need to be additionally optimized. It was successfully shown that the inclusion of uniform experimental data consistently leads to an improvement in the correlation coefficients. These findings contribute to an emerging consensus that improving the reliability of solubility prediction requires data sets that include many diverse compounds for which solubility was measured under standardized conditions.

  12. Accuracy Analysis of a Box-wing Theoretical SRP Model

    Science.gov (United States)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the Beidou satellite navigation system (BDS), a high-accuracy SRP model is necessary for high-precision applications, especially with the establishment of the global BDS in the future; the accuracy of the BDS broadcast ephemeris needs to be improved. We therefore established a box-wing theoretical SRP model with fine structure, adding a conical shadow factor for the Earth and Moon. We verified this SRP model with the GPS Block IIF satellites, using data from the PRN 1, 24, 25 and 27 satellites. The results show that the physical SRP model has higher accuracy for POD and orbit forecasting of GPS IIF satellites than the Bern empirical model, with a 3D RMS of orbit of about 20 centimeters. The POD accuracy of the two models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day and 7-day orbit predictions; the longer the prediction arc, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day and 7-day arcs are 0.4 m, 2.0 m and 10.0 m respectively, versus 0.9 m, 5.5 m and 30 m with the Bern empirical model. We applied this approach to the BDS, derived an SRP model for the Beidou satellites, and tested the model with one month of Beidou data. Initial results show that the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that of our empirical force model, which only estimates forces in the along-track and across-track directions and a y-bias, but the orbit overlap and SLR observation evaluations show some improvement. The remaining empirical force is reduced significantly for the present Beidou constellation.
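
    The building block of any box-wing model is the SRP acceleration of a single flat plate; below is a sketch using the standard plate formula (the plate area, mass, and reflectivities are placeholders, and the eclipse/shadow factor is only noted in a comment):

```python
import numpy as np

P_SUN = 4.56e-6  # solar radiation pressure at 1 AU, N/m^2

def plate_srp_accel(s_hat, n_hat, area, mass, rho_spec, rho_diff):
    """SRP acceleration of one flat plate (the box-wing building block).
    s_hat: unit vector from satellite to Sun; n_hat: plate outward normal."""
    cos_t = np.dot(s_hat, n_hat)
    if cos_t <= 0:            # plate faces away from the Sun
        return np.zeros(3)
    return -(P_SUN * area / mass) * cos_t * (
        (1 - rho_spec) * s_hat + 2 * (rho_spec * cos_t + rho_diff / 3) * n_hat
    )

# A box-wing model sums plate contributions (body faces + solar panels);
# a conical Earth/Moon shadow factor would scale the result between 0 and 1.
s = np.array([1.0, 0.0, 0.0])
print(plate_srp_accel(s, s, area=10.0, mass=1000.0, rho_spec=0.2, rho_diff=0.3))
```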

  13. Improvement of energy expenditure prediction from heart rate during running

    International Nuclear Information System (INIS)

    Charlot, Keyne; Borne, Rachel; Richalet, Jean-Paul; Chapelot, Didier; Pichon, Aurélien; Cornolo, Jérémy; Brugniaux, Julien Vincent

    2014-01-01

    We aimed to develop new equations that predict exercise-induced energy expenditure (EE) during running more accurately than previous ones by including new parameters such as fitness level, body composition and/or running intensity in addition to heart rate (HR). Original equations predicting EE were created from data obtained during three running intensities (25%, 50% and 70% of HR reserve) performed by 50 subjects. Five equations were retained according to their accuracy, assessed from error rates, interchangeability and correlation analyses: one containing only basic parameters, two containing VO2max or speed at VO2max, and two including running speed with or without HR. The accuracy of the equations was further tested in an independent sample during a 40 min validation test at 50% of HR reserve. It appeared that: (1) the new basic equation was more accurate than pre-existing equations (R² = 0.809 vs. 0.737, respectively); (2) the prediction of EE was more accurate with the addition of VO2max (R² = 0.879); and (3) the equations containing running speed were the most accurate and were considered to have good agreement with indirect calorimetry. In conclusion, EE estimation during running might be significantly improved by including running speed in the predictive models, a parameter readily available from a treadmill or GPS.

  14. ESLpred2: improved method for predicting subcellular localization of eukaryotic proteins

    Directory of Open Access Journals (Sweden)

    Raghava Gajendra PS

    2008-11-01

    Abstract Background The expansion of raw protein sequence databases in the post-genomic era and the availability of fresh annotated sequences for major localizations particularly motivated us to introduce a new, improved version of our previously developed eukaryotic subcellular localization prediction method, "ESLpred". Since the subcellular localization of a protein offers essential clues about its function, the availability of a localization predictor would aid and expedite protein deciphering studies. However, the robustness of a predictor is highly dependent on the quality of the dataset and the extracted protein attributes; hence, it becomes imperative to improve the performance of the presently available method using the latest dataset and crucial input features. Results Here, we describe the improvement in prediction performance obtained for our most popular ESLpred method using new crucial features as input to a Support Vector Machine (SVM). In addition, a recently compiled, highly non-redundant dataset encompassing three kingdom-specific protein sequence sets (1198 fungal, 2597 animal and 491 plant sequences) was included in the present study. First, using evolutionary information in the form of profile composition along with whole and N-terminal sequence composition as an input feature vector of 440 dimensions, overall accuracies of 72.7, 75.8 and 74.5% were achieved, respectively, after five-fold cross-validation. Further enhancement in performance was observed when similarity-search-based results were coupled with whole and N-terminal sequence composition along with profile composition, yielding overall accuracies of 75.9, 80.8 and 76.6% respectively; the best accuracies reported to date on the same datasets. Conclusion These results provide confidence about the reliability and accurate prediction of the SVM modules generated in the present study using sequence and profile compositions along with similarity search
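
    The sequence-composition part of such a feature vector is easy to sketch; below, a 20-dimensional whole-sequence composition feeds a toy SVM (the sequences and labels are invented, and the profile, N-terminal, and similarity-search features that give ESLpred2 its full accuracy are omitted):

```python
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Whole-sequence amino acid composition (20-dim feature vector)."""
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

# Toy sequences and localization labels; real input would be the
# fungal/animal/plant datasets with the additional feature blocks.
seqs = ["MKKLLPTAACDE", "MLSRAVCGTSRQ", "MKWVTFISLLLL", "MDDDIAALVVDN"]
labels = [0, 1, 0, 1]
X = [composition(s) for s in seqs]

clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([composition("MKKLIPSAACDD")]))
```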

  15. Improving Accuracy for Image Fusion in Abdominal Ultrasonography

    Directory of Open Access Journals (Sweden)

    Caroline Ewertsen

    2012-08-01

    Image fusion involving real-time ultrasound (US) is a technique where previously recorded computed tomography (CT) or magnetic resonance images (MRI) are reformatted in a projection that fits the real-time US images after an initial co-registration. The co-registration aligns the images by means of common planes or points. We evaluated the accuracy of the alignment when varying parameters such as patient position, respiratory phase and distance from the co-registration points/planes. We performed a total of 80 co-registrations and obtained the highest accuracy when the respiratory phase for the co-registration procedure was the same as when the CT or MRI was obtained. Furthermore, choosing co-registration points/planes close to the area of interest also improved the accuracy. With all settings optimized, a mean error of 3.2 mm was obtained. We conclude that image fusion involving real-time US is an accurate method for abdominal examinations and that its accuracy is influenced by various adjustable factors that should be kept in mind.

  16. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Ryan B., E-mail: randerson@astro.cornell.edu [Cornell University Department of Astronomy, 406 Space Sciences Building, Ithaca, NY 14853 (United States); Bell, James F., E-mail: Jim.Bell@asu.edu [Arizona State University School of Earth and Space Exploration, Bldg.: INTDS-A, Room: 115B, Box 871404, Tempe, AZ 85287 (United States); Wiens, Roger C., E-mail: rwiens@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States); Morris, Richard V., E-mail: richard.v.morris@nasa.gov [NASA Johnson Space Center, 2101 NASA Parkway, Houston, TX 77058 (United States); Clegg, Samuel M., E-mail: sclegg@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States)

    2012-04-15

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼ 3 wt.%. The statistical significance of these improvements was ∼ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and

  17. Improved Prediction of Preterm Delivery Using Empirical Mode Decomposition Analysis of Uterine Electromyography Signals.

    Directory of Open Access Journals (Sweden)

    Peng Ren

    Preterm delivery increases the risk of infant mortality and morbidity, and therefore developing reliable methods for predicting its likelihood is of great importance. Previous work using uterine electromyography (EMG) recordings has shown that they may provide a promising and objective way of predicting risk of preterm delivery. However, to date, attempts at utilizing computational approaches to achieve sufficient predictive confidence, in terms of area under the curve (AUC) values, have not achieved the high discrimination accuracy that a clinical application requires. In our study, we propose a new analytical approach for assessing the risk of preterm delivery using EMG recordings which first employs Empirical Mode Decomposition (EMD) to obtain their Intrinsic Mode Functions (IMFs). Next, the entropy values of both the instantaneous amplitude and the instantaneous frequency of the first ten IMF components are computed in order to derive ratios of these two distinct components as features. The discrimination accuracy of this approach compared to those proposed previously was then calculated using six representative classifiers. Finally, three different electrode positions were analyzed for their prediction accuracy of preterm delivery in order to establish which uterine EMG recording location yielded the optimal signal data. Overall, our results show a clear improvement in the prediction accuracy of preterm delivery risk compared with previous approaches, achieving an impressive maximum AUC value of 0.986 when using signals from an electrode positioned below the navel. In sum, this provides a promising new method for analyzing uterine EMG signals to permit accurate clinical assessment of preterm delivery risk.
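
    A sketch of the feature extraction, assuming the third-party PyEMD package (pip install EMD-signal) for the decomposition and SciPy's Hilbert transform for the instantaneous amplitude/frequency; the histogram-entropy estimate and the ratio feature are simplified readings of the described approach:

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import entropy
from PyEMD import EMD   # third-party package, assumed installed

def imf_entropy_ratios(signal, fs, n_imfs=10, bins=50):
    """Entropy of instantaneous amplitude vs. instantaneous frequency for
    each IMF, returned as amplitude/frequency entropy ratios."""
    imfs = EMD().emd(signal)[:n_imfs]
    ratios = []
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)
        freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
        h_amp = entropy(np.histogram(amp, bins=bins)[0] + 1)   # +1 avoids log(0)
        h_freq = entropy(np.histogram(freq, bins=bins)[0] + 1)
        ratios.append(h_amp / h_freq)
    return ratios

fs = 20.0                                   # Hz, an illustrative sampling rate
t = np.arange(0, 30, 1 / fs)
sig = np.sin(2 * np.pi * 0.4 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
print(imf_entropy_ratios(sig, fs))
```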

  18. Machine-learning scoring functions to improve structure-based binding affinity prediction and virtual screening.

    Science.gov (United States)

    Ain, Qurrat Ul; Aleksandrova, Antoniya; Roessler, Florian D; Ballester, Pedro J

    2015-01-01

    Docking tools to predict whether and how a small molecule binds to a target can be applied if a structural model of the target is available. The reliability of docking depends, however, on the accuracy of the adopted scoring function (SF). Despite intense research over the years, improving the accuracy of SFs for structure-based binding affinity prediction or virtual screening has proven to be a challenging task for any class of method. New SFs based on modern machine-learning regression models, which do not impose a predetermined functional form and thus are able to exploit effectively much larger amounts of experimental data, have recently been introduced. These machine-learning SFs have been shown to outperform a wide range of classical SFs at both binding affinity prediction and virtual screening. The emerging picture from these studies is that the classical approach of using linear regression with a small number of expert-selected structural features can be strongly improved by a machine-learning approach based on nonlinear regression allied with comprehensive data-driven feature selection. Furthermore, the performance of classical SFs does not grow with larger training datasets, and hence this performance gap is expected to widen as more training data become available in the future. Other topics covered in this review include predicting the reliability of a SF on a particular target class, generating synthetic data to improve predictive performance, and modeling guidelines for SF development. WIREs Comput Mol Sci 2015, 5:405-424. doi: 10.1002/wcms.1225
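
    The essence of such machine-learning SFs, nonlinear regression on structural descriptors instead of a fixed functional form, can be sketched with random-forest regression on RF-Score-style atom-pair counts (the features and affinities below are simulated, not real protein-ligand data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
# Stand-in for RF-Score-style features: counts of protein-ligand atom-type
# pairs within a distance cutoff (36 pair types in the original descriptor).
X = rng.poisson(5.0, size=(500, 36)).astype(float)
pK = 0.02 * X[:, :5].sum(1) + rng.normal(0, 0.3, 500)  # toy binding affinities

# Nonlinear regression in place of a predetermined functional form.
model = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
model.fit(X, pK)
print("out-of-bag R2:", round(model.oob_score_, 3))
```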

  19. Can Translation Improve EFL Students' Grammatical Accuracy?

    Directory of Open Access Journals (Sweden)

    Carol Ebbert-Hübner

    2018-01-01

    This report focuses on research results from a project completed at Trier University in December 2015 that provides insight into whether a monolingual group of learners can improve their grammatical accuracy and reduce interference mistakes in their English via contrastive analysis and translation instruction and activities. Contrastive analysis and translation (CAT) instruction in this setting focusses on comparing grammatical differences between the students' dominant language (German) and English, and on practice activities where sentences or short texts are translated from German into English. The results of a pre- and post-test administered in the first and final week of a translation class were compared to two other class types: a grammar class which consisted of form-focused instruction but not translation, and a process-approach essay writing class where students received feedback on their written work throughout the semester. The results of our study indicate that with C1 level EAP students, more improvement in grammatical accuracy is seen through teaching with CAT than through explicit grammar instruction or through language feedback on written work alone. These results indicate that CAT does indeed have a place in modern language classes.

  20. Accuracy of an improved device for remote measuring of tree-trunk diameters

    International Nuclear Information System (INIS)

    Matsushita, T.; Kato, S.; Komiyama, A.

    2000-01-01

    For measuring the diameters of tree trunks from a distant position, a device using a laser beam was recently developed by Kantou. We improved this device to serve our own practical purposes. The improved device consists of a 1-m-long metal caliper and a small telescope sliding smoothly along it. Using the cross hairs in the scope, one can sight both edges of an object against the caliper and calculate its length. The laser beam is used just for guiding the telescopic sights to the correct positions on the object. In this study, the accuracy of the new device was examined by measuring objects of differing lengths while varying the distance to the object and the angle of elevation. Since the experiments showed absolute measurement errors of less than 3 mm in every case, this new device should be suitable for the measurement of trunk diameters in the field

  1. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    Science.gov (United States)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing offers limited measurement performance and suffers from large errors, and so cannot meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement in timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point at which the signal crosses a fixed threshold forward by amplifying the received signal multiple times. Then the timing information is sampled and the timing points are fitted with algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. In this way, the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by multiple amplification of the received signal and fitting of the parameters, achieving a timing accuracy of 4.63 ps.
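
    A minimal sketch of the leading-edge idea described above, with invented pulse parameters: the crossing of a fixed discrimination threshold moves forward as the received pulse is amplified, and the crossing times can be fitted to model the timing walk. This uses NumPy only, and the plain polynomial fit stands in for, rather than reproduces, the authors' MATLAB procedure.

    ```python
    import numpy as np

    # Gaussian return pulse sampled at 10 ps resolution (illustrative values).
    t = np.arange(0.0, 20.0, 0.01)  # time in ns
    pulse = np.exp(-0.5 * ((t - 10.0) / 1.5) ** 2)

    def leading_edge_time(signal, t, threshold):
        """Time of the first upward threshold crossing, by linear interpolation."""
        idx = np.argmax(signal >= threshold)
        t0, t1 = t[idx - 1], t[idx]
        s0, s1 = signal[idx - 1], signal[idx]
        return t0 + (threshold - s0) * (t1 - t0) / (s1 - s0)

    threshold = 0.4
    gains = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    crossings = np.array([leading_edge_time(g * pulse, t, threshold) for g in gains])

    # The fixed-threshold crossing moves forward with amplification; fitting
    # the crossing times lets one model (and correct) the timing walk.
    fit = np.polyfit(np.log(gains), crossings, deg=2)
    print("crossing times (ns):", np.round(crossings, 4))
    print("quadratic fit in log(gain):", np.round(fit, 4))
    ```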

  2. Partition dataset according to amino acid type improves the prediction of deleterious non-synonymous SNPs

    International Nuclear Information System (INIS)

    Yang, Jing; Li, Yuan-Yuan; Li, Yi-Xue; Ye, Zhi-Qiang

    2012-01-01

    Highlights: Proper dataset partition can improve the prediction of deleterious nsSNPs. Partition according to the original residue type at the nsSNP site is a good criterion. A similar strategy is expected to be promising in other machine learning problems. Abstract: Many non-synonymous SNPs (nsSNPs) are associated with diseases, and numerous machine learning methods have been applied to train classifiers for sorting disease-associated nsSNPs from neutral ones. The continuously accumulating nsSNP data allow us to further explore better prediction approaches. In this work, we partitioned the training data into 20 subsets according to either the original or the substituted amino acid type at the nsSNP site. Using support vector machines (SVMs), training classification models on each subset resulted in an overall accuracy of 76.3% or 74.9%, depending on which of the two partition criteria was used, while training on the whole dataset obtained an accuracy of only 72.6%. Moreover, when the dataset was instead divided randomly into 20 subsets, the corresponding accuracy was only 73.2%. Our results demonstrate that partitioning the whole training dataset into subsets properly, i.e., according to the residue type at the nsSNP site, significantly improves the performance of the trained classifiers, which should be valuable in developing better tools for predicting the disease association of nsSNPs.
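
    The partitioning strategy can be sketched as follows, using synthetic data in place of real nsSNP features; the residue-dependent class boundaries are an assumption made purely so that partitioning has something to exploit. One SVM is trained on the pooled data and one per residue-type subset, with the subset accuracies averaged using subset-size weights.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")

    # Synthetic stand-in: 2000 nsSNPs with 10 numeric features, a residue
    # label, and a binary class (deleterious vs neutral) whose feature-class
    # relation differs by residue type.
    n = 2000
    residues = rng.choice(AMINO_ACIDS, size=n)
    X = rng.normal(size=(n, 10))
    shift = {aa: rng.normal(0.0, 1.0, size=10) for aa in AMINO_ACIDS}
    logits = np.array([X[i] @ shift[residues[i]] for i in range(n)])
    y = (logits + rng.normal(0.0, 0.5, size=n) > 0).astype(int)

    # One classifier on the pooled data...
    pooled_acc = cross_val_score(SVC(), X, y, cv=5).mean()

    # ...versus one classifier per residue-type subset.
    subset_accs, weights = [], []
    for aa in AMINO_ACIDS:
        mask = residues == aa
        if mask.sum() < 50:
            continue
        subset_accs.append(cross_val_score(SVC(), X[mask], y[mask], cv=5).mean())
        weights.append(mask.sum())
    partitioned_acc = np.average(subset_accs, weights=weights)

    print(f"pooled model accuracy:      {pooled_acc:.3f}")
    print(f"partitioned model accuracy: {partitioned_acc:.3f}")
    ```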

  3. Video image analysis in the Australian meat industry - precision and accuracy of predicting lean meat yield in lamb carcasses.

    Science.gov (United States)

    Hopkins, D L; Safari, E; Thompson, J M; Smith, C R

    2004-06-01

    A wide selection of lamb types of mixed sex (ewes and wethers) were slaughtered at a commercial abattoir and during this process images of 360 carcasses were obtained online using the VIAScan® system developed by Meat and Livestock Australia. Soft tissue depth at the GR site (thickness of tissue over the 12th rib 110 mm from the midline) was measured by an abattoir employee using the AUS-MEAT sheep probe (PGR). Another measure of this thickness was taken in the chiller using a GR knife (NGR). Each carcass was subsequently broken down to a range of trimmed boneless retail cuts and the lean meat yield determined. The current industry model for predicting meat yield uses hot carcass weight (HCW) and tissue depth at the GR site. A low level of accuracy and precision was found when HCW and PGR were used to predict lean meat yield (R(2)=0.19, r.s.d.=2.80%), which could be improved markedly when PGR was replaced by NGR (R(2)=0.41, r.s.d.=2.39%). If the GR measures were replaced by 8 VIAScan® measures then greater prediction accuracy could be achieved (R(2)=0.52, r.s.d.=2.17%). A similar result was achieved when the model was based on principal components (PCs) computed from the 8 VIAScan® measures (R(2)=0.52, r.s.d.=2.17%). The use of PCs also improved the stability of the model compared to a regression model based on HCW and NGR. The transportability of the models was tested by randomly dividing the data set and comparing coefficients and the level of accuracy and precision. Those models based on PCs were superior to those based on regression. It is demonstrated that with the appropriate modeling the VIAScan® system offers a workable method for predicting lean meat yield automatically.

  4. A Kolmogorov-Smirnov Based Test for Comparing the Predictive Accuracy of Two Sets of Forecasts

    Directory of Open Access Journals (Sweden)

    Hossein Hassani

    2015-08-01

    This paper introduces a complementary statistical test for distinguishing between the predictive accuracy of two sets of forecasts. We propose a non-parametric test founded upon the principles of the Kolmogorov-Smirnov (KS) test, referred to as the KS Predictive Accuracy (KSPA) test. The KSPA test serves two distinct purposes. First, it seeks to determine whether there exists a statistically significant difference between the distributions of the forecast errors; second, it exploits the principles of stochastic dominance to determine whether the forecasts with the lower error also report a stochastically smaller error than forecasts from a competing model, thereby enabling a distinction between the predictive accuracy of the forecasts. We perform a simulation study of the size and power of the proposed test and report the results for different noise distributions, sample sizes and forecasting horizons. The simulation results indicate that the KSPA test is correctly sized and robust in the face of varying forecasting horizons and sample sizes, with significant accuracy gains reported especially in the case of small sample sizes. Real-world applications are also considered to illustrate the applicability of the proposed KSPA test in practice.
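
    In practice the two steps described above correspond closely to standard two-sample KS tests on the forecast-error distributions. A minimal sketch with invented error samples follows; it uses scipy's ks_2samp rather than the authors' own implementation.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Out-of-sample forecast errors from two competing models (invented data).
    errors_a = rng.normal(0.0, 1.0, size=120)
    errors_b = rng.normal(0.0, 1.5, size=120)

    # Step 1 (two-sided): do the absolute-error distributions differ at all?
    d_two, p_two = ks_2samp(np.abs(errors_a), np.abs(errors_b))

    # Step 2 (one-sided): with alternative="greater", the alternative
    # hypothesis is that the CDF of the first sample lies above the second's,
    # i.e. model A's absolute errors are stochastically smaller than model B's.
    d_one, p_one = ks_2samp(np.abs(errors_a), np.abs(errors_b),
                            alternative="greater")

    print(f"two-sided KS: D = {d_two:.3f}, p = {p_two:.4f}")
    print(f"one-sided KS: D = {d_one:.3f}, p = {p_one:.4f}")
    ```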

  5. Using Genetic Distance to Infer the Accuracy of Genomic Prediction.

    Directory of Open Access Journals (Sweden)

    Marco Scutari

    2016-09-01

    The prediction of phenotypic traits using high-density genomic data has many applications, such as the selection of plants and animals of commercial interest, and it is expected to play an increasing role in medical diagnostics. Statistical models used for this task are usually tested using cross-validation, which implicitly assumes that new individuals (whose phenotypes we would like to predict) originate from the same population the genomic prediction model is trained on. In this paper we propose an approach based on clustering and resampling to investigate the effect of increasing genetic distance between training and target populations when predicting quantitative traits. This is important for plant and animal genetics, where genomic selection programs rely on the precision of predictions in future rounds of breeding. Therefore, estimating how quickly predictive accuracy decays is important in deciding which training population to use and how often the model has to be recalibrated. We find that the correlation between true and predicted values decays approximately linearly with respect to either F_ST or mean kinship between the training and the target populations. We illustrate this relationship using simulations and a collection of data sets from mouse, wheat and human genetics.

  6. Prediction of parturition in dogs and cats: accuracy at different gestational ages.

    Science.gov (United States)

    Beccaglia, M; Luvoni, G C

    2012-12-01

    In bitches and queens, the ultrasonographic measurement of extrafoetal and foetal structures allows the evaluation of gestational age and the prediction of the parturition term over an extended period of time. The aim of this study was to investigate whether the accuracy of parturition date prediction is affected by the week of pregnancy in which the ultrasonographic examination is performed. The results were obtained by retrospective analysis on a gestational-period basis (from week 4 to week 9 of pregnancy) of 495 ultrasonographic examinations of pregnant bitches (small and medium size) and 60 of pregnant queens. They demonstrated that a similar accuracy (p > 0.05) was obtained by the measurement of the inner chorionic cavity at weeks 4 and 5 of pregnancy (± 1 day, 81% vs 67.7%; ± 2 days, 93.1% vs 85.9%). Accuracy (± 1 day) based on biparietal (BP) measurement was similar at weeks 5 and 6 of pregnancy (78.6% vs 78.9%; p > 0.05), whereas a significant decrease (p < 0.05) was observed later in gestation. Prediction of the parturition term is thus highly consistent up to weeks 6 and 8 of gestation, respectively. © 2012 Blackwell Verlag GmbH.

  7. Improved prediction of reservoir behavior through integration of quantitative geological and petrophysical data

    Energy Technology Data Exchange (ETDEWEB)

    Auman, J. B.; Davies, D. K.; Vessell, R. K.

    1997-08-01

    A methodology that promises improved reservoir characterization and prediction of permeability, production and injection behavior during primary and enhanced recovery operations was demonstrated. The method is based on identifying intervals of unique pore geometry by a combination of image analysis techniques and traditional petrophysical measurements, which are used to calculate rock type and estimate permeability and saturation. Results from a complex carbonate reservoir and a sandstone reservoir were presented as illustrative examples of the versatility and high accuracy of this method in predicting reservoir quality. 16 refs., 5 tabs., 14 figs.

  8. Improving the accuracy of dynamic mass calculation

    Directory of Open Access Journals (Sweden)

    Oleksandr F. Dashchenko

    2015-06-01

    With the acceleration of goods transportation, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can only be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determining the weight of moving transport is only possible by appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock that increases the accuracy of dynamic mass measurement, in particular for a wagon in motion. In addition to time-series methods, preliminary filtering is used to improve the accuracy of the calculation. The results of the simulation are presented.
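
    The preliminary filtering step can be illustrated with a toy simulation: a static wagon load overlaid with a low-frequency bounce and sensor noise, smoothed with a moving-average FIR filter before the mass is estimated. All signal parameters below are invented; the paper's actual filtering and time-series methods are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated strain-gauge signal from a wagon on the scale, sampled at
    # 1 kHz: a 20 t static load plus a ~3 Hz bounce and measurement noise.
    fs = 1000.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    true_mass = 20.0
    signal = (true_mass
              + 1.5 * np.sin(2 * np.pi * 3.0 * t)   # wagon oscillation
              + rng.normal(0.0, 0.4, size=t.size))  # sensor noise

    def moving_average(x, window):
        """Simple FIR low-pass used as the preliminary filtering step."""
        kernel = np.ones(window) / window
        return np.convolve(x, kernel, mode="valid")

    # A window spanning one oscillation period strongly attenuates the bounce.
    filtered = moving_average(signal, window=int(fs / 3.0))

    print(f"raw:      mean {signal.mean():.3f} t, spread {signal.std():.3f} t")
    print(f"filtered: mean {filtered.mean():.3f} t, spread {filtered.std():.3f} t")
    ```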

  9. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and provide a posteriori error estimators. The accuracy improvement is achieved with new equations that make the numerical procedure free of truncation errors, and with spatial reconstructions of the angular fluxes that are more accurate than those used to date. An a posteriori error estimator is rigorously derived for one-dimensional systems that, for certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges in which the proposed approximations are valid.

  10. Considering Organic Carbon for Improved Predictions of Clay Content from Water Vapor Sorption

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per

    2014-01-01

    Accurate determination of the soil clay fraction (CF) is of crucial importance for characterization of numerous environmental, agricultural, and engineering processes. Because traditional methods for measurement of the CF are laborious and susceptible to errors, regression models relating the CF to water vapor sorption isotherms that can be rapidly measured with a fully automated vapor sorption analyzer are a viable alternative. In this presentation we evaluate the performance of recently developed regression models based on comparison with standard CF measurements for soils with high organic carbon (OC) content and propose a modification to improve prediction accuracy. Evaluation of the CF prediction accuracy for 29 soils with clay contents ranging from 6 to 25% and with OC contents from 2.0 to 8.4% showed that the models worked reasonably well for all soils when the OC content was below 2...

  11. Appropriate Combination of Artificial Intelligence and Algorithms for Increasing Predictive Accuracy Management

    Directory of Open Access Journals (Sweden)

    Shahram Gilani Nia

    2010-03-01

    In this paper a simple and effective expert system for predicting random data fluctuations over short-term periods is established. The evaluation process introduces Fourier series and Grey-Markov chain prediction models and compares them with a combined Grey-Fourier-Markov prediction model; the combined results are used to create an artificial-intelligence expert system that increases the effectiveness of predicting random fluctuations in most data management programs. The outcome of this study is an artificial-intelligence algorithm that helps create a computer-based system in which experts can correctly and accurately predict short-term and unstable situations. To test the effectiveness of the presented algorithm, previously studied data (Chen Tzay Len, 2008) and data on tourism demand for Iran are used. Results for the two countries show that the model output has high accuracy.

  12. Development and application of a statistical methodology to evaluate the predictive accuracy of building energy baseline models

    Energy Technology Data Exchange (ETDEWEB)

    Granderson, Jessica [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). Energy Technologies Area Div.; Price, Phillip N. [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). Energy Technologies Area Div.

    2014-03-01

    This paper documents the development and application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings. The methodology complements the principles addressed in resources such as ASHRAE Guideline 14 and the International Performance Measurement and Verification Protocol. It requires fitting a baseline model to data from a "training period" and using the model to predict total electricity consumption during a subsequent "prediction period." We illustrate the methodology by evaluating five baseline models using data from 29 buildings. The training period and prediction period were varied, and model predictions of daily, weekly, and monthly energy consumption were compared to meter data to determine model accuracy. Several metrics were used to characterize the accuracy of the predictions, and in some cases the best-performing model as judged by one metric was not the best performer when judged by another metric.
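
    The evaluation loop described above (fit on a training period, predict a subsequent prediction period, score the predictions) can be sketched as follows. The data generator, the two-feature baseline model, and the choice of NMBE and CV(RMSE) as metrics are illustrative assumptions, not the paper's five models or its full metric suite.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    def make_period(days):
        """Synthetic daily electricity use driven by temperature and weekday."""
        temp = 15.0 + 10.0 * np.sin(np.arange(days) / 58.0) + rng.normal(0, 2, days)
        weekday = (np.arange(days) % 7 < 5).astype(float)
        cooling = np.maximum(temp - 18.0, 0.0)
        kwh = 200.0 + 8.0 * cooling + 40.0 * weekday + rng.normal(0.0, 10.0, days)
        return np.column_stack([cooling, weekday]), kwh

    X_train, y_train = make_period(365)      # "training period"
    X_pred, y_actual = make_period(180)      # subsequent "prediction period"

    baseline = LinearRegression().fit(X_train, y_train)
    y_hat = baseline.predict(X_pred)

    # Two common goodness-of-prediction metrics for baseline models.
    nmbe = 100.0 * (y_actual - y_hat).sum() / (y_actual.mean() * len(y_hat))
    cv_rmse = 100.0 * np.sqrt(np.mean((y_actual - y_hat) ** 2)) / y_actual.mean()
    print(f"NMBE = {nmbe:.2f}%, CV(RMSE) = {cv_rmse:.2f}%")
    ```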

  13. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test-set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
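
    A toy version of sub-model blending, with synthetic "spectra" and an invented 40-60% overlap region: a full-range PLS model selects between low-range and high-range PLS sub-models, whose predictions are blended linearly across the overlap. This follows the general scheme described above, not the exact ChemCam calibration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic "spectra": 500 samples x 200 channels whose response to the
    # target element is composition dependent (a matrix-effect stand-in).
    n, p = 500, 200
    y = rng.uniform(0.0, 100.0, size=n)           # element concentration, wt.%
    basis = rng.normal(size=(2, p))
    X = (np.outer(y, basis[0]) + np.outer(np.sqrt(y), basis[1]) * 40.0
         + rng.normal(0.0, 5.0, size=(n, p)))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    full = PLSRegression(n_components=8).fit(X_tr, y_tr)
    low = PLSRegression(n_components=8).fit(X_tr[y_tr < 60], y_tr[y_tr < 60])
    high = PLSRegression(n_components=8).fit(X_tr[y_tr >= 40], y_tr[y_tr >= 40])

    def blended_predict(Xs):
        ref = full.predict(Xs).ravel()   # the full model picks the sub-model
        w = np.clip((ref - 40.0) / 20.0, 0.0, 1.0)  # 40-60% overlap region
        return (1 - w) * low.predict(Xs).ravel() + w * high.predict(Xs).ravel()

    rmsep_full = np.sqrt(np.mean((full.predict(X_te).ravel() - y_te) ** 2))
    rmsep_blend = np.sqrt(np.mean((blended_predict(X_te) - y_te) ** 2))
    print(f"RMSEP full model: {rmsep_full:.2f}  blended sub-models: {rmsep_blend:.2f}")
    ```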

  14. Estimating the Accuracy of the Chedoke-McMaster Stroke Assessment Predictive Equations for Stroke Rehabilitation.

    Science.gov (United States)

    Dang, Mia; Ramsaran, Kalinda D; Street, Melissa E; Syed, S Noreen; Barclay-Goddard, Ruth; Stratford, Paul W; Miller, Patricia A

    2011-01-01

    To estimate the predictive accuracy and clinical usefulness of the Chedoke-McMaster Stroke Assessment (CMSA) predictive equations. A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from -0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted.

  15. Accuracy test for link prediction in terms of similarity index: The case of WS and BA models

    Science.gov (United States)

    Ahn, Min-Woo; Jung, Woo-Sung

    2015-07-01

    Link prediction is a technique that uses the topological information in a given network to infer missing links in it. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we therefore apply link prediction to two network models: the Watts-Strogatz (WS) model and the Barabási-Albert (BA) model. Through these network models we attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes and, in the WS model, the rewiring probability). Six similarity indices are used, with precision and the area under the ROC curve (AUC) as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
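
    The AUC evaluation used in such studies can be reproduced in a few lines: hide a fraction of the edges of a WS network, score node pairs with a similarity index (common neighbors here, one of the standard six), and count how often a hidden edge outscores a random non-edge. The network size and parameters below are arbitrary choices.

    ```python
    import random
    import networkx as nx

    random.seed(0)

    # Watts-Strogatz network: 500 nodes, mean degree 10, rewiring prob. 0.1.
    G = nx.watts_strogatz_graph(n=500, k=10, p=0.1, seed=0)

    # Hide 10% of the edges as the "missing" probe set.
    edges = list(G.edges())
    random.shuffle(edges)
    probe = edges[: len(edges) // 10]
    G.remove_edges_from(probe)

    def cn_score(G, u, v):
        """Common-neighbors similarity index."""
        return len(list(nx.common_neighbors(G, u, v)))

    # AUC: probability that a probe (missing) link scores higher than a
    # randomly chosen nonexistent link (ties count 0.5).
    nodes = list(G.nodes())
    hits, trials = 0.0, 5000
    for _ in range(trials):
        u, v = random.choice(probe)
        a, b = random.sample(nodes, 2)
        while G.has_edge(a, b) or (a, b) in probe or (b, a) in probe:
            a, b = random.sample(nodes, 2)
        s_missing, s_absent = cn_score(G, u, v), cn_score(G, a, b)
        hits += 1.0 if s_missing > s_absent else 0.5 if s_missing == s_absent else 0.0

    print(f"common-neighbors AUC on WS(500, 10, 0.1): {hits / trials:.3f}")
    ```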

  16. Explained variation and predictive accuracy in general parametric statistical models: the role of model misspecification

    DEFF Research Database (Denmark)

    Rosthøj, Susanne; Keiding, Niels

    2004-01-01

    When studying a regression model, measures of explained variation are used to assess the degree to which the covariates determine the outcome of interest. Measures of predictive accuracy are used to assess the accuracy of the predictions based on the covariates and the regression model. We give a detailed and general introduction to the two measures and the estimation procedures. The framework we set up allows for a study of the effect of misspecification on the quantities estimated. We also introduce a generalization to survival analysis.

  17. Dynamic Filtering Improves Attentional State Prediction with fNIRS

    Science.gov (United States)

    Harrivel, Angela R.; Weissman, Daniel H.; Noll, Douglas C.; Huppert, Theodore; Peltier, Scott J.

    2016-01-01

    Brain activity can predict a person's level of engagement in an attentional task. However, estimates of brain activity are often confounded by measurement artifacts and systemic physiological noise. The optimal method for filtering this noise - thereby increasing such state prediction accuracy - remains unclear. To investigate this, we asked study participants to perform an attentional task while we monitored their brain activity with functional near infrared spectroscopy (fNIRS). We observed higher state prediction accuracy when noise in the fNIRS hemoglobin [Hb] signals was filtered with a non-stationary (adaptive) model as compared to static regression (84% ± 6% versus 72% ± 15%).

  18. Predictive Validity and Accuracy of Oral Reading Fluency for English Learners

    Science.gov (United States)

    Vanderwood, Michael L.; Tung, Catherine Y.; Checca, C. Jason

    2014-01-01

    The predictive validity and accuracy of an oral reading fluency (ORF) measure for a statewide assessment in English language arts was examined for second-grade native English speakers (NESs) and English learners (ELs) with varying levels of English proficiency. In addition to comparing ELs with native English speakers, the impact of English…

  19. Improving the accuracy of Møller-Plesset perturbation theory with neural networks

    Science.gov (United States)

    McGibbon, Robert T.; Taube, Andrew G.; Donchev, Alexander G.; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L.; Shaw, David E.

    2017-10-01

    Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol-1 (root-mean-square error 0.09 kcal mol-1), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.

  20. Accuracy statistics in predicting Independent Activities of Daily Living (IADL) capacity with comprehensive and brief neuropsychological test batteries.

    Science.gov (United States)

    Karzmark, Peter; Deutsch, Gayle K

    2018-01-01

    This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological test battery and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics, including measures of sensitivity, specificity, positive and negative predictive power, and the positive likelihood ratio, were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants and comprised the Halstead-Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. A comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery: although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
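
    The accuracy statistics named above are simple functions of a 2x2 confusion table. The sketch below computes them for hypothetical counts; the tp/fp/fn/tn values are invented, not the study's data.

    ```python
    def accuracy_statistics(tp, fp, fn, tn):
        """Classification accuracy statistics from a 2x2 confusion table."""
        sensitivity = tp / (tp + fn)                 # true positive rate
        specificity = tn / (tn + fp)                 # true negative rate
        ppv = tp / (tp + fp)                         # positive predictive power
        npv = tn / (tn + fn)                         # negative predictive power
        lr_pos = sensitivity / (1.0 - specificity)   # positive likelihood ratio
        overall = (tp + tn) / (tp + fp + fn + tn)
        return dict(sensitivity=sensitivity, specificity=specificity,
                    ppv=ppv, npv=npv, lr_pos=lr_pos, accuracy=overall)

    # Hypothetical counts for a battery classifying IADL-capable vs not.
    stats = accuracy_statistics(tp=52, fp=12, fn=9, tn=44)
    for name, value in stats.items():
        print(f"{name:12s} {value:.2f}")
    ```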

  1. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    OpenAIRE

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of artillery projectiles, such as the course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, the trajectory correction has been obtained using two kinds of course corr...

  2. Comparison of the accuracy of three algorithms in predicting accessory pathways among adult Wolff-Parkinson-White syndrome patients.

    Science.gov (United States)

    Maden, Orhan; Balci, Kevser Gülcihan; Selcuk, Mehmet Timur; Balci, Mustafa Mücahit; Açar, Burak; Unal, Sefa; Kara, Meryem; Selcuk, Hatice

    2015-12-01

    The aim of this study was to investigate the accuracy of three algorithms in predicting accessory pathway locations in adult patients with Wolff-Parkinson-White syndrome in a Turkish population. A total of 207 adult patients with Wolff-Parkinson-White syndrome were retrospectively analyzed. The most preexcited 12-lead electrocardiogram in sinus rhythm was used for analysis. Two investigators blinded to the patient data used three algorithms to predict the accessory pathway location. Among all locations, 48.5% were left-sided, 44% were right-sided, and 7.5% were located in the midseptum or anteroseptum. When only exact locations were accepted as a match, the predictive accuracy was 71.5% for Chiang, 72.4% for d'Avila, and 71.5% for Arruda. The percentage of predictive accuracy did not differ between the algorithms (p = 1.000; p = 0.875; p = 0.885, respectively). The best algorithm for predicting right-sided, left-sided, and anteroseptal and midseptal accessory pathways was that of Arruda (p < 0.05). Overall, the algorithms were similar in predicting accessory pathway location, and the predictive accuracy was lower than previously reported by their authors. However, according to the accessory pathway site, the algorithm designed by Arruda et al. showed better predictions than the other algorithms, and using this algorithm may provide advantages before a planned ablation.

  3. Resource allocation for maximizing prediction accuracy and genetic gain of genomic selection in plant breeding: a simulation experiment.

    Science.gov (United States)

    Lorenz, Aaron J

    2013-03-01

    Allocating resources between population size and replication affects both genetic gain through phenotypic selection and quantitative trait loci detection power and effect estimation accuracy for marker-assisted selection (MAS). It is well known that because alleles are replicated across individuals in quantitative trait loci mapping and MAS, more resources should be allocated to increasing population size compared with phenotypic selection. Genomic selection is a form of MAS that uses all marker information simultaneously to predict individual genetic values for complex traits, and it has widely been found superior to MAS. No studies have explicitly investigated how resource allocation decisions affect the success of genomic selection. My objective was to study the effect of resource allocation on response to MAS and genomic selection in a single biparental population of doubled haploid lines by using computer simulation. Simulation results were compared with previously derived formulas for the calculation of prediction accuracy under different levels of heritability and population size. The response of prediction accuracy to resource allocation strategies differed between the genomic selection models (ridge regression best linear unbiased prediction [RR-BLUP], BayesCπ) and multiple linear regression using ordinary least-squares estimation (OLS), leading to different optimal resource allocation choices between OLS and RR-BLUP. For OLS, it was always advantageous to maximize population size at the expense of replication, but a high degree of flexibility was observed for RR-BLUP. The prediction accuracy of doubled haploid lines included in the training set was much greater than that of lines excluded from the training set, so there was little benefit to phenotyping only a subset of the lines genotyped. Finally, observed prediction accuracies in the simulation compared well to calculated prediction accuracies, indicating that these theoretical formulas are useful for making resource allocation decisions.
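
    The trade-off studied above can be explored with a small simulation: under a fixed plot budget, phenotype many lines once or fewer lines with replication, then fit a ridge regression as an RR-BLUP stand-in and correlate predicted with true genetic values for the training-set lines. The marker count, heritability, and ridge penalty below are arbitrary assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    def simulate(n_lines, n_reps, n_markers=300, h2_single=0.3):
        """Doubled-haploid-like population with additive marker effects."""
        X = rng.integers(0, 2, size=(n_lines, n_markers)).astype(float)
        beta = rng.normal(0.0, 1.0, size=n_markers)
        g = X @ beta
        g = (g - g.mean()) / g.std()                     # genetic values, var 1
        env_sd = np.sqrt((1.0 - h2_single) / h2_single)  # single-plot error
        # Phenotype = line mean over n_reps replicated plots.
        y = g + rng.normal(0.0, env_sd / np.sqrt(n_reps), size=n_lines)
        return X, y, g

    def rrblup_accuracy(n_lines, n_reps):
        X, y, g = simulate(n_lines, n_reps)
        model = Ridge(alpha=100.0).fit(X, y)             # RR-BLUP-like shrinkage
        return np.corrcoef(model.predict(X), g)[0, 1]

    # Fixed budget of 200 field plots, allocated two different ways.
    for n_lines, n_reps in [(200, 1), (100, 2)]:
        acc = np.mean([rrblup_accuracy(n_lines, n_reps) for _ in range(20)])
        print(f"{n_lines} lines x {n_reps} rep(s): accuracy = {acc:.3f}")
    ```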

  4. Mortality Predicted Accuracy for Hepatocellular Carcinoma Patients with Hepatic Resection Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Herng-Chia Chiu

    2013-01-01

    The aim of this present study is firstly to compare significant predictors of mortality for hepatocellular carcinoma (HCC) patients undergoing resection between artificial neural network (ANN) and logistic regression (LR) models and secondly to evaluate the predictive accuracy of ANN and LR in different survival-year estimation models. We constructed a prognostic model for 434 patients with 21 potential input variables by Cox regression model. Model performance was measured by the number of significant predictors and by predictive accuracy. The results indicated that ANN had double to triple the number of significant predictors at 1-, 3-, and 5-year survival models as compared with LR models. Scores of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) of 1-, 3-, and 5-year survival estimation models using ANN were superior to those of LR in all the training sets and most of the validation sets. The study demonstrated that ANN not only had a greater number of significant predictors of mortality but also provided more accurate prediction, as compared with conventional methods. It is suggested that physicians consider using data mining methods as supplemental tools for clinical decision-making and prognostic evaluation.

  5. Mortality Predicted Accuracy for Hepatocellular Carcinoma Patients with Hepatic Resection Using Artificial Neural Network

    Science.gov (United States)

    Chiu, Herng-Chia; Ho, Te-Wei; Lee, King-Teh; Chen, Hong-Yaw; Ho, Wen-Hsien

    2013-01-01

    The aim of this present study is firstly to compare significant predictors of mortality for hepatocellular carcinoma (HCC) patients undergoing resection between artificial neural network (ANN) and logistic regression (LR) models and secondly to evaluate the predictive accuracy of ANN and LR in different survival-year estimation models. We constructed a prognostic model for 434 patients with 21 potential input variables by Cox regression model. Model performance was measured by the number of significant predictors and by predictive accuracy. The results indicated that ANN had double to triple the number of significant predictors at 1-, 3-, and 5-year survival models as compared with LR models. Scores of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) of 1-, 3-, and 5-year survival estimation models using ANN were superior to those of LR in all the training sets and most of the validation sets. The study demonstrated that ANN not only had a greater number of significant predictors of mortality but also provided more accurate prediction, as compared with conventional methods. It is suggested that physicians consider using data mining methods as supplemental tools for clinical decision-making and prognostic evaluation. PMID:23737707

  6. Effects of field plot size on prediction accuracy of aboveground biomass in airborne laser scanning-assisted inventories in tropical rain forests of Tanzania.

    Science.gov (United States)

    Mauya, Ernest William; Hansen, Endre Hofstad; Gobakken, Terje; Bollandsås, Ole Martin; Malimbwi, Rogers Ernest; Næsset, Erik

    2015-12-01

    Airborne laser scanning (ALS) has recently emerged as a promising tool for acquiring auxiliary information to improve aboveground biomass (AGB) estimation in sample-based forest inventories. Under design-based and model-assisted inferential frameworks, the estimation relies on a model that relates the auxiliary ALS metrics to AGB estimated on ground plots. The size of the field plots has been identified as one source of model uncertainty because of so-called boundary effects, which increase with decreasing plot size. Recent research in tropical forests has aimed to quantify the boundary effects on model prediction accuracy, but evidence of the consequences for the final AGB estimates is lacking. In this study we analyzed the effect of field plot size on model prediction accuracy and its implications when used in a model-assisted inferential framework. The results showed that the prediction accuracy of the model improved as the plot size increased. The adjusted R2 increased from 0.35 to 0.74, while the relative root mean square error decreased from 63.6 to 29.2%. Indicators of boundary effects were identified and confirmed to have significant effects on the model residuals. Variance estimates of model-assisted mean AGB, relative to corresponding variance estimates of pure field-based AGB, decreased with increasing plot size in the range from 200 to 3000 m2. The ratio of the variance of field-based estimates to the model-assisted variance ranged from 1.7 to 7.7. This study showed that the relative improvement in precision of AGB estimation with increasing field-plot size was greater for an ALS-assisted inventory than for a pure field-based inventory.

  7. Long-term prediction of reading accuracy and speed: The importance of paired-associate learning

    DEFF Research Database (Denmark)

    Poulsen, Mads; Asmussen, Vibeke; Elbro, Carsten

    Purpose: Several cross-sectional studies have found a correlation between paired-associate learning (PAL) and reading (e.g. Litt et al., 2013; Messbauer & de Jong, 2003, 2006). These findings suggest that verbal learning of phonological forms is important for reading. However, results from longitudinal studies have been mixed (e.g. Lervåg & Hulme, 2009; Horbach et al. 2015). The present study investigated the possibility that the mixed results may be a result of a conflation of accuracy and speed. It is possible that PAL is a stronger correlate of reading accuracy than speed (Litt et al., 2013) ... of reading comprehension and isolated sight word reading accuracy and speed. Results: PAL predicted unique variance in sight word accuracy, but not speed. Furthermore, PAL was indirectly linked to reading comprehension through sight word accuracy. RAN correlated with both accuracy and speed...

  8. Test accuracy of metabolic indicators in predicting decreased fertility in dairy cows

    DEFF Research Database (Denmark)

    Lomander, H; Gustafsson, H; Svensson, C

    2012-01-01

    Negative energy balance is a known risk factor for decreased fertility in dairy cows. This study evaluated the accuracy of plasma concentrations of nonesterified fatty acids (NEFA), β-hydroxybutyrate (BHBA), and insulin-like growth factor 1 (IGF-1)—factors related to negative energy balance—in predicting decreased fertility. One plasma sample per cow was collected from 480 cows in 12 herds during the period from d 4 to 21 in milk and analyzed for NEFA, BHBA, and IGF-1. For each cow, data on breed, parity, calving date, gynecological examinations, and insemination dates were obtained. Accuracy was low when metabolic indicators measured as single values in early lactation were used to predict fertility in dairy cows, but accuracy was influenced by cow-level factors such as parity. The prevalence of the target condition (in this case, decreased fertility) also influences test usefulness. Milk samples...

  9. Improving default risk prediction using Bayesian model uncertainty techniques.

    Science.gov (United States)

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.
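
    One common way to formalize "incorporating historical accuracy", sketched below only as a loose illustration of the general idea rather than the authors' model, is a conjugate Beta-binomial update in which each agency's estimate enters as pseudo-observations weighted by that agency's past accuracy. All numbers are invented.

    ```python
    import numpy as np

    # Prior belief about a firm's annual default probability (Beta prior).
    alpha, beta = 1.0, 99.0   # prior mean 1%

    # Agency estimates of the default probability, and weights reflecting
    # each agency's historical accuracy (an "effective sample size").
    agency_estimates = {"agency_1": 0.02, "agency_2": 0.035}
    agency_weights = {"agency_1": 400.0, "agency_2": 150.0}

    # Treat each estimate as pseudo-observations: a historically accurate
    # agency contributes more pseudo-defaults/non-defaults than an
    # inaccurate one, so it pulls the posterior harder.
    for name, p_hat in agency_estimates.items():
        w = agency_weights[name]
        alpha += w * p_hat
        beta += w * (1.0 - p_hat)

    posterior_mean = alpha / (alpha + beta)
    # 90% credible interval via Monte Carlo draws from the posterior.
    draws = np.random.default_rng(0).beta(alpha, beta, size=100_000)
    lo, hi = np.percentile(draws, [5, 95])
    print(f"posterior default probability: {posterior_mean:.4f} "
          f"(90% CI {lo:.4f}-{hi:.4f})")
    ```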

  10. Improved accuracy in estimation of left ventricular function parameters from QGS software with Tc-99m tetrofosmin gated-SPECT. A multivariate analysis

    International Nuclear Information System (INIS)

    Okizaki, Atsutaka; Shuke, Noriyuki; Sato, Junichi; Ishikawa, Yukio; Yamamoto, Wakako; Kikuchi, Kenjiro; Aburano, Tamio

    2003-01-01

    The purpose of this study was to verify whether multivariate analysis can improve the accuracy of left ventricular functional parameters estimated from gated-SPECT. Ninety-six patients with cardiovascular diseases were studied. Gated-SPECT with the quantitative gated SPECT (QGS) software and left ventriculography (LVG) were performed to obtain left ventricular ejection fraction (LVEF), end-diastolic volume (EDV) and end-systolic volume (ESV). Multivariate analyses were then performed to determine empirical formulas for predicting these parameters. The calculated values of the left ventricular parameters were compared with those obtained directly from the QGS software and LVG. The multivariate analyses were able to improve the accuracy of the estimated LVEF, EDV and ESV. A statistically significant improvement was seen for LVEF (from r=0.6965 to r=0.8093, p<0.05). Although not statistically significant, improvements in the correlation coefficients were also seen for EDV (from r=0.7199 to r=0.7595, p=0.2750) and ESV (from r=0.5694 to r=0.5871, p=0.4281). The empirical equations derived by multivariate analysis improved the accuracy of LVEF estimation from gated-SPECT with the QGS software. (author)

  11. Improved prediction for the mass of the W boson in the NMSSM

    International Nuclear Information System (INIS)

    Staal, O.; Zeune, L.

    2015-10-01

    Electroweak precision observables, being highly sensitive to loop contributions of new physics, provide a powerful tool to test the theory and to discriminate between different models of the underlying physics. In that context, the W boson mass, M_W, plays a crucial role. The accuracy of the M_W measurement has been significantly improved over the last years, and further improvement of the experimental accuracy is expected from future LHC measurements. In order to fully exploit the precise experimental determination, an accurate theoretical prediction for M_W in the Standard Model (SM) and extensions of it is of central importance. We present the currently most accurate prediction for the W boson mass in the Next-to-Minimal Supersymmetric extension of the Standard Model (NMSSM), including the full one-loop result and all available higher-order corrections of SM and SUSY type. The evaluation of M_W is performed in a flexible framework, which facilitates the extension to other models beyond the SM. We show numerical results for the W boson mass in the NMSSM, focussing on phenomenologically interesting scenarios in which the Higgs signal can be interpreted as the lightest or second lightest CP-even Higgs boson of the NMSSM. We find that, for both Higgs signal interpretations, the NMSSM M_W prediction is well compatible with the measurement. We study the SUSY contributions to M_W in detail and investigate in particular the genuine NMSSM effects from the Higgs and neutralino sectors.

  12. IMPROVED MOTOR-TIMING: EFFECTS OF SYNCHRONIZED METRONOME TRAINING ON GOLF SHOT ACCURACY

    Directory of Open Access Journals (Sweden)

    Louise Rönnqvist

    2009-12-01

    This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. Twenty-six experienced male golfers participated (mean age 27 years; mean golf handicap 12.6). Pre- and post-test investigations of golf shots made with three different clubs were conducted by use of a golf simulator. The golfers were randomized into two groups: a SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing; the golfers in the Control group merely trained their golf swings during the same time period. No differences between the two groups were found from the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4 weeks of SMT showed evident motor timing improvements. Additionally, significant improvements in golf shot accuracy were found for the SMT group, with less variability in their performance. No such improvements were found for the golfers in the Control group. As with previous studies that used a SMT program, this study's results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf shot accuracy.

  13. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    Directory of Open Access Journals (Sweden)

    Mingjun Deng

    2017-12-01

    The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data. These parameters are highly dependent on the control data of the calibration field, and it remains costly and labor-intensive to monitor changes in GF-3's geometric calibration parameters. Based on the positioning-consistency constraint of conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving the absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method.

  14. Multiple-Trait Genomic Selection Methods Increase Genetic Value Prediction Accuracy

    Science.gov (United States)

    Jia, Yi; Jannink, Jean-Luc

    2012-01-01

    Genetic correlations between quantitative traits measured in many breeding programs are pervasive. These correlations indicate that measurements of one trait carry information on other traits. Current single-trait (univariate) genomic selection does not take advantage of this information. Multivariate genomic selection on multiple traits could accomplish this but has been little explored and tested in practical breeding programs. In this study, three multivariate linear models (i.e., GBLUP, BayesA, and BayesCπ) were presented and compared to univariate models using simulated and real quantitative traits controlled by different genetic architectures. We also extended BayesA with fixed hyperparameters to a full hierarchical model that estimated hyperparameters and BayesCπ to impute missing phenotypes. We found that optimal marker-effect variance priors depended on the genetic architecture of the trait so that estimating them was beneficial. We showed that the prediction accuracy for a low-heritability trait could be significantly increased by multivariate genomic selection when a correlated high-heritability trait was available. Further, multiple-trait genomic selection had higher prediction accuracy than single-trait genomic selection when phenotypes are not available on all individuals and traits. Additional factors affecting the performance of multiple-trait genomic selection were explored. PMID:23086217

  15. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

    In this work, nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using the particular techniques. The results proved that, in the case of sub-pixel evaluation, the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, the obtained results suggest the Cubist algorithm for Landsat-based mapping of imperviousness at single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments, but better prediction of change, due to the more correlated errors of its individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for individual time-point assessments. For imperviousness change assessment, the ensembles always outperformed single-model approaches. This means it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
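
    A minimal version of such a heterogeneous ensemble: train several different regressors on the same synthetic pixel features and average their predictions. The feature-to-imperviousness mapping is invented, and only three of the nine model families above are included.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 6 spectral features per pixel -> impervious fraction.
    n = 3000
    X = rng.normal(0.0, 1.0, size=(n, 6))
    frac = 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] * X[:, 2])))
    y = np.clip(frac + rng.normal(0.0, 0.05, size=n), 0.0, 1.0)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    models = {
        "rf": RandomForestRegressor(n_estimators=200, random_state=0),
        "gbrt": GradientBoostingRegressor(random_state=0),
        "knn": KNeighborsRegressor(n_neighbors=15),
    }
    preds = {}
    for name, m in models.items():
        m.fit(X_tr, y_tr)
        preds[name] = np.clip(m.predict(X_te), 0.0, 1.0)
        rmse = np.sqrt(np.mean((preds[name] - y_te) ** 2))
        print(f"{name:5s} RMSE = {rmse:.4f}")

    # Heterogeneous ensemble: simple average of the individual predictions.
    ens = np.mean(list(preds.values()), axis=0)
    print(f"ens   RMSE = {np.sqrt(np.mean((ens - y_te) ** 2)):.4f}")
    ```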

  16. Estimating the Accuracy of the Chedoke–McMaster Stroke Assessment Predictive Equations for Stroke Rehabilitation

    Science.gov (United States)

    Dang, Mia; Ramsaran, Kalinda D.; Street, Melissa E.; Syed, S. Noreen; Barclay-Goddard, Ruth; Miller, Patricia A.

    2011-01-01

    ABSTRACT Purpose: To estimate the predictive accuracy and clinical usefulness of the Chedoke–McMaster Stroke Assessment (CMSA) predictive equations. Method: A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Results: Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from −0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. Conclusions: This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted. PMID:22654239

  17. Improved hybrid optimization algorithm for 3D protein structure prediction.

    Science.gov (United States)

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS). Several additional improvement strategies are adopted: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a random linear method; and finally the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with many extrema and many parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on the standard Fibonacci benchmark sequences and on real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the calculated protein sequence energy values, which proves it to be an effective way to predict the structure of proteins.

  18. A Method of Calculating Functional Independence Measure at Discharge from Functional Independence Measure Effectiveness Predicted by Multiple Regression Analysis Has a High Degree of Predictive Accuracy.

    Science.gov (United States)

    Tokunaga, Makoto; Watanabe, Susumu; Sonoda, Shigeru

    2017-09-01

    Multiple linear regression analysis is often used to predict the outcome of stroke rehabilitation. However, the predictive accuracy may not be satisfactory. The objective of this study was to elucidate the predictive accuracy of a method of calculating motor Functional Independence Measure (mFIM) at discharge from mFIM effectiveness predicted by multiple regression analysis. The subjects were 505 patients with stroke who were hospitalized in a convalescent rehabilitation hospital. The formula "mFIM at discharge = mFIM effectiveness × (91 points - mFIM at admission) + mFIM at admission" was used. By including the predicted mFIM effectiveness obtained through multiple regression analysis in this formula, we obtained the predicted mFIM at discharge (A). We also used multiple regression analysis to directly predict mFIM at discharge (B). The correlation between the predicted and the measured values of mFIM at discharge was compared between A and B. The correlation coefficients were .916 for A and .878 for B. Calculating mFIM at discharge from mFIM effectiveness predicted by multiple regression analysis had a higher degree of predictive accuracy of mFIM at discharge than that directly predicted. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
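
    The formula quoted above is straightforward to apply once mFIM effectiveness has been predicted; a small helper follows, with an invented example value for the predicted effectiveness.

    ```python
    def mfim_at_discharge(mfim_admission, predicted_effectiveness, ceiling=91.0):
        """Discharge motor FIM from predicted mFIM effectiveness.

        Implements: mFIM at discharge =
            effectiveness x (91 points - mFIM at admission) + mFIM at admission,
        i.e., effectiveness is the fraction of the possible gain achieved.
        """
        return predicted_effectiveness * (ceiling - mfim_admission) + mfim_admission

    # Example: admission mFIM of 40 and a predicted effectiveness of 0.6
    # corresponds to recovering 60% of the attainable 51-point improvement.
    print(mfim_at_discharge(40.0, 0.6))   # -> 70.6
    ```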

  19. Sensitivity, Specificity, Predictive Values, and Accuracy of Three Diagnostic Tests to Predict Inferior Alveolar Nerve Blockade Failure in Symptomatic Irreversible Pulpitis

    Directory of Open Access Journals (Sweden)

    Daniel Chavarría-Bolaños

    2017-01-01

    Introduction. The inferior alveolar nerve block (IANB) is the most common anesthetic technique used on mandibular teeth during root canal treatment. Its success in the presence of preoperative inflammation is still controversial. The aim of this study was to evaluate the sensitivity, specificity, predictive values, and accuracy of three diagnostic tests used to predict IANB failure in symptomatic irreversible pulpitis (SIP). Methodology. A cross-sectional study was carried out on the mandibular molars of 53 patients with SIP. All patients received a single cartridge of mepivacaine 2% with 1:100000 epinephrine using the IANB technique. Three diagnostic clinical tests were performed to detect anesthetic failure. Anesthetic failure was defined as a positive painful response to any of the three tests. Sensitivity, specificity, predictive values, accuracy, and ROC curves were calculated, compared, and analyzed for significant differences. Results. IANB failure was determined in 71.7% of the patients. The sensitivity scores for the three tests (lip numbness, the cold stimuli test, and responsiveness during endodontic access) were 0.03, 0.35, and 0.55, respectively, and the specificity score was 1 for all of the tests. Clinically, none of the evaluated tests demonstrated a high enough accuracy (0.30, 0.53, and 0.68 for lip numbness, the cold stimuli test, and responsiveness during endodontic access, respectively). A comparison of the areas under the curve in the ROC analyses showed statistically significant differences between the three tests (p<0.05). Conclusion. None of the analyzed tests demonstrated a high enough accuracy to be considered a reliable diagnostic tool for the prediction of anesthetic failure.

  20. Accuracy of Igenity genomically estimated breeding values for predicting Australian Angus BREEDPLAN traits.

    Science.gov (United States)

    Boerner, V; Johnston, D; Wu, X-L; Bauck, S

    2015-02-01

    Genomically estimated breeding values (GEBV) for Angus beef cattle are available from at least 2 commercial suppliers (Igenity [http://www.igenity.com] and Zoetis [http://www.zoetis.com]). The utility of these GEBV for improving genetic evaluation depends on their accuracies, which can be estimated by the genetic correlation with phenotypic target traits. Genomically estimated breeding values of 1,032 Angus bulls calculated from prediction equations (PE) derived by 2 different procedures in the U.S. Angus population were supplied by Igenity. Both procedures were based on Illumina BovineSNP50 BeadChip genotypes. In procedure sg, GEBV were calculated from PE that used subsets of only 392 SNP, where these subsets were individually selected for each trait by BayesCπ. In procedure rg, GEBV were calculated from PE derived in a ridge regression approach using all available SNP. Because the total set of 1,032 bulls with GEBV contained 732 individuals used in the Igenity training population, GEBV subsets were formed characterized by a decreasing average relationship between individuals in the subsets and individuals in the training population. Accuracies of GEBV were estimated as genetic correlations between GEBV and their phenotypic target traits, modeling GEBV as trait observations in a bivariate REML approach in which phenotypic observations were those recorded in the commercial Australian Angus seed stock sector. Using results from the GEBV subset excluding all training individuals as a reference, estimated accuracies were generally in agreement with those already published, with both types of GEBV (sg and rg) yielding similar results. Accuracies for growth traits ranged from 0.29 to 0.45, for reproductive traits from 0.11 to 0.53, and for carcass traits from 0.3 to 0.75. Accuracies generally decreased with an increasing genetic distance between the training and the validation population. However, for some carcass traits characterized by a low number of phenotypic

  1. Models of alien species richness show moderate predictive accuracy and poor transferability

    Directory of Open Access Journals (Sweden)

    César Capinha

    2018-06-01

    Robust predictions of alien species richness are useful for assessing global biodiversity change. Nevertheless, the capacity to predict spatial patterns of alien species richness remains largely unassessed. Using 22 datasets of alien species richness from diverse taxonomic groups and covering various parts of the world, we evaluated whether different statistical models were able to provide useful predictions of absolute and relative alien species richness, as a function of explanatory variables representing geographical, environmental and socio-economic factors. Five state-of-the-art count data modelling techniques were used and compared: Poisson and negative binomial generalised linear models (GLMs), multivariate adaptive regression splines (MARS), random forests (RF) and boosted regression trees (BRT). We found that predictions of absolute alien species richness had a low to moderate accuracy in the region where the models were developed and a consistently poor accuracy in new regions. Predictions of relative richness performed better in both geographical settings, but were still not good. Flexible tree-ensemble techniques (RF and BRT) were significantly better at modelling alien species richness than parametric linear models (such as GLMs), despite the latter being more commonly applied for this purpose. Importantly, the poor spatial transferability of the models also warrants caution in assuming the generality of the relationships they identify, e.g. by applying projections under future scenario conditions. Ultimately, our results strongly suggest that the predictability of spatial variation in alien species richness is limited. The somewhat more robust ability to rank regions according to the number of aliens they host (i.e. relative richness) suggests that models of alien species richness may be useful for prioritising and comparing regions, but not for predicting exact species numbers.
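
    As a sketch of what such a model comparison looks like in practice, the following snippet fits one parametric model (a Poisson GLM) and one flexible tree ensemble (a random forest) to simulated richness counts; the data and predictors are synthetic stand-ins, not the study's 22 datasets.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # synthetic geographic/socio-economic predictors
rate = np.exp(0.5 + 0.8 * X[:, 0] - 0.3 * X[:, 1] ** 2)
y = rng.poisson(rate)                            # simulated alien species richness counts

train, test = slice(0, 400), slice(400, 500)
glm = sm.GLM(y[train], sm.add_constant(X[train]), family=sm.families.Poisson()).fit()
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], y[train])

for name, pred in [("Poisson GLM", glm.predict(sm.add_constant(X[test]))),
                   ("Random forest", rf.predict(X[test]))]:
    print(name, "RMSE:", round(float(np.sqrt(np.mean((pred - y[test]) ** 2))), 3))
```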

  2. Accuracy Improvement of Boron Meter Adopting New Fitting Function and Multi-Detector

    Directory of Open Access Journals (Sweden)

    Chidong Kong

    2016-12-01

    This paper introduces a boron meter with improved accuracy compared with other commercially available boron meters. Its design includes a new fitting function and a multi-detector. In pressurized water reactors (PWRs) in Korea, many boron meters have been used to continuously monitor the boron concentration in reactor coolant. However, it is difficult to use these boron meters in practice because their measurement uncertainty is high. For this reason, there has been a strong demand for improvement in their accuracy. In this work, a boron meter evaluation model was developed, and two approaches were considered to improve boron meter accuracy: the first uses a new fitting function and the second uses a multi-detector. With the new fitting function, the boron concentration error was decreased from 3.30 ppm to 0.73 ppm. With the multi-detector, the count signals were contaminated with noise, as in field measurement data, and the analyses were repeated 1,000 times to obtain averages and standard deviations of the boron concentration errors. Finally, using the new fitting function and the multi-detector together, the average error was decreased from 5.95 ppm to 1.83 ppm and its standard deviation was decreased from 0.64 ppm to 0.26 ppm. This result represents a great improvement in boron meter accuracy.

  3. Accuracy improvement of boron meter adopting new fitting function and multi-detector

    Energy Technology Data Exchange (ETDEWEB)

    Kong, Chidong; Lee, Hyun Suk; Tak, Tae Woo; Lee, Deok Jung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); Kim, Si Hwan; Lyou, Seok Jean [Users Incorporated Company, Hansin S-MECA, Daejeon (Korea, Republic of)

    2016-12-15

    This paper introduces a boron meter with improved accuracy compared with other commercially available boron meters. Its design includes a new fitting function and a multi-detector. In pressurized water reactors (PWRs) in Korea, many boron meters have been used to continuously monitor the boron concentration in reactor coolant. However, it is difficult to use these boron meters in practice because their measurement uncertainty is high. For this reason, there has been a strong demand for improvement in their accuracy. In this work, a boron meter evaluation model was developed, and two approaches were considered to improve boron meter accuracy: the first uses a new fitting function and the second uses a multi-detector. With the new fitting function, the boron concentration error was decreased from 3.30 ppm to 0.73 ppm. With the multi-detector, the count signals were contaminated with noise, as in field measurement data, and the analyses were repeated 1,000 times to obtain averages and standard deviations of the boron concentration errors. Finally, using the new fitting function and the multi-detector together, the average error was decreased from 5.95 ppm to 1.83 ppm and its standard deviation was decreased from 0.64 ppm to 0.26 ppm. This result represents a great improvement in boron meter accuracy.

  4. A model to improve the accuracy of US Poison Center data collection.

    Science.gov (United States)

    Krenzelok, E P; Reynolds, K M; Dart, R C; Green, J L

    2014-01-01

    Over 2 million human exposure calls are reported annually to United States regional poison information centers. All exposures are documented electronically and submitted to the American Association of Poison Control Centers' National Poison Data System. This database represents the largest data source available on the epidemiology of pharmaceutical and non-pharmaceutical poisoning exposures. The accuracy of these data is critical; however, research has demonstrated that inconsistencies and inaccuracies exist. This study outlines the methods and results of a training program that was developed and implemented to enhance the quality of data collection, using acetaminophen exposures as a model. Eleven poison centers, recruited to participate in the study, were assigned randomly to receive either passive or interactive education to improve medical record documentation. A task force provided recommendations on educational and training strategies and on the development of a quality-measurement scorecard to serve as a data collection tool for assessing poison center data quality. Clinical researchers scored the documentation of each exposure record for accuracy. Results. Two thousand two hundred cases were reviewed and assessed for accuracy of data collection. After training, the overall mean quality scores were higher for both the passive (95.3%; +1.6% change) and interactive intervention groups (95.3%; +0.9% change). Data collection accuracy improved modestly for the overall accuracy score and significantly for the substance identification component. There was little difference in accuracy measures between the two training methods. Despite the diversity of poison centers, data accuracy, specifically in substance identification data fields, can be improved by developing a standardized, systematic, targeted, and mandatory training process. This process should be considered for training on other important topics, thus enhancing the value of these data in

  5. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  6. Age-related differences in the accuracy of web query-based predictions of influenza-like illness.

    Directory of Open Access Journals (Sweden)

    Alexander Domnich

    Web queries are now widely used for modeling, nowcasting and forecasting influenza-like illness (ILI). However, given that ILI attack rates vary significantly across ages, in terms of both magnitude and timing, little is known about whether the association between ILI morbidity and ILI-related queries is comparable across different age-groups. The present study aimed to investigate features of the association between ILI morbidity and ILI-related query volume from the perspective of age. Since Google Flu Trends is unavailable in Italy, Google Trends was used to identify entry terms that correlated highly with official ILI surveillance data. All-age and age-class-specific modeling was performed by means of linear models with generalized least-squares estimation. Hold-out validation was used to quantify prediction accuracy. For purposes of comparison, predictions generated by exponential smoothing were computed. Five search terms showed high correlation coefficients of > .6. In comparison with exponential smoothing, the all-age query-based model correctly predicted the peak time and yielded a higher correlation coefficient with observed ILI morbidity (.978 vs. .929). However, query-based prediction of ILI morbidity was associated with a greater error. Age-class-specific query-based models varied significantly in terms of prediction accuracy. In the 0-4 and 25-44-year age-groups, the models did well and outperformed exponential smoothing predictions; in the 15-24 and ≥ 65-year age-classes, however, the query-based models were inaccurate and highly overestimated peak height. In all but one age-class, peak timing predicted by the query-based models coincided with observed timing. The accuracy of web query-based models in predicting ILI morbidity rates could thus differ among ages. Greater age-specific detail may be useful in flu query-based studies in order to account for age-specific features of the epidemiology of ILI.
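
    The modelling contrast described above, a regression on query volumes versus smoothing of the ILI series itself, can be sketched as follows; all series here are simulated, and the study's generalized least-squares estimation is simplified to ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
weeks = np.arange(104)
ili = 10 + 8 * np.sin(2 * np.pi * weeks / 52) ** 8 + rng.normal(0, 0.5, weeks.size)
queries = ili + rng.normal(0, 1.0, weeks.size)      # query volume tracking ILI with noise

# Query-based linear model fitted on the first 80 weeks
A = np.column_stack([np.ones(80), queries[:80]])
b0, b1 = np.linalg.lstsq(A, ili[:80], rcond=None)[0]
query_pred = b0 + b1 * queries[80:]

# One-step-ahead simple exponential smoothing as the benchmark
alpha, level, es = 0.5, ili[0], []
for t in range(1, weeks.size):
    es.append(level)                                # forecast for week t
    level = alpha * ili[t] + (1 - alpha) * level
es_pred = np.array(es[79:])                         # forecasts for weeks 80..103

print("query model corr:   ", np.corrcoef(query_pred, ili[80:])[0, 1])
print("exp smoothing corr: ", np.corrcoef(es_pred, ili[80:])[0, 1])
```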

  7. Knowing right from wrong in mental arithmetic judgments: calibration of confidence predicts the development of accuracy.

    Science.gov (United States)

    Rinne, Luke F; Mazzocco, Michèle M M

    2014-01-01

    Does knowing when mental arithmetic judgments are right (and when they are wrong) lead to more accurate judgments over time? We hypothesize that the successful detection of errors (and avoidance of false alarms) may contribute to the development of mental arithmetic performance. Insight into error detection abilities can be gained by examining the "calibration" of mental arithmetic judgments, that is, the alignment between confidence in judgments and the accuracy of those judgments. Calibration may be viewed as a measure of metacognitive monitoring ability. We conducted a developmental longitudinal investigation of the relationship between the calibration of children's mental arithmetic judgments and their performance on a mental arithmetic task. Annually between Grades 5 and 8, children completed a problem verification task in which they rapidly judged the accuracy of arithmetic expressions (e.g., 25 + 50 = 75) and rated their confidence in each judgment. Results showed that calibration was strongly related to concurrent mental arithmetic performance, that calibration continued to develop even as mental arithmetic accuracy approached ceiling, that poor calibration distinguished children with mathematics learning disability from both low and typically achieving children, and that better calibration in Grade 5 predicted larger gains in mental arithmetic accuracy between Grades 5 and 8. We propose that good calibration supports the implementation of cognitive control, leading to long-term improvement in mental arithmetic accuracy. Because mental arithmetic "fluency" is critical for higher-level mathematics competence, calibration of confidence in mental arithmetic judgments may represent a novel and important developmental predictor of future mathematics performance.
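
    The abstract's exact calibration statistic is not stated, so the sketch below uses one simple stand-in: the correlation between trial-by-trial confidence ratings and judgment correctness, where higher values mean confidence tracks accuracy well.

```python
import numpy as np

def calibration(confidence, correct):
    """Correlation between confidence ratings (e.g. 1-5) and 0/1 correctness."""
    return float(np.corrcoef(np.asarray(confidence, float),
                             np.asarray(correct, float))[0, 1])

# One child's hypothetical problem-verification trials:
conf = [5, 4, 2, 5, 1, 3, 4, 2]     # confidence in each judgment
acc  = [1, 1, 0, 1, 0, 1, 1, 0]     # whether the judgment was correct
print(calibration(conf, acc))       # close to 1 = well-calibrated monitoring
```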

  8. CNNcon: improved protein contact maps prediction using cascaded neural networks.

    Directory of Open Access Journals (Sweden)

    Wang Ding

    BACKGROUND: Despite continuing progress in X-ray crystallography and high-field NMR spectroscopy for determination of three-dimensional protein structures, the number of unsolved and newly discovered sequences grows much faster than that of determined structures. Protein modeling methods can possibly bridge this huge sequence-structure gap with the development of computational science. A grand challenge is to predict three-dimensional protein structure from its primary structure (residue sequence) alone. However, predicting residue contact maps is a crucial and promising intermediate step towards final three-dimensional structure prediction. Better predictions of local and non-local contacts between residues can transform protein sequence alignment to structure alignment, which can in turn greatly improve template-based three-dimensional protein structure predictors. METHODS: CNNcon, an improved multiple-neural-network-based contact map predictor using six sub-networks and one final cascade-network, was developed in this paper. Both the sub-networks and the final cascade-network were trained and tested with their corresponding data sets. For testing, the target protein was first encoded and then input to its corresponding sub-networks for prediction. After that, the intermediate results were input to the cascade-network to finish the final prediction. RESULTS: CNNcon can accurately predict an average of 58.86% of contacts at a distance cutoff of 8 Å for proteins with lengths ranging from 51 to 450. The comparison results show that the present method performs better than the compared state-of-the-art predictors. Particularly, the prediction accuracy remains steady as protein sequence length increases. This indicates that CNNcon overcomes the thin density problem, with which other current predictors have trouble. This advantage makes the method valuable for the prediction of long proteins. As a result, the effective

  9. Three-dimensional display improves observer speed and accuracy

    International Nuclear Information System (INIS)

    Nelson, J.A.; Rowberg, A.H.; Kuyper, S.; Choi, H.S.

    1989-01-01

    In an effort to evaluate the potential cost-effectiveness of three-dimensional (3D) display equipment, we compared the speed and accuracy of experienced radiologists identifying sliced uppercase letters from CT scans with 2D and pseudo-3D displays. CT scans of six capital letters were obtained and printed as a 2D display or as a synthesized pseudo-3D display (Pixar). Six observers performed a timed identification task. Radiologists read the 3D display an average of 16 times faster than the 2D, and the average error rate of 2/6 (± 0.6/6) for 2D interpretations was eliminated entirely. This degree of improvement in speed and accuracy suggests that the expense of 3D display may be cost-effective in a defined clinical setting

  10. Predictive accuracy of changes in transvaginal sonographic cervical length over time for preterm birth: a systematic review and metaanalysis.

    Science.gov (United States)

    Conde-Agudelo, Agustin; Romero, Roberto

    2015-12-01

    To determine the accuracy of changes in transvaginal sonographic cervical length over time in predicting preterm birth in women with singleton and twin gestations. PubMed, Embase, Cinahl, Lilacs, and Medion (all from inception to June 30, 2015), bibliographies, Google Scholar, and conference proceedings. Cohort or cross-sectional studies reporting on the predictive accuracy for preterm birth of changes in cervical length over time. Two reviewers independently selected studies, assessed the risk of bias, and extracted the data. Summary receiver-operating characteristic curves, pooled sensitivities and specificities, and summary likelihood ratios were generated. Fourteen studies met the inclusion criteria, of which 7 provided data on singleton gestations (3374 women) and 8 on twin gestations (1024 women). Among women with both singleton and twin gestations, the shortening of cervical length over time had a low predictive accuracy for preterm birth. There were no significant differences between the predictive accuracies for preterm birth of cervical length shortening over time and of a single initial and/or final cervical length measurement in 8 of the 11 studies that provided data for making these comparisons. In the largest and highest-quality study, a single measurement of cervical length obtained at 24 or 28 weeks of gestation was significantly more predictive of preterm birth than any decrease in cervical length between these gestational ages. Change in transvaginal sonographic cervical length over time is not a clinically useful test to predict preterm birth in women with singleton or twin gestations. A single cervical length measurement obtained between 18 and 24 weeks of gestation appears to be a better test to predict preterm birth than changes in cervical length over time.

  11. Both Reaction Time and Accuracy Measures of Intraindividual Variability Predict Cognitive Performance in Alzheimer's Disease

    Directory of Open Access Journals (Sweden)

    Björn U. Christ

    2018-04-01

    Dementia researchers around the world prioritize the urgent need for sensitive measurement tools that can detect cognitive and functional change at the earliest stages of Alzheimer's disease (AD). Sensitive indicators of underlying neural pathology assist in the early detection of cognitive change and are thus important for the evaluation of early-intervention clinical trials. One method that may be particularly well-suited to help achieve this goal involves the quantification of intraindividual variability (IIV) in cognitive performance. The current study aimed to directly compare two methods of estimating IIV (fluctuations in accuracy-based scores vs. those in latency-based scores) to predict cognitive performance in AD. Specifically, we directly compared the relative sensitivity of reaction time (RT)- and accuracy-based estimates of IIV to cognitive compromise. The novelty of the present study, however, centered on the patients we tested (a group of patients with Alzheimer's disease) and the outcome measures we used (a measure of general cognitive function and a measure of episodic memory function). Hence, we compared intraindividual standard deviations (iSDs) from two RT tasks and three accuracy-based memory tasks in patients with possible or probable Alzheimer's dementia (n = 23) and matched healthy controls (n = 25). The main analyses modeled the relative contributions of RT- vs. accuracy-based measures of IIV toward the prediction of performance on measures of (a) overall cognitive functioning, and (b) episodic memory functioning. Results indicated that RT-based IIV measures are superior predictors of neurocognitive impairment (as indexed by overall cognitive and memory performance) than accuracy-based IIV measures, even after adjusting for the timescale of measurement. However, one accuracy-based IIV measure (derived from a recognition memory test) also differentiated patients with AD from controls, and significantly predicted episodic memory

  12. Improving virtual screening predictive accuracy of Human kallikrein 5 inhibitors using machine learning models.

    Science.gov (United States)

    Fang, Xingang; Bagui, Sikha; Bagui, Subhash

    2017-08-01

    The readily available high-throughput screening (HTS) data from the PubChem database provides an opportunity for mining of small molecules in a variety of biological systems using machine learning techniques. Among the thousands of molecular descriptors developed to encode chemical information about the characteristics of molecules, descriptor selection is an essential step in building an optimal quantitative structure-activity relationship (QSAR) model. To develop a systematic descriptor selection strategy, we need to understand the relationship between: (i) the descriptor selection; (ii) the choice of the machine learning model; and (iii) the characteristics of the target bio-molecule. In this work, we employed the Signature descriptor to generate a dataset on the Human kallikrein 5 (hK 5) inhibition confirmatory assay data and compared multiple classification models including logistic regression, support vector machine, random forest and k-nearest neighbor. Under optimal conditions, the logistic regression model provided extremely high overall accuracy (98%) and precision (90%), with good sensitivity (65%) in the cross-validation test. In testing the primary HTS screening data with more than 200K molecular structures, the logistic regression model exhibited the capability of eliminating more than 99.9% of the inactive structures. As part of our exploration of the descriptor-model-target relationship, the excellent predictive performance of the combination of the Signature descriptor and the logistic regression model on the assay data of the Human kallikrein 5 (hK 5) target suggests a feasible descriptor/model selection strategy for similar targets.
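
    The screening use described at the end, eliminating the bulk of inactive structures from a primary screen, can be sketched as below; the descriptors, labels, and 0.5 probability threshold are simulated stand-ins, not the study's Signature-descriptor pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Confirmatory-assay training set: binary descriptor bits + activity labels
X_confirm = rng.integers(0, 2, size=(2000, 256))
y_confirm = (X_confirm[:, :8].sum(axis=1) > 5).astype(int)   # synthetic "activity"

model = LogisticRegression(max_iter=1000).fit(X_confirm, y_confirm)

# Large primary-screen library: keep only structures predicted likely active
X_library = rng.integers(0, 2, size=(200_000, 256))
keep = model.predict_proba(X_library)[:, 1] > 0.5
print(f"retained {keep.sum()} of {len(keep)} structures")
```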

  13. Measuring diagnostic and predictive accuracy in disease management: an introduction to receiver operating characteristic (ROC) analysis.

    Science.gov (United States)

    Linden, Ariel

    2006-04-01

    Diagnostic or predictive accuracy concerns are common in all phases of a disease management (DM) programme, and ultimately play an influential role in the assessment of programme effectiveness. Areas such as the identification of diseased patients, predictive modelling of future health status and costs, and risk stratification are just a few of the domains in which assessment of accuracy is beneficial, if not critical. The most commonly used analytical model for this purpose is the standard 2 × 2 table method, in which sensitivity and specificity are calculated. However, there are several limitations to this approach, including the reliance on a single defined criterion or cut-off for determining a true-positive result, the use of non-standardized measurement instruments, and sensitivity to outcome prevalence. This paper introduces receiver operating characteristic (ROC) analysis as a more appropriate and useful technique for assessing diagnostic and predictive accuracy in DM. Its advantages include: testing accuracy across the entire range of scores, thereby not requiring a predetermined cut-off point; easily examined visual and statistical comparisons across tests or scores; and independence from outcome prevalence. Therefore, the implementation of ROC as an evaluation tool should be strongly considered in the various phases of a DM programme.
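
    A minimal sketch of the ROC analysis itself, using simulated scores and outcomes: the curve sweeps every possible cutoff rather than committing to one, and the area under it summarizes accuracy independently of outcome prevalence.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
outcome = rng.integers(0, 2, size=500)               # e.g. high-cost patient yes/no
score = outcome + rng.normal(0, 1, size=500)         # continuous risk-model score

fpr, tpr, thresholds = roc_curve(outcome, score)
# Each (fpr, tpr) pair is the trade-off at one cutoff; no single
# predetermined cut-off point is required.
print("AUC:", roc_auc_score(outcome, score))
```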

  14. Google goes cancer: improving outcome prediction for cancer patients by network-based ranking of marker genes.

    Directory of Open Access Journals (Sweden)

    Christof Winter

    Predicting the clinical outcome of cancer patients based on the expression of marker genes in their tumors has received increasing interest in the past decade. Accurate predictors of outcome and response to therapy could be used to personalize and thereby improve therapy. However, state-of-the-art methods used so far often found marker genes with limited prediction accuracy, limited reproducibility, and unclear biological relevance. To address this problem, we developed a novel computational approach to identify genes prognostic for outcome that couples gene expression measurements from primary tumor samples with a network of known relationships between the genes. Our approach ranks genes according to their prognostic relevance using both expression and network information in a manner similar to Google's PageRank. We applied this method to gene expression profiles which we obtained from 30 patients with pancreatic cancer, and identified seven candidate marker genes prognostic for outcome. Compared to genes found with state-of-the-art methods, such as Pearson correlation of gene expression with survival time, we improve the prediction accuracy by up to 7%. Accuracies were assessed using support vector machine classifiers and Monte Carlo cross-validation. We then validated the prognostic value of our seven candidate markers using immunohistochemistry on an independent set of 412 pancreatic cancer samples. Notably, signatures derived from our candidate markers were independently predictive of outcome and superior to established clinical prognostic factors such as grade, tumor size, and nodal status. As the amount of genomic data on individual tumors grows rapidly, our algorithm meets the need for powerful computational approaches that are key to exploiting these data for personalized cancer therapies in clinical practice.
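
    A sketch in the spirit of the ranking step, personalized PageRank over a gene network with expression-derived restart weights, is given below; the network, scores, and parameter choices are illustrative, and the authors' propagation scheme may differ in detail.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(5)
genes = [f"g{i}" for i in range(50)]
G = nx.relabel_nodes(nx.gnp_random_graph(50, 0.08, seed=5), dict(enumerate(genes)))

expr_score = {g: float(rng.random()) for g in genes}    # e.g. association with survival
total = sum(expr_score.values())
personalization = {g: s / total for g, s in expr_score.items()}

# Genes ranked by both network position and expression information
rank = nx.pagerank(G, alpha=0.85, personalization=personalization)
print("top candidate markers:", sorted(rank, key=rank.get, reverse=True)[:7])
```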

  15. How patients can improve the accuracy of their medical records.

    Science.gov (United States)

    Dullabh, Prashila M; Sondheimer, Norman K; Katsh, Ethan; Evans, Michael A

    2014-01-01

    Assess (1) whether patients can improve their medical records' accuracy if effectively engaged using a networked Personal Health Record; (2) workflow efficiency and reliability for receiving and processing patient feedback; and (3) the impact of patient feedback on medical record accuracy. Improving medical records' accuracy and the associated challenges have been documented extensively. Providing patients with useful access to their records through information technology gives them new opportunities to improve their records' accuracy and completeness. A new approach supporting online contributions to their medication lists by patients of Geisinger Health Systems, an online patient-engagement advocate, revealed this can be done successfully. In late 2011, Geisinger launched an online process for patients to provide electronic feedback on their medication lists' accuracy before a doctor visit. Patient feedback was routed to a Geisinger pharmacist, who reviewed it and followed up with the patient before changing the medication list shared by the patient and the clinicians. The evaluation employed mixed methods and consisted of patient focus groups (users, nonusers, and partial users of the feedback form), semi-structured interviews with providers and pharmacists, user observations with patients, and quantitative analysis of patient feedback data and pharmacists' medication reconciliation logs. (1) Patients were eager to provide feedback on their medications and saw numerous advantages. Thirty percent of patient feedback forms (457 of 1,500) were completed and submitted to Geisinger. Patients requested changes to the shared medication lists in 89 percent of cases (369 of 414 forms). These included frequency or dosage changes to existing prescriptions and requests for new medications (prescription and over-the-counter). (2) Patients provided useful and accurate online feedback. In a subsample of 107 forms, pharmacists responded positively to 68 percent of patient requests for

  16. Joint analysis of psychiatric disorders increases accuracy of risk prediction for schizophrenia, bipolar disorder, and major depressive disorder

    DEFF Research Database (Denmark)

    Maier, Robert; Moser, Gerhard; Chen, Guo-Bo

    2015-01-01

    Genetic risk prediction has several potential applications in medical research and clinical practice and could be used, for example, to stratify a heterogeneous population of patients by their predicted genetic risk. However, for polygenic traits, such as psychiatric disorders, the accuracy of risk...... number of GWAS datasets of correlated traits, it is a flexible and powerful tool to maximize prediction accuracy. With current sample size, risk predictors are not useful in a clinical setting but already are a valuable research tool, for example in experimental designs comparing cases with high and low...

  17. Climatic associations of British species distributions show good transferability in time but low predictive accuracy for range change.

    Directory of Open Access Journals (Sweden)

    Giovanni Rapacciuolo

    Conservation planners often wish to predict how species distributions will change in response to environmental changes. Species distribution models (SDMs) are the primary tool for making such predictions. Many methods are widely used; however, they all make simplifying assumptions, and predictions can therefore be subject to high uncertainty. With global change well underway, field records of observed range shifts are increasingly being used for testing SDM transferability. We used an unprecedented distribution dataset documenting recent range changes of British vascular plants, birds, and butterflies to test whether correlative SDMs based on climate change provide useful approximations of potential distribution shifts. We modelled past species distributions from climate using nine single techniques and a consensus approach, and projected the geographical extent of these models to a more recent time period based on climate change; we then compared model predictions with recent observed distributions in order to estimate the temporal transferability and prediction accuracy of our models. We also evaluated the relative effect of methodological and taxonomic variation on the performance of SDMs. Models showed good transferability in time when assessed using widespread metrics of accuracy. However, models had low accuracy in predicting where occupancy status changed between time periods, especially for declining species. Model performance varied greatly among species within major taxa, but there was also considerable variation among modelling frameworks. Past climatic associations of British species distributions retain a high explanatory power when transferred to recent time (due to their accuracy in predicting large areas retained by species) but fail to capture relevant predictors of change. We strongly emphasize the need for caution when using SDMs to predict shifts in species distributions: high explanatory power on temporally-independent records

  18. High accuracy satellite drag model (HASDM)

    Science.gov (United States)

    Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent

    The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm, which solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density in near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full-spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.

  19. An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping

    Science.gov (United States)

    Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare

    2017-04-01

    represents a 2 to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid provides the ability to retain the accuracy of noise level predictions and improve the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine System, 139: 217 - 226. http://dx.doi.org/10.1016/j.jmarsys.2014.06.007.

  20. A Study on Accuracy Improvement of Dual Micro Patterns Using Magnetic Abrasive Deburring

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Dong-Hyun; Kwak, Jae-Seob [Pukyong Nat’l Univ., Busan (Korea, Republic of)

    2016-11-15

    In recent times, the requirement for micro patterns on the surfaces of products has been increasing, and high precision is required in their fabrication. Hence, in this study, dual micro patterns were fabricated on a cylindrical workpiece, and deburring was performed by a magnetic abrasive deburring (MAD) process. A prediction model was developed, and the MAD process was optimized using the response surface method. When the predicted values were compared with the experimental results, the average prediction error was found to be approximately 7%. Experimental verification demonstrated the fabrication of high-accuracy dual micro patterns and the reliability of the prediction model.

  1. Improvement of the accuracy of phase observation by modification of phase-shifting electron holography

    International Nuclear Information System (INIS)

    Suzuki, Takahiro; Aizawa, Shinji; Tanigaki, Toshiaki; Ota, Keishin; Matsuda, Tsuyoshi; Tonomura, Akira

    2012-01-01

    We found that the accuracy of the phase observation in phase-shifting electron holography is strongly restricted by time variations of mean intensity and contrast of the holograms. A modified method was developed for correcting these variations. Experimental results demonstrated that the modification enabled us to acquire a large number of holograms, and as a result, the accuracy of the phase observation has been improved by a factor of 5. -- Highlights: ► A modified phase-shifting electron holography was proposed. ► The time variation of mean intensity and contrast of holograms were corrected. ► These corrections lead to a great improvement of the resultant phase accuracy. ► A phase accuracy of about 1/4000 rad was achieved from experimental results.

  2. Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data.

    Science.gov (United States)

    Januel, Jean-Marie; Luthi, Jean-Christophe; Quan, Hude; Borst, François; Taffé, Patrick; Ghali, William A; Burnand, Bernard

    2011-08-18

    Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed the evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. Cross-sectional time-trend evaluation study of coding accuracy using hospital chart data of 3,499 randomly selected patients who were discharged in 1999, 2001 and 2003, from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values, and Kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities. For the 17 Charlson co-morbidities, the sensitivity (median [min-max]) was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven. Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system.
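
    The agreement statistics used here can be reproduced from a 2x2 comparison of coded data against chart review; the sketch below computes sensitivity, positive predictive value, and Cohen's kappa for one co-morbidity from illustrative 0/1 vectors.

```python
import numpy as np

def coding_agreement(chart, coded):
    """Chart review is the reference standard; coded is the ICD-10 administrative flag."""
    chart, coded = np.asarray(chart), np.asarray(coded)
    tp = int(np.sum((chart == 1) & (coded == 1)))
    fp = int(np.sum((chart == 0) & (coded == 1)))
    fn = int(np.sum((chart == 1) & (coded == 0)))
    tn = int(np.sum((chart == 0) & (coded == 0)))
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    return {"sensitivity": tp / (tp + fn),
            "ppv": tp / (tp + fp),
            "kappa": (po - pe) / (1 - pe)}

chart = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # illustrative vectors, not study data
coded = [1, 0, 1, 0, 0, 0, 0, 0, 1, 0]
print(coding_agreement(chart, coded))     # kappa = 0.6 for these vectors
```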

  3. Improved Prediction of Blood-Brain Barrier Permeability Through Machine Learning with Combined Use of Molecular Property-Based Descriptors and Fingerprints.

    Science.gov (United States)

    Yuan, Yaxia; Zheng, Fang; Zhan, Chang-Guo

    2018-03-21

    Blood-brain barrier (BBB) permeability of a compound determines whether the compound can effectively enter the brain. It is an essential property which must be accounted for in drug discovery with a target in the brain. Several computational methods have been used to predict BBB permeability. In particular, the support vector machine (SVM), a kernel-based machine learning method, has been used widely in this field. For SVM training and prediction, compounds are characterized by molecular descriptors. Some SVM models were based on the use of molecular property-based descriptors (including 1D, 2D, and 3D descriptors) or fragment-based descriptors (known as the fingerprints of a molecule). The selection of descriptors is critical for the performance of an SVM model. In this study, we aimed to develop a generally applicable new SVM model by combining all of the features of the molecular property-based descriptors and fingerprints to improve the accuracy of BBB permeability prediction. The results indicate that our SVM model has improved accuracy compared to the currently available models of BBB permeability prediction.
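
    The combination step is essentially feature concatenation before training; a minimal sketch (with simulated property values, fingerprint bits, and BBB labels) might look like this.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(11)
n = 1000
properties = rng.normal(size=(n, 12))                # e.g. logP, MW, TPSA, ... (simulated)
fingerprints = rng.integers(0, 2, size=(n, 166))     # e.g. MACCS-like bits (simulated)
X = np.hstack([properties, fingerprints])            # combined descriptor + fingerprint features
y = (properties[:, 0] + fingerprints[:, 0] > 0.5).astype(int)  # synthetic BBB+/BBB- labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:800], y[:800])
print("held-out accuracy:", clf.score(X[800:], y[800:]))
```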

  4. Improved prediction of breast cancer outcome by identifying heterogeneous biomarkers.

    Science.gov (United States)

    Choi, Jonghwan; Park, Sanghyun; Yoon, Youngmi; Ahn, Jaegyoon

    2017-11-15

    Identification of genes that can be used to predict prognosis in patients with cancer is important in that it can lead to improved therapy and can also promote our understanding of tumor progression at the molecular level. One of the common but fundamental problems that render identification of prognostic genes and prediction of cancer outcomes difficult is the heterogeneity of patient samples. To reduce the effect of sample heterogeneity, we clustered data samples using the K-means algorithm and applied a modified PageRank to functional interaction (FI) networks weighted using the gene expression values of the samples in each cluster. Hub genes among the resulting prioritized genes were selected as biomarkers to predict the prognosis of samples. This process outperformed traditional feature selection methods as well as several network-based prognostic gene selection methods when applied to Random Forest. We were able to find many cluster-specific prognostic genes for each dataset. Functional study showed that distinct biological processes were enriched in each cluster, which seems to reflect different aspects of tumor progression or oncogenesis among distinct patient groups. Taken together, these results support the hypothesis that our approach can effectively identify heterogeneous prognostic genes that are complementary to each other, improving prediction accuracy. Availability: https://github.com/mathcom/CPR. Contact: jgahn@inu.ac.kr. Supplementary data are available at Bioinformatics online.

  5. Accuracy of the Timed Up and Go test for predicting sarcopenia in elderly hospitalized patients.

    Science.gov (United States)

    Martinez, Bruno Prata; Gomes, Isabela Barboza; Oliveira, Carolina Santana de; Ramos, Isis Resende; Rocha, Mônica Diniz Marques; Forgiarini Júnior, Luiz Alberto; Camelier, Fernanda Warken Rosa; Camelier, Aquiles Assunção

    2015-05-01

    The ability of the Timed Up and Go test to predict sarcopenia has not been evaluated previously. The objective of this study was to evaluate the accuracy of the Timed Up and Go test for predicting sarcopenia in elderly hospitalized patients. This cross-sectional study analyzed 68 elderly patients (≥60 years of age) in a private hospital in the city of Salvador-BA, Brazil, between the 1st and 5th day of hospitalization. The predictive variable was the Timed Up and Go test score, and the outcome of interest was the presence of sarcopenia (reduced muscle mass associated with a reduction in handgrip strength and/or weak physical performance in a 6-m gait-speed test). After the descriptive data analyses, the sensitivity, specificity and accuracy of a test using the predictive variable to predict the presence of sarcopenia were calculated. In total, 68 elderly individuals, with a mean age of 70.4±7.7 years, were evaluated. The subjects had a Charlson Comorbidity Index score of 5.35±1.97. Most (64.7%) of the subjects had a clinical admission profile; the main reasons for hospitalization were cardiovascular disorders (22.1%), pneumonia (19.1%) and abdominal disorders (10.2%). The frequency of sarcopenia in the sample was 22.1%, and the mean time taken to perform the Timed Up and Go test was 10.02±5.38 s. A time longer than or equal to a cutoff of 10.85 s on the Timed Up and Go test predicted sarcopenia with a sensitivity of 67% and a specificity of 88.7%. The accuracy of this cutoff for the Timed Up and Go test was good (0.80; CI=0.66-0.94; p=0.002). The Timed Up and Go test was shown to be a predictor of sarcopenia in elderly hospitalized patients.

  6. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    Science.gov (United States)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and polar motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results show that the proposed method provides better accuracy at different prediction lengths.

  7. Evaluation of different operational strategies for lithium ion battery systems connected to a wind turbine for primary frequency regulation and wind power forecast accuracy improvement

    Energy Technology Data Exchange (ETDEWEB)

    Swierczynski, Maciej; Stroe, Daniel Ioan; Stan, Ana Irina; Teodorescu, Remus; Andreasen, Soeren Juhl [Aalborg Univ. (Denmark). Dept. of Energy Technology

    2012-07-01

    High penetration levels of variable wind energy sources can cause problems with their grid integration. Energy storage systems connected to wind turbines/wind power plants can improve the predictability of wind power production and provide ancillary services to the grid. This paper investigates the economics of different operational strategies for Li-ion battery systems connected to wind turbines for wind power forecast accuracy improvement and primary frequency regulation. (orig.)

  8. Improving the accuracy of flood forecasting with transpositions of ensemble NWP rainfall fields considering orographic effects

    Science.gov (United States)

    Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei

    2016-08-01

    The use of meteorological ensembles to produce sets of hydrological predictions has increased the capability to issue flood warnings. However, the spatial scale of the hydrological domain is still much finer than that of meteorological models, and NWP models have challenges with displacement. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to suggest a post-processing ensemble flood forecasting method for the real-time updating and accuracy improvement of flood forecasts that considers the separation of orographic rainfall and the correction of misplaced rain distributions using additional ensemble information obtained through the transposition of rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. The non-orographic rainfall fields are then examined by the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposition results of the non-orographic rain fields with the separated orographic rainfall fields to generate place-corrected ensemble information. The additional ensemble information is then applied to a hydrologic model for post-flood forecasting at a 6-h interval. The newly proposed method has a clear advantage in improving the accuracy of the mean value of ensemble flood forecasts. Our study was carried out and verified using the largest flood event, caused by typhoon Talas in 2011, over two catchments, the Futatsuno (356.1 km2) and Nanairo (182.1 km2) dam catchments of the Shingu river basin (2360 km2), located in the Kii peninsula, Japan.

  9. Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction.

    Science.gov (United States)

    Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian; Deane, Charlotte M

    2017-05-01

    Loops are often vital for protein function; however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations, and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. Contact: deane@stats.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  10. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    Science.gov (United States)

    Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states, including more parameters such as real-time tropospheric biases, and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS) as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open-source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented, and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays, resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  11. General Theory versus ENA Theory: Comparing Their Predictive Accuracy and Scope.

    Science.gov (United States)

    Ellis, Lee; Hoskin, Anthony; Hartley, Richard; Walsh, Anthony; Widmayer, Alan; Ratnasingam, Malini

    2015-12-01

    General theory attributes criminal behavior primarily to low self-control, whereas evolutionary neuroandrogenic (ENA) theory envisions criminality as being a crude form of status-striving promoted by high brain exposure to androgens. General theory predicts that self-control will be negatively correlated with risk-taking, while ENA theory implies that these two variables should actually be positively correlated. According to ENA theory, traits such as pain tolerance and muscularity will be positively associated with risk-taking and criminality while general theory makes no predictions concerning these relationships. Data from Malaysia and the United States are used to test 10 hypotheses derived from one or both of these theories. As predicted by both theories, risk-taking was positively correlated with criminality in both countries. However, contrary to general theory and consistent with ENA theory, the correlation between self-control and risk-taking was positive in both countries. General theory's prediction of an inverse correlation between low self-control and criminality was largely supported by the U.S. data but only weakly supported by the Malaysian data. ENA theory's predictions of positive correlations between pain tolerance, muscularity, and offending were largely confirmed. For the 10 hypotheses tested, ENA theory surpassed general theory in predictive scope and accuracy.

  12. The Improvement of Behavior Recognition Accuracy of Micro Inertial Accelerometer by Secondary Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2014-05-01

    Behaviors such as “still”, “walking”, “running”, “jumping”, “upstairs” and “downstairs” can be recognized using a low-cost micro inertial accelerometer. By using the extracted features as inputs to a well-trained BP artificial neural network selected as the classifier, these behaviors can be recognized. However, experimental results show that the recognition accuracy of the network alone is not satisfactory. This paper presents a secondary recognition algorithm and combines it with the BP artificial neural network to improve the recognition accuracy. The algorithm was verified on the Android mobile platform, and the recognition accuracy was improved by more than 8%. Extensive testing and statistical analysis show that the recognition accuracy can reach 95% through the BP artificial neural network combined with secondary recognition, which is a reasonably good result from a practical point of view.
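
    One plausible reading of the two-stage scheme is sketched below: a first network classifies all six behaviors, and a second classifier, trained only on a commonly confused subset, re-examines the samples the first stage assigns there. The features, labels, and choice of confused pair are simulated assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(1200, 10))        # simulated windowed accelerometer features
y = rng.integers(0, 6, size=1200)      # 6 behavior classes (still, walking, ...)

stage1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
stage1.fit(X[:1000], y[:1000])

confused = [4, 5]                      # hypothetically: "upstairs" vs. "downstairs"
mask = np.isin(y[:1000], confused)
stage2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
stage2.fit(X[:1000][mask], y[:1000][mask])

pred = stage1.predict(X[1000:])
redo = np.isin(pred, confused)         # secondary recognition pass
if redo.any():
    pred[redo] = stage2.predict(X[1000:][redo])
print("accuracy:", float(np.mean(pred == y[1000:])))
```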

  13. Improvement of the accuracy of phase observation by modification of phase-shifting electron holography

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Takahiro; Aizawa, Shinji; Tanigaki, Toshiaki [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Ota, Keishin, E-mail: ota@microphase.co.jp [Microphase Co., Ltd., Onigakubo 1147-9, Tsukuba, Ibaragi 300-2651 (Japan); Matsuda, Tsuyoshi [Japan Science and Technology Agency, Kawaguchi-shi, Saitama 332-0012 (Japan); Tonomura, Akira [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Okinawa Institute of Science and Technology, Graduate University, Kunigami, Okinawa 904-0495 (Japan); Central Research Laboratory, Hitachi, Ltd., Hatoyama, Saitama 350-0395 (Japan)

    2012-07-15

    We found that the accuracy of the phase observation in phase-shifting electron holography is strongly restricted by time variations of the mean intensity and contrast of the holograms. A modified method was developed for correcting these variations. Experimental results demonstrated that the modification enabled us to acquire a large number of holograms and, as a result, the accuracy of the phase observation was improved by a factor of 5. Highlights: ▸ A modified phase-shifting electron holography was proposed. ▸ The time variations of mean intensity and contrast of holograms were corrected. ▸ These corrections lead to a great improvement of the resultant phase accuracy. ▸ A phase accuracy of about 1/4000 rad was achieved from experimental results.
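
    The abstract only states that time variations of mean intensity and contrast were corrected before combining many holograms. A minimal numpy sketch of that idea follows; the normalization formula and the classic four-step phase extraction are textbook forms, not the authors' exact procedure:

    ```python
    import numpy as np

    def normalize_hologram(holo, ref_mean, ref_contrast):
        """Rescale a hologram so its mean intensity and fringe contrast match
        reference values before it enters the phase-shifting reconstruction."""
        mean = holo.mean()
        contrast = (holo.max() - holo.min()) / (holo.max() + holo.min())
        return (holo - mean) * (ref_contrast / contrast) + ref_mean

    def phase_from_four_steps(i0, i1, i2, i3):
        """Standard four-step phase-shifting formula (shifts of 0, pi/2, pi, 3*pi/2)."""
        return np.arctan2(i3 - i1, i0 - i2)
    ```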

  14. A fuzzy set approach to assess the predictive accuracy of land use simulations

    NARCIS (Netherlands)

    van Vliet, J.; Hagen-Zanker, A.; Hurkens, J.; van Delden, H.

    2013-01-01

    The predictive accuracy of land use models is frequently assessed by comparing two data sets: the simulated land use map and the observed land use map at the end of the simulation period. A common statistic for this is Kappa, which expresses the agreement between two categorical maps, corrected for the agreement expected by chance.
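
    For reference, Kappa can be computed directly from the two maps; a minimal sketch (function and variable names are illustrative):

    ```python
    import numpy as np

    def kappa(map_a, map_b):
        """Cohen's Kappa between two categorical maps: observed agreement
        corrected for the agreement expected by chance."""
        a = np.asarray(map_a).ravel()
        b = np.asarray(map_b).ravel()
        cats = np.union1d(a, b)
        p_obs = np.mean(a == b)
        # Chance agreement from the marginal class fractions of each map.
        p_exp = sum(np.mean(a == c) * np.mean(b == c) for c in cats)
        return (p_obs - p_exp) / (1.0 - p_exp)
    ```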

  15. Accuracy in Orbital Propagation: A Comparison of Predictive Software Models

    Science.gov (United States)

    2017-06-01

    Since their first operational use, both propagators have incorporated updated theory and mathematical refinements, building on a foundation in similar theory. Propagators should therefore utilize the most current TLE data available to avoid accuracy errors. Subject terms: orbital mechanics.

  16. Better estimation of protein-DNA interaction parameters improve prediction of functional sites

    Directory of Open Access Journals (Sweden)

    O'Flanagan Ruadhan A

    2008-12-01

    Full Text Available Abstract Background Characterizing transcription factor binding motifs is a common bioinformatics task. For transcription factors with variable binding sites, we need many suboptimal binding sites in our training dataset to get accurate estimates of the free energy penalties for deviating from the consensus DNA sequence. One procedure to do that involves a modified SELEX (Systematic Evolution of Ligands by Exponential Enrichment) method designed to produce many such sequences. Results We analyzed low stringency SELEX data for E. coli Catabolic Activator Protein (CAP), and we show here that appropriate quantitative analysis improves our ability to predict in vitro affinity. To obtain the large number of sequences required for this analysis we used a SELEX SAGE protocol developed by Roulet et al. The sequences obtained were subjected to bioinformatic analysis. The resulting bioinformatic model characterizes the sequence specificity of the protein more accurately than previous analyses based on only a few known binding sites from the literature. The consequences of this increase in accuracy for the prediction of in vivo binding sites (and especially functional ones) in the E. coli genome are also discussed. We measured the dissociation constants of several putative CAP binding sites by EMSA (Electrophoretic Mobility Shift Assay) and compared the affinities to the bioinformatics scores provided by methods like the weight matrix method and QPMEME (Quadratic Programming Method of Energy Matrix Estimation) trained on known binding sites as well as on the new sites from SELEX SAGE data. We also checked predicted genome sites for conservation in the related species S. typhimurium. We found that bioinformatics scores based on SELEX SAGE data do better in terms of predicting physical binding energies as well as in detecting functional sites. Conclusion We think that training binding site detection
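
    A weight matrix model of the kind compared in this record scores a candidate site by summing one weight per position and base; a minimal sketch (matrix values and names are illustrative, not the CAP model from the paper):

    ```python
    import numpy as np

    BASES = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

    def score_sites(sequence, weight_matrix):
        """Slide a position weight (or energy) matrix over a DNA sequence and
        return the score of every candidate site. weight_matrix has shape
        (site_length, 4), one column per base A, C, G, T."""
        L = weight_matrix.shape[0]
        idx = np.array([BASES[b] for b in sequence.upper()])
        scores = [weight_matrix[np.arange(L), idx[i:i + L]].sum()
                  for i in range(len(idx) - L + 1)]
        return np.array(scores)

    # Example with a random 10-bp matrix (illustrative only).
    wm = np.random.default_rng(1).normal(size=(10, 4))
    print(score_sites("ACGTGCATTTGACGTAAGGC", wm).round(2))
    ```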

  17. Algorithm 589. SICEDR: a FORTRAN subroutine for improving the accuracy of computed matrix eigenvalues

    International Nuclear Information System (INIS)

    Dongarra, J.J.

    1982-01-01

    SICEDR is a FORTRAN subroutine for improving the accuracy of a computed real eigenvalue and improving or computing the associated eigenvector. It is first used to generate information during the determination of the eigenvalues by the Schur decomposition technique. In particular, the Schur decomposition technique results in an orthogonal matrix Q and an upper quasi-triangular matrix T, such that A = QTQ^T. Matrices A, Q, and T and the approximate eigenvalue, say lambda, are then used in the improvement phase. SICEDR uses an iterative method similar to iterative improvement for linear systems to improve the accuracy of lambda and improve or compute the eigenvector x in O(n^2) work, where n is the order of the matrix A.
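
    SICEDR itself works from the Schur factors to reach O(n^2) cost, but the flavor of eigenpair refinement can be illustrated with a Rayleigh-quotient-style iteration. The sketch below is a generic illustration, not the published algorithm:

    ```python
    import numpy as np

    def refine_eigenpair(A, lam, x, iters=5):
        """Refine an approximate real eigenpair (lam, x) of A by alternating
        a shifted solve (inverse-iteration step) with a Rayleigh quotient
        update. A tiny diagonal jitter guards the nearly singular solve."""
        n = A.shape[0]
        x = x / np.linalg.norm(x)
        for _ in range(iters):
            y = np.linalg.solve(A - lam * np.eye(n) + 1e-14 * np.eye(n), x)
            x = y / np.linalg.norm(y)
            lam = x @ A @ x          # updated Rayleigh quotient
        return lam, x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    print(refine_eigenpair(A, 3.5, np.array([0.5, 1.0])))   # -> eigenvalue ~3.618
    ```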

  18. Best Practices for Mudweight Window Generation and Accuracy Assessment between Seismic Based Pore Pressure Prediction Methodologies for a Near-Salt Field in Mississippi Canyon, Gulf of Mexico

    Science.gov (United States)

    Mannon, Timothy Patrick, Jr.

    Improving well design has always been, and always will be, the primary goal in drilling operations in the oil and gas industry. Oil and gas plays are continuing to move into increasingly hostile drilling environments, including near- and/or sub-salt proximities. The ability to reduce the risk and uncertainty involved in drilling operations in unconventional geologic settings starts with improving the techniques for mudweight window modeling. To address this issue, an analysis of wellbore stability and well design improvement has been conducted. This study shows a systematic approach to well design by focusing on best practices for mudweight window projection for a field in Mississippi Canyon, Gulf of Mexico. The field includes depleted reservoirs and is in close proximity to salt intrusions. Analysis of offset wells has been conducted in the interest of developing an accurate picture of the subsurface environment by making connections between depth, non-productive time (NPT) events, and mudweights used. Commonly practiced petrophysical methods of pore pressure, fracture pressure, and shear failure gradient prediction have been applied to key offset wells in order to enhance the well design for two proposed wells. For the first time in the literature, the commonly accepted seismic-interval-velocity-based methodology and the relatively new seismic-frequency-based methodology for pore pressure prediction are qualitatively and quantitatively compared for accuracy. Accuracy standards are based on the agreement of the seismic outputs with pressure data obtained while drilling and with petrophysically based pore pressure outputs for each well. The results show significantly higher accuracy for the seismic-frequency-based approach in wells in near/sub-salt environments and higher overall accuracy for all of the wells in the study as a whole.

  19. Improvement on the accuracy of beam bugs in linear induction accelerator

    International Nuclear Information System (INIS)

    Xie Yutong; Dai Zhiyong; Han Qing

    2002-01-01

    In linear induction accelerators the resistive wall monitors known as 'beam bugs' have been used as essential diagnostics of beam current and location. The authors present a new method that can improve the accuracy of these beam bugs when used for beam position measurements. With a fine beam simulation set, this method locates the beam position with an accuracy of 0.02 mm and thus can calibrate the beam bugs very well. Experimental results prove that the precision of beam position measurements can reach the submillimeter level.

  20. Training readers to improve their accuracy in grading Crohn's disease activity on MRI

    International Nuclear Information System (INIS)

    Tielbeek, Jeroen A.W.; Bipat, Shandra; Boellaard, Thierry N.; Nio, C.Y.; Stoker, Jaap

    2014-01-01

    To prospectively evaluate if training with direct feedback improves grading accuracy of inexperienced readers for Crohn's disease activity on magnetic resonance imaging (MRI). Thirty-one inexperienced readers assessed 25 cases as a baseline set. Subsequently, all readers received training and assessed 100 cases with direct feedback per case, randomly assigned to four sets of 25 cases. The cases in set 4 were identical to the baseline set. Grading accuracy, understaging, overstaging, mean reading times and confidence scores (scale 0-10) were compared between baseline and set 4, and between the four consecutive sets with feedback. Proportions of grading accuracy, understaging and overstaging per set were compared using logistic regression analyses. Mean reading times and confidence scores were compared by t-tests. Grading accuracy increased from 66 % (95 % CI, 56-74 %) at baseline to 75 % (95 % CI, 66-81 %) in set 4 (P = 0.003). Understaging decreased from 15 % (95 % CI, 9-23 %) to 7 % (95 % CI, 3-14 %) (P < 0.001). Overstaging did not change significantly (20 % vs 19 %). Mean reading time decreased from 6 min 37 s to 4 min 35 s (P < 0.001). Mean confidence increased from 6.90 to 7.65 (P < 0.001). During training, overall grading accuracy, understaging, mean reading times and confidence scores improved gradually. Inexperienced readers need training with at least 100 cases to achieve the literature reported grading accuracy of 75 %.

  1. Improving Accuracy of Intrusion Detection Model Using PCA and optimized SVM

    Directory of Open Access Journals (Sweden)

    Sumaiya Thaseen Ikram

    2016-06-01

    Full Text Available Intrusion detection is very essential for providing security to different network domains and is mostly used for locating and tracing intruders. There are many problems with traditional intrusion detection models (IDS), such as low detection capability against unknown network attacks, a high false alarm rate and insufficient analysis capability. Hence the major scope of research in this domain is to develop an intrusion detection model with improved accuracy and reduced training time. This paper proposes a hybrid intrusion detection model integrating principal component analysis (PCA) and the support vector machine (SVM). The novelty of the paper is the optimization of the kernel parameters of the SVM classifier using an automatic parameter selection technique. This technique optimizes the punishment factor (C) and kernel parameter gamma (γ), thereby improving the accuracy of the classifier and reducing the training and testing time. The experimental results obtained on the NSL-KDD and gurekddcup datasets show that the proposed technique performs better, with higher accuracy, faster convergence speed and better generalization. Minimum resources are consumed as the classifier input requires a reduced feature set for optimum classification. A comparative analysis of hybrid models with the proposed model is also performed.
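
    In scikit-learn, a PCA-plus-SVM pipeline with automatic selection of C and γ maps naturally onto a grid search; a minimal sketch on synthetic data (the feature dimensions, parameter grid and component count are stand-ins for the NSL-KDD setup, not the paper's values):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    pipe = Pipeline([("scale", StandardScaler()),
                     ("pca", PCA(n_components=10)),      # reduced feature set
                     ("svm", SVC(kernel="rbf"))])
    # Automatic selection of the punishment factor C and kernel parameter gamma.
    grid = GridSearchCV(pipe, {"svm__C": [1, 10, 100],
                               "svm__gamma": [1e-3, 1e-2, 1e-1]}, cv=3)
    grid.fit(X_tr, y_tr)
    print(grid.best_params_, grid.score(X_te, y_te))
    ```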

  2. Use of in situ volumetric water content at field capacity to improve prediction of soil water retention properties

    OpenAIRE

    Al Majou , Hassan; Bruand , Ary; Duval , Odile

    2008-01-01

    Most pedotransfer functions (PTFs) developed over the last three decades to generate water retention characteristics use soil texture, bulk density and organic carbon content as predictors. Despite the high number of PTFs published, most being class- or continuous-PTFs, the accuracy of prediction remains limited. In this study, we compared the performance ...

  3. Accuracy of serum uric acid as a predictive test for maternal complications in pre-eclampsia: Bivariate meta-analysis and decision analysis

    NARCIS (Netherlands)

    Koopmans, Corine M.; van Pampus, Maria G.; Groen, Henk; Aarnoudse, Jan G.; van den Berg, Paul P.; Mol, Ben W. J.

    2009-01-01

    The aim of this study is to determine the accuracy and clinical value of serum uric acid in predicting maternal complications in women with pre-eclampsia. An existing meta-analysis on the subject was updated. The accuracy of serum uric acid for the prediction of maternal complications was assessed

  4. Accuracy of serum uric acid as a predictive test for maternal complications in pre-eclampsia: Bivariate meta-analysis and decision analysis

    NARCIS (Netherlands)

    Koopmans, C.M.; van Pampus, Maria; Groen, H.; Aarnoudse, J.G.; van den Berg, P.P.; Mol, B.W.J.

    The aim of this study is to determine the accuracy and clinical value of serum uric acid in predicting maternal complications in women with pre-eclampsia. An existing meta-analysis on the subject was updated. The accuracy of serum uric acid for the prediction of maternal complications was assessed

  5. Evaluating the predictive accuracy and the clinical benefit of a nomogram aimed to predict survival in node-positive prostate cancer patients: External validation on a multi-institutional database.

    Science.gov (United States)

    Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio

    2018-04-06

    To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection, with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using the regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the curve of the receiver operating characteristic curve, and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis in order to quantify the nomogram's clinical value in routine practice. External validation showed inferior predictive accuracy compared with the internal validation (65.8% vs 83.3%, respectively). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) when tested with receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration characteristics compared to those reported in the original population. However, decision curve analysis showed a clinical net benefit, suggesting a role in the correct management of pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.
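
    The decision curve analysis used here rests on a simple quantity, the net benefit at a chosen risk threshold; a minimal sketch of the standard formula (function and variable names are illustrative):

    ```python
    import numpy as np

    def net_benefit(y_true, risk, threshold):
        """Decision-curve net benefit of treating patients whose predicted
        risk exceeds `threshold`: true positives per patient minus false
        positives weighted by the odds of the threshold."""
        y = np.asarray(y_true)
        treat = np.asarray(risk) >= threshold
        n = y.size
        tp = np.sum(treat & (y == 1)) / n
        fp = np.sum(treat & (y == 0)) / n
        return tp - fp * threshold / (1.0 - threshold)

    # Reference strategies: net_benefit(y, np.ones(len(y)), t) is "treat all";
    # "treat none" has net benefit 0 by definition.
    ```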

  6. Investigation of the interpolation method to improve the distributed strain measurement accuracy in optical frequency domain reflectometry systems.

    Science.gov (United States)

    Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang

    2018-02-20

    We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of the peak position of the cross-correlation and, therefore, the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. A strain of 3 με within a spatial resolution of 1 cm at a position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 με.
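
    The winning interpolation pads zeros onto one side of the windowed spatial-domain data before the transform. The sketch below illustrates the idea with a forward FFT for simplicity (pad factor, signal and names are illustrative, not the authors' processing chain), showing how zero-padding refines a spectral-peak estimate to sub-bin accuracy:

    ```python
    import numpy as np

    def interpolated_spectrum_peak(windowed, pad_factor=8):
        """Pad the windowed data with zeros before the FFT so the spectrum is
        sampled on a grid `pad_factor` times finer, sharpening the estimate
        of the peak position."""
        n = windowed.size
        padded = np.concatenate([windowed, np.zeros(n * (pad_factor - 1))])
        spectrum = np.abs(np.fft.fft(padded))
        k = spectrum[: padded.size // 2].argmax()
        return k / pad_factor          # peak position on the original bin grid

    # A tone at bin 12.3 of a 64-sample window is recovered to sub-bin accuracy.
    x = np.cos(2 * np.pi * 12.3 * np.arange(64) / 64)
    print(interpolated_spectrum_peak(x))   # ~12.3
    ```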

  7. Improvement of injury severity prediction (ISP) of AACN during on-site triage using vehicle deformation pattern for car-to-car (C2C) side impacts.

    Science.gov (United States)

    Pal, Chinmoy; Hirayama, Shigeru; Narahari, Sangolla; Jeyabharath, Manoharan; Prakash, Gopinath; Kulothungan, Vimalathithan; Combest, John

    2018-02-28

    The Advanced Automatic Crash Notification (AACN) system needs to predict injury accurately, to provide appropriate treatment for seriously injured occupants involved in motor vehicle crashes. This study investigates the possibility of improving the accuracy of the AACN system, using vehicle deformation parameters in car-to-car (C2C) side impacts. This study was based on car-to-car (C2C) crash data from NASS-CDS, CY 2004-2014. Variables from Kononen's algorithm (published in 2011) were used to build a "base model" for this study. Two additional variables, intrusion magnitude and max deformation location, are added to Kononen's algorithm variables (age, belt usage, number of events, and delta-v) to build a "proposed model." This proposed model operates in two stages: In the first stage, the AACN system uses Kononen's variables and predicts injury severity, based on which emergency medical services (EMS) is dispatched; in the second stage, the EMS team conveys deformation-related information, for accurate prediction of serious injury. Logistic regression analysis reveals that the vehicle deformation location and intrusion magnitude are significant parameters in predicting the level of injury. The percentage of serious injury decreases as the deformation location shifts away from the driver sitting position. The proposed model can improve the sensitivity (serious injury correctly predicted as serious) from 50% to 63%, and overall prediction accuracy increased from 83.5% to 85.9%. The proposed method can improve the accuracy of injury prediction in side-impact collisions. Similar opportunities exist for other crash modes also.

  8. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    Science.gov (United States)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
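
    The core of the method is the unscented transform: propagate 2n+1 deterministically chosen sigma points through the nonlinear EOL simulation and recover the output mean and covariance from weighted sums. A minimal sketch with a toy nonlinear function standing in for the EOL simulation (the parameters α, β, κ are conventional defaults, not the paper's choices):

    ```python
    import numpy as np

    def unscented_transform(mean, cov, f, alpha=0.1, beta=2.0, kappa=0.0):
        """Approximate the mean and covariance of f(x) for x ~ N(mean, cov)
        using 2n+1 sigma points (scaled unscented transform)."""
        n = mean.size
        lam = alpha**2 * (n + kappa) - n
        L = np.linalg.cholesky((n + lam) * cov)        # scaled square root of cov
        sigma = np.vstack([mean, mean + L.T, mean - L.T])
        wm = np.full(2 * n + 1, 0.5 / (n + lam))       # mean weights
        wc = wm.copy()                                 # covariance weights
        wm[0] = lam / (n + lam)
        wc[0] = wm[0] + (1 - alpha**2 + beta)
        Y = np.array([np.atleast_1d(f(s)) for s in sigma])
        y_mean = wm @ Y
        d = Y - y_mean
        return y_mean, d.T @ (wc[:, None] * d)

    # Toy stand-in for an end-of-life simulation.
    f = lambda x: np.array([x[0] ** 2 + np.sin(x[1])])
    m, c = unscented_transform(np.array([1.0, 0.5]), np.diag([0.04, 0.01]), f)
    print(m, c)
    ```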

  9. Contrast-enhanced spectral mammography improves diagnostic accuracy in the symptomatic setting.

    Science.gov (United States)

    Tennant, S L; James, J J; Cornford, E J; Chen, Y; Burrell, H C; Hamilton, L J; Girio-Fragkoulakis, C

    2016-11-01

    To assess the diagnostic accuracy of contrast-enhanced spectral mammography (CESM), and gauge its "added value" in the symptomatic setting. A retrospective multi-reader review of 100 consecutive CESM examinations was performed. Anonymised low-energy (LE) images were reviewed and given a score for malignancy. At least 3 weeks later, the entire examination (LE and recombined images) was reviewed. Histopathology data were obtained for all cases. Differences in performance were assessed using receiver operator characteristic (ROC) analysis. Sensitivity, specificity, and lesion size (versus MRI or histopathology) differences were calculated. Seventy-three percent of cases were malignant at final histology, 27% were benign following standard triple assessment. ROC analysis showed improved overall performance of CESM over LE alone, with area under the curve of 0.93 versus 0.83 (p<0.025). CESM showed increased sensitivity (95% versus 84%, p<0.025) and specificity (81% versus 63%, p<0.025) compared to LE alone, with all five readers showing improved accuracy. Tumour size estimation at CESM was significantly more accurate than LE alone, the latter tending to undersize lesions. In 75% of cases, CESM was deemed a useful or significant aid to diagnosis. CESM provides immediately available, clinically useful information in the symptomatic clinic in patients with suspicious palpable abnormalities. Radiologist sensitivity, specificity, and size accuracy for breast cancer detection and staging are all improved using CESM as the primary mammographic investigation. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  10. Breast calcifications. A standardized mammographic reporting and data system to improve positive predictive value

    International Nuclear Information System (INIS)

    Perugini, G.; Bonzanini, B.; Valentino, C.

    1999-01-01

    The purpose of this work is to investigate the usefulness of a standardized reporting and data system in improving the positive predictive value of mammography in breast calcifications. Using the Breast Imaging Reporting and Data System lexicon developed by the American College of Radiology, five descriptive categories of breast calcifications were defined and the diagnostic suspicion of malignancy was classified on a 3-grade scale (low, intermediate and high). Two radiologists reviewed 117 mammographic studies selected from those of patients submitted to surgical biopsy for mammographically detected calcifications from January 1993 to December 1997, and classified them according to the above criteria. The positive predictive value was calculated for all examinations and for the stratified groups. Defining a standardized system for assessing and describing breast calcifications helps improve the diagnostic accuracy of mammography in clinical practice.

  11. Atomic-accuracy prediction of protein loop structures through an RNA-inspired Ansatz.

    Directory of Open Access Journals (Sweden)

    Rhiju Das

    Full Text Available Consistently predicting biopolymer structure at atomic resolution from sequence alone remains a difficult problem, even for small sub-segments of large proteins. Such loop prediction challenges, which arise frequently in comparative modeling and protein design, can become intractable as loop lengths exceed 10 residues and if surrounding side-chain conformations are erased. Current approaches, such as the protein local optimization protocol or kinematic inversion closure (KIC) Monte Carlo, involve stages that coarse-grain proteins, simplifying modeling but precluding a systematic search of all-atom configurations. This article introduces an alternative modeling strategy based on a 'stepwise ansatz', recently developed for RNA modeling, which posits that any realistic all-atom molecular conformation can be built up by residue-by-residue stepwise enumeration. When harnessed to a dynamic-programming-like recursion in the Rosetta framework, the resulting stepwise assembly (SWA) protocol enables enumerative sampling of a 12-residue loop at a significant but achievable cost of thousands of CPU-hours. In a previously established benchmark, SWA recovers crystallographic conformations with sub-Angstrom accuracy for 19 of 20 loops, compared to 14 of 20 by KIC modeling with a comparable expenditure of computational power. Furthermore, SWA gives high-accuracy results on an additional set of 15 loops highlighted in the biological literature for their irregularity or unusual length. Successes include cis-Pro touch turns, loops that pass through tunnels of other side-chains, and loops of lengths up to 24 residues. Remaining problem cases are traced to inaccuracies in the Rosetta all-atom energy function. In five additional blind tests, SWA achieves sub-Angstrom accuracy models, including the first such success in a protein/RNA binding interface, the YbxF/kink-turn interaction in the fourth 'RNA-puzzle' competition. These results establish all-atom enumeration as

  12. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    Directory of Open Access Journals (Sweden)

    Qingzhou Mao

    2015-06-01

    Full Text Available In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. This method is also applied to the calibration of misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted in order to evaluate the effectiveness of the proposed method. The results from experiments indicate that this method represents a significant improvement in the accuracy of the MLS in environments that are hostile to GNSS and is also effective for the calibration of misalignment.

  13. Improving a two-equation eddy-viscosity turbulence model to predict the aerodynamic performance of thick wind turbine airfoils

    Science.gov (United States)

    Bangga, Galih; Kusumadewi, Tri; Hutomo, Go; Sabila, Ahmad; Syawitri, Taurista; Setiadi, Herlambang; Faisal, Muhamad; Wiranegara, Raditya; Hendranata, Yongki; Lastomo, Dwi; Putra, Louis; Kristiadi, Stefanus

    2018-03-01

    Numerical simulations for relatively thick airfoils are carried out in the present studies. An attempt to improve the accuracy of the numerical predictions is made by adjusting the turbulent viscosity of the eddy-viscosity Menter Shear-Stress-Transport (SST) model. The modification involves the addition of a damping factor on the wall-bounded flows, incorporating the ratio of the turbulent kinetic energy to its specific dissipation rate for separation detection. The results are compared with available experimental data and CFD simulations using the original Menter SST model. The present model improves the lift polar prediction even though the stall angle is still overestimated. The improvement is caused by the better prediction of separated flow under a strong adverse pressure gradient. The results show that the Reynolds stresses are damped near the wall, causing variation of the logarithmic velocity profiles.

  14. Accuracy of the Timed Up and Go test for predicting sarcopenia in elderly hospitalized patients

    Directory of Open Access Journals (Sweden)

    Bruno Prata Martinez

    2015-05-01

    Full Text Available OBJECTIVES: The ability of the Timed Up and Go test to predict sarcopenia has not been evaluated previously. The objective of this study was to evaluate the accuracy of the Timed Up and Go test for predicting sarcopenia in elderly hospitalized patients. METHODS: This cross-sectional study analyzed 68 elderly patients (≥60 years of age) in a private hospital in the city of Salvador-BA, Brazil, between the 1st and 5th day of hospitalization. The predictive variable was the Timed Up and Go test score, and the outcome of interest was the presence of sarcopenia (reduced muscle mass associated with a reduction in handgrip strength and/or weak physical performance in a 6-m gait-speed test). After the descriptive data analyses, the sensitivity, specificity and accuracy of a test using the predictive variable to predict the presence of sarcopenia were calculated. RESULTS: In total, 68 elderly individuals, with a mean age of 70.4±7.7 years, were evaluated. The subjects had a Charlson Comorbidity Index score of 5.35±1.97. Most (64.7%) of the subjects had a clinical admission profile; the main reasons for hospitalization were cardiovascular disorders (22.1%), pneumonia (19.1%) and abdominal disorders (10.2%). The frequency of sarcopenia in the sample was 22.1%, and the mean time spent performing the Timed Up and Go test was 10.02±5.38 s. A time longer than or equal to a cutoff of 10.85 s on the Timed Up and Go test predicted sarcopenia with a sensitivity of 67% and a specificity of 88.7%. The accuracy of this cutoff for the Timed Up and Go test was good (0.80; CI=0.66-0.94; p=0.002). CONCLUSION: The Timed Up and Go test was shown to be a predictor of sarcopenia in elderly hospitalized patients.

  15. The accuracy with which adults who do not stutter predict stuttering-related communication attitudes.

    Science.gov (United States)

    Logan, Kenneth J; Willis, Julie R

    2011-12-01

    The purpose of this study was to examine the extent to which adults who do not stutter can predict communication-related attitudes of adults who do stutter. Forty participants (mean age of 22.5 years) evaluated speech samples from an adult with mild stuttering and an adult with severe stuttering via audio-only (n=20) or audio-visual (n=20) modes to predict how the adults had responded on the S24 scale of communication attitudes. Participants correctly predicted which speaker had the more favorable S24 score, and the predicted scores were significantly different between the severity conditions. Across the four subgroups, predicted S24 scores differed from actual scores by 4-9 points. Predicted values were greater than the actual values for 3 of 4 subgroups, but still relatively positive in relation to the S24 norm sample. Stimulus presentation mode interacted with stuttering severity to affect prediction accuracy. The participants predicted the speakers' negative self-attributions more accurately than their positive self-attributions. Findings suggest that adults who do not stutter estimate the communication-related attitudes of specific adults who stutter in a manner that is generally accurate, though, in some conditions, somewhat less favorable than the speakers' actual ratings. At a group level, adults who do not stutter demonstrate the ability to discern minimal versus average levels of attitudinal impact for speakers who stutter. The participants' complex prediction patterns are discussed in relation to stereotype accuracy and classic views of negative stereotyping. The reader will be able to (a) summarize main findings on research related to listeners' attitudes toward people who stutter, (b) describe the extent to which people who do not stutter can predict the communication attitudes of people who do stutter; and (c) discuss how findings from the present study relate to previous findings on stereotypes about people who stutter. Copyright © 2011 Elsevier Inc

  16. Geometric Positioning Accuracy Improvement of ZY-3 Satellite Imagery Based on Statistical Learning Theory

    Directory of Open Access Journals (Sweden)

    Niangang Jiao

    2018-05-01

    Full Text Available With the increasing demand for high-resolution remote sensing images for mapping and monitoring the Earth’s environment, geometric positioning accuracy improvement plays a significant role in the image preprocessing step. Based on statistical learning theory, we propose a new method to improve the geometric positioning accuracy without ground control points (GCPs). Multi-temporal images from the ZY-3 satellite are tested, and the bias-compensated rational function model (RFM) is applied as the block adjustment model in our experiment. An easy and stable weight strategy and the fast iterative shrinkage-thresholding algorithm (FISTA), which is widely used in the field of compressive sensing, are improved and utilized to define the normal equation matrix and solve it. Then, the residual errors after traditional block adjustment are acquired and tested with the newly proposed inherent error compensation model based on statistical learning theory. The final results indicate that the geometric positioning accuracy of ZY-3 satellite imagery can be improved greatly with our proposed method.
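
    For orientation, the fast iterative shrinkage-thresholding algorithm named in the record is sketched below for the standard l1-regularized least-squares problem; this is the generic FISTA iteration, not the authors' improved, weighted variant:

    ```python
    import numpy as np

    def fista_lasso(A, b, lam, iters=200):
        """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        y_k, t = x.copy(), 1.0
        for _ in range(iters):
            grad = A.T @ (A @ y_k - b)
            z = y_k - grad / L                  # gradient step
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y_k = x_new + ((t - 1.0) / t_new) * (x_new - x)            # momentum
            x, t = x_new, t_new
        return x

    # Recover a sparse vector from random measurements (illustrative only).
    A = np.random.default_rng(0).normal(size=(50, 80))
    x_true = np.zeros(80); x_true[3] = 1.0
    print(fista_lasso(A, A @ x_true, lam=0.1).round(2)[:8])
    ```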

  17. Does imprint cytology improve the accuracy of transrectal prostate needle biopsy?

    Science.gov (United States)

    Sayar, Hamide; Bulut, Burak Besir; Bahar, Abdulkadir Yasir; Bahar, Mustafa Remzi; Seringec, Nurten; Resim, Sefa; Çıralık, Harun

    2015-02-01

    To evaluate the accuracy of imprint cytology of core needle biopsy specimens in the diagnosis of prostate cancer. Between December 24, 2011 and May 9, 2013, patients with an abnormal DRE and/or a serum PSA level of >2.5 ng/mL underwent transrectal prostate needle biopsy. Samples with positive imprint cytology but a negative initial histologic exam underwent repeat sectioning and histological examination. A total of 1,262 transrectal prostate needle biopsy specimens from 100 patients were evaluated. Malignant imprint cytology was found in 236 specimens (18.7%), 197 (15.6%) of which were confirmed by histologic examination, giving an initial 3.1% (n = 39) rate of discrepant results by imprint cytology. Upon repeat sectioning and histologic examination of these 39 biopsy samples, 14 (1.1% of the original specimens) were then diagnosed as malignant, 3 (0.2%) as atypical small acinar proliferation (ASAP), and 5 (0.4%) as high-grade prostatic intraepithelial neoplasia (HGPIN). Overall, 964 (76.4%) specimens were negative for malignancy by imprint cytology. Seven (0.6%) specimens were benign by cytology, but malignant cells were found on histological evaluation. On imprint cytology examination, nonmalignant but abnormal findings were seen in 62 specimens (4.9%); these were all due to benign processes. After reexamination, the accuracy, sensitivity, specificity, positive predictive value, negative predictive value, false-positive rate and false-negative rate of imprint preparations were 98.1%, 96.9%, 98.4%, 92.8%, 99.3%, 1.6% and 3.1%, respectively. Imprint cytology is a valuable tool for evaluating TRUS-guided core needle biopsy specimens from the prostate. Use of imprint cytology in combination with histopathology increases diagnostic accuracy when compared with histopathologic assessment alone. © 2014 Wiley Periodicals, Inc.

  18. Continuous electroencephalography predicts delayed cerebral ischemia after subarachnoid hemorrhage: A prospective study of diagnostic accuracy.

    Science.gov (United States)

    Rosenthal, Eric S; Biswal, Siddharth; Zafar, Sahar F; O'Connor, Kathryn L; Bechek, Sophia; Shenoy, Apeksha V; Boyle, Emily J; Shafi, Mouhsin M; Gilmore, Emily J; Foreman, Brandon P; Gaspard, Nicolas; Leslie-Mazwi, Thabele M; Rosand, Jonathan; Hoch, Daniel B; Ayata, Cenk; Cash, Sydney S; Cole, Andrew J; Patel, Aman B; Westover, M Brandon

    2018-04-16

    Delayed cerebral ischemia (DCI) is a common, disabling complication of subarachnoid hemorrhage (SAH). Preventing DCI is a key focus of neurocritical care, but interventions carry risk and cannot be applied indiscriminately. Although retrospective studies have identified continuous electroencephalographic (cEEG) measures associated with DCI, no study has characterized the accuracy of cEEG with sufficient rigor to justify using it to triage patients to interventions or clinical trials. We therefore prospectively assessed the accuracy of cEEG for predicting DCI, following the Standards for Reporting Diagnostic Accuracy Studies. We prospectively performed cEEG in nontraumatic, high-grade SAH patients at a single institution. The index test consisted of clinical neurophysiologists prospectively reporting prespecified EEG alarms: (1) decreasing relative alpha variability, (2) decreasing alpha-delta ratio, (3) worsening focal slowing, or (4) late appearing epileptiform abnormalities. The diagnostic reference standard was DCI determined by blinded, adjudicated review. Primary outcome measures were sensitivity and specificity of cEEG for subsequent DCI, determined by multistate survival analysis, adjusted for baseline risk. One hundred three of 227 consecutive patients were eligible and underwent cEEG monitoring (7.7-day mean duration). EEG alarms occurred in 96.2% of patients with and 19.6% without subsequent DCI (1.9-day median latency, interquartile range = 0.9-4.1). Among alarm subtypes, late onset epileptiform abnormalities had the highest predictive value. Prespecified EEG findings predicted DCI among patients with low (91% sensitivity, 83% specificity) and high (95% sensitivity, 77% specificity) baseline risk. cEEG accurately predicts DCI following SAH and may help target therapies to patients at highest risk of secondary brain injury. Ann Neurol 2018. © 2018 American Neurological Association.

  19. Improved imputation accuracy of rare and low-frequency variants using population-specific high-coverage WGS-based imputation reference panel.

    Science.gov (United States)

    Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit

    2017-06-01

    Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF) ≥5% and low-frequency variants (0.5%≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we evaluate the imputation accuracy achieved with reference panels from diverse populations against that of a population-specific, high-coverage (30×) whole-genome sequencing (WGS) based reference panel comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants were significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.

  20. Prediction of miscarriage in women with viable intrauterine pregnancy-A systematic review and diagnostic accuracy meta-analysis.

    Science.gov (United States)

    Pillai, Rekha N; Konje, Justin C; Richardson, Matthew; Tincello, Douglas G; Potdar, Neelam

    2018-01-01

    Both ultrasound and biochemical markers, either alone or in combination, have been described in the literature for the prediction of miscarriage. We performed this systematic review and meta-analysis to determine the best combination of biochemical, ultrasound and demographic markers to predict miscarriage in women with viable intrauterine pregnancy. The electronic database search included Medline (1946-June 2017), Embase (1980-June 2017), CINAHL (1981-June 2017) and the Cochrane Library. Key MeSH and Boolean terms were used for the search. Data extraction and collection were performed based on the eligibility criteria by two authors independently. Quality assessment of the individual studies was done using QUADAS-2 (Quality Assessment for Diagnostic Accuracy Studies-2: A Revised Tool) and statistical analysis was performed using the Cochrane systematic review manager 5.3 and STATA v13.0. Due to the diversity of the combinations used for prediction in the included papers it was not possible to perform a meta-analysis on combination markers. Therefore, we proceeded to perform a meta-analysis on ultrasound markers alone to determine the best marker that can help to improve the diagnostic accuracy of predicting miscarriage in women with viable intrauterine pregnancy. The systematic review identified 18 eligible studies for the quantitative meta-analysis, with a total of 5584 women. Among the ultrasound scan markers, fetal bradycardia (n=10 studies, n=1762 women) on hierarchical summary receiver operating characteristic analysis showed a sensitivity of 68.41%, specificity of 97.84%, positive likelihood ratio of 31.73 (indicating a large effect on increasing the probability of predicting miscarriage) and negative likelihood ratio of 0.32. In studies of women with threatened miscarriage (n=5 studies, n=771 women) fetal bradycardia showed a further increase in sensitivity (84.18%) for miscarriage prediction. Although there is gestational age dependent variation in the fetal heart rate, a plot
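
    The likelihood ratios quoted above follow directly from sensitivity and specificity; a one-function sketch (names illustrative) reproduces the fetal bradycardia figures:

    ```python
    def likelihood_ratios(sensitivity, specificity):
        """Positive and negative likelihood ratios from test accuracy.
        LR+ above 10 is conventionally read as a large increase in the
        post-test probability of the outcome."""
        lr_pos = sensitivity / (1.0 - specificity)
        lr_neg = (1.0 - sensitivity) / specificity
        return lr_pos, lr_neg

    # Fetal bradycardia figures from the review above:
    print(likelihood_ratios(0.6841, 0.9784))   # ~ (31.7, 0.32)
    ```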

  1. Improvement of Diagnostic Accuracy by Standardization in Diuretic Renal Scan

    International Nuclear Information System (INIS)

    Hyun, In Young; Lee, Dong Soo; Lee, Kyung Han; Chung, June Key; Lee, Myung Chul; Koh, Chang Soon; Kim, Kwang Myung; Choi, Hwang; Choi, Yong

    1995-01-01

    We evaluated the diagnostic accuracy of the diuretic renal scan with standardization in 45 children (107 hydronephrotic kidneys) with 91 diuretic assessments. With standardization, sensitivity was 100%, specificity was 78%, and accuracy was 84% in 49 hydronephrotic kidneys. Without standardization, sensitivity was 100%, specificity was 38%, and accuracy was 57% in 58 hydronephrotic kidneys. False-positive results were observed in 25 cases without standardization and in 8 cases with standardization. In diuretic renal scans without standardization, the causes of false-positive results were early injection of Lasix before mixing of radioactivity in the pelvis (10 cases), extrarenal pelvis (6 cases), and immature kidneys of neonates (3 cases). With standardization, the causes of false-positive results were markedly dilated systems post-pyeloplasty (2 cases), extrarenal pelvis (2 cases), immature kidney of a neonate (1 case), severe renal dysfunction (2 cases), and vesicoureteral reflux (1 case). In diuretic renal scans without standardization, false-positive results due to inadequate studies were common, but such results were not found after standardization. False-positive results due to dilated pelvo-calyceal systems post-pyeloplasty, extrarenal pelvis, and immature kidneys of neonates were not resolved by standardization. In conclusion, standardization of the diuretic renal scan was useful in children with renal outflow tract obstruction by significantly improving specificity.

  2. Prediction of pre-eclampsia: a protocol for systematic reviews of test accuracy

    Directory of Open Access Journals (Sweden)

    Khan Khalid S

    2006-10-01

    Full Text Available Abstract Background Pre-eclampsia, a syndrome of hypertension and proteinuria, is a major cause of maternal and perinatal morbidity and mortality. Accurate prediction of pre-eclampsia is important, since high risk women could benefit from intensive monitoring and preventive treatment. However, decision making is currently hampered due to lack of precise and up to date comprehensive evidence summaries on estimates of risk of developing pre-eclampsia. Methods/Design A series of systematic reviews and meta-analyses will be undertaken to determine, among women in early pregnancy, the accuracy of various tests (history, examinations and investigations for predicting pre-eclampsia. We will search Medline, Embase, Cochrane Library, MEDION, citation lists of review articles and eligible primary articles and will contact experts in the field. Reviewers working independently will select studies, extract data, and assess study validity according to established criteria. Language restrictions will not be applied. Bivariate meta-analysis of sensitivity and specificity will be considered for tests whose studies allow generation of 2 × 2 tables. Discussion The results of the test accuracy reviews will be integrated with results of effectiveness reviews of preventive interventions to assess the impact of test-intervention combinations for prevention of pre-eclampsia.

  3. Improving substructure identification accuracy of shear structures using virtual control system

    Science.gov (United States)

    Zhang, Dongyu; Yang, Yang; Wang, Tingqiang; Li, Hui

    2018-02-01

    Substructure identification is a powerful tool for identifying the parameters of a complex structure. Previously, the authors developed an inductive substructure identification method for shear structures. The identification error analysis showed that the identification accuracy of this method is significantly influenced by the magnitudes of two key structural responses near a certain frequency; if these responses are unfavorable, the method cannot provide accurate estimation results. In this paper, a novel method is proposed to improve the substructure identification accuracy by introducing a virtual control system (VCS) into the structure. A virtual control system is a self-balanced system consisting of control devices and a set of self-balanced forces; the self-balanced forces counterbalance the forces that the control devices apply to the structure. The control devices are combined with the structure to form a controlled structure, which replaces the original structure in the substructure identification, and the self-balanced forces are treated as known external excitations to the controlled structure. By optimally tuning the VCS’s parameters, the dynamic characteristics of the controlled structure can be changed such that the original structural responses become more favorable for the substructure identification and, thus, the identification accuracy is improved. A numerical example of a 6-story shear structure is used to verify the effectiveness of the VCS-based controlled substructure identification method. Finally, shake table tests are conducted on a 3-story structural model to verify the efficacy of the VCS in enhancing the identification accuracy of the structural parameters.

  4. Improvement of vision measurement accuracy using Zernike moment based edge location error compensation model

    International Nuclear Information System (INIS)

    Cui, J W; Tan, J B; Zhou, Y; Zhang, H

    2007-01-01

    This paper presents a Zernike moment based model developed to compensate for edge location errors and further improve vision measurement accuracy by correcting the slight changes resulting from sampling and by establishing mathematical expressions for the subpixel location of theoretical and actual edges that are either vertical to or at an angle with the X-axis. Experimental results show that the proposed model can be used to achieve a vision measurement accuracy of up to 0.08 pixel, while the measurement uncertainty is less than 0.36 μm. It is therefore concluded that, as a model which achieves a significant improvement of vision measurement accuracy, the proposed model is especially suitable for edge location in images with low contrast.

  5. PREDICTIVE ACCURACY OF TRANSCEREBELLAR DIAMETER IN COMPARISON WITH OTHER FOETAL BIOMETRIC PARAMETERS FOR GESTATIONAL AGE ESTIMATION AMONG PREGNANT NIGERIAN WOMEN.

    Science.gov (United States)

    Adeyekun, A A; Orji, M O

    2014-04-01

    To compare the predictive accuracy of foetal trans-cerebellar diameter (TCD) with those of other biometric parameters in the estimation of gestational age (GA). A cross-sectional study. The University of Benin Teaching Hospital, Nigeria. Four hundred and fifty healthy singleton pregnant women, between 14-42 weeks gestation. Trans-cerebellar diameter (TCD), biparietal diameter (BPD), femur length (FL), and abdominal circumference (AC) values across the gestational age range studied. Correlation and predictive values of TCD compared to those of other biometric parameters. The range of values for TCD was 11.9-59.7 mm (mean = 34.2 ± 14.1 mm). TCD correlated more significantly with menstrual age than the other biometric parameters (r = 0.984, p = 0.000). TCD had a higher predictive accuracy (96.9%, ± 12 days) than BPD (93.8%, ± 14.1 days) and AC (92.7%, ± 15.3 days). TCD has a stronger predictive accuracy for gestational age compared to other routinely used foetal biometric parameters among Nigerian Africans.

  6. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    Directory of Open Access Journals (Sweden)

    Santana Isabel

    2011-08-01

    Full Text Available Abstract Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, like Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press's Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press's Q test showed that all classifiers performed better than chance alone. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of predictions of dementia from neuropsychological testing.
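
    A comparison of this kind is straightforward to reproduce with cross-validation; a minimal sketch with a synthetic stand-in for the ten neuropsychological predictors (dataset, classifiers shown and scoring choice are illustrative):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for 10 neuropsychological test scores per subject.
    X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                               random_state=0)

    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("Random Forest", RandomForestClassifier(random_state=0))]:
        scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
        print(f"{name}: AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
    ```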

  7. Evaluation of scanning 2D barcoded vaccines to improve data accuracy of vaccines administered.

    Science.gov (United States)

    Daily, Ashley; Kennedy, Erin D; Fierro, Leslie A; Reed, Jenica Huddleston; Greene, Michael; Williams, Warren W; Evanson, Heather V; Cox, Regina; Koeppl, Patrick; Gerlach, Ken

    2016-11-11

    Accurately recording vaccine lot number, expiration date, and product identifiers in patient records is an important step in improving supply chain management and patient safety in the event of a recall. These data are being encoded in two-dimensional (2D) barcodes on most vaccine vials and syringes. Using electronic vaccine administration records, we evaluated the accuracy of lot number and expiration date entered using 2D barcode scanning compared to traditional manual or drop-down list entry methods. We analyzed 128,573 electronic records of vaccines administered at 32 facilities. We compared the accuracy of records entered using 2D barcode scanning with those entered using traditional methods using chi-square tests and multilevel logistic regression. When 2D barcodes were scanned, lot number data accuracy was 1.8 percentage points higher (94.3% vs 96.1%). Manufacturer, month the vaccine was administered, and vaccine type were associated with variation in accuracy for both lot number and expiration date. Two-dimensional barcode scanning shows promise for improving the data accuracy of vaccine lot number and expiration date records. Adapting systems to further integrate with 2D barcoding could help increase adoption of 2D barcode scanning technology. Published by Elsevier Ltd.

  8. Algorithms and parameters for improved accuracy in physics data libraries

    International Nuclear Information System (INIS)

    Batič, M; Hoff, G; Pia, M G; Saracco, P; Han, M; Kim, C H; Hauf, S; Kuster, M; Seo, H

    2012-01-01

    Recent efforts for the improvement of the accuracy of physics data libraries used in particle transport are summarized. Results are reported about a large scale validation analysis of atomic parameters used by major Monte Carlo systems (Geant4, EGS, MCNP, Penelope etc.); their contribution to the accuracy of simulation observables is documented. The results of this study motivated the development of a new atomic data management software package, which optimizes the provision of state-of-the-art atomic parameters to physics models. The effect of atomic parameters on the simulation of radioactive decay is illustrated. Ideas and methods to deal with physics models applicable to different energy ranges in the production of data libraries, rather than at runtime, are discussed.

  9. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    Directory of Open Access Journals (Sweden)

    Ahmed Elsaadany

    2014-01-01

    Full Text Available Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of an artillery projectile, like the course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one devoted to range correction (drag ring brake) and the second devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. Deploying the drag brake in an early stage of the trajectory results in a large range correction. The correction occasion time can be predefined depending on the required range correction. On the other hand, the canard-based correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate at the roll motion.

  10. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    Science.gov (United States)

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of an artillery projectile, like the course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one devoted to range correction (drag ring brake) and the second devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. Deploying the drag brake in an early stage of the trajectory results in a large range correction. The correction occasion time can be predefined depending on the required range correction. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate at the roll motion.

  11. Prostate Health Index improves multivariable risk prediction of aggressive prostate cancer.

    Science.gov (United States)

    Loeb, Stacy; Shin, Sanghyuk S; Broyles, Dennis L; Wei, John T; Sanda, Martin; Klee, George; Partin, Alan W; Sokoll, Lori; Chan, Daniel W; Bangma, Chris H; van Schaik, Ron H N; Slawin, Kevin M; Marks, Leonard S; Catalona, William J

    2017-07-01

    To examine the use of the Prostate Health Index (PHI) as a continuous variable in multivariable risk assessment for aggressive prostate cancer in a large multicentre US study. The study population included 728 men, with prostate-specific antigen (PSA) levels of 2-10 ng/mL and a negative digital rectal examination, enrolled in a prospective, multi-site early detection trial. The primary endpoint was aggressive prostate cancer, defined as biopsy Gleason score ≥7. First, we evaluated whether the addition of PHI improves the performance of currently available risk calculators (the Prostate Cancer Prevention Trial [PCPT] and European Randomised Study of Screening for Prostate Cancer [ERSPC] risk calculators). We also designed and internally validated a new PHI-based multivariable predictive model, and created a nomogram. Of the 728 men undergoing biopsy, 118 (16.2%) had aggressive prostate cancer. The PHI predicted the risk of aggressive prostate cancer across the spectrum of values. Adding PHI significantly improved the predictive accuracy of the PCPT and ERSPC risk calculators for aggressive disease. A new model was created using age, previous biopsy, prostate volume, PSA and PHI, with an area under the curve of 0.746. The bootstrap-corrected model showed good calibration with observed risk for aggressive prostate cancer and had net benefit on decision-curve analysis. Using PHI as part of multivariable risk assessment leads to a significant improvement in the detection of aggressive prostate cancer, potentially reducing harms from unnecessary prostate biopsy and overdiagnosis.

  12. Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model

    Science.gov (United States)

    Wang, Qijie

    2015-08-01

    The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually with prediction length. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence; in particular, it greatly improves the resolution of the signal's low-frequency components and can therefore improve prediction efficiency. In this work, LSAR is used to forecast LOD change. The LOD series from EOP 08 C04, provided by the IERS, is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with a maximum gain of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
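
    To make the leap-step idea concrete, here is a minimal sketch (not the authors' implementation) of an autoregression whose regressors are samples taken a fixed stride apart rather than consecutively; the order, stride, and toy series below are hypothetical choices for illustration only.

```python
import numpy as np

def fit_leap_ar(x, p, step=1):
    """Least-squares fit of x[t] = sum_j a_j * x[t - j*step].
    step=1 gives an ordinary AR(p); step>1 is a leap-step AR."""
    t0 = p * step
    X = np.column_stack([x[t0 - j * step: len(x) - j * step] for j in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(X, x[t0:], rcond=None)
    return a

def forecast(x, a, step, horizon):
    """Iterated multi-step prediction from the fitted coefficients."""
    hist = list(x)
    for _ in range(horizon):
        hist.append(sum(a[j] * hist[-(j + 1) * step] for j in range(len(a))))
    return hist[len(x):]

# Toy LOD-like series: an annual oscillation plus noise (synthetic data).
t = np.arange(3000)
x = 0.5 * np.sin(2 * np.pi * t / 365.25) \
    + 0.05 * np.random.default_rng(0).standard_normal(t.size)
a = fit_leap_ar(x[:-60], p=8, step=5)      # hypothetical order and stride
print(forecast(x[:-60], a, step=5, horizon=60)[:5])
```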

  13. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    Directory of Open Access Journals (Sweden)

    Qingzhong Cai

    2016-06-01

    Full Text Available An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all of the parameters using a common dual-axis turntable. Laboratory and sailing tests show that the position accuracy over five days of inertial navigation can be improved by about 8% with the proposed calibration method, and by at least 20% when the position accuracy of the atomic gyro INS reaches the level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and higher-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential for future atomic gyro INSs.

  14. Forecasting space weather: Can new econometric methods improve accuracy?

    Science.gov (United States)

    Reikard, Gordon

    2011-06-01

    Space weather forecasts are currently used in areas ranging from navigation and communication to electric power system operations. The relevant forecast horizons can range from as little as 24 h to several days. This paper analyzes the predictability of two major space weather measures using new time series methods, many of them derived from econometrics. The data sets are the Ap geomagnetic index and the solar radio flux at 10.7 cm. The methods tested include nonlinear regressions, neural networks, frequency domain algorithms, GARCH models (which utilize the residual variance), state transition models, and models that combine elements of several techniques. While combined models are complex, they can be programmed using modern statistical software. The data frequency is daily, and forecasting experiments are run over horizons ranging from 1 to 7 days. Two major conclusions stand out. First, the frequency domain method forecasts the Ap index more accurately than any time domain model, including both regressions and neural networks. This finding is very robust and holds for all forecast horizons. Combining the frequency domain method with other techniques yields a further small improvement in accuracy. Second, the neural network forecasts the solar flux more accurately than any other method, although at short horizons (2 days or less) the regression and the net yield similar results. The neural net does best when it includes measures of the long-term component in the data.
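
    As a rough illustration of a frequency-domain forecast, the sketch below detrends a daily series, keeps its strongest Fourier components, and extrapolates them beyond the training window. This is a generic stand-in, not the specific algorithm evaluated in the paper, and the number of harmonics is an arbitrary choice.

```python
import numpy as np

def harmonic_forecast(x, horizon, n_harmonics=10):
    """Extrapolate the dominant Fourier components of a detrended series."""
    n = len(x)
    t = np.arange(n)
    trend = np.polyfit(t, x, 1)                     # remove a linear trend first
    spec = np.fft.rfft(x - np.polyval(trend, t))
    freqs = np.fft.rfftfreq(n)
    keep = np.argsort(np.abs(spec))[-n_harmonics:]  # strongest bins
    t_all = np.arange(n + horizon)
    rec = np.zeros(n + horizon)
    # The factor 2 is slightly off for the DC/Nyquist bins; fine for a sketch.
    for k in keep:
        rec += 2 * np.abs(spec[k]) / n * np.cos(
            2 * np.pi * freqs[k] * t_all + np.angle(spec[k]))
    return np.polyval(trend, t_all)[-horizon:] + rec[-horizon:]
```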

  15. Biomarkers improve mortality prediction by prognostic scales in community-acquired pneumonia.

    Science.gov (United States)

    Menéndez, R; Martínez, R; Reyes, S; Mensa, J; Filella, X; Marcos, M A; Martínez, A; Esquinas, C; Ramirez, P; Torres, A

    2009-07-01

    Prognostic scales provide a useful tool to predict mortality in community-acquired pneumonia (CAP). However, the inflammatory response of the host, crucial in resolution and outcome, is not included in these scales. The aim of this study was to investigate whether information about the initial inflammatory cytokine profile and markers increases the accuracy of prognostic scales to predict 30-day mortality. To this aim, a prospective cohort study in two tertiary care hospitals was designed. Procalcitonin (PCT), C-reactive protein (CRP) and the systemic cytokines tumour necrosis factor alpha (TNFalpha) and interleukins IL6, IL8 and IL10 were measured at admission. Initial severity was assessed by the PSI (Pneumonia Severity Index), CURB65 (Confusion, Urea nitrogen, Respiratory rate, Blood pressure, age ≥65 years) and CRB65 (Confusion, Respiratory rate, Blood pressure, age ≥65 years) scales. A total of 453 hospitalised CAP patients were included. The 36 patients who died (7.8%) had significantly increased levels of IL6, IL8, PCT and CRP. In logistic regression analyses, high levels of CRP and IL6 showed independent predictive value for 30-day mortality after adjustment for the prognostic scales. Adding CRP to PSI significantly increased the area under the receiver operating characteristic curve (AUC) from 0.80 to 0.85, that of CURB65 from 0.82 to 0.85 and that of CRB65 from 0.79 to 0.85. Adding IL6 or PCT values to CRP did not significantly increase the AUC of any scale. When using two scales (PSI and CURB65/CRB65) and CRP simultaneously, the AUC was 0.88. Adding CRP levels to the PSI, CURB65 and CRB65 scales improves 30-day mortality prediction; the highest predictive value is reached with a combination of two scales and CRP. Further validation of that improvement is needed.
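
    The kind of AUC comparison reported above can be sketched with scikit-learn as follows; the data, coefficients, and variable names (psi, crp) are synthetic stand-ins, not the study's cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
psi = rng.normal(100, 30, n)                      # severity score (synthetic)
crp = rng.normal(80, 40, n)                       # biomarker (synthetic)
logit = -8 + 0.04 * psi + 0.015 * crp             # assumed "true" risk model
died = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated 30-day mortality

base = LogisticRegression().fit(psi[:, None], died)
full = LogisticRegression().fit(np.c_[psi, crp], died)
print("AUC, score alone:", roc_auc_score(died, base.predict_proba(psi[:, None])[:, 1]))
print("AUC, score + CRP:", roc_auc_score(died, full.predict_proba(np.c_[psi, crp])[:, 1]))
```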

  16. Lipoprotein metabolism indicators improve cardiovascular risk prediction.

    Directory of Open Access Journals (Sweden)

    Daniël B van Schalkwijk

    Full Text Available BACKGROUND: Cardiovascular disease risk increases when lipoprotein metabolism is dysfunctional. We have developed a computational model able to derive indicators of lipoprotein production, lipolysis, and uptake processes from a single lipoprotein profile measurement. This is the first study to investigate whether lipoprotein metabolism indicators can improve cardiovascular risk prediction and therapy management. METHODS AND RESULTS: We calculated lipoprotein metabolism indicators for 1981 subjects (145 cases, 1836 controls) from the Framingham Heart Study offspring cohort in which NMR lipoprotein profiles were measured. We applied a statistical learning algorithm using a support vector machine to select conventional risk factors and lipoprotein metabolism indicators that contributed to predicting risk for general cardiovascular disease. Risk prediction was quantified by the change in the Area-Under-the-ROC-Curve (ΔAUC) and by risk reclassification (Net Reclassification Improvement (NRI) and Integrated Discrimination Improvement (IDI)). Two VLDL lipoprotein metabolism indicators (VLDLE and VLDLH) improved cardiovascular risk prediction. We added these indicators to a multivariate model with the best performing conventional risk markers. Our method significantly improved both CVD prediction and risk reclassification. CONCLUSIONS: Two calculated VLDL metabolism indicators significantly improved cardiovascular risk prediction. These indicators may help to reduce prescription of unnecessary cholesterol-lowering medication, reducing costs and possible side-effects. For clinical application, further validation is required.
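
    For reference, the category-free version of the NRI mentioned above can be computed directly from the predicted risks of the two models; a minimal sketch, with made-up inputs:

```python
import numpy as np

def continuous_nri(p_old, p_new, y):
    """Category-free NRI: (up - down) among events plus (down - up)
    among non-events, where up/down flag whether the new model moved
    an individual's predicted risk."""
    y = np.asarray(y, bool)
    up, down = p_new > p_old, p_new < p_old
    return (up[y].mean() - down[y].mean()) + (down[~y].mean() - up[~y].mean())

# Made-up example: the new model raises risk for the events
# and lowers it for the non-events.
p_old = np.array([0.10, 0.20, 0.15, 0.30, 0.05])
p_new = np.array([0.18, 0.35, 0.12, 0.28, 0.04])
y     = np.array([1, 1, 0, 0, 0])
print(continuous_nri(p_old, p_new, y))  # 2.0: perfect reclassification here
```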

  17. Method for Improving Indoor Positioning Accuracy Using Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Seoung-Hyeon Lee

    2016-01-01

    Full Text Available Beacons using Bluetooth low-energy (BLE) technology have emerged as a new paradigm of indoor positioning service (IPS) because of their advantages such as low power consumption, miniaturization, wide signal range, and low cost. However, beacon performance is poor in terms of indoor positioning accuracy because of noise, motion, and fading, all of which are characteristics of a Bluetooth signal and depend on the installation location. Therefore, it is necessary to improve the accuracy of beacon-based indoor positioning technology by fusing it with existing indoor positioning technology, which uses Wi-Fi, ZigBee, and so forth. This study proposes a beacon-based indoor positioning method using an extended Kalman filter that recursively processes input data including noise. After defining the movement of a smartphone on a flat two-dimensional surface, it was assumed that the beacon signal is nonlinear. Then, the standard deviation and properties of the beacon signal were analyzed. According to the analysis results, an extended Kalman filter was designed and the accuracy of the smartphone's indoor position was analyzed through simulations and tests. The proposed technique achieved good indoor positioning accuracy, with errors of 0.26 m and 0.28 m from the average x- and y-coordinates, respectively, based solely on the beacon signal.
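
    A bare-bones extended Kalman filter for a positioning task of this kind might look like the following; the beacon layout, noise levels, and the range-based (rather than RSS-based) measurement model are all illustrative assumptions, not the paper's design.

```python
import numpy as np

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # assumed anchors

def h(x):
    """Measurement model: ranges from position x to each beacon."""
    return np.linalg.norm(beacons - x, axis=1)

def H_jac(x):
    """Jacobian of h evaluated at x."""
    return (x - beacons) / h(x)[:, None]

def ekf_step(x, P, z, Q=np.eye(2) * 0.01, R=np.eye(3) * 0.25):
    P = P + Q                      # predict: random-walk motion model
    Hk = H_jac(x)
    S = Hk @ P @ Hk.T + R
    K = P @ Hk.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))         # update with the noisy ranges z
    P = (np.eye(2) - K @ Hk) @ P
    return x, P

# Toy run: true position (3, 4), ranges corrupted by noise.
rng = np.random.default_rng(1)
x, P = np.array([5.0, 5.0]), np.eye(2)
truth = np.array([3.0, 4.0])
for _ in range(50):
    z = h(truth) + rng.normal(0, 0.5, 3)
    x, P = ekf_step(x, P, z)
print("estimate:", x.round(2))
```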

  18. Accuracy of Endoscopy in Predicting the Depth of Mucosal Injury Following Caustic Ingestion; a Cross-Sectional Study

    Directory of Open Access Journals (Sweden)

    Athena Alipour-Faz

    2017-01-01

    Full Text Available Introduction: Esophagogastroduodenoscopy (EGD) is currently considered the primary method of determining the degree of mucosal injury following caustic ingestion. The present study aimed to evaluate the screening performance characteristics of EGD in predicting the depth of gastrointestinal mucosal injuries following caustic ingestion. Methods: Adult patients who were referred to the emergency department due to ingestion of corrosive materials, over a 7-year period, were enrolled in this diagnostic accuracy study. Sensitivity, specificity, positive and negative predictive values, as well as negative and positive likelihood ratios of EGD in predicting the depth of mucosal injury were calculated using pathologic findings as the gold standard. Results: 54 cases with a mean age of 35 ± 11.2 years were enrolled (59.25% male). Primary endoscopic results classified 28 (51.85%) cases as second grade and 26 (48.14%) as third grade mucosal injury. On the other hand, pathologic findings reported 21 (38.88%) patients as first grade, 14 (25.92%) as second, and 19 (35.18%) as third grade. Sensitivity and specificity of endoscopy for determining grade II tissue injury were 50.00 (23.04-76.96) and 47.50 (31.51-63.87), respectively. These measures were 100.00 (82.35-100) and 80.00 (63.06-91.56), respectively, for grade III. Accuracy of EGD was 87.03% for grade III and 48.14% for grade II. Conclusion: Based on the findings of the present study, endoscopic grading of caustic-related mucosal injury based on Zargar's classification has good accuracy in predicting grade III injuries (87%) and poor accuracy for grade II injuries (48%). It seems that we should be cautious in planning treatment for these patients solely based on endoscopic results.
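
    The screening measures quoted above all derive from a 2x2 table against the pathology gold standard; a small helper, with made-up counts, shows the arithmetic:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Screening performance from a 2x2 table (pathology as gold standard)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": sens, "specificity": spec,
            "PPV": tp / (tp + fp), "NPV": tn / (tn + fn),
            "LR+": sens / (1 - spec), "LR-": (1 - sens) / spec,
            "accuracy": (tp + tn) / (tp + fp + fn + tn)}

# Made-up counts, not the study's table.
print(diagnostic_metrics(tp=19, fp=7, fn=0, tn=28))
```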

  19. Microbiological data, but not procalcitonin, improve the accuracy of the clinical pulmonary infection score.

    Science.gov (United States)

    Jung, Boris; Embriaco, Nathalie; Roux, François; Forel, Jean-Marie; Demory, Didier; Allardet-Servent, Jérôme; Jaber, Samir; La Scola, Bernard; Papazian, Laurent

    2010-05-01

    Early and adequate treatment of ventilator-associated pneumonia (VAP) is mandatory to improve outcome. The aim of this study was to evaluate, in medical ICU patients, the respective and combined impact of the Clinical Pulmonary Infection Score (CPIS), broncho-alveolar lavage (BAL) Gram staining, endotracheal aspirate and a biomarker (procalcitonin) for the early diagnosis of VAP. This was a prospective, observational study in a medical intensive care unit of a teaching hospital. Over an 8-month period, we prospectively included 57 patients suspected of having 86 episodes of VAP. On the day of suspicion, a BAL as well as alveolar and serum procalcitonin determinations and evaluation of CPIS were performed. Of the 86 BALs performed, 48 were considered positive (cutoff of 10^4 cfu ml^-1). We found no differences in alveolar or serum procalcitonin between VAP and non-VAP patients. Including procalcitonin in the CPIS score did not increase its accuracy (55%) for the diagnosis of VAP. The best tests to predict VAP were the modified CPIS (threshold at 6) combined with microbiological data. Indeed, both routine twice-weekly endotracheal aspiration at a threshold of 10^5 cfu ml^-1 and BAL Gram staining improved the pre-test diagnostic accuracy of VAP (77% and 66%, respectively). This study showed that alveolar procalcitonin obtained by BAL does not help the clinician to identify VAP. It confirmed that serum procalcitonin is not an accurate marker of VAP. In contrast, microbiological resources available at the time of VAP suspicion (BAL Gram staining, last available endotracheal aspirate), combined or not with CPIS, are helpful in distinguishing VAP diagnosed by BAL from patients with a negative BAL.

  20. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks.

    Science.gov (United States)

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène

    2015-03-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
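
    The IPCW-weighted Brier score at a fixed prediction horizon can be sketched as below; G is assumed to be a censoring-survival estimator (for example a Kaplan-Meier fit to the censoring times) supplied by the caller, and the simplification ignores the dynamic/landmark aspects and confidence bands developed in the paper.

```python
import numpy as np

def ipcw_brier(pred_risk, time, event, horizon, G):
    """Brier score at `horizon` with inverse-probability-of-censoring
    weights. G(t) must return P(C > t) and accept numpy arrays."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    died = (time <= horizon) & (event == 1)
    alive = time > horizon
    w = np.zeros(len(time))
    w[died] = 1.0 / G(time[died])             # observed events weighted by G(T)
    w[alive] = 1.0 / G(np.array([horizon]))   # survivors weighted by G(horizon)
    return float(np.mean(w * (np.asarray(pred_risk) - died) ** 2))
```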

  1. Accuracy of genomic selection in European maize elite breeding populations.

    Science.gov (United States)

    Zhao, Yusheng; Gowda, Manje; Liu, Wenxin; Würschum, Tobias; Maurer, Hans P; Longin, Friedrich H; Ranc, Nicolas; Reif, Jochen C

    2012-03-01

    Genomic selection is a promising breeding strategy for rapid improvement of complex traits. The objective of our study was to investigate the prediction accuracy of genomic breeding values through cross validation. The study was based on experimental data from six segregating populations of a half-diallel mating design with 788 testcross progenies from an elite maize breeding program. The plants were intensively phenotyped in multi-location field trials and fingerprinted with 960 SNP markers. We used random regression best linear unbiased prediction in combination with fivefold cross validation. The prediction accuracy across populations was higher for grain moisture (0.90) than for grain yield (0.58). The accuracy of genomic selection realized for grain yield corresponds to the precision of phenotyping in unreplicated field trials at 3-4 locations. As up to three generations per year are feasible for maize, selection gain per unit time is high and, consequently, genomic selection holds great promise for maize breeding programs.
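
    A minimal cross-validation loop for genomic prediction in the spirit of this study can be sketched with ridge-regression BLUP (equivalent to GBLUP) in place of the authors' exact mixed-model software; the marker matrix, heritability, and phenotypes below are synthetic.

```python
import numpy as np

def rrblup_cv_accuracy(M, y, h2=0.5, k=5, seed=0):
    """k-fold cross-validated prediction accuracy (correlation between
    predicted and observed phenotypes) of ridge-regression BLUP on a
    marker matrix M (n x m, coded -1/0/1). The shrinkage lambda is set
    from an assumed heritability h2, a common RR-BLUP convention."""
    n, m = M.shape
    lam = m * (1 - h2) / h2
    folds = np.random.default_rng(seed).permutation(n) % k
    accs = []
    for f in range(k):
        tr, te = folds != f, folds == f
        yc = y[tr] - y[tr].mean()
        # Marker effects: (M'M + lam*I)^-1 M'y on the training fold.
        u = np.linalg.solve(M[tr].T @ M[tr] + lam * np.eye(m), M[tr].T @ yc)
        accs.append(np.corrcoef(M[te] @ u, y[te])[0, 1])
    return float(np.mean(accs))

# Synthetic check: 788 lines, 960 markers, 100 of them causal.
rng = np.random.default_rng(1)
M = rng.integers(-1, 2, size=(788, 960)).astype(float)
beta = np.zeros(960); beta[:100] = rng.normal(0, 0.1, 100)
y = M @ beta + rng.normal(0, 1.0, 788)
print(round(rrblup_cv_accuracy(M, y), 2))
```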

  2. Accuracy of Genomic Prediction in Synthetic Populations Depending on the Number of Parents, Relatedness, and Ancestral Linkage Disequilibrium.

    Science.gov (United States)

    Schopp, Pascal; Müller, Dominik; Technow, Frank; Melchinger, Albrecht E

    2017-01-01

    Synthetics play an important role in quantitative genetic research and plant breeding, but few studies have investigated the application of genomic prediction (GP) to these populations. Synthetics are generated by intermating a small number of parents ([Formula: see text]) and thereby possess unique genetic properties, which make them especially suited for systematic investigations of factors contributing to the accuracy of GP. We generated synthetics in silico from [Formula: see text] = 2 to 32 maize (Zea mays L.) lines taken from an ancestral population with either short- or long-range linkage disequilibrium (LD). In eight scenarios differing in relatedness of the training and prediction sets and in the types of data used to calculate the relationship matrix (QTL, SNPs, tag markers, and pedigree), we investigated the prediction accuracy (PA) of genomic best linear unbiased prediction (GBLUP) and analyzed contributions from pedigree relationships captured by SNP markers, as well as from cosegregation and ancestral LD between QTL and SNPs. The effects of training set size [Formula: see text] and marker density were also studied. Sampling few parents ([Formula: see text]) generates substantial sample LD that carries over into synthetics through cosegregation of alleles at linked loci. For fixed [Formula: see text], [Formula: see text] influences PA most strongly. If the training and prediction set are related, using [Formula: see text] parents yields high PA regardless of ancestral LD because SNPs capture pedigree relationships and Mendelian sampling through cosegregation. As [Formula: see text] increases, ancestral LD contributes more information, while other factors contribute less due to lower frequencies of closely related individuals. For unrelated prediction sets, only ancestral LD contributes information and accuracies were poor and highly variable for [Formula: see text] due to large sample LD. For large [Formula: see text], achieving moderate accuracy requires

  3. Bottom-up coarse-grained models with predictive accuracy and transferability for both structural and thermodynamic properties of heptane-toluene mixtures.

    Science.gov (United States)

    Dunn, Nicholas J H; Noid, W G

    2016-05-28

    This work investigates the promise of a "bottom-up" extended ensemble framework for developing coarse-grained (CG) models that provide predictive accuracy and transferability for describing both structural and thermodynamic properties. We employ a force-matching variational principle to determine system-independent, i.e., transferable, interaction potentials that optimally model the interactions in five distinct heptane-toluene mixtures. Similarly, we employ a self-consistent pressure-matching approach to determine a system-specific pressure correction for each mixture. The resulting CG potentials accurately reproduce the site-site radial distribution functions, the volume fluctuations, and the pressure equations of state that are determined by all-atom (AA) models for the five mixtures. Furthermore, we demonstrate that these CG potentials provide similar accuracy for additional heptane-toluene mixtures that were not included in their parameterization. Surprisingly, the extended ensemble approach improves not only the transferability but also the accuracy of the calculated potentials. Additionally, we observe that the required pressure corrections strongly correlate with the intermolecular cohesion of the system-specific CG potentials. Moreover, this cohesion correlates with the relative "structure" within the corresponding mapped AA ensemble. Finally, the appendix demonstrates that the self-consistent pressure-matching approach corresponds to minimizing an appropriate relative entropy.

  4. Subjective evaluation of the accuracy of video imaging prediction following orthognathic surgery in Chinese patients

    NARCIS (Netherlands)

    Chew, Ming Tak; Koh, Chay Hui; Sandham, John; Wong, Hwee Bee

    Purpose: The aims of this retrospective study were to assess the subjective accuracy of predictions generated by computer imaging software in Chinese patients who had undergone orthognathic surgery and to determine the influence of initial dysgnathia and complexity of the surgical procedure on

  5. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    Science.gov (United States)

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of the averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P < 0.001). When using spatial interpolation to fill in missing values in non-sampled areas, accuracy improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level).
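
    Spatial interpolation of the kind used to fill non-sampled areas can be as simple as inverse-distance weighting; a generic sketch follows (the abstract does not specify this exact interpolator, so treat it as one plausible choice).

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse-distance-weighted estimate at query points from
    sampled locations (e.g. cattle counts at sampled parishes)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # guard against zero distance
    w = 1.0 / d ** power
    return (w @ values) / w.sum(axis=1)

# Toy usage: three sampled parishes, one unsampled location.
known = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
counts = np.array([120.0, 80.0, 100.0])
print(idw(known, counts, np.array([[0.5, 0.5]])))
```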

  6. Improved Helicopter Rotor Performance Prediction through Loose and Tight CFD/CSD Coupling

    Science.gov (United States)

    Ickes, Jacob C.

    Helicopters and other Vertical Take-Off or Landing (VTOL) vehicles exhibit an interesting combination of structural dynamic and aerodynamic phenomena which together drive rotor performance. The combination of factors involved makes simulating the rotor a challenging and multidisciplinary effort, one which is still an active area of interest in the industry because of the money and time it could save during design. Modern tools allow the prediction of rotorcraft physics from first principles. Analysis of the rotor system with this level of accuracy provides the understanding necessary to improve its performance. There has historically been a divide between the comprehensive codes, which perform aeroelastic rotor simulations using simplified aerodynamic models, and the very computationally intensive Navier-Stokes Computational Fluid Dynamics (CFD) solvers. As computer resources become more available, efforts have been made to replace the simplified aerodynamics of the comprehensive codes with the more accurate results from a CFD code. The objective of this work is to perform aeroelastic rotorcraft analysis using first-principles simulations for both fluid and structural predictions using tools available at the University of Toledo. Two separate codes are coupled together in both loose coupling (data exchange on a periodic interval) and tight coupling (data exchange each time step) schemes. To allow the coupling to be carried out in a reliable and efficient way, a Fluid-Structure Interaction code was developed which automatically performs the primary functions of the loose and tight coupling procedures. Flow phenomena such as transonics, dynamic stall, locally reversed flow on a blade, and Blade-Vortex Interaction (BVI) were simulated in this work. Results of the analysis show aerodynamic load improvement due to the inclusion of the CFD-based airloads in the structural dynamics analysis of the Computational Structural Dynamics (CSD) code. Improvements came in the form

  7. Assessing Predictive Properties of Genome-Wide Selection in Soybeans

    Directory of Open Access Journals (Sweden)

    Alencar Xavier

    2016-08-01

    Full Text Available Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios for implementing genomic selection for yield components in soybean (Glycine max L. Merr.). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with the greatest improvement observed in training sets of up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set.

  8. Assessing Predictive Properties of Genome-Wide Selection in Soybeans.

    Science.gov (United States)

    Xavier, Alencar; Muir, William M; Rainey, Katy Martin

    2016-08-09

    Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios for implementing genomic selection for yield components in soybean (Glycine max L. Merr.). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with the greatest improvement observed in training sets of up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set. Copyright © 2016 Xavier et al.

  9. Improving the accuracy of admitted subacute clinical costing: an action research approach.

    Science.gov (United States)

    Hakkennes, Sharon; Arblaster, Ross; Lim, Kim

    2017-08-01

    Objective The aim of the present study was to determine whether action research could be used to improve the breadth and accuracy of clinical costing data in an admitted subacute setting. Methods The setting was a 100-bed in-patient rehabilitation centre. Using a pre-post study design, all admitted subacute separations during the 2011-12 financial year were eligible for inclusion. An action research framework aimed at improving clinical costing methodology was developed and implemented. Results In all, 1499 separations were included in the study. A medical record audit of a random selection of 80 separations demonstrated that the use of an action research framework was effective in improving the breadth and accuracy of the costing data. This was evidenced by a significant increase in the average number of activities costed, a reduction in the average number of activities incorrectly costed and a reduction in the average number of activities missing from the costing, per episode of care. Conclusions Engaging clinicians and cost centre managers was effective in facilitating the development of robust clinical costing data in an admitted subacute setting. Further investigation into the value of this approach across other care types and healthcare services is warranted. What is known about this topic? Accurate clinical costing data are essential for informing the price models used in activity-based funding. In Australia, there is currently a lack of robust admitted subacute cost data to inform the price model for this care type. What does this paper add? The action research framework presented in this study was effective in improving the breadth and accuracy of clinical costing data in an admitted subacute setting. What are the implications for practitioners? To improve clinical costing practices, health services should consider engaging key stakeholders, including clinicians and cost centre managers, in reviewing clinical costing methodology. Robust clinical costing data has

  10. Optical vector network analyzer with improved accuracy based on polarization modulation and polarization pulling.

    Science.gov (United States)

    Li, Wei; Liu, Jian Guo; Zhu, Ning Hua

    2015-04-15

    We report a novel optical vector network analyzer (OVNA) with improved accuracy based on polarization modulation and stimulated Brillouin scattering (SBS) assisted polarization pulling. The beating between adjacent higher-order optical sidebands which are generated because of the nonlinearity of an electro-optic modulator (EOM) introduces considerable error to the OVNA. In our scheme, the measurement error is significantly reduced by removing the even-order optical sidebands using polarization discrimination. The proposed approach is theoretically analyzed and experimentally verified. The experimental results show that the accuracy of the OVNA is greatly improved compared to a conventional OVNA.

  11. Hybrid Indoor-Based WLAN-WSN Localization Scheme for Improving Accuracy Based on Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Zahid Farid

    2016-01-01

    Full Text Available In indoor environments, WiFi received-signal-strength (RSS) based localization is sensitive to various indoor fading effects and noise during transmission, which are the main causes of localization errors affecting its accuracy. In view of these fading effects, positioning systems based on a single technology are ineffective at performing accurate localization. For this reason, the trend is toward the use of hybrid positioning systems (combinations of two or more wireless technologies) in indoor/outdoor localization scenarios to obtain better position accuracy. This paper presents a hybrid technique for indoor localization that adopts fingerprinting approaches in both WiFi and Wireless Sensor Networks (WSNs). This model exploits machine learning, in particular Artificial Neural Network (ANN) techniques, for position calculation. The experimental results show that the proposed hybrid system improved the accuracy, reducing the average distance error to 1.05 m by using the ANN. Applying a Genetic Algorithm (GA) based optimization technique did not yield any further improvement in accuracy. Compared to the performance of GA optimization, the non-optimized ANN performed better in terms of accuracy, precision, stability, and computational time. The above results show that the proposed hybrid technique is promising for achieving better accuracy in real-world positioning applications.
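
    The offline/online fingerprinting workflow can be sketched with a small neural network regressor; the log-distance path-loss model, anchor layout, and network size below are illustrative assumptions, not the paper's testbed.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
anchors = rng.uniform(0, 20, (7, 2))        # e.g. 4 WiFi APs + 3 WSN nodes (assumed)
positions = rng.uniform(0, 20, (400, 2))    # fingerprint survey points
dists = np.linalg.norm(positions[:, None] - anchors[None], axis=2)
rss = -40 - 20 * np.log10(dists + 0.1) + rng.normal(0, 2, dists.shape)

# Offline phase: train on 300 fingerprints; online phase: locate the rest.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(rss[:300], positions[:300])
err = np.linalg.norm(net.predict(rss[300:]) - positions[300:], axis=1)
print("mean positioning error (m):", round(float(err.mean()), 2))
```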

  12. Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates

    Science.gov (United States)

    Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.

    2015-01-01

    Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza-associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and its use increased over the surveillance period. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children <18 years of age and higher still for adults >65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
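
    The adjustment itself is simple arithmetic: dividing the observed rate by the test sensitivity (and, where relevant, the fraction of patients tested). A toy calculation with made-up numbers:

```python
# Hypothetical inputs, not FluSurv-NET values.
observed_rate = 50.0      # lab-confirmed hospitalizations per 100,000
sensitivity = 0.80        # assumed sensitivity of the test used
fraction_tested = 0.90    # assumed share of eligible patients tested

adjusted_rate = observed_rate / (sensitivity * fraction_tested)
print(f"adjusted rate: {adjusted_rate:.1f} per 100,000")  # 69.4
```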

  13. Long Range Aircraft Trajectory Prediction

    OpenAIRE

    Magister, Tone

    2009-01-01

    The subject of this paper is the improvement of aircraft future trajectory prediction accuracy for long-range airborne separation assurance. The strategic planning of safe aircraft flights and effective conflict avoidance tactics demand timely and accurate conflict detection based upon future four-dimensional airborne traffic situation prediction, which is only as accurate as each aircraft flight trajectory prediction. The improved kinematics model of aircraft relative flight considering flight ...

  14. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    Science.gov (United States)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
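
    The point-by-point correction could in principle be reproduced with any stochastic gradient boosting implementation; the sketch below uses scikit-learn with synthetic stand-ins for the waveform and DEM predictors (TreeNet itself is commercial software, and the feature set here is assumed, not the study's).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Columns are stand-ins for the predictors described above:
# waveform width, return amplitude, distance from shoreline, lidar DEM z.
rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 4))
true_bias = 0.15 + 0.05 * X[:, 0] - 0.03 * X[:, 2]   # assumed relation
y = true_bias + rng.normal(0, 0.03, 2000)            # lidar-minus-ground error

gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=0)
gbm.fit(X[:1500], y[:1500])
pred = gbm.predict(X[1500:])
print("RMSE of point-by-point correction (m):",
      round(float(np.sqrt(np.mean((pred - y[1500:]) ** 2))), 3))
```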

  15. CRISPR-Cas9-mediated saturated mutagenesis screen predicts clinical drug resistance with improved accuracy.

    Science.gov (United States)

    Ma, Leyuan; Boucher, Jeffrey I; Paulsen, Janet; Matuszewski, Sebastian; Eide, Christopher A; Ou, Jianhong; Eickelberg, Garrett; Press, Richard D; Zhu, Lihua Julie; Druker, Brian J; Branford, Susan; Wolfe, Scot A; Jensen, Jeffrey D; Schiffer, Celia A; Green, Michael R; Bolon, Daniel N

    2017-10-31

    Developing tools to accurately predict the clinical prevalence of drug-resistant mutations is a key step toward generating more effective therapeutics. Here we describe a high-throughput CRISPR-Cas9-based saturated mutagenesis approach to generate comprehensive libraries of point mutations at a defined genomic location and systematically study their effect on cell growth. As proof of concept, we mutagenized a selected region within the leukemic oncogene BCR-ABL1. Using bulk competitions with a deep-sequencing readout, we analyzed hundreds of mutations under multiple drug conditions and found that the effects of mutations on growth in the presence or absence of drug were critical for predicting clinically relevant resistant mutations, many of which were cancer adaptive in the absence of drug pressure. Using this approach, we identified all clinically isolated BCR-ABL1 mutations and achieved a prediction score that correlated highly with their clinical prevalence. The strategy described here can be broadly applied to a variety of oncogenes to predict patient mutations and evaluate resistance susceptibility in the development of new therapeutics. Published under the PNAS license.

  16. A study on accuracy of predicted breeding value for body weight at eighth week of age in Khorasan native chickens

    Directory of Open Access Journals (Sweden)

    Faeze Ghorbani

    2015-12-01

    Full Text Available Introduction: Genetic resources in any country are valuable materials which need to be conserved for a sustainable agriculture. An animal's phenotype is generally affected by genetic and environmental factors. To increase mean performance in a population, not only the environmental conditions but also the genetic potential of the animals should be improved. Although environmental improvement can raise the level of animal production more rapidly, such progress is neither permanent nor cumulative. In any breeding scheme, the breeding values of candidate animals need to be predicted with high precision and accuracy in order to make remarkable genetic gain for the traits over time. The main objective of the present research was to study the accuracy of predicted breeding value for body weight at the eighth week of age in the indigenous chickens of Khorasan Razavi province. Materials and methods: A set of 47,000 body weight records (at the age of eight weeks) belonging to 47,000 male and female chicks (progeny of 753 sires and 5,154 dams) collected during seven generations (2006-2012) was used. The data were obtained at the Khorasan Razavi native chicken breeding center. An animal model was applied for analyzing the records. In the model, the contemporary group of generation*hatch*sex (GHS) as a fixed effect, weight at birth as a covariable, as well as direct and maternal additive genetic random effects were taken into account. In an initial analysis using SAS software, all fixed and covariate factors included in the model were detected to be significant for the trait. All additive genetic relationships among the animals in the pedigree file (47,880 animals) were accounted for. Variance and covariance components of direct and maternal additive genetic effects were estimated through the restricted maximum likelihood (REML) method. Breeding values of the animals were obtained by best linear unbiased prediction (BLUP). Selection

  17. Procalcitonin Improves the Glasgow Prognostic Score for Outcome Prediction in Emergency Patients with Cancer: A Cohort Study

    Directory of Open Access Journals (Sweden)

    Anna Christina Rast

    2015-01-01

    Full Text Available The Glasgow Prognostic Score (GPS) is useful for predicting long-term mortality in cancer patients. Our aim was to validate the GPS in ED patients with different cancer-related urgency and investigate whether biomarkers would improve its accuracy. We followed consecutive medical patients presenting with a cancer-related medical urgency to a tertiary care hospital in Switzerland. Upon admission, we measured procalcitonin (PCT), white blood cell count, urea, 25-hydroxyvitamin D, corrected calcium, C-reactive protein, and albumin and calculated the GPS. Of 341 included patients (median age 68 years, 61% males), 81 (23.8%) died within 30 days after admission. The GPS showed moderate prognostic accuracy (AUC 0.67) for mortality. Among the different biomarkers, PCT provided the highest prognostic accuracy (odds ratio 1.6, 95% confidence interval 1.3 to 1.9, P<0.001; AUC 0.69) and significantly improved the GPS to a combined AUC of 0.74 (P=0.007). Considering all investigated biomarkers, the AUC increased to 0.76 (P<0.001). The GPS performance was significantly improved by the addition of PCT and other biomarkers for risk stratification in ED cancer patients. The benefit of early risk stratification by the GPS in combination with biomarkers from different pathways should be investigated in further interventional trials.

  18. Improvement of the accuracy of noise measurements by the two-amplifier correlation method.

    Science.gov (United States)

    Pellegrini, B; Basso, G; Fiori, G; Macucci, M; Maione, I A; Marconcini, P

    2013-10-01

    We present a novel method for device noise measurement, based on a two-channel cross-correlation technique and a direct "in situ" measurement of the transimpedance of the device under test (DUT), which allows improved accuracy with respect to what is available in the literature, in particular when the DUT is a nonlinear device. Detailed analytical expressions for the total residual noise are derived, and an experimental investigation of the increased accuracy provided by the method is performed.

  19. Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes

    Science.gov (United States)

    Ding, Quan; Besio, Walter G.

    2015-01-01

    Conventional electroencephalography with disc electrodes has major drawbacks, including poor spatial resolution, poor selectivity and a low signal-to-noise ratio, that critically limit its use. Concentric ring electrodes, consisting of several elements including a central disc and a number of concentric rings, are a promising alternative with the potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications, demonstrating its superiority to the conventional disc electrode, in particular in the accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2, which allows cancellation of all the truncation terms up to the order of 2n. An explicit formula based on inversion of a square Vandermonde matrix is derived to make computation of the multipolar Laplacian more efficient. To confirm the analytic result that the accuracy of the Laplacian estimate increases with n, and to assess the significance of this gain in accuracy for practical applications, finite element method model analysis was performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared, and the obtained results suggest the significance of the increase in Laplacian accuracy with increasing n. PMID:26693200
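
    The Vandermonde inversion reduces to solving a small linear system for the ring weights; here is a sketch under the standard Taylor expansion of a ring average, v(r) - v(0) ≈ (r²/4)Δv + O(r⁴), with arbitrary ring radii chosen for illustration.

```python
import numpy as np

def laplacian_weights(radii):
    """Weights c_j such that sum_j c_j * (v_ring_j - v_center) estimates
    the surface Laplacian while cancelling Taylor terms up to order 2n:
    sum_j c_j r_j^2 = 4 and sum_j c_j r_j^(2k) = 0 for k = 2..n."""
    r2 = np.asarray(radii, float) ** 2
    n = r2.size
    A = np.array([r2 ** k for k in range(1, n + 1)])  # A[k-1, j] = r_j^(2k)
    b = np.zeros(n)
    b[0] = 4.0
    return np.linalg.solve(A, b)

# Tripolar example (rings at radii 1 and 2): recovers the familiar
# 16:-1 weighting, scaled by 1/3.
print(laplacian_weights([1.0, 2.0]))   # [ 5.333..., -0.333...]
```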

  20. The paradox of verbal autopsy in cause of death assignment: symptom question unreliability but predictive accuracy.

    Science.gov (United States)

    Serina, Peter; Riley, Ian; Hernandez, Bernardo; Flaxman, Abraham D; Praveen, Devarsetty; Tallo, Veronica; Joshi, Rohina; Sanvictores, Diozele; Stewart, Andrea; Mooney, Meghan D; Murray, Christopher J L; Lopez, Alan D

    2016-01-01

    We believe that it is important that governments understand the reliability of the mortality data which they have at their disposal to guide policy debates. In many instances, verbal autopsy (VA) will be the only source of mortality data for populations, yet little is known about how the accuracy of VA diagnoses is affected by the reliability of the symptom responses. We previously described the effect of the duration of time between death and VA administration on VA validity. In this paper, using the same dataset, we assess the relationship between the reliability and completeness of symptom responses and the reliability and accuracy of cause of death (COD) prediction. The study was based on VAs in the Population Health Metrics Research Consortium (PHMRC) VA Validation Dataset from study sites in Bohol and Manila, Philippines, and Andhra Pradesh, India. The initial interview was repeated within 3-52 months of death. Question responses were assessed for reliability and completeness between the two survey rounds. COD was predicted by the Tariff Method. A sample of 4226 VAs was collected for 2113 decedents, including 1394 adults, 349 children, and 370 neonates. Mean question reliability was unexpectedly low (kappa = 0.447): 42.5% of responses positive at the first interview were negative at the second, and 47.9% of responses positive at the second had been negative at the first. Question reliability was greater for the short form of the PHMRC instrument (kappa = 0.497) and when analyzed at the level of the individual decedent (kappa = 0.610). Reliability at the level of the individual decedent was associated with COD predictive reliability and predictive accuracy. Families give coherent accounts of events leading to death, but the details vary from interview to interview for the same case. Accounts are accurate but inconsistent; different subsets of symptoms are identified on each occasion. However, there are sufficient accurate and consistent
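
    The kappa statistics quoted above measure chance-corrected agreement between the two interview rounds; for binary symptom responses the calculation reduces to a few lines (a sketch, with made-up response vectors):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary response vectors (0/1)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)           # chance agreement
    return (po - pe) / (1 - pe)

# Made-up symptom responses from two interview rounds.
round1 = [1, 0, 1, 1, 0, 0, 1, 0]
round2 = [1, 0, 0, 1, 0, 1, 1, 0]
print(cohens_kappa(round1, round2))  # 0.5
```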

  1. Comparison between the accuracies of a new discretization method and an improved Fourier method to evaluate heat transfers between soil and atmosphere

    International Nuclear Information System (INIS)

    Hechinger, E.; Raffy, M.; Becker, F.

    1982-01-01

    To improve and evaluate the accuracy of Fourier methods for the analysis of energy exchanges between soil and atmosphere, we developed, first, a Fourier method that takes into account nonneutrality corrections and the time variation of the air temperature and improves the linearization procedures and, second, a new discretization method that does not involve any linearization. The Fourier method, which gives the exact solution of an approximated problem, turns out to have the same order of accuracy as the discretization method, which gives an approximate solution of the exact problem. These methods reproduce the temperatures and fluxes predicted by the Tergra model as well as another set of experimental surface temperatures. In its present form, the Fourier method becomes less accurate (mainly for low wind speeds) under certain conditions, namely, as the amplitude of the daily variation of the air and surface temperatures and their differences increase and as the relative humidities of the air at about 2 m and at the soil surface differ. Nevertheless, the results may be considered generally satisfactory. Possible improvements of the Fourier model are discussed.

  2. Real time shear wave elastography in chronic liver diseases: Accuracy for predicting liver fibrosis, in comparison with serum markers

    Science.gov (United States)

    Jeong, Jae Yoon; Kim, Tae Yeob; Sohn, Joo Hyun; Kim, Yongsoo; Jeong, Woo Kyoung; Oh, Young-Ha; Yoo, Kyo-Sang

    2014-01-01

    AIM: To evaluate the correlation between liver stiffness measurement (LSM) by real-time shear wave elastography (SWE) and liver fibrosis stage, and the accuracy of LSM for predicting significant and advanced fibrosis, in comparison with serum markers. METHODS: We consecutively analyzed 70 patients with various chronic liver diseases. Liver fibrosis was staged from F0 to F4 according to the Batts and Ludwig scoring system. Significant and advanced fibrosis were defined as stage F ≥ 2 and F ≥ 3, respectively. The accuracy of prediction of fibrosis was analyzed using receiver operating characteristic curves. RESULTS: Of the 70 patients, 15 belonged to stage F0-F1, 20 to F2, 13 to F3 and 22 to F4. LSM increased with progression of fibrosis stage (F0-F1: 6.77 ± 1.72, F2: 9.98 ± 3.99, F3: 15.80 ± 7.73, and F4: 22.09 ± 10.09, P < 0.001). Diagnostic accuracies of LSM for prediction of F ≥ 2 and F ≥ 3 were 0.915 (95%CI: 0.824-0.968, P < 0.001) and 0.913 (95%CI: 0.821-0.967, P < 0.001), respectively. The cut-off values of LSM for prediction of F ≥ 2 and F ≥ 3 were 8.6 kPa, with 78.2% sensitivity and 93.3% specificity, and 10.46 kPa, with 88.6% sensitivity and 80.0% specificity, respectively. However, there were no significant differences in diagnostic accuracy between LSM and serum hyaluronic acid and type IV collagen. CONCLUSION: SWE showed a significant correlation with the severity of liver fibrosis and was useful and accurate for predicting significant and advanced fibrosis, comparable with serum markers. PMID:25320528

  3. Predictability of extreme weather events for NE U.S.: improvement of the numerical prediction using a Bayesian regression approach

    Science.gov (United States)

    Yang, J.; Astitha, M.; Anagnostou, E. N.; Hartman, B.; Kallos, G. B.

    2015-12-01

    Weather prediction accuracy has become very important for the Northeast U.S., given the devastating effects of extreme weather events in recent years. Weather forecasting systems are used to build strategies that prevent catastrophic losses for human lives and the environment. Concurrently, weather forecast tools and techniques have evolved with improved forecast skill as numerical prediction techniques are strengthened by increased super-computing resources. In this study, we examine the combination of two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) using a Bayesian regression approach to improve the prediction of extreme weather events for the NE U.S. The basic concept behind the Bayesian regression approach is to take advantage of the strengths of the two atmospheric modeling systems and, similar to the multi-model ensemble approach, limit their weaknesses, which are related to systematic and random errors in the numerical prediction of physical processes. The first part of this study is focused on retrospective simulations of seventeen storms that affected the region in the period 2004-2013. Optimal variances are estimated by minimizing the root mean square error and are applied to out-of-sample weather events. The applicability and usefulness of this approach are demonstrated by conducting an error analysis based on in-situ observations from meteorological stations of the National Weather Service (NWS) for wind speed and wind direction, and NCEP Stage IV radar data, mosaicked from the regional multi-sensor, for precipitation. The preliminary results indicate a significant improvement in the statistical metrics of the modeled-observed pairs for meteorological variables using various combinations of sixteen events as predictors of the seventeenth. This presentation will illustrate the implemented methodology and the obtained results for wind speed, wind direction and precipitation, as well as set the research steps that will be
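
    A least-squares flavor of the combination step can be sketched as follows: find the weight that minimizes RMSE of the blended forecast on retrospective events and apply it out of sample. Names and numbers are synthetic; the study's actual Bayesian machinery estimates variances rather than a single deterministic weight.

```python
import numpy as np

def optimal_combination(f1, f2, obs):
    """Least-squares weight w minimizing RMSE of w*f1 + (1-w)*f2 on
    training (retrospective) events."""
    d = f1 - f2
    w = np.dot(obs - f2, d) / np.dot(d, d)
    return float(np.clip(w, 0.0, 1.0))

# Toy: combine two wind-speed forecast series against station data.
rng = np.random.default_rng(4)
truth = rng.uniform(2, 15, 200)
model_a = truth + rng.normal(0.5, 1.5, 200)   # stand-in for one model's output
model_b = truth + rng.normal(-0.3, 1.0, 200)  # stand-in for the other's
w = optimal_combination(model_a, model_b, truth)
blend = w * model_a + (1 - w) * model_b
print("weight on model A:", round(w, 2),
      "| blend RMSE:", round(float(np.sqrt(np.mean((blend - truth) ** 2))), 2))
```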

  4. Improving Intensity-Based Lung CT Registration Accuracy Utilizing Vascular Information

    Directory of Open Access Journals (Sweden)

    Kunlin Cao

    2012-01-01

    Full Text Available Accurate pulmonary image registration is a challenging problem when the lungs undergo large deformations. In this work, we present a nonrigid volumetric registration algorithm to track lung motion between a pair of intrasubject CT images acquired at different inflation levels, and we introduce a new vesselness similarity cost that improves intensity-only registration. Volumetric CT datasets from six human subjects were used in this study. The performance of four intensity-only registration algorithms was compared with and without the addition of the vesselness similarity cost function. Matching accuracy was evaluated using landmarks, the vessel tree, and fissure planes. The Jacobian determinant of the transformation was used to reveal the deformation pattern of local parenchymal tissue. The average matching error for the intensity-only registration methods was on the order of 1 mm at landmarks and 1.5 mm on fissure planes. After adding the vesselness-preserving cost function, the landmark and fissure positioning errors decreased by approximately 25% and 30%, respectively. The vesselness cost function effectively helped improve registration accuracy in regions near the thoracic cage and near the diaphragm for all the intensity-only registration algorithms tested, and it also helped produce more consistent and more reliable patterns of regional tissue deformation.

  5. High-accuracy CFD prediction methods for fluid and structure temperature fluctuations at T-junction for thermal fatigue evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Qian, Shaoxiang, E-mail: qian.shaoxiang@jgc.com [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kanamaru, Shinichiro [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kasahara, Naoto [Nuclear Engineering and Management, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-07-15

    Highlights: • Numerical methods for accurate prediction of thermal loading were proposed. • Predicted fluid temperature fluctuation (FTF) intensity is close to the experiment. • Predicted structure temperature fluctuation (STF) range is close to the experiment. • Predicted peak frequencies of FTF and STF also agree well with the experiment. • CFD results show the proposed numerical methods are of sufficiently high accuracy. - Abstract: Temperature fluctuations generated by the mixing of hot and cold fluids at a T-junction, which is widely used in nuclear power and process plants, can cause thermal fatigue failure. Conventional methods for evaluating thermal fatigue tend to provide insufficient accuracy, because they were developed from limited experimental data and a simplified one-dimensional finite element analysis (FEA). CFD/FEA coupling analysis is expected to be a useful tool for more accurate evaluation of thermal fatigue. The present paper aims to verify the accuracy of the proposed numerical methods for simulating fluid and structure temperature fluctuations at a T-junction for thermal fatigue evaluation. The dynamic Smagorinsky model (DSM) is used as the large eddy simulation (LES) sub-grid scale (SGS) turbulence model, and a hybrid scheme (HS) is adopted for the calculation of convective terms in the governing equations. Also, heat transfer between fluid and structure is calculated directly through thermal conduction, by creating a mesh with near-wall resolution (NWR) that allocates grid points within the thermal boundary sub-layer. The simulation results show that the distribution of fluid temperature fluctuation intensity and the range of structure temperature fluctuation are remarkably close to the experimental results. Moreover, the peak frequencies of the power spectral density (PSD) of both fluid and structure temperature fluctuations also agree well with the experimental results. Therefore, the numerical methods used in the present paper are

  6. Accuracy Enhancement with Processing Error Prediction and Compensation of a CNC Flame Cutting Machine Used in Spatial Surface Operating Conditions

    Directory of Open Access Journals (Sweden)

    Shenghai Hu

    2017-04-01

    Full Text Available This study deals with the precision performance of a CNC flame-cutting machine used in spatial surface operating conditions and presents an accuracy enhancement method based on processing error prediction and real-time compensation. Machining coordinate systems and transformation matrix models were established for the CNC flame processing system, considering both geometric errors and thermal deformation effects. Meanwhile, prediction and compensation models were constructed for the actual cutting situation. Focusing on the thermal deformation elements, finite element analysis was used to obtain thermal error test data, grey system theory was applied to optimize the key thermal points, and related thermal dynamics models were developed to achieve high-precision predictions. Comparison experiments between the proposed method and the teaching method were conducted on the processing system after calibration. The results showed that the proposed method is valid and that cutting quality could be improved by more than 30% relative to the teaching method. Furthermore, the proposed method can be used under any working condition by making a few adjustments to the prediction and compensation models.

  7. Prediction of RNA secondary structure using generalized centroid estimators.

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Sato, Kengo; Mituyama, Toutai; Asai, Kiyoshi

    2009-02-15

    Recent studies have shown that methods for predicting secondary structures of RNAs on the basis of posterior decoding of the base-pairing probabilities have an advantage in prediction accuracy over the conventionally utilized minimum free energy methods. However, there is room for improvement in the objective functions presented in previous studies, which are maximized in the posterior decoding with respect to the accuracy measures for secondary structures. We propose novel estimators which improve the accuracy of secondary structure prediction of RNAs. The proposed estimators maximize an objective function that is the weighted sum of the expected numbers of true positives and true negatives among the base pairs. The proposed estimators are also improved versions of the ones used in previous works, namely CONTRAfold for secondary structure prediction from a single RNA sequence and McCaskill-MEA for common secondary structure prediction from multiple alignments of RNA sequences. We clarify the relations between the proposed estimators and the estimators presented in previous works, and theoretically show that the previous estimators include additional unnecessary terms in the evaluation measures with respect to the accuracy. Furthermore, computational experiments confirm the theoretical analysis by indicating improvement in the empirical accuracy. The proposed estimators represent extensions of the centroid estimators proposed in Ding et al. and Carvalho and Lawrence, and are applicable to a wide variety of problems in bioinformatics. Supporting information and the CentroidFold software are available online at: http://www.ncrna.org/software/centroidfold/.
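
    In notation commonly used for such estimators (assumed here, not quoted from the paper), with p_{ij} the posterior base-pairing probability and gamma >= 0 the weight on true positives, an objective of the kind described above takes the form

        \hat{y} = \arg\max_{y} \sum_{i<j} \left[ \gamma \, p_{ij} \, I(y_{ij} = 1) + (1 - p_{ij}) \, I(y_{ij} = 0) \right]

    Maximizing this gain predicts base pair (i, j) exactly when p_{ij} > 1/(gamma + 1), so gamma tunes the balance between sensitivity and specificity.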

  8. Fission product model for BWR analysis with improved accuracy in high burnup

    International Nuclear Information System (INIS)

    Ikehara, Tadashi; Yamamoto, Munenari; Ando, Yoshihira

    1998-01-01

    A new fission product (FP) chain model has been developed for use in BWR lattice calculations. In establishing the model, two requirements were considered simultaneously: accuracy in predicting burnup reactivity and ease of practical application. The resultant FP model consists of 81 explicit FP nuclides and two lumped pseudo-nuclides whose absorption cross sections are independent of burnup history and fuel composition. For verification, extensive numerical tests covering a wide range of operational conditions and fuel compositions were carried out. The results indicate that the estimated errors in burnup reactivity are within 0.1%Δk for exposures up to 100 GWd/t. It is concluded that the present model offers a high degree of accuracy for FP representation in BWR lattice calculations. (author)

  9. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    Directory of Open Access Journals (Sweden)

    HosseiniAliabadi S. J.

    2015-06-01

    Full Text Available Background: The angular dependence of TLD card response may cause the results of environmental dosimetry to deviate from their true values, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective: A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method: Three personal TLD cards were placed at right angles to one another in a cylindrical holder and calibrated using 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose over a time interval. Result: The results show that the accuracy of measurement improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. Conclusion: This system can be utilized in large-scale environmental monitoring with higher accuracy.

  10. Phishtest: Measuring the Impact of Email Headers on the Predictive Accuracy of Machine Learning Techniques

    Science.gov (United States)

    Tout, Hicham

    2013-01-01

    The majority of documented phishing attacks have been carried out by email, yet few studies have measured the impact of email headers on the predictive accuracy of machine learning techniques in detecting email phishing attacks. Research has shown that the inclusion of a limited subset of email headers as features in training machine learning…

  11. A comparison of accuracy validation methods for genomic and pedigree-based predictions of swine litter size traits using Large White and simulated data.

    Science.gov (United States)

    Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T

    2018-02-01

    The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBV_full), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Yc), (v) the correlation from method (iv) divided by the square root of the heritability (Ych) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Ycs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Ych approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBV_full performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
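
    A minimal Python sketch of validation methods (iv) and (v) above; the arrays and the heritability value are hypothetical stand-ins for the study's validation animals.

        import numpy as np

        def validation_accuracy(predictions, y_corrected, h2):
            # Yc: correlation between predictions and corrected phenotypes
            yc = np.corrcoef(predictions, y_corrected)[0, 1]
            # Ych: the same correlation divided by the square root of heritability
            return yc, yc / np.sqrt(h2)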

  12. Assessment of the predictive accuracy of five in silico prediction tools, alone or in combination, and two metaservers to classify long QT syndrome gene mutations.

    Science.gov (United States)

    Leong, Ivone U S; Stuckey, Alexander; Lai, Daniel; Skinner, Jonathan R; Love, Donald R

    2015-05-13

    Long QT syndrome (LQTS) is an autosomal dominant condition predisposing to sudden death from malignant arrhythmia. Genetic testing identifies many missense single nucleotide variants of uncertain pathogenicity. Establishing genetic pathogenicity is an essential prerequisite to family cascade screening. Many laboratories use in silico prediction tools, either alone or in combination, or metaservers, in order to predict pathogenicity; however, their accuracy in the context of LQTS is unknown. We evaluated the accuracy of five in silico programs and two metaservers in the analysis of LQTS 1-3 gene variants. The in silico tools SIFT, PolyPhen-2, PROVEAN, SNPs&GO and SNAP, either alone or in all possible combinations, and the metaservers Meta-SNP and PredictSNP, were tested on 312 KCNQ1, KCNH2 and SCN5A gene variants that have previously been characterised by either in vitro or co-segregation studies as either "pathogenic" (283) or "benign" (29). The accuracy, sensitivity, specificity and Matthews Correlation Coefficient (MCC) were calculated to determine the best combination of in silico tools for each LQTS gene, and when all genes are combined. The best combination of in silico tools for KCNQ1 is PROVEAN, SNPs&GO and SIFT (accuracy 92.7%, sensitivity 93.1%, specificity 100% and MCC 0.70). The best combination of in silico tools for KCNH2 is SIFT and PROVEAN or PROVEAN, SNPs&GO and SIFT. Both combinations have the same scores for accuracy (91.1%), sensitivity (91.5%), specificity (87.5%) and MCC (0.62). In the case of SCN5A, SNAP and PROVEAN provided the best combination (accuracy 81.4%, sensitivity 86.9%, specificity 50.0%, and MCC 0.32). When all three LQT genes are combined, SIFT, PROVEAN and SNAP is the combination with the best performance (accuracy 82.7%, sensitivity 83.0%, specificity 80.0%, and MCC 0.44). Both metaservers performed better than the single in silico tools; however, they did not perform better than the best performing combination of in silico
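
    As a sketch, the performance measures named above can be computed from a 2x2 table of pathogenic/benign calls; the function below takes hypothetical counts of true/false positives and negatives.

        import math

        def classifier_metrics(tp, tn, fp, fn):
            accuracy = (tp + tn) / (tp + tn + fp + fn)
            sensitivity = tp / (tp + fn)   # pathogenic variants detected
            specificity = tn / (tn + fp)   # benign variants cleared
            denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            mcc = (tp * tn - fp * fn) / denom if denom else 0.0
            return accuracy, sensitivity, specificity, mcc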

  13. Bottom-up coarse-grained models with predictive accuracy and transferability for both structural and thermodynamic properties of heptane-toluene mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu [Department of Chemistry, The Pennsylvania State University, University Park, Pennsylvania 16802 (United States)

    2016-05-28

    This work investigates the promise of a “bottom-up” extended ensemble framework for developing coarse-grained (CG) models that provide predictive accuracy and transferability for describing both structural and thermodynamic properties. We employ a force-matching variational principle to determine system-independent, i.e., transferable, interaction potentials that optimally model the interactions in five distinct heptane-toluene mixtures. Similarly, we employ a self-consistent pressure-matching approach to determine a system-specific pressure correction for each mixture. The resulting CG potentials accurately reproduce the site-site RDFs, the volume fluctuations, and the pressure equations of state that are determined by all-atom (AA) models for the five mixtures. Furthermore, we demonstrate that these CG potentials provide similar accuracy for additional heptane-toluene mixtures that were not included in their parameterization. Surprisingly, the extended ensemble approach improves not only the transferability but also the accuracy of the calculated potentials. Additionally, we observe that the required pressure corrections strongly correlate with the intermolecular cohesion of the system-specific CG potentials. Moreover, this cohesion correlates with the relative “structure” within the corresponding mapped AA ensemble. Finally, the appendix demonstrates that the self-consistent pressure-matching approach corresponds to minimizing an appropriate relative entropy.

  14. Improving 3D structure prediction from chemical shift data

    Energy Technology Data Exchange (ETDEWEB)

    Schot, Gijs van der [Utrecht University, Computational Structural Biology, Bijvoet Center for Biomolecular Research, Faculty of Science-Chemistry (Netherlands); Zhang, Zaiyong [Technische Universitaet Muenchen, Biomolecular NMR and Munich Center for Integrated Protein Science, Department Chemie (Germany); Vernon, Robert [University of Washington, Department of Biochemistry (United States); Shen, Yang [National Institutes of Health, Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases (United States); Vranken, Wim F. [VIB, Department of Structural Biology (Belgium); Baker, David [University of Washington, Department of Biochemistry (United States); Bonvin, Alexandre M. J. J., E-mail: a.m.j.j.bonvin@uu.nl [Utrecht University, Computational Structural Biology, Bijvoet Center for Biomolecular Research, Faculty of Science-Chemistry (Netherlands); Lange, Oliver F., E-mail: oliver.lange@tum.de [Technische Universitaet Muenchen, Biomolecular NMR and Munich Center for Integrated Protein Science, Department Chemie (Germany)

    2013-09-15

    We report advances in the calculation of protein structures from chemical shift nuclear magnetic resonance data alone. Our previously developed method, CS-Rosetta, assembles structures from a library of short protein fragments picked from a large library of protein structures using chemical shifts and sequence information. Here we demonstrate that the combination of a new and improved fragment picker and the iterative sampling algorithm RASREC yields significant improvements in convergence and accuracy. Moreover, we introduce improved criteria for assessing the accuracy of the models produced by the method. The method was tested on 39 proteins in the 50-100 residue size range and yields reliable structures in 70% of the cases. All structures that passed the reliability filter were accurate (<2 Å RMSD from the reference)

  15. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.

  16. Artificial neural network prediction of ischemic tissue fate in acute stroke imaging

    Science.gov (United States)

    Huang, Shiliang; Shen, Qiang; Duong, Timothy Q

    2010-01-01

    Multimodal magnetic resonance imaging of acute stroke provides predictive value that can be used to guide stroke therapy. A flexible artificial neural network (ANN) algorithm was developed and applied to predict ischemic tissue fate on three stroke groups: 30-, 60-minute, and permanent middle cerebral artery occlusion in rats. Cerebral blood flow (CBF), apparent diffusion coefficient (ADC), and spin–spin relaxation time constant (T2) were acquired during the acute phase up to 3 hours and again at 24 hours followed by histology. Infarct was predicted on a pixel-by-pixel basis using only acute (30-minute) stroke data. In addition, neighboring pixel information and infarction incidence were also incorporated into the ANN model to improve prediction accuracy. Receiver-operating characteristic analysis was used to quantify prediction accuracy. The major findings were the following: (1) CBF alone poorly predicted the final infarct across three experimental groups; (2) ADC alone adequately predicted the infarct; (3) CBF+ADC improved the prediction accuracy; (4) inclusion of neighboring pixel information and infarction incidence further improved the prediction accuracy; and (5) prediction was more accurate for permanent occlusion, followed by 60- and 30-minute occlusion. The ANN predictive model could thus provide a flexible and objective framework for clinicians to evaluate stroke treatment options on an individual patient basis. PMID:20424631

  17. Predictive accuracy of backpropagation neural network ...

    Indian Academy of Sciences (India)

    incorporated into the BP model for high-accuracy management of irrigation water, which relies on accurate values of ET ... as seen from the recent food crisis demonstration in most ... layers by using Geographical Information System.

  18. Improvement of life prediction accuracy by introduction of strain-rate effect into modified ductility exhaustion method

    International Nuclear Information System (INIS)

    Takahashi, Yukio

    1994-01-01

    In the structural design of fast breeder reactor plants, it is important to use a reliable creep-fatigue damage evaluation method to prevent failures caused by creep-fatigue damage accumulated during the operating life. In this study, slow strain-rate fatigue tests were conducted on SUS316 steel for fast breeder reactor application (316FR), and an improvement of the creep-fatigue life estimation method was proposed based on the test results. The main results can be summarized as follows: (1) In the slow strain-rate fatigue tests, life reduction caused by creep damage was observed, as in the case of strain-hold creep-fatigue tests. (2) The strain-rate dependency of creep damage was introduced into the modified ductility exhaustion method previously proposed by the author. Good agreement of predicted lives with observed lives was achieved for SUS304 and 316FR steels with the method proposed here. (author)

  19. The use of patient factors to improve the prediction of operative duration using laparoscopic cholecystectomy.

    Science.gov (United States)

    Thiels, Cornelius A; Yu, Denny; Abdelrahman, Amro M; Habermann, Elizabeth B; Hallbeck, Susan; Pasupathy, Kalyan S; Bingener, Juliane

    2017-01-01

    Reliable prediction of operative duration is essential for improving patient and care team satisfaction, optimizing resource utilization and reducing cost. Current operative scheduling systems are unreliable and contribute to costly over- and underestimation of operative time. We hypothesized that the inclusion of patient-specific factors would improve the accuracy of predicting operative duration. We reviewed all elective laparoscopic cholecystectomies performed at a single institution between 01/2007 and 06/2013. Concurrent procedures were excluded. Univariate analysis evaluated the effect of age, gender, BMI, ASA, laboratory values, smoking, and comorbidities on operative duration. Multivariable linear regression models were constructed using the significant factors (p < 0.05) and compared with historical surgeon-specific and procedure-specific operative durations. External validation was done using the ACS-NSQIP database (n = 11,842). A total of 1801 laparoscopic cholecystectomy patients met inclusion criteria. Female sex was associated with reduced operative duration (-7.5 min, p < 0.001 vs. male sex) while increasing BMI (+5.1 min BMI 25-29.9, +6.9 min BMI 30-34.9, +10.4 min BMI 35-39.9, +17.0 min BMI 40+, all p < 0.05 vs. normal BMI), increasing ASA (+7.4 min ASA III, +38.3 min ASA IV, all p < 0.01 vs. ASA I), and elevated liver function tests (+7.9 min, p < 0.01 vs. normal) were predictive of increased operative duration on univariate analysis. A model was then constructed using these predictive factors. The traditional surgical scheduling system was poorly predictive of actual operative duration (R² = 0.001) compared to the patient factors model (R² = 0.08). The model remained predictive on external validation (R² = 0.14). The addition of surgeon as a variable in the institutional model further improved the predictive ability of the model (R² = 0.18). The use of routinely available pre-operative patient factors improves the prediction of operative duration.

  20. Accuracy assessment of pharmacogenetically predictive warfarin dosing algorithms in patients of an academic medical center anticoagulation clinic.

    Science.gov (United States)

    Shaw, Paul B; Donovan, Jennifer L; Tran, Maichi T; Lemon, Stephenie C; Burgwinkle, Pamela; Gore, Joel

    2010-08-01

    The objectives of this retrospective cohort study are to evaluate the accuracy of pharmacogenetic warfarin dosing algorithms in predicting therapeutic dose and to determine if this degree of accuracy warrants the routine use of genotyping to prospectively dose patients newly started on warfarin. Seventy-one patients of an outpatient anticoagulation clinic at an academic medical center who were age 18 years or older on a stable, therapeutic warfarin dose with international normalized ratio (INR) goal between 2.0 and 3.0, and cytochrome P450 isoenzyme 2C9 (CYP2C9) and vitamin K epoxide reductase complex subunit 1 (VKORC1) genotypes available between January 1, 2007 and September 30, 2008 were included. Six pharmacogenetic warfarin dosing algorithms were identified from the medical literature. Additionally, a 5 mg fixed dose approach was evaluated. Three algorithms, Zhu et al. (Clin Chem 53:1199-1205, 2007), Gage et al. (J Clin Ther 84:326-331, 2008), and International Warfarin Pharmacogenetic Consortium (IWPC) (N Engl J Med 360:753-764, 2009) were similar in the primary accuracy endpoints with mean absolute error (MAE) ranging from 1.7 to 1.8 mg/day and coefficient of determination R² from 0.61 to 0.66. However, the Zhu et al. algorithm severely over-predicted dose (defined as ≥2× or ≥2 mg/day more than the actual dose) in twice as many (14 vs. 7%) patients as Gage et al. 2008 and IWPC 2009. In conclusion, the algorithms published by Gage et al. 2008 and the IWPC 2009 were the two most accurate pharmacogenetically based equations available in the medical literature in predicting therapeutic warfarin dose in our study population. However, the degree of accuracy demonstrated does not support the routine use of genotyping to prospectively dose all patients newly started on warfarin.

  1. a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    Science.gov (United States)

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

    In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques have the disadvantages of low measurement accuracy, complicated circuit structure and large error, and high-precision time interval data cannot be obtained with them. In order to obtain higher quality remote sensing cloud images based on time interval measurement, a higher accuracy time interval measurement method is proposed. The method is based on charging a capacitor and sampling the change in capacitor voltage at the same time. Firstly, an approximate model of the capacitor voltage curve during the pulse time of flight is fitted to the sampled data. Then, the whole charging time is obtained from the fitted function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
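
    A rough Python sketch of the charge-and-fit idea, assuming an RC charging law and hypothetical component values; the paper's exact fitting model and hardware details are not specified here.

        import numpy as np
        from scipy.optimize import curve_fit

        def charge_curve(t, v0, tau):
            # RC charging law: V(t) = V0 * (1 - exp(-t / tau))
            return v0 * (1.0 - np.exp(-t / tau))

        # Hypothetical high-speed A/D samples taken while the capacitor charges
        t_samples = np.linspace(0.0, 80e-9, 40)
        v_samples = charge_curve(t_samples, 3.3, 50e-9)

        (v0_fit, tau_fit), _ = curve_fit(charge_curve, t_samples, v_samples,
                                         p0=(3.0, 40e-9))
        # Invert the fitted curve to recover the elapsed charging time from
        # the final sampled voltage
        t_interval = -tau_fit * np.log(1.0 - v_samples[-1] / v0_fit)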

  2. Sensitivity, specificity, predictive value and accuracy of ultrasonography in pregnancy rate prediction in Sahelian goats after progesterone impregnated sponge synchronization

    Directory of Open Access Journals (Sweden)

    Justin Kouamo

    2014-09-01

    Full Text Available Aim: This study aimed to evaluate the sensitivity, specificity, predictive value and accuracy of ultrasonography in pregnancy rate (PR) prediction in Sahelian goats after progesterone-impregnated sponge synchronization, within the framework of a caprine artificial insemination (AI) program in Fatick (Senegal). Materials and Methods: Of 193 candidate goats in the AI program, 167 were selected (day -50) in six villages. Estrus was synchronized by progesterone-impregnated sponges installed for 11 days. Two days before sponge removal (day -4), each goat was treated with 500 IU of equine chorionic gonadotropin and 50 μg of d-cloprostenol. All goats were inseminated (day 0) with Alpine goat semen from France at 45±3 h after sponge removal (day -2). Real-time B-mode ultrasonography was performed on days -50, -13, 0, 40 and 60 relative to AI. Results: Selection rate, estrus response rate, AI rate, and PR at days 40 and 60 were 86.53%, 71.85%, 83.34%, 51% and 68% (p<0.05), respectively. Values of sensitivity, specificity, positive and negative predictive value, accuracy, total conformity, conformity of correct positives, conformity of correct negatives and discordance of pregnancy diagnosis by trans-abdominal ultrasonography (TU) were 98.03%, 63.26%, 73.52%, 3.12%, 81%, 81%, 50%, 31% and 19%, respectively. Conclusion: These results indicate that TU can be performed in goats under traditional conditions and emphasize the importance of re-examining goats with negative or doubtful TU diagnoses performed at day 40 post-AI.

  3. Accuracy of some simple models for predicting particulate interception and retention in agricultural systems

    International Nuclear Information System (INIS)

    Pinder, J.E. III; McLeod, K.W.; Adriano, D.C.

    1989-01-01

    The accuracy of three radionuclide transfer models for predicting the interception and retention of airborne particles by agricultural crops was tested using Pu-bearing aerosols released to the atmosphere from nuclear fuel facilities on the U.S. Department of Energy's Savannah River Plant, near Aiken, SC. The models evaluated were: (1) NRC, the model defined in U.S. Nuclear Regulatory Guide 1.109; (2) FOOD, a model similar to the NRC model that also predicts concentrations in grains; and (3) AGNS, a model developed from the NRC model for the southeastern United States. Plutonium concentrations in vegetation and grain were predicted from measured deposition rates and compared to concentrations observed in the field. Crops included wheat, soybeans, corn and cabbage. Although predictions of the three models differed by less than a factor of 4, they showed different abilities to predict concentrations observed in the field. The NRC and FOOD models consistently underpredicted the observed Pu concentrations for vegetation. The AGNS model was a more accurate predictor of Pu concentrations for vegetation. Both the FOOD and AGNS models accurately predicted the Pu concentrations for grains

  4. Numerical prediction of cavitating flow around a hydrofoil using PANS and improved shear stress transport k-omega model

    Directory of Open Access Journals (Sweden)

    Zhang De-Sheng

    2015-01-01

    Full Text Available The prediction accuracies of the partially-averaged Navier-Stokes (PANS) model and the improved shear stress transport k-ω turbulence model for simulating the unsteady cavitating flow around a hydrofoil are discussed in this paper. Numerical results show that the two turbulence models can effectively reproduce the cavitation evolution process. The numerical prediction of the cycle time of cavitation inception, development, detachment, and collapse agrees well with the experimental data. It is found that the vortex pair induced by the interaction between the re-entrant jet and the mainstream is responsible for the instability of the cavitation shedding flow.

  5. Accuracy of eosinophils and eosinophil cationic protein to predict steroid improvement in asthma

    NARCIS (Netherlands)

    Meijer, RJ; Postma, DS; Kauffman, HF; Arends, LR; Koeter, GH; Kerstjens, HAM

    Background: There is a large variability in clinical response to corticosteroid treatment in patients with asthma. Several markers of inflammation, like eosinophils and eosinophil cationic protein (ECP), as well as exhaled nitric oxide (NO), are good candidates to predict clinical response. Aim: We

  6. A Critical Analysis and Validation of the Accuracy of Wave Overtopping Prediction Formulae for OWECs

    Directory of Open Access Journals (Sweden)

    David Gallach-Sánchez

    2018-01-01

    Full Text Available The development of wave energy devices has grown in recent years. One type of device is the overtopping wave energy converter (OWEC), for which knowledge of wave overtopping rates is a basic and crucial design input. The most interesting range to study is OWECs with steep slopes up to vertical walls and with very small or zero freeboards, where the overtopping rate is maximized; such geometries can be generalized as steep low-crested structures. Recently, wave overtopping prediction formulae have been published for this type of structure, although their accuracy has not been fully assessed, as the overtopping data available in this range are scarce. We performed a critical analysis of the overtopping prediction formulae for steep low-crested structures and validated the accuracy of these formulae against new overtopping data for steep low-crested structures obtained at Ghent University. This paper summarizes the existing knowledge about average wave overtopping, describes the physical model tests performed, analyses the results and compares them to existing prediction formulae. The new dataset extends the wave overtopping data towards vertical walls and zero-freeboard structures. In general, the new dataset validated the more recent overtopping formulae focused on steep slopes with small freeboards, although the formulae underpredict the average overtopping rates for very small and zero relative crest freeboards.

  7. Improvement of prediction ability for genomic selection of dairy cattle by including dominance effects.

    Directory of Open Access Journals (Sweden)

    Chuanyu Sun

    Full Text Available Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix differed between the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both breeds.

  8. Acute imaging does not improve ASTRAL score's accuracy despite having a prognostic value.

    OpenAIRE

    Ntaios, G.; Papavasileiou, V.; Faouzi, M.; Vanacker, P.; Wintermark, M.; Michel, P.

    2014-01-01

    BACKGROUND: The ASTRAL score was recently shown to reliably predict three-month functional outcome in patients with acute ischemic stroke. AIM: The study aims to investigate whether information from multimodal imaging increases ASTRAL score's accuracy. METHODS: All patients registered in the ASTRAL registry until March 2011 were included. In multivariate logistic-regression analyses, we added covariates derived from parenchymal, vascular, and perfusion imaging to the 6-parameter model o...

  9. Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.

    2016-09-01

    A cadastral map is parcel-based information specifically designed to define the limits of property boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatial technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. As a result of this modernization, the new cadastral database will no longer be based on single, static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, this reform has raised unexpected issues that remain essential to address. The main focus of this study is to review the issues generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The results of this review will serve as a foundation for investigating systematic and effective methods for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  10. Global Optimization of Ventricular Myocyte Model to Multi-Variable Objective Improves Predictions of Drug-Induced Torsades de Pointes

    Directory of Open Access Journals (Sweden)

    Trine Krogh-Madsen

    2017-12-01

    Full Text Available In silico cardiac myocyte models present powerful tools for drug safety testing and for predicting phenotypical consequences of ion channel mutations, but their accuracy is sometimes limited. For example, several models describing human ventricular electrophysiology perform poorly when simulating effects of long QT mutations. Model optimization represents one way of obtaining models with stronger predictive power. Using a recent human ventricular myocyte model, we demonstrate that model optimization to clinical long QT data, in conjunction with physiologically-based bounds on intracellular calcium and sodium concentrations, better constrains model parameters. To determine if the model optimized to congenital long QT data better predicts risk of drug-induced long QT arrhythmogenesis, in particular Torsades de Pointes risk, we tested the optimized model against a database of known arrhythmogenic and non-arrhythmogenic ion channel blockers. When doing so, the optimized model provided an improved risk assessment. In particular, we demonstrate an elimination of false-positive outcomes generated by the baseline model, in which simulations of non-torsadogenic drugs, in particular verapamil, predict action potential prolongation. Our results underscore the importance of currents beyond those directly impacted by a drug block in determining torsadogenic risk. Our study also highlights the need for rich data in cardiac myocyte model optimization and substantiates such optimization as a method to generate models with higher accuracy of predictions of drug-induced cardiotoxicity.

  11. New polymorphic tetranucleotide microsatellites improve scoring accuracy in the bottlenose dolphin Tursiops aduncus

    NARCIS (Netherlands)

    Nater, Alexander; Kopps, Anna M.; Kruetzen, Michael

    We isolated and characterized 19 novel tetranucleotide microsatellite markers in the Indo-Pacific bottlenose dolphin (Tursiops aduncus) in order to improve genotyping accuracy in applications like large-scale population-wide paternity and relatedness assessments. One hundred T. aduncus from Shark

  12. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than complex numerical forecasting models, which occupy large computational resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than with traditional artificial neural networks, which have low predictive accuracy.

  13. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  14. Improved Accuracy of Myocardial Perfusion SPECT for the Detection of Coronary Artery Disease by Utilizing a Support Vector Machines Algorithm

    Science.gov (United States)

    Arsanjani, Reza; Xu, Yuan; Dey, Damini; Fish, Matthews; Dorbala, Sharmila; Hayes, Sean; Berman, Daniel; Germano, Guido; Slomka, Piotr

    2012-01-01

    We aimed to improve the diagnostic accuracy of automatic myocardial perfusion SPECT (MPS) interpretation analysis for the prediction of coronary artery disease (CAD) by integrating several quantitative perfusion and functional variables for non-corrected (NC) data using support vector machines (SVM), a computer method for machine learning. Methods: 957 rest/stress 99mTc gated MPS NC studies from 623 consecutive patients with correlating invasive coronary angiography and 334 with low likelihood of CAD (LLK < 5%) were assessed. Patients with stenosis ≥ 50% in the left main or ≥ 70% in all other vessels were considered abnormal. Total perfusion deficit (TPD) was computed automatically. In addition, ischemic changes (ISCH) and ejection fraction changes (EFC) between stress and rest were derived by quantitative software. The SVM was trained using a group of 125 patients (25 LLK, 25 0-, 25 1-, 25 2- and 25 3-vessel CAD) using the above quantitative variables and second-order polynomial fitting. The remaining patients (N = 832) were categorized based on probability estimates, with CAD defined as a probability estimate ≥ 0.50. The diagnostic accuracy of SVM was also compared to visual segmental scoring by two experienced readers. Results: Sensitivity of SVM (84%) was significantly better than ISCH (75%, p < 0.05) and EFC (31%, p < 0.05). Specificity of SVM (88%) was significantly better than that of TPD (78%, p < 0.05) and EFC (77%, p < 0.05). Diagnostic accuracy of SVM (86%) was significantly better than TPD (81%), ISCH (81%), or EFC (46%) (p < 0.05 for all). The receiver-operating-characteristic area-under-the-curve (ROC-AUC) for SVM (0.92) was significantly better than TPD (0.90), ISCH (0.87), and EFC (0.60) (p < 0.001 for all). Diagnostic accuracy of SVM was comparable to the overall accuracy of both visual readers (85% vs. 84%, p < 0.05). ROC-AUC for SVM (0.92) was significantly better than that of both visual readers (0.87 and 0.88, p < 0.03). Conclusion: Computational
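
    A minimal sketch of the classification step, using scikit-learn as a stand-in implementation. The second-order polynomial kernel and the 0.50 probability threshold follow the description above; the feature matrix and labels are hypothetical.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X_train = rng.normal(size=(125, 3))   # columns: TPD, ISCH, EFC (stand-ins)
        y_train = rng.integers(0, 2, 125)     # 1 = CAD, 0 = normal

        svm = SVC(kernel="poly", degree=2, probability=True).fit(X_train, y_train)
        p_cad = svm.predict_proba(X_train)[:, 1]
        has_cad = p_cad >= 0.5                # the study's decision rule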

  15. Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images

    Directory of Open Access Journals (Sweden)

    Nina Merkle

    2017-06-01

    Full Text Available Improving the geo-localization of optical satellite images is an important pre-processing step for many remote sensing tasks, such as monitoring by image time series or scene analysis after sudden events. These tasks require geo-referenced and precisely co-registered multi-sensor data. Images captured by the high-resolution synthetic aperture radar (SAR) satellite TerraSAR-X exhibit an absolute geo-location accuracy within a few decimeters. These images therefore represent a reliable source for improving the geo-location accuracy of optical images, which is on the order of tens of meters. In this paper, a deep learning-based approach for improving the geo-localization accuracy of optical satellite images through SAR reference data is investigated. Image registration between SAR and optical images requires few, but accurate and reliable, matching points. These are derived from a Siamese neural network. The network is trained using TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe, in order to learn the two-dimensional spatial shifts between optical and SAR image patches. The results confirm that accurate and reliable matching points can be generated with higher matching accuracy and precision than state-of-the-art approaches.

  16. Improving genomic prediction for Danish Jersey using a joint Danish-US reference population

    DEFF Research Database (Denmark)

    Su, Guosheng; Nielsen, Ulrik Sander; Wiggans, G

    Accuracy of genomic prediction depends on the information in the reference population. Achieving an adequate sized reference population is a challenge for genomic prediction in small cattle populations. One way to increase the size of reference population is to combine reference data from different...... populations. The objective of this study was to assess the gain of genomic prediction accuracy when including US Jersey bulls in the Danish Jersey reference population. The data included 1,262 Danish progeny-tested bulls and 1,157 US progeny-tested bulls. Genomic breeding values (GEBV) were predicted using...... a GBLUP model from the Danish reference population and the joint Danish-US reference population. The traits in the analysis were milk yield, fat yield, protein yield, fertility, mastitis, longevity, body conformation, feet & legs, and longevity. Eight of the nine traits benefitted from the inclusion of US...

  17. An NMR-based scoring function improves the accuracy of binding pose predictions by docking by two orders of magnitude

    Energy Technology Data Exchange (ETDEWEB)

    Orts, Julien [EMBL, Structure and Computational Biology Unit (Germany); Bartoschek, Stefan [Industriepark Hoechst, Sanofi-Aventis Deutschland GmbH, R and D LGCR/Parallel Synthesis and Natural Products (Germany); Griesinger, Christian [Max Planck Institute for Biophysical Chemistry (Germany); Monecke, Peter [Industriepark Hoechst, Sanofi-Aventis Deutschland GmbH, R and D LGCR/Structure, Design and Informatics (Germany); Carlomagno, Teresa, E-mail: teresa.carlomagno@embl.de [EMBL, Structure and Computational Biology Unit (Germany)

    2012-01-15

    Low-affinity ligands can be efficiently optimized into high-affinity drug leads by structure based drug design when atomic-resolution structural information on the protein/ligand complexes is available. In this work we show that the use of a few, easily obtainable, experimental restraints improves the accuracy of the docking experiments by two orders of magnitude. The experimental data are measured in nuclear magnetic resonance spectra and consist of protein-mediated NOEs between two competitively binding ligands. The methodology can be widely applied as the data are readily obtained for low-affinity ligands in the presence of non-labelled receptor at low concentration. The experimental inter-ligand NOEs are efficiently used to filter and rank complex model structures that have been pre-selected by docking protocols. This approach dramatically reduces the degeneracy and inaccuracy of the chosen model in docking experiments, is robust with respect to inaccuracy of the structural model used to represent the free receptor and is suitable for high-throughput docking campaigns.

  18. MO-DE-210-05: Improved Accuracy of Liver Feature Motion Estimation in B-Mode Ultrasound for Image-Guided Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    O’Shea, T; Bamber, J; Harris, E [The Institute of Cancer Research & Royal Marsden, Sutton and London (United Kingdom)

    2015-06-15

    Purpose: In similarity-measure based motion estimation, incremental tracking (or template update) is challenging due to quantization, bias and accumulation of tracking errors. A method is presented which aims to improve the accuracy of incrementally tracked liver feature motion in long ultrasound sequences. Methods: Liver ultrasound data from five healthy volunteers under free breathing were used (15 to 17 Hz imaging rate, 2.9 to 5.5 minutes in length). A normalised cross-correlation template matching algorithm was implemented to estimate tissue motion. Blood vessel motion was manually annotated for comparison with three tracking code implementations: (i) naive incremental tracking (IT), (ii) IT plus a similarity threshold (ST) template-update method and (iii) ST coupled with a prediction-based state observer, known as the alpha-beta filter (ABST). Results: The ABST method produced substantial improvements in vessel tracking accuracy for two-dimensional vessel motion ranging from 7.9 mm to 40.4 mm (with mean respiratory period: 4.0 ± 1.1 s). The mean and 95% tracking errors were 1.6 mm and 1.4 mm, respectively (compared to 6.2 mm and 9.1 mm, respectively, for naive incremental tracking). Conclusions: High confidence in the output motion estimation data is required for ultrasound-based motion estimation for radiation therapy beam tracking and gating. The method presented has potential for monitoring liver vessel translational motion in high frame rate B-mode data with the required accuracy. This work is supported by Cancer Research UK Programme Grant C33589/A19727.
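
    A minimal Python sketch of an alpha-beta filter of the kind referenced above, applied to a one-dimensional stream of template-matching position measurements; the gain values are hypothetical.

        def alpha_beta_filter(measurements, dt, alpha=0.5, beta=0.1):
            # Constant-velocity state observer: predict the position, then
            # blend in each new measurement through the innovation r.
            x, v = measurements[0], 0.0
            estimates = []
            for z in measurements[1:]:
                x_pred = x + v * dt        # predict one frame ahead
                r = z - x_pred             # innovation (measurement residual)
                x = x_pred + alpha * r     # correct position
                v = v + (beta / dt) * r    # correct velocity
                estimates.append(x)
            return estimates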

  19. Improvement of User's Accuracy Through Classification of Principal Component Images and Stacked Temporal Images

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kumar Kaushal

    2010-01-01

    The classification accuracy of the various categories on classified remotely sensed images is usually evaluated by two different measures of accuracy, namely, producer's accuracy (PA) and user's accuracy (UA). The PA of a category indicates to what extent the reference pixels of the category are correctly classified, whereas the UA of a category represents to what extent the other categories are less misclassified into the category in question. Therefore, the UA of the various categories determines the reliability of their interpretation on the classified image and is more important to the analyst than the PA. The present investigation was performed to determine whether the UA of the various categories improves on the classified image of the principal components of the original bands and on the classified image of the stacked image of two different years. We performed the analyses using IRS LISS III images of two different years, i.e., 1996 and 2009, which represent different magnitudes of urbanization, and the stacked image of these two years, pertaining to the Ranchi area, Jharkhand, India, with a view to assessing the impact of urbanization on the UA of the different categories. The results of the investigation demonstrated a significant improvement in the UA of the impervious categories in the classified image of the stacked image, which is attributable to the aggregation of spectral information from twice the number of bands from two different years. On the other hand, the classified image of the principal components did not show any improvement in UA compared to the original images.
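
    As a sketch of the two measures defined above, assuming a confusion matrix whose rows are reference classes and columns are mapped classes (the matrix itself is hypothetical):

        import numpy as np

        cm = np.array([[50,  5,  2],
                       [ 4, 60,  6],
                       [ 1,  3, 70]])

        producers_accuracy = np.diag(cm) / cm.sum(axis=1)  # per reference class
        users_accuracy     = np.diag(cm) / cm.sum(axis=0)  # per mapped class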

  20. OXBench: A benchmark for evaluation of protein multiple sequence alignment accuracy

    Directory of Open Access Journals (Sweden)

    Searle Stephen MJ

    2003-10-01

    Full Text Available Abstract Background: The alignment of two or more protein sequences provides a powerful guide in the prediction of the protein structure and in identifying key functional residues; however, the utility of any prediction is completely dependent on the accuracy of the alignment. In this paper we describe a suite of reference alignments derived from the comparison of protein three-dimensional structures, together with evaluation measures and software that allow automatically generated alignments to be benchmarked. We test the OXBench benchmark suite on alignments generated by the AMPS multiple alignment method, then apply the suite to compare eight different multiple alignment algorithms. The benchmark shows the current state of the art for alignment accuracy and provides a baseline against which new alignment algorithms may be judged. Results: The simple hierarchical multiple alignment algorithm, AMPS, performed as well as or better than more modern methods such as CLUSTALW once the PAM250 pair-score matrix was replaced by a BLOSUM series matrix. AMPS gave an accuracy in Structurally Conserved Regions (SCRs) of 89.9% over a set of 672 alignments. The T-COFFEE method on a data set of families with http://www.compbio.dundee.ac.uk. Conclusions: The OXBench suite of reference alignments, evaluation software and results database provide a convenient method to assess progress in sequence alignment techniques. Evaluation measures that were dependent on comparison to a reference alignment were found to give good discrimination between methods. The STAMP Sc score, which is independent of a reference alignment, also gave good discrimination. Application of OXBench in this paper shows that, with the exception of T-COFFEE, the majority of the improvement in alignment accuracy seen since 1985 stems from improved pair-score matrices rather than algorithmic refinements. The maximum theoretical alignment accuracy obtained by pooling results over all methods was 94

  1. IFE Target Injection Tracking and Position Prediction Update

    International Nuclear Information System (INIS)

    Petzoldt, Ronald W.; Jonestrask, Kevin

    2005-01-01

    To achieve high gain in an inertial fusion energy power plant, driver beams must hit direct-drive targets with ±20 μm accuracy (±100 μm for indirect drive). Targets will have to be tracked with even greater accuracy. The conceptual design of our tracking system, which predicts target arrival position and timing based on position measurements outside of the reaction chamber, was previously described. The system has been built and has begun tracking targets at the first detector station. Additional detector stations are being modified for an increased field of view. After three tracking stations are operational, position predictions at the final station will be compared to position measurements at that station as a measure of target position prediction accuracy. The as-installed design will be described together with initial target tracking and position prediction accuracy results. Design modifications that allow for improved accuracy and/or in-chamber target tracking will also be presented

  2. Regression trees for predicting mortality in patients with cardiovascular disease: What improvement is achieved by using ensemble-based methods?

    Science.gov (United States)

    Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V

    2012-01-01

    In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
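
    As a rough illustration of the comparison described above, the sketch below (synthetic data and scikit-learn defaults, not the study's cohorts, tuning, or metrics) fits a single tree, the three ensemble methods, and a logistic model on spline-expanded features, and scores each by out-of-sample AUC; cubic B-splines stand in here for the paper's restricted cubic splines.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                                  RandomForestClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import SplineTransformer
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=5000, n_features=12, n_informative=6,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "single tree": DecisionTreeClassifier(max_depth=5),
        "bagged trees": BaggingClassifier(n_estimators=200),   # bags decision trees
        "random forest": RandomForestClassifier(n_estimators=200),
        "boosted trees": GradientBoostingClassifier(),
        "logistic + splines": make_pipeline(SplineTransformer(degree=3, n_knots=5),
                                            LogisticRegression(max_iter=1000)),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name:18s} AUC = {auc:.3f}")
    ```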

  3. Prediction of renal function (GFR) from cystatin C and creatinine in children: Body cell mass increases accuracy of the estimate

    DEFF Research Database (Denmark)

    Andersen, Trine Borup; Jødal, Lars; Bøgsted, Martin

    using robust regression in a forward, stepwise procedure. GFR (mL/min) was the dependent variable. The accuracy and precision of the prediction model were compared to other prediction models from the literature, using k-fold cross-validation. Local constants and coefficients were calculated for all...

  4. Improving decision speed, accuracy and group cohesion through early information gathering in house-hunting ants.

    Science.gov (United States)

    Stroeymeyt, Nathalie; Giurfa, Martin; Franks, Nigel R

    2010-09-29

    Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice-versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of previous experience and/or knowledge of individuals on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. Colonies allowed to explore a high quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than emigrating naïve colonies. This resulted in increased speed in single choice emigrations and higher colony cohesion in binary choice emigrations. Additionally, colonies allowed to explore both high and low quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than emigrating naïve colonies. These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to improve simultaneously all aspects of the decision-making process--i.e. speed, accuracy and cohesion--and partly circumvent the speed-accuracy trade-off classically observed during emigrations. These findings should be taken into account

  5. Improved accuracy of multiple ncRNA alignment by incorporating structural information into a MAFFT-based framework

    Directory of Open Access Journals (Sweden)

    Toh Hiroyuki

    2008-04-01

    Full Text Available Abstract Background Structural alignment of RNAs is becoming important since the discovery of functional non-coding RNAs (ncRNAs). Recent studies, mainly based on various approximations of the Sankoff algorithm, have resulted in considerable improvement in the accuracy of pairwise structural alignment. In contrast, for cases with more than two sequences, the practical merit of structural alignment remains unclear compared to traditional sequence-based methods, although the importance of multiple structural alignment is widely recognized. Results We took a different approach from a straightforward extension of the Sankoff algorithm to multiple alignment, from the viewpoints of accuracy and time complexity. As a new option of the MAFFT alignment program, we developed a multiple RNA alignment framework, X-INS-i, which builds a multiple alignment with an iterative method incorporating structural information through two components: (1) pairwise structural alignments by an external pairwise alignment method such as SCARNA or LaRA, and (2) a new objective function, Four-way Consistency, derived from the base-pairing probability of every sub-aligned group at every multiple alignment stage. Conclusion The BRAliBASE benchmark showed that X-INS-i outperforms other currently available methods in the sum-of-pairs score (SPS) criterion. As a basis for predicting common secondary structure, the accuracy of the present method is comparable to, or even higher than, those of the current leading methods such as RNA Sampler. The X-INS-i framework can be used for building a multiple RNA alignment from any combination of algorithms for pairwise RNA alignment and base-pairing probability. The source code is available at the webpage found in the Availability and requirements section.

  6. To compare the accuracy of Prayer's sign and Mallampatti test in predicting difficult intubation in Diabetic patients

    International Nuclear Information System (INIS)

    Baig, M. M. A.; Khan, F. H.

    2014-01-01

    Objective: To determine the accuracy of Prayer's sign and the Mallampatti test in predicting difficult endotracheal intubation in diabetic patients. Methods: The cross-sectional study was performed at Aga Khan University Hospital, Karachi, from January 2009 to April 2010, and comprised 357 patients who required endotracheal intubation for elective surgical procedures. Prayer's sign and Mallampatti tests were performed for airway assessment by trained observers. Ease or difficulty of laryngoscopy, after the patient was fully anaesthetised with a standard technique, was observed, and the laryngoscopic view on the first attempt was rated according to the Cormack-Lehane grade. SPSS 15 was used for statistical analysis. Results: Of the 357 patients, 125 (35%) were classified as difficult to intubate. Prayer's sign showed significantly lower accuracy and lower positive and negative predictive values than the Mallampatti test. The sensitivity of Prayer's sign (29.6; 95% confidence interval, 21.9-38.5) was lower than that of the Mallampatti test (79.3; 95% confidence interval, 70.8-85.7), while the specificity of the two tests was not significantly different. Conclusion: Prayer's sign is not acceptable as a single best bedside test for prediction of difficult intubation. (author)
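
    For reference, the reported measures derive from a 2x2 table of test result against actual intubation difficulty. A minimal sketch (pure Python; the false-positive and true-negative counts below are invented, chosen only so that sensitivity reproduces the 29.6% figure for 125 difficult intubations):

    ```python
    import math

    def diagnostic_metrics(tp, fp, fn, tn, z=1.96):
        """Sensitivity, specificity, PPV and NPV with normal-approximation CIs."""
        def prop_ci(k, n):
            p = k / n
            half = z * math.sqrt(p * (1 - p) / n)
            return p, max(0.0, p - half), min(1.0, p + half)
        return {
            "sensitivity": prop_ci(tp, tp + fn),
            "specificity": prop_ci(tn, tn + fp),
            "PPV": prop_ci(tp, tp + fp),
            "NPV": prop_ci(tn, tn + fn),
        }

    # 37/125 difficult airways flagged (29.6% sensitivity); fp and tn invented.
    for name, (p, lo, hi) in diagnostic_metrics(tp=37, fp=18, fn=88, tn=214).items():
        print(f"{name}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
    ```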

  7. A simple algorithm improves mass accuracy to 50-100 ppm for delayed extraction linear MALDI-TOF mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Hack, Christopher A.; Benner, W. Henry

    2001-10-31

    A simple mathematical technique for improving mass calibration accuracy of linear delayed extraction matrix assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass data, where Δm is the difference between the theoretical mass of calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by factors of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. This method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
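
    The correction is straightforward to reproduce. A minimal sketch (numpy, with invented calibrant masses) that fits the parabola to the Δm-versus-mass residuals and subtracts its prediction from measured masses:

    ```python
    import numpy as np

    # Hypothetical calibrants: theoretical masses vs. masses returned by the
    # linear sqrt(m/z)-vs.-time-of-flight calibration (Da).
    theoretical = np.array([1046.5, 1296.7, 1672.9, 2093.1, 2465.2, 3494.7])
    measured    = np.array([1046.9, 1297.0, 1673.1, 2093.0, 2464.8, 3493.9])

    dm = measured - theoretical               # calibration residuals (Δm)
    coeffs = np.polyfit(measured, dm, deg=2)  # parabola: Δm as a function of mass

    def correct(mass):
        """Subtract the deviation predicted by the fitted parabola."""
        return mass - np.polyval(coeffs, mass)

    print(correct(np.array([1500.3, 2800.6])))  # corrected unknown masses
    ```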

  8. PCA3 and PCA3-Based Nomograms Improve Diagnostic Accuracy in Patients Undergoing First Prostate Biopsy

    Directory of Open Access Journals (Sweden)

    Virginie Vlaeminck-Guillem

    2013-08-01

    Full Text Available While now recognized as an aid to predict repeat prostate biopsy outcome, the urinary PCA3 (prostate cancer gene 3) test has also been recently advocated to predict initial biopsy results. The objective is to evaluate the performance of the PCA3 test in predicting the results of initial prostate biopsies and to determine whether its incorporation into specific nomograms reinforces its diagnostic value. A prospective study included 601 consecutive patients referred for initial prostate biopsy. The PCA3 test was performed before ≥12-core initial prostate biopsy, along with standard risk factor assessment. The diagnostic performance of the PCA3 test was evaluated. The three available nomograms (Hansen’s and Chun’s nomograms, as well as the updated Prostate Cancer Prevention Trial risk calculator, PCPT) were applied to the cohort, and their predictive accuracies were assessed in terms of biopsy outcome: the presence of any prostate cancer (PCa) and of high-grade prostate cancer (HGPCa). The PCA3 score provided significant predictive accuracy. While the PCPT risk calculator appeared less accurate, both Chun’s and Hansen’s nomograms provided good calibration and high net benefit on decision curve analyses. When applying nomogram-derived PCa probability thresholds ≤30%, ≤6% of HGPCa would have been missed, while avoiding up to 48% of unnecessary biopsies. The urinary PCA3 test and PCA3-incorporating nomograms can be considered reliable tools to aid in the initial biopsy decision.

  9. Prediction of novel pre-microRNAs with high accuracy through boosting and SVM.

    Science.gov (United States)

    Zhang, Yuanwei; Yang, Yifan; Zhang, Huan; Jiang, Xiaohua; Xu, Bo; Xue, Yu; Cao, Yunxia; Zhai, Qian; Zhai, Yong; Xu, Mingqing; Cooke, Howard J; Shi, Qinghua

    2011-05-15

    High-throughput deep-sequencing technology has generated an unprecedented number of expressed short sequence reads, presenting not only an opportunity but also a challenge for the prediction of novel microRNAs. To verify the existence of candidate microRNAs, we have to show that these short sequences can be processed from candidate pre-microRNAs. However, it is laborious and time-consuming to verify these using existing experimental techniques. Therefore, here, we describe a new method, miRD, which is constructed using two feature selection strategies based on support vector machines (SVMs) and a boosting method. It is a high-efficiency tool for novel pre-microRNA prediction, with accuracy up to 94.0% across different species. miRD is implemented in PHP/PERL+MySQL+R and can be freely accessed at http://mcg.ustc.edu.cn/rpg/mird/mird.php.

  10. Efficiency Improvement of Kalman Filter for GNSS/INS through One-Step Prediction of P Matrix

    Directory of Open Access Journals (Sweden)

    Qingli Li

    2015-01-01

    Full Text Available To meet the real-time and low-power-consumption demands of the MEMS navigation and guidance field, an improved Kalman filter algorithm for GNSS/INS, named one-step prediction of the P matrix, is proposed in this paper. Quantitative analysis of field test datasets comparing the navigation accuracy with that of the standard algorithm indicated that the degradation caused by the simplified algorithm is small compared to the navigation errors of the GNSS/INS system itself. Meanwhile, the computational load and time consumption of the algorithm decreased by over 50% with the improved algorithm. The work has special significance for navigation applications that require low power consumption and strict real-time response, such as cellphones, wearable devices, and deeply coupled GNSS/INS systems.
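
    For orientation, the quantity being economized is the covariance propagation in the standard Kalman predict step. The sketch below is a generic textbook filter on a constant-velocity toy model, not the paper's GNSS/INS mechanization; it shows the propagation P ← F P Fᵀ + Q, which, as the abstract describes, the one-step scheme evaluates an epoch in advance so the real-time loop carries less load.

    ```python
    import numpy as np

    def predict(x, P, F, Q):
        # The P-propagation below is the costly step precomputed one step ahead.
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z, H, R):
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # Constant-velocity toy model, dt = 1 s: state = [position, velocity].
    F = np.array([[1.0, 1.0], [0.0, 1.0]])
    Q = 0.01 * np.eye(2)
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.25]])
    x, P = np.zeros(2), np.eye(2)
    for z in [1.1, 2.0, 2.9]:
        x, P = predict(x, P, F, Q)
        x, P = update(x, P, np.array([z]), H, R)
    print(x)  # state estimate after three position fixes
    ```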

  11. Base Oils Biodegradability Prediction with Data Mining Techniques

    Directory of Open Access Journals (Sweden)

    Malika Trabelsi

    2010-02-01

    Full Text Available In this paper, we apply various data mining techniques, including continuous numeric and discrete classification prediction models of base oil biodegradability, with emphasis on improving prediction accuracy. The results show that highly biodegradable oils can be better predicted through numeric models. In contrast, classification models did not uncover a similar dichotomy. With the exception of Memory Based Reasoning and Decision Trees, the tested classification techniques achieved high classification accuracy. However, the Decision Trees technique helped uncover the most significant predictor. A simple classification rule derived from this predictor resulted in good classification accuracy. The application of this rule enables efficient classification of base oils into either low or high biodegradability classes with high accuracy. For the latter, a higher-precision biodegradability prediction can be obtained using continuous modeling techniques.

  12. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided. PMID:28790910

  13. Hybrid Brain-Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review.

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain-computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain-computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  14. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Directory of Open Access Journals (Sweden)

    Keum-Shik Hong

    2017-07-01

    Full Text Available In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  15. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    Science.gov (United States)

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.

  16. Improving protein disorder prediction by deep bidirectional long short-term memory recurrent neural networks.

    Science.gov (United States)

    Hanson, Jack; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi

    2017-03-01

    Capturing long-range interactions between structural but not sequence neighbors of proteins is a long-standing challenging problem in bioinformatics. Recently, long short-term memory (LSTM) networks have significantly improved the accuracy of speech and image classification problems by remembering useful past information in long sequential events. Here, we have implemented deep bidirectional LSTM recurrent neural networks for the problem of protein intrinsic disorder prediction. The new method, named SPOT-Disorder, has steadily improved over a similar method using a traditional, window-based neural network (SPINE-D) in all datasets tested, without separate training on short and long disordered regions. Independent tests on four other datasets, including the datasets from the critical assessment of structure prediction (CASP) techniques and >10 000 annotated proteins from MobiDB, confirmed SPOT-Disorder as one of the best methods in disorder prediction. Moreover, initial studies indicate that the method is more accurate in predicting functional sites in disordered regions. These results highlight the usefulness of combining LSTMs with deep bidirectional recurrent neural networks in capturing non-local, long-range interactions for bioinformatics applications. SPOT-Disorder is available as a web server and as a standalone program at: http://sparks-lab.org/server/SPOT-disorder/index.php . j.hanson@griffith.edu.au or yuedong.yang@griffith.edu.au or yaoqi.zhou@griffith.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
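
    As a structural illustration only, here is a minimal bidirectional LSTM tagger in PyTorch that emits a per-residue disorder probability; it is an untrained toy with invented dimensions, not SPOT-Disorder itself.

    ```python
    import torch
    import torch.nn as nn

    class BiLSTMTagger(nn.Module):
        def __init__(self, n_features=20, hidden=64, layers=2):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                                bidirectional=True, batch_first=True)
            self.head = nn.Linear(2 * hidden, 1)   # forward + backward states

        def forward(self, x):                      # x: (batch, length, features)
            h, _ = self.lstm(x)
            return torch.sigmoid(self.head(h)).squeeze(-1)  # per-residue prob.

    model = BiLSTMTagger()
    profile = torch.randn(1, 150, 20)              # e.g. a 150-residue profile
    print(model(profile).shape)                    # torch.Size([1, 150])
    ```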

  17. A systems biology approach to transcription factor binding site prediction.

    Directory of Open Access Journals (Sweden)

    Xiang Zhou

    2010-03-01

    Full Text Available The elucidation of mammalian transcriptional regulatory networks holds great promise for both basic and translational research and remains one of the greatest challenges to systems biology. Recent reverse engineering methods deduce regulatory interactions from large-scale mRNA expression profiles and cross-species conserved regulatory regions in DNA. Technical challenges faced by these methods include distinguishing between direct and indirect interactions, associating transcription regulators with predicted transcription factor binding sites (TFBSs), identifying non-linearly conserved binding sites across species, and providing realistic accuracy estimates. We address these challenges by closely integrating proven methods for regulatory network reverse engineering from mRNA expression data, linearly and non-linearly conserved regulatory region discovery, and TFBS evaluation and discovery. Using an extensive test set of high-likelihood interactions, which we collected in order to provide realistic prediction-accuracy estimates, we show that a careful integration of these methods leads to significant improvements in prediction accuracy. To verify our methods, we biochemically validated TFBS predictions made for both transcription factors (TFs) and co-factors; we validated binding site predictions made using a known E2F1 DNA-binding motif on E2F1 predicted promoter targets, known E2F1 and JUND motifs on JUND predicted promoter targets, and a de novo discovered motif for BCL6 on BCL6 predicted promoter targets. Finally, to demonstrate accuracy of prediction using an external dataset, we showed that sites matching predicted motifs for ZNF263 are significantly enriched in recent ZNF263 ChIP-seq data. Using an integrative framework, we were able to address technical challenges faced by state-of-the-art network reverse engineering methods, leading to significant improvement in direct-interaction detection and TFBS-discovery accuracy. We estimated the accuracy

  18. Effect of accuracy of wind power prediction on power system operator

    Science.gov (United States)

    Schlueter, R. A.; Sigari, G.; Costi, T.

    1985-01-01

    This research project proposed a modified unit commitment that schedules the connection and disconnection of generating units in response to load. A modified generation control is also proposed that controls steam units under automatic generation control; fast-responding diesels, gas turbines, and hydro units under a feedforward control; and wind turbine array output under a closed-loop array control. This modified generation control and unit commitment require prediction of the trend wind power variation one hour ahead and prediction of the error in this trend wind power prediction one half hour ahead. An improved meter for predicting trend wind speed variation was developed. Methods for accurately simulating the wind array power from a limited number of wind speed prediction records were developed. Finally, two methods for predicting the error in the trend wind power prediction were developed. This research provides a foundation for testing and evaluating the modified unit commitment and generation control that was developed to maintain operating reliability at a greatly reduced overall production cost for utilities with wind generation capacity.

  19. CADASTRAL POSITIONING ACCURACY IMPROVEMENT: A CASE STUDY IN MALAYSIA

    Directory of Open Access Journals (Sweden)

    N. M. Hashim

    2016-09-01

    Full Text Available A cadastral map is a parcel-based information product specifically designed to define the limits of parcel boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatially based technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. The cadastral modernization will result in a new cadastral database no longer based on single, static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, this reform has raised unexpected queries that remain essential to address. The main focus of this study is to review the issues generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The outcome of this review will serve as a foundation for investigating systematic and effective methods for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  20. Impact of sampling interval in training data acquisition on intrafractional predictive accuracy of indirect dynamic tumor-tracking radiotherapy.

    Science.gov (United States)

    Mukumoto, Nobutaka; Nakamura, Mitsuhiro; Akimoto, Mami; Miyabe, Yuki; Yokota, Kenji; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro

    2017-08-01

    To explore the effect of the sampling interval of training data acquisition on the intrafractional prediction error of surrogate signal-based dynamic tumor-tracking using a gimbal-mounted linac. Twenty pairs of respiratory motions were acquired from 20 patients (ten lung, five liver, and five pancreatic cancer patients) who underwent dynamic tumor-tracking with the Vero4DRT. First, respiratory motions were acquired as training data for an initial construction of the prediction model before the irradiation. Next, additional respiratory motions were acquired for an update of the prediction model due to the change of the respiratory pattern during the irradiation. The time elapsed prior to the second acquisition of the respiratory motion was 12.6 ± 3.1 min. A four-axis moving phantom reproduced patients' three-dimensional (3D) target motions and one-dimensional surrogate motions. To predict the future internal target motion from the external surrogate motion, prediction models were constructed by minimizing residual prediction errors for training data acquired at 80 and 320 ms sampling intervals for 20 s, and at 500, 1,000, and 2,000 ms sampling intervals for 60 s, using orthogonal kV x-ray imaging systems. The accuracies of prediction models trained with various sampling intervals were estimated based on training data with each sampling interval during the training process. The intrafractional prediction errors for the various prediction models were then calculated on intrafractional monitoring images taken for 30 s at a constant sampling interval of 500 ms, to fairly evaluate the prediction accuracy for the same motion pattern. In addition, the first respiratory motion was used for training and the second respiratory motion for evaluation of the intrafractional prediction errors under a changed respiratory pattern, to assess the robustness of the prediction models. The training error of the prediction model was 1.7 ± 0.7 mm in 3D for all sampling

  1. Accuracy Feedback Improves Word Learning from Context: Evidence from a Meaning-Generation Task

    Science.gov (United States)

    Frishkoff, Gwen A.; Collins-Thompson, Kevyn; Hodges, Leslie; Crossley, Scott

    2016-01-01

    The present study asked whether accuracy feedback on a meaning generation task would lead to improved contextual word learning (CWL). Active generation can facilitate learning by increasing task engagement and memory retrieval, which strengthens new word representations. However, forced generation results in increased errors, which can be…

  2. Sensitivity of Tumor Motion Simulation Accuracy to Lung Biomechanical Modeling Approaches and Parameters

    OpenAIRE

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional com...

  3. Process improvement methods increase the efficiency, accuracy, and utility of a neurocritical care research repository.

    Science.gov (United States)

    O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor

    2012-08-01

    Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.

  4. Combining specificity determining and conserved residues improves functional site prediction

    Directory of Open Access Journals (Sweden)

    Gelfand Mikhail S

    2009-06-01

    Full Text Available Abstract Background Predicting the location of functionally important sites from protein sequence and/or structure is a long-standing problem in computational biology. Most current approaches make use of sequence conservation, assuming that amino acid residues conserved within a protein family are most likely to be functionally important. Most often these approaches do not consider the many residues that act to define specific sub-functions within a family, or they make no distinction between residues important for function and those more relevant for maintaining structure (e.g., in the hydrophobic core). Many protein families bind and/or act on a variety of ligands, meaning that conserved residues often only bind a common ligand sub-structure or perform general catalytic activities. Results Here we present a novel method for functional site prediction based on the identification of conserved positions, as well as those responsible for determining ligand specificity. We define Specificity-Determining Positions (SDPs) as those occupied by conserved residues within sub-groups of proteins in a family having a common specificity, but differing between groups, and thus likely to account for specific recognition events. We benchmark the approach on enzyme families of known 3D structure with bound substrates, and find that in nearly all families residues predicted by SDPsite are in contact with the bound substrate, and that the addition of SDPs significantly improves functional site prediction accuracy. We apply SDPsite to various families of proteins containing known three-dimensional structures, but lacking clear functional annotations, and discuss several illustrative examples. Conclusion The results suggest a better means to predict functional details for the thousands of protein structures determined prior to a clear understanding of molecular function.
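
    One common way to score SDP-like columns is the mutual information between a column's residues and the specificity-group labels: it is high exactly when a column is conserved within groups but differs between them. A minimal sketch (toy alignment, plain Python; an illustration of the idea, not the SDPsite algorithm itself):

    ```python
    from collections import Counter
    import math

    def column_group_mi(column, groups):
        """Mutual information (bits) between residues and group labels."""
        n = len(column)
        joint = Counter(zip(column, groups))
        p_res, p_grp = Counter(column), Counter(groups)
        return sum((c / n) * math.log2((c / n) / ((p_res[r] / n) * (p_grp[g] / n)))
                   for (r, g), c in joint.items())

    alignment = ["ACDKL",   # group A
                 "ACDKI",   # group A
                 "ACEKL",   # group B
                 "ACEKV"]   # group B
    groups = ["A", "A", "B", "B"]
    for i in range(len(alignment[0])):
        col = [seq[i] for seq in alignment]
        print(f"column {i}: MI = {column_group_mi(col, groups):.2f}")
    # Column 2 (D/D vs. E/E) scores 1 bit: conserved within, different between.
    ```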

  5. Prediction of protein hydration sites from sequence by modular neural networks

    DEFF Research Database (Denmark)

    Ehrlich, L.; Reczko, M.; Bohr, Henrik

    1998-01-01

    The hydration properties of a protein are important determinants of its structure and function. Here, modular neural networks are employed to predict ordered hydration sites using protein sequence information. First, secondary structure and solvent accessibility are predicted from sequence with two...... separate neural networks. These predictions are used as input, together with protein sequences, for networks predicting the hydration of residues, backbone atoms and sidechains. These networks are trained on protein crystal structures. The prediction of hydration is improved by adding information on secondary...... structure and solvent accessibility and, using actual values of these properties, residue hydration can be predicted to 77% accuracy with a Matthews coefficient of 0.43. However, predicted property data with an accuracy of 60-70% result in less than half the improvement in predictive performance observed...

  6. Predicting Earth orientation changes from global forecasts of atmosphere-hydrosphere dynamics

    Science.gov (United States)

    Dobslaw, Henryk; Dill, Robert

    2018-02-01

    Effective Angular Momentum (EAM) functions obtained from global numerical simulations of atmosphere, ocean, and land surface dynamics are routinely processed by the Earth System Modelling group at Deutsches GeoForschungsZentrum. EAM functions are available since January 1976 with up to 3 h temporal resolution. Additionally, 6-day-long EAM forecasts are routinely published every day. Based on hindcast experiments with 305 individual predictions distributed over 15 months, we demonstrate that EAM forecasts improve the prediction accuracy of the Earth Orientation Parameters at all forecast horizons between 1 and 6 days. At day 6, prediction accuracy improves down to 1.76 mas for the terrestrial pole offset and 2.6 mas for ΔUT1, which corresponds to an accuracy increase of about 41% over predictions published in Bulletin A by the International Earth Rotation and Reference Systems Service.

  7. Does PACS improve diagnostic accuracy in chest radiograph interpretations in clinical practice?

    International Nuclear Information System (INIS)

    Hurlen, Petter; Borthne, Arne; Dahl, Fredrik A.; Østbye, Truls; Gulbrandsen, Pål

    2012-01-01

    Objectives: To assess the impact of a Picture Archiving and Communication System (PACS) on the diagnostic accuracy of the interpretation of chest radiology examinations in a “real life” radiology setting. Materials and methods: During a period before PACS was introduced to radiologists, when images were still interpreted on film and reported on paper, images and reports were also digitally stored in an image database. The same database was used after the PACS introduction. This provided a unique opportunity to conduct a blinded retrospective study, comparing sensitivity (the main outcome parameter) in the pre- and post-PACS periods. We selected 56 digitally stored chest radiograph examinations that were originally read and reported on film, and 66 examinations that were read and reported on screen 2 years after the PACS introduction. Each examination was assigned a random number, and both reports and images were scored independently for pathological findings. The blinded retrospective scores for the original reports were then compared with the scores for the images (the gold standard). Results: Sensitivity was improved after the PACS introduction. When both certain and uncertain findings were included, this improvement was statistically significant. There were no other statistically significant changes. Conclusion: The result is consistent with prospective studies concluding that diagnostic accuracy is at least not reduced after PACS introduction. Sensitivity may even be improved.

  8. Improved predictions of nuclear reaction rates with the TALYS reaction code for astrophysical applications

    International Nuclear Information System (INIS)

    Goriely, S.; Hilaire, S.; Koning, A.J

    2008-01-01

    Context. Nuclear reaction rates for astrophysical applications are traditionally determined on the basis of Hauser-Feshbach reaction codes. These codes adopt a number of approximations that have never been tested, such as a simplified width fluctuation correction, the neglect of delayed or multiple-particle emission during the electromagnetic decay cascade, or the absence of the pre-equilibrium contribution at increasing incident energies. Aims. The reaction code TALYS has been recently updated to estimate the Maxwellian-averaged reaction rates that are of astrophysical relevance. These new developments enable the reaction rates to be calculated with increased accuracy and reliability and the approximations of previous codes to be investigated. Methods. The TALYS predictions for the thermonuclear rates of relevance to astrophysics are detailed and compared with those derived by widely used codes for the same nuclear ingredients. Results. It is shown that TALYS predictions may differ significantly from those of previous codes, in particular for nuclei for which little or no nuclear data is available. The pre-equilibrium process is shown to influence the astrophysical rates of exotic neutron-rich nuclei significantly. For the first time, the Maxwellian-averaged (n, 2n) reaction rate is calculated for all nuclei, and its competition with the radiative capture rate is discussed. Conclusions. The TALYS code provides a new tool to estimate all nuclear reaction rates of relevance to astrophysics with improved accuracy and reliability. (authors)

  9. Improving the accuracy of acetabular cup implantation using a bulls-eye spirit level.

    Science.gov (United States)

    Macdonald, Duncan; Gupta, Sanjay; Ohly, Nicholas E; Patil, Sanjeev; Meek, R; Mohammed, Aslam

    2011-01-01

    Acetabular introducers have a built-in inclination of 45 degrees to the handle shaft. With patients in the lateral position, surgeons aim to align the introducer shaft vertical to the floor to implant the acetabulum at 45 degrees. We aimed to determine if a bulls-eye spirit level attached to an introducer improved the accuracy of implantation. A small circular bulls-eye spirit level was attached to the handle of an acetabular introducer. A saw bone hemipelvis was fixed to a horizontal, flat surface. A cement substitute was placed in the acetabulum and subjects were asked to implant a polyethylene cup, aiming to obtain an angle of inclination of 45 degrees. Two attempts were made with the spirit level masked and two with it unmasked. The distance of the air bubble from the spirit level's center was recorded by a single assessor. The angle of inclination of the acetabular component was then calculated. Subjects included both orthopedic consultants and trainees. Twenty-five subjects completed the study. Accuracy of acetabular implantation when using the unmasked spirit level improved significantly in all grades of surgeon. With the spirit level masked, 12 out of 50 attempts were accurate at 45 degrees inclination; 11 out of 50 attempts were "open," with greater than 45 degrees of inclination, and 27 were "closed," with less than 45 degrees. With the spirit level visible, all subjects achieved an inclination angle of exactly 45 degrees. A simple device attached to the handle of an acetabular introducer can significantly improve the accuracy of implantation of a cemented cup into a saw bone pelvis in the lateral position.

  10. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    International Nuclear Information System (INIS)

    Wang, Yongbo; Wu, Huapeng; Handroos, Heikki

    2013-01-01

    Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform assembly and repair tasks on the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device

  11. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2013-10-15

    Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform assembly and repair tasks on the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device.
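
    For readers unfamiliar with the POE formula, the forward kinematics it is built on takes the form T(q) = exp([S1]q1)...exp([Sn]qn)M, where [Si] is the 4x4 matrix of joint i's screw axis and M is the home pose; calibration then perturbs the screw parameters to match measurements. A minimal sketch (a planar 2R toy arm, not the 10-DOF ITER robot):

    ```python
    import numpy as np
    from scipy.linalg import expm

    def screw_matrix(S):
        """se(3) matrix of a screw axis S = (wx, wy, wz, vx, vy, vz)."""
        w, v = S[:3], S[3:]
        W = np.array([[0, -w[2], w[1]],
                      [w[2], 0, -w[0]],
                      [-w[1], w[0], 0]])
        se3 = np.zeros((4, 4))
        se3[:3, :3], se3[:3, 3] = W, v
        return se3

    def poe_fk(screws, thetas, M):
        """Product-of-exponentials forward kinematics."""
        T = np.eye(4)
        for S, q in zip(screws, thetas):
            T = T @ expm(screw_matrix(S) * q)
        return T @ M

    # Planar 2R arm, unit links, both joints rotating about z.
    screws = [np.array([0, 0, 1, 0, 0, 0]),     # joint 1 at the origin
              np.array([0, 0, 1, 0, -1, 0])]    # joint 2 at x = 1
    M = np.eye(4); M[0, 3] = 2.0                # end-effector home pose
    print(poe_fk(screws, [np.pi / 2, 0.0], M)[:3, 3])  # ~ (0, 2, 0)
    ```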

  12. Improving performance of breast cancer risk prediction using a new CAD-based region segmentation scheme

    Science.gov (United States)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

    Objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation combined with an image feature extraction framework to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high- and low-risk groups for having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove non-informative areas of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to the frequency characteristics of the ROIs was initially computed from the discrete cosine transform and the spatial domain of the images. Third, a support vector machine (SVM)-based machine learning classifier was used to classify the selected optimal image features and build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70 ± 0.04, significantly higher than extracting the features directly from the dataset without the improved ROI segmentation step (AUC = 0.63 ± 0.04). This study demonstrated that the proposed approach could improve accuracy in predicting short-term breast cancer risk, which may play an important role in helping eventually establish an optimal personalized breast cancer paradigm.
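
    The shape of the pipeline is easy to sketch: frequency-domain features from each ROI feed an SVM scored by leave-one-out cross-validation. The example below uses random stand-in "ROIs" and a generic low-frequency DCT block rather than the paper's 43-feature set or segmentation step; it shows the plumbing only.

    ```python
    import numpy as np
    from scipy.fft import dctn
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    rois = rng.random((40, 32, 32))           # 40 hypothetical ROI images
    labels = rng.integers(0, 2, 40)           # high/low risk (random here)

    def dct_features(roi, k=6):
        """Top-left k x k block of the 2-D DCT = low-frequency content."""
        return dctn(roi, norm="ortho")[:k, :k].ravel()

    X = np.array([dct_features(r) for r in rois])
    scores = cross_val_score(SVC(), X, labels, cv=LeaveOneOut())
    print(f"leave-one-out accuracy = {scores.mean():.2f}")  # ~0.5 on noise
    ```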

  13. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    Directory of Open Access Journals (Sweden)

    WIDAD Elmahboub

    2005-02-01

    Full Text Available Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed-pixel information. A simulated linear mixture model experiment was generated to simulate real-world data with known endmember spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized to make the sum of the class proportions equal to 1.0, using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real-world dataset. The atmospheric scattering error was computed for each pixel value for three generated images with SPOT data. Each pixel is then either correctly classified or misclassified. The results showed great improvement in classification accuracy; for example, in image 1, the proportion of pixels misclassified due to atmospheric noise was 41%. After degradation of the atmospheric effect, the misclassified pixels were reduced to 4%. We can conclude that classification accuracy can be improved by the degradation of atmospheric noise.
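
    The simulation itself is compact. A minimal sketch (in Python rather than the study's MATLAB, with made-up endmember spectra): random class cover proportions normalized to sum to 1.0 are mixed through known endmembers, additive noise stands in for the atmospheric effect, and pixels are labeled by their nearest endmember.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    endmembers = np.array([[0.10, 0.35, 0.60],    # class 1 spectrum (3 bands)
                           [0.45, 0.30, 0.20],    # class 2
                           [0.70, 0.55, 0.15]])   # class 3

    n_pixels, noise_level = 1000, 0.05
    ccp = rng.random((n_pixels, 3))
    ccp /= ccp.sum(axis=1, keepdims=True)         # proportions sum to 1.0

    pixels = ccp @ endmembers                     # linear mixture model
    noisy = pixels + rng.normal(0.0, noise_level, pixels.shape)

    def nearest_endmember(px):
        d = np.linalg.norm(px[:, None, :] - endmembers[None, :, :], axis=2)
        return np.argmin(d, axis=1)

    flipped = nearest_endmember(pixels) != nearest_endmember(noisy)
    print(f"pixels misclassified by the added noise: {flipped.mean():.1%}")
    ```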

  14. Accuracy of the improved quasistatic space-time method checked with experiment

    International Nuclear Information System (INIS)

    Kugler, G.; Dastur, A.R.

    1976-10-01

    Recent experiments performed at the Savannah River Laboratory have made it possible to check the accuracy of numerical methods developed to simulate space-dependent neutron transients. The experiments were specifically designed to emphasize delayed neutron holdback. The CERBERUS code using the IQS (Improved Quasistatic) method has been developed to provide a practical yet accurate tool for spatial kinetics calculations of CANDU reactors. The code was tested on the Savannah River experiments and excellent agreement was obtained. (author)

  15. Two Simple Rules for Improving the Accuracy of Empiric Treatment of Multidrug-Resistant Urinary Tract Infections.

    Science.gov (United States)

    Linsenmeyer, Katherine; Strymish, Judith; Gupta, Kalpana

    2015-12-01

    The emergence of multidrug-resistant (MDR) uropathogens is making the treatment of urinary tract infections (UTIs) more challenging. We sought to evaluate the accuracy of empiric therapy for MDR UTIs and the utility of prior culture data in improving the accuracy of the therapy chosen. The electronic health records from three U.S. Department of Veterans Affairs facilities were retrospectively reviewed for the treatments used for MDR UTIs over 4 years. An MDR UTI was defined as an infection caused by a uropathogen resistant to three or more classes of drugs and identified by a clinician to require therapy. Previous data on culture results, antimicrobial use, and outcomes were captured from records from inpatient and outpatient settings. Among 126 patient episodes of MDR UTIs, the choice of empiric therapy against the index pathogen was accurate in 66 (52%) episodes. For the 95 patient episodes for which prior microbiologic data were available, when empiric therapy was concordant with the prior microbiologic data, the rate of accuracy of the treatment against the uropathogen improved from 32% to 76% (odds ratio, 6.9; 95% confidence interval, 2.7 to 17.1; P < 0.001). Genitourinary tract (GU)-directed agents (nitrofurantoin or sulfa agents) were equally as likely as broad-spectrum agents to be accurate (P = 0.3). Choosing an agent concordant with previous microbiologic data significantly increased the chance of accurate therapy for MDR UTIs, even if the previous uropathogen was a different species. Also, GU-directed and broad-spectrum therapy choices were equally likely to be accurate. The accuracy of empiric therapy could be improved by the use of these simple rules. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
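
    One of the rules (prefer an agent concordant with the most recent prior culture) is simple enough to express directly. A toy sketch with hypothetical drug names and susceptibility records, not clinical software:

    ```python
    def choose_empiric_agent(prior_isolates, candidates):
        """prior_isolates: list of {agent: 'S'/'R'} dicts, newest last."""
        if prior_isolates:
            latest = prior_isolates[-1]
            for agent in candidates:
                if latest.get(agent) == "S":
                    return agent              # concordant with prior culture
        return candidates[0]                  # default empiric choice

    prior = [{"nitrofurantoin": "S", "ciprofloxacin": "R"}]
    print(choose_empiric_agent(prior, ["ciprofloxacin", "nitrofurantoin"]))
    # -> 'nitrofurantoin'
    ```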

  16. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have introduced new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care have likewise been influenced by new technologies for predicting disease outcomes. However, existing predictive models still suffer from limitations in the performance of their predictive outcomes. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. The model is evaluated on traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impact on human life. The proposed model improves the predictive performance for TBI. The TBI dataset was developed, and its features approved, by neurologists. The experimental results show that the proposed model achieves significant results in accuracy, sensitivity, and specificity.

  17. An improved method for predicting brittleness of rocks via well logs in tight oil reservoirs

    Science.gov (United States)

    Wang, Zhenlin; Sun, Ting; Feng, Cheng; Wang, Wei; Han, Chuang

    2018-06-01

    There can be no industrial oil production in tight oil reservoirs until fracturing is undertaken, and under such conditions the brittleness of the rocks is a very important factor. However, it has so far been difficult to predict. In this paper, the selected study area is the tight oil reservoirs of the Lucaogou formation (Permian, Jimusaer sag, Junggar basin). Based on the transformation between dynamic and static rock mechanics parameters and a correction for confining pressure, an improved method is proposed for quantitatively predicting the brittleness of rocks from well logs in tight oil reservoirs. First, 19 typical tight oil core samples were selected in the study area, and their static Young's modulus, static Poisson's ratio and petrophysical parameters were measured. In addition, the static brittleness indices of four other tight oil cores were measured under different confining pressure conditions. Second, the dynamic Young's modulus, Poisson's ratio and brittleness index were calculated using the compressional and shear wave velocities. By combining the measured and calculated results, a transformation model between the dynamic and static brittleness index was built based on the influence of porosity and clay content. The comparison of the predicted brittleness indices with the measured results shows that the model has high accuracy. Third, on the basis of the experimental data under different confining pressure conditions, an amplifying factor of the brittleness index is proposed to correct for the influence of confining pressure on the brittleness index. Finally, the above improved models are applied to formation evaluation via well logs. Compared with the results before correction, the results of the improved models agree better with the experimental data, which indicates that the improved models perform better in application. The brittleness index prediction method for tight oil reservoirs is improved in this research. It is of great importance in the optimization of
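
    The dynamic parameters in the second step come from the standard elastic-wave relations, nu = (Vp^2 - 2Vs^2) / (2(Vp^2 - Vs^2)) and E = rho Vs^2 (3Vp^2 - 4Vs^2) / (Vp^2 - Vs^2). A minimal sketch with illustrative log values (not data from the Lucaogou wells):

    ```python
    def dynamic_moduli(vp, vs, rho):
        """vp, vs in m/s, rho in kg/m^3 -> (Young's modulus in GPa, Poisson's ratio)."""
        nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))
        e = rho * vs**2 * (3 * vp**2 - 4 * vs**2) / (vp**2 - vs**2)
        return e / 1e9, nu

    E, nu = dynamic_moduli(vp=4500.0, vs=2700.0, rho=2550.0)
    print(f"E_dyn = {E:.1f} GPa, nu_dyn = {nu:.3f}")   # ~45.3 GPa, ~0.219
    ```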

  18. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

    Science.gov (United States)

    Madarang, Krish J; Kang, Joo-Hyon

    2014-06-01

    Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. To quantify and manage the impacts of stormwater runoff on the environment, predictive mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables such as pollutant loads and concentrations. However, whether ADD is an important variable for predicting stormwater discharge characteristics has been controversial across studies. In this study, we examined the accuracy of general linear regression models in predicting the discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was run for 55 storm events, and the resulting total suspended solid (TSS) discharge loads and event mean concentrations (EMC) were extracted. From these data, linear regression models were developed. R² and p-values of the regression on ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
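
    As a minimal illustration of the regression diagnostics described (R² and the p-value of ADD as a predictor of TSS load), with hypothetical event data standing in for the SWMM output:

    ```python
    # Sketch: ordinary least squares of TSS event load on antecedent dry
    # days (ADD), reporting R^2 and the p-value of the ADD coefficient.
    # The data below are hypothetical stand-ins for the 55 simulated events.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    add = rng.uniform(1, 30, size=55)                        # antecedent dry days
    tss_load = 5.0 + 0.8 * add + rng.normal(0, 4, size=55)   # kg per event

    X = sm.add_constant(add)                                 # intercept + ADD
    fit = sm.OLS(tss_load, X).fit()
    print(f"R^2 = {fit.rsquared:.3f}, p(ADD) = {fit.pvalues[1]:.4f}")
    ```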

  19. MultiLoc2: integrating phylogeny and Gene Ontology terms improves subcellular protein localization prediction

    Directory of Open Access Journals (Sweden)

    Kohlbacher Oliver

    2009-09-01

    Abstract Background Knowledge of the subcellular localization of proteins is crucial to proteomics, drug target discovery and systems biology, since localization and biological function are highly correlated. In recent years, numerous computational prediction methods have been developed. Nevertheless, there is still a need for prediction methods that show more robustness and higher accuracy. Results We extended our previous MultiLoc predictor by incorporating phylogenetic profiles and Gene Ontology terms. Two different datasets were used for training the system, resulting in two versions of this high-accuracy prediction method. One version is specialized for globular proteins and predicts up to five localizations, whereas a second version covers all eleven main eukaryotic subcellular localizations. In a benchmark study with five localizations, MultiLoc2 performs considerably better than other methods for animal and plant proteins and comparably for fungal proteins. Furthermore, MultiLoc2 performs clearly better on a second dataset that extends the benchmark study to all eleven main eukaryotic subcellular localizations. Conclusion MultiLoc2 is an extensive high-performance subcellular protein localization prediction system. By incorporating phylogenetic profiles and Gene Ontology terms, MultiLoc2 yields higher accuracies than its previous version. Moreover, it outperforms other prediction systems in two benchmark studies. MultiLoc2 is available as a user-friendly and free web service at: http://www-bs.informatik.uni-tuebingen.de/Services/MultiLoc2.

  20. Validation of the multiplier method for leg-length predictions on a large European cohort and an assessment of the effect of physiological age on predictions.

    Science.gov (United States)

    Aird, J J; Cheesman, C L; Schade, A T; Monsell, F P

    2017-01-01

    The Avon Longitudinal Study of Parents and Children (ALSPAC) prospective cohort was used to determine the accuracy of the Paley multiplier method for predicting leg length. Using menarche as a proxy, physiological age was then used to increase the accuracy of the multiplier: chronological age was corrected in female patients over the age of eight years with a documented date of first menses. Final sub-ischial leg length was determined, and predicted final leg length was calculated, for all data points. Good correlation was demonstrated between the Paley and ALSPAC data. The average error in prediction depended on the time of assessment, tending to improve as the child got older; it varied from 2.2 cm at the age of seven years to 1.8 cm at the age of 14 years. When chronological age was corrected, the accuracy of the multiplier increased: an age correction of 50% improved multiplier predictions by up to 28%. There appears to have been no significant change in the growth trajectories of the two populations, which were chronologically separated by 40 years. While the Paley data were based on extracting trends from averaged data, the ALSPAC dataset provides descriptive statistics from which it is possible to compare populations and assess the accuracy of the multiplier method. The data suggest that accuracy improves as the patient approaches average skeletal maturity, but results need to be interpreted in conjunction with a radiological assessment of the growth plates. The magnitude of the errors in prediction suggests that, when using the multiplier, the clinician must remain vigilant and prepared to perform a contralateral epiphysiodesis if the prediction proves to be wrong. The data suggest a relationship between the multiplier and menarche: when accounting for physiological age, one should correct by 50% of the difference between chronological and physiological age.
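
    A sketch of the arithmetic involved: predicted final length is current length times a multiplier, and the physiological-age correction shifts chronological age by 50% of its gap to physiological age, per the abstract. The multiplier value below is a hypothetical stand-in for the published Paley tables:

    ```python
    # Sketch: Paley-style multiplier prediction of final leg length, with
    # a physiological-age correction of 50% of the gap between chronological
    # age and menarche-derived physiological age. The multiplier here is a
    # hypothetical stand-in for the published Paley tables.
    def predict_final_length(current_length_cm: float, multiplier: float) -> float:
        """Predicted final sub-ischial leg length = current length x multiplier."""
        return current_length_cm * multiplier

    def corrected_age(chronological_age: float, physiological_age: float) -> float:
        """Shift chronological age halfway toward physiological age (50% correction)."""
        return chronological_age + 0.5 * (physiological_age - chronological_age)

    # Example: a girl aged 11.0 whose menarche timing implies a physiological
    # age of 12.0 would be treated as 11.5 when reading the multiplier table.
    print(corrected_age(11.0, 12.0))          # 11.5
    print(predict_final_length(70.0, 1.13))   # hypothetical multiplier value
    ```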

  1. Increasing the predictive accuracy of amyloid-β blood-borne biomarkers in Alzheimer's disease.

    Science.gov (United States)

    Watt, Andrew D; Perez, Keyla A; Faux, Noel G; Pike, Kerryn E; Rowe, Christopher C; Bourgeat, Pierrick; Salvado, Olivier; Masters, Colin L; Villemagne, Victor L; Barnham, Kevin J

    2011-01-01

    Diagnostic measures for Alzheimer's disease (AD) commonly rely on evaluating the levels of amyloid-β (Aβ) peptides within the cerebrospinal fluid (CSF) of affected individuals. These levels are often combined with levels of an additional non-Aβ marker to increase predictive accuracy. Recent efforts to overcome the invasive nature of CSF collection led to the observation of Aβ species within the blood cellular fraction; however, little is known about what additional biomarkers may be found in this membranous fraction. The current study aimed to undertake a discovery-based proteomic investigation of the blood cellular fraction from AD patients (n = 18) and healthy controls (HC; n = 15) using copper immobilized metal affinity capture and surface-enhanced laser desorption/ionisation time-of-flight mass spectrometry. Three candidate biomarkers were observed that could differentiate AD patients from HC (ROC AUC > 0.8). Bivariate pairwise comparisons revealed significant correlations between these markers and measures of AD severity, including MMSE, composite memory, brain amyloid burden, and hippocampal volume. A partial least squares regression model was generated using the three candidate markers along with blood levels of Aβ. This model was able to distinguish AD from HC with high specificity (90%) and sensitivity (77%) and was able to separate individuals with mild cognitive impairment (MCI) who converted to AD from MCI non-converters. While requiring further characterization, these candidate biomarkers reaffirm the potential efficacy of blood-based investigations into neurodegenerative conditions. Furthermore, the findings indicate that incorporating non-amyloid markers into predictive models increases the diagnostic accuracy obtainable from Aβ.

  2. An Improved User Selection Algorithm in Multiuser MIMO Broadcast with Channel Prediction

    Science.gov (United States)

    Min, Zhi; Ohtsuki, Tomoaki

    In multiuser MIMO-BC (Multiple-Input Multiple-Output Broadcasting) systems, user selection is important to achieve multiuser diversity. The optimal user selection algorithm tries all combinations of users to find the group that achieves the multiuser diversity; unfortunately, its high calculation cost prevents its implementation. Thus, instead of the optimal algorithm, suboptimal user selection algorithms have been proposed based on the semiorthogonality of user channel vectors. The purpose of this paper is to achieve multiuser diversity with a small amount of calculation. For this purpose, we propose a user selection algorithm that can improve the orthogonality of a selected user group. We also apply a channel prediction technique to the MIMO-BC system to obtain more accurate channel information at the transmitter. Simulation results show that channel prediction improves the accuracy of the channel information used for user selection, and that the proposed user selection algorithm achieves a higher sum rate capacity than the SUS (Semiorthogonal User Selection) algorithm. We also discuss the setting of the algorithm's threshold. An analysis of calculation complexity, measured by the number of complex multiplications, shows that the proposed algorithm has a complexity almost equal to that of the SUS algorithm; both are much lower than that of the optimal user selection algorithm.
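
    For reference, a minimal sketch of the SUS-style greedy baseline that the paper improves upon (keep the user with the largest channel component orthogonal to those already chosen, then drop users no longer semiorthogonal; the threshold alpha and the random channels are illustrative):

    ```python
    # Sketch of semiorthogonal user selection (SUS-style greedy baseline).
    # H holds one complex channel row vector per user; alpha is the
    # semiorthogonality threshold. Random channels stand in for estimates.
    import numpy as np

    def sus_select(H: np.ndarray, max_users: int, alpha: float = 0.3):
        def ortho_component(h, basis):
            g = h.copy()
            for b in basis:
                g -= np.vdot(b, g) * b   # remove projection onto unit vector b
            return g

        remaining = list(range(H.shape[0]))
        selected, basis = [], []
        while remaining and len(selected) < max_users:
            # Pick the user whose channel has the largest component
            # orthogonal to the span of already-selected channels.
            norms = {k: np.linalg.norm(ortho_component(H[k], basis)) for k in remaining}
            k_best = max(norms, key=norms.get)
            g = ortho_component(H[k_best], basis)
            selected.append(k_best)
            basis.append(g / np.linalg.norm(g))
            # Keep only users still roughly orthogonal to the new direction.
            remaining = [k for k in remaining
                         if k != k_best
                         and abs(np.vdot(basis[-1], H[k])) / np.linalg.norm(H[k]) < alpha]
        return selected

    rng = np.random.default_rng(0)
    H = (rng.standard_normal((20, 4)) + 1j * rng.standard_normal((20, 4))) / np.sqrt(2)
    print(sus_select(H, max_users=4))
    ```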

  3. Accuracy of Genomic Evaluations of Juvenile Growth Rate in Common Carp (Cyprinus carpio) Using Genotyping by Sequencing

    Directory of Open Access Journals (Sweden)

    Christos Palaiokostas

    2018-03-01

    Cyprinids are the most important group of farmed fish globally in terms of production volume, with common carp (Cyprinus carpio) being one of the most valuable species of the group. The use of modern selective breeding methods in carp is at a formative stage, implying a large scope for genetic improvement of key production traits. In the current study, a population of 1,425 carp juveniles, originating from a partial factorial cross between 40 sires and 20 dams, was used for investigating the potential of genomic selection (GS) for juvenile growth, an exemplar polygenic production trait. RAD sequencing was used to identify and genotype SNP markers for subsequent parentage assignment, construction of a medium density genetic map (12,311 SNPs), a genome-wide association study (GWAS), and testing of GS. A moderate heritability of 0.33 (s.e. 0.05) was estimated for body length of carp at 120 days (as a proxy for juvenile growth). No genome-wide significant QTL was identified using a single marker GWAS approach. Genomic prediction of breeding values outperformed pedigree-based prediction, resulting in an 18% improvement in prediction accuracy. The impact of reduced SNP densities on prediction accuracy was tested by varying minor allele frequency (MAF) thresholds, with no drop in prediction accuracy until the MAF threshold was set <0.3 (2,744 SNPs). These results point to the potential for GS to improve economically important traits in common carp breeding programs.

  4. The Bi-Directional Prediction of Carbon Fiber Production Using a Combination of Improved Particle Swarm Optimization and Support Vector Machine.

    Science.gov (United States)

    Xiao, Chuncai; Hao, Kuangrong; Ding, Yongsheng

    2014-12-30

    This paper develops a bi-directional prediction model, based on a support vector machine (SVM) and an improved particle swarm optimization (IPSO) algorithm (SVM-IPSO), to predict both the performance of carbon fiber and the production parameters. The predictive accuracy of an SVM depends strongly on its parameters, and IPSO is therefore used to seek the optimal parameters for the SVM in order to improve its prediction capability. Inspired by a cell communication mechanism, we construct IPSO by incorporating information from the global best solution into the search strategy to improve exploitation, and we employ IPSO to establish the bi-directional prediction model: in the forward direction, production parameters are the input and property indexes the output; in the backward direction, property indexes are the input and production parameters the output, in which case the model becomes a design scheme for novel carbon fibers. Results on a set of experimental data show that the proposed model can outperform the radial basis function neural network (RNN), the basic particle swarm optimization (PSO) method, and the hybrid of genetic algorithm and improved particle swarm optimization (GA-IPSO) in most of the experiments. In other words, simulation results demonstrate the effectiveness and advantages of the SVM-IPSO model in dealing with the forecasting problem.
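
    A minimal sketch of the overall idea, using plain PSO (not the paper's cell-communication-inspired IPSO variant) to tune an SVM's C and gamma by cross-validation, with synthetic data standing in for the carbon fiber measurements:

    ```python
    # Sketch: basic particle swarm optimization over SVM hyperparameters
    # (log10 C, log10 gamma), scored by cross-validated R^2. Plain PSO,
    # not the paper's IPSO; synthetic data stands in for fiber data.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.datasets import make_regression
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=120, n_features=6, noise=5.0, random_state=0)

    def fitness(p):
        C, gamma = 10.0 ** p[0], 10.0 ** p[1]
        return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3).mean()

    rng = np.random.default_rng(1)
    n, dims, iters = 12, 2, 20
    pos = rng.uniform([-1, -4], [3, 0], size=(n, dims))   # search in log space
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmax()]

    for _ in range(iters):
        r1, r2 = rng.random((n, dims)), rng.random((n, dims))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()]

    print("best log10(C), log10(gamma):", gbest, "CV R^2:", pbest_f.max())
    ```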

  6. Improving behavioral performance under full attention by adjusting response criteria to changes in stimulus predictability.

    Science.gov (United States)

    Katzner, Steffen; Treue, Stefan; Busse, Laura

    2012-09-04

    One of the key features of active perception is the ability to predict critical sensory events. Humans and animals can implicitly learn statistical regularities in the timing of events and use them to improve behavioral performance. Here, we used a signal detection approach to investigate whether such improvements in performance result from changes of perceptual sensitivity or rather from adjustments of a response criterion. In a regular sequence of briefly presented stimuli, human observers performed a noise-limited motion detection task by monitoring the stimulus stream for the appearance of a designated target direction. We manipulated target predictability through the hazard rate, which specifies the likelihood that a target is about to occur, given it has not occurred so far. Analyses of response accuracy revealed that improvements in performance could be accounted for by adjustments of the response criterion; a growing hazard rate was paralleled by an increasing tendency to report the presence of a target. In contrast, the hazard rate did not affect perceptual sensitivity. Consistent with previous research, we also found that reaction time decreases as the hazard rate grows. A simple rise-to-threshold model could well describe this decrease and attribute predictability effects to threshold adjustments rather than changes in information supply. We conclude that, even under conditions of full attention and constant perceptual sensitivity, behavioral performance can be optimized by dynamically adjusting the response criterion to meet ongoing changes in the likelihood of a target.
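
    For clarity, the hazard rate here is the conditional probability that the target occurs at stream position t given that it has not occurred earlier; a small sketch with a hypothetical target-position distribution:

    ```python
    # Sketch: discrete hazard rate h(t) = P(T = t) / P(T >= t) for a
    # hypothetical distribution over target positions in the stimulus stream.
    import numpy as np

    p = np.array([0.10, 0.15, 0.20, 0.25, 0.30])   # P(target at position t)
    survival = np.cumsum(p[::-1])[::-1]             # P(T >= t)
    hazard = p / survival
    print(hazard)  # rises toward 1.0 at the last possible position
    ```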

  7. Key Performance Indicators and Analysts' Earnings Forecast Accuracy: An Application of Content Analysis

    OpenAIRE

    Alireza Dorestani; Zabihollah Rezaee

    2011-01-01

    We examine the association between the extent of change in key performance indicator (KPI) disclosures and the accuracy of forecasts made by analysts. KPIs are regarded as improving both the transparency and relevancy of public financial information. The results of using linear regression models show that, contrary to our prediction and the hypothesis of this paper, there is no significant association between the change in non-financial KPI disclosures and the accuracy of analysts' forecasts....

  8. EVALUATING RISK-PREDICTION MODELS USING DATA FROM ELECTRONIC HEALTH RECORDS.

    Science.gov (United States)

    Wang, L E; Shaw, Pamela A; Mathelier, Hansie M; Kimmel, Stephen E; French, Benjamin

    2016-03-01

    The availability of data from electronic health records facilitates the development and evaluation of risk-prediction models, but estimation of prediction accuracy could be limited by outcome misclassification, which can arise if events are not captured. We evaluate the robustness of prediction accuracy summaries, obtained from receiver operating characteristic curves and risk-reclassification methods, if events are not captured (i.e., "false negatives"). We derive estimators for sensitivity and specificity if misclassification is independent of marker values. In simulation studies, we quantify the potential for bias in prediction accuracy summaries if misclassification depends on marker values. We compare the accuracy of alternative prognostic models for 30-day all-cause hospital readmission among 4548 patients discharged from the University of Pennsylvania Health System with a primary diagnosis of heart failure. Simulation studies indicate that if misclassification depends on marker values, then the estimated accuracy improvement is also biased, but the direction of the bias depends on the direction of the association between markers and the probability of misclassification. In our application, 29% of the 1143 readmitted patients were readmitted to a hospital elsewhere in Pennsylvania, which reduced prediction accuracy. Outcome misclassification can result in erroneous conclusions regarding the accuracy of risk-prediction models.
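
    A small simulation conveys the core point: events missed at random (independently of the marker) leave sensitivity estimates roughly unbiased but distort specificity, because missed events are counted as non-events. The data-generating choices below are hypothetical:

    ```python
    # Sketch: effect of uncaptured events ("false negatives" in the outcome)
    # on estimated sensitivity/specificity of a marker-based rule.
    # Hypothetical data-generating process.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200_000
    y = rng.random(n) < 0.2                          # true events
    marker = rng.normal(loc=np.where(y, 1.0, 0.0))   # events score higher
    pred = marker > 0.5                              # prediction rule

    # Events are captured with probability 0.7, independent of the marker.
    observed = y & (rng.random(n) < 0.7)

    sens_true, sens_obs = pred[y].mean(), pred[observed].mean()      # ~unbiased
    spec_true, spec_obs = (~pred[~y]).mean(), (~pred[~observed]).mean()  # biased
    print(f"sensitivity true {sens_true:.3f} vs observed {sens_obs:.3f}")
    print(f"specificity true {spec_true:.3f} vs observed {spec_obs:.3f}")
    ```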

  9. Improving decision speed, accuracy and group cohesion through early information gathering in house-hunting ants.

    Directory of Open Access Journals (Sweden)

    Nathalie Stroeymeyt

    BACKGROUND: Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed, and vice-versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of individuals' previous experience and/or knowledge on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. METHODOLOGY/PRINCIPAL FINDINGS: Colonies allowed to explore a high quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than emigrating naïve colonies. This resulted in increased speed in single choice emigrations and higher colony cohesion in binary choice emigrations. Additionally, colonies allowed to explore both high and low quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than emigrating naïve colonies. CONCLUSIONS/SIGNIFICANCE: These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to simultaneously improve all aspects of the decision-making process (speed, accuracy and cohesion) and partly circumvent the speed-accuracy trade

  10. Comparing the accuracy of perturbative and variational calculations for predicting fundamental vibrational frequencies of dihalomethanes

    Science.gov (United States)

    Krasnoshchekov, Sergey V.; Schutski, Roman S.; Craig, Norman C.; Sibaev, Marat; Crittenden, Deborah L.

    2018-02-01

    Three dihalogenated methane derivatives (CH2F2, CH2FCl, and CH2Cl2) were used as model systems to compare and assess the accuracy of two different approaches for predicting observed fundamental frequencies: canonical operator Van Vleck vibrational perturbation theory (CVPT) and vibrational configuration interaction (VCI). For convenience and consistency, both methods employ the Watson Hamiltonian in rectilinear normal coordinates, expanding the potential energy surface (PES) as a Taylor series about equilibrium and constructing the wavefunction from a harmonic oscillator product basis. At the highest levels of theory considered here, fourth-order CVPT and VCI in a harmonic oscillator basis with up to 10 quanta of vibrational excitation, in conjunction with a 4-mode representation sextic force field (SFF-4MR) computed at MP2/cc-pVTZ with replacement CCSD(T)/aug-cc-pVQZ harmonic force constants, the agreement between computed fundamentals is close to 0.3 cm⁻¹ on average, with a maximum difference of 1.7 cm⁻¹. The major remaining accuracy-limiting factors are the accuracy of the underlying electronic structure model, followed by the incompleteness of the PES expansion. Nonetheless, computed and experimental fundamentals agree to within 5 cm⁻¹, with an average difference of 2 cm⁻¹, confirming the utility and accuracy of both theoretical models. One exception to this rule is the formally IR-inactive but weakly allowed (through Coriolis coupling) H-C-H out-of-plane twisting mode of dichloromethane, whose spectrum we therefore revisit and reassign. We also investigate convergence with respect to order of CVPT, VCI excitation level, and order of PES expansion, concluding that premature truncation substantially decreases accuracy, although VCI(6)/SFF-4MR results are still of acceptable accuracy, and some error cancellation is observed with CVPT2 using a quartic force field.

  11. Improving the Accuracy of NMR Structures of Large Proteins Using Pseudocontact Shifts as Long-Range Restraints

    Energy Technology Data Exchange (ETDEWEB)

    Gaponenko, Vadim [National Cancer Institute, Structural Biophysics Laboratory (United States); Sarma, Siddhartha P. [Indian Institute of Science, Molecular Biophysics Unit (India); Altieri, Amanda S. [National Cancer Institute, Structural Biophysics Laboratory (United States); Horita, David A. [Wake Forest University School of Medicine, Department of Biochemistry (United States); Li, Jess; Byrd, R. Andrew [National Cancer Institute, Structural Biophysics Laboratory (United States)], E-mail: rabyrd@ncifcrf.gov

    2004-03-15

    We demonstrate improved accuracy in protein structure determination for large (≥30 kDa), deuterated proteins (e.g. STAT4-NT) via the combination of pseudocontact shifts for amide and methyl protons with the available NOEs in methyl-protonated proteins. The improved accuracy is cross validated by Q-factors determined from residual dipolar couplings measured as a result of magnetic susceptibility alignment. The paramagnet is introduced via binding to thiol-reactive EDTA, and multiple sites can be serially engineered to obtain data from alternative orientations of the paramagnetic anisotropic susceptibility tensor. The technique is advantageous for systems where the target protein has strong interactions with known alignment media.

  12. Predictive accuracy of Edinburgh Postnatal Depression Scale assessment during pregnancy for the risk of developing postpartum depressive symptoms : a prospective cohort study

    NARCIS (Netherlands)

    Meijer, J. L.; Beijers, C.; van Pampus, M. G.; Verbeek, T.; Stolk, R. P.; Milgrom, J.; Bockting, C. L. H.; Burger, H.

    2014-01-01

    Objective: To investigate whether the 10-item Edinburgh Postnatal Depression Scale (EPDS) administered antenatally is accurate in predicting postpartum depressive symptoms, and whether a two-item EPDS has similar predictive accuracy. Design: Prospective cohort study. Setting: Obstetric care in the

  13. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    Science.gov (United States)

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy, and can be inferred from underwater video footage at only a limited number of locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (hard90 and hard70). We developed optimal predictive models of seabed hardness using random forest (RF), based on point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important, and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach of pre-selecting predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) rather than the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
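
    As a minimal sketch of the variable-importance style of feature selection discussed here, using scikit-learn's permutation importance on a random forest (synthetic data and the retention cutoff are illustrative):

    ```python
    # Sketch: rank predictors by permutation importance from a random
    # forest classifier, then keep those above a cutoff. Synthetic data
    # stands in for backscatter-derived predictors.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=12, n_informative=4,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)

    keep = np.where(imp.importances_mean > 0.01)[0]   # illustrative cutoff
    print("selected predictors:", keep)
    ```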

  14. Improved precision and accuracy for microarrays using updated probe set definitions

    Directory of Open Access Journals (Sweden)

    Larsson Ola

    2007-02-01

    Abstract Background Microarrays enable high throughput detection of transcript expression levels. Different investigators have recently introduced updated probe set definitions to more accurately map probes to our current knowledge of genes and transcripts. Results We demonstrate that updated probe set definitions provide both better precision and accuracy in probe set estimates compared to the original Affymetrix definitions. We show that the improved precision mainly depends on the increased number of probes that are integrated into each probe set, but we also demonstrate an improvement when the same number of probes is used. Conclusion Updated probe set definitions not only offer expression levels that are more accurately associated with genes and transcripts but also improve the estimated transcript expression levels. These results support the use of updated probe set definitions for analysis and meta-analysis of microarray data.

  15. Improving Odometric Accuracy for an Autonomous Electric Cart.

    Science.gov (United States)

    Toledo, Jonay; Piñeiro, Jose D; Arnay, Rafael; Acosta, Daniel; Acosta, Leopoldo

    2018-01-12

    In this paper, a study of the odometric system of the autonomous cart Verdino, an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from the cart's equations of motion and is used to compute the vehicle position and orientation. The inputs of the system are the odometry encoders, and the model uses the wheel diameters and the distance between wheels as parameters. With this model, a least-squares minimization is performed in order to obtain the best nominal parameters. The model is then updated to include a real-time wheel diameter measurement, improving the accuracy of the results. A neural network model is also used to learn the odometric model from data. Tests are made using this neural network in several configurations, and the results are compared to those of the mathematical model, showing that the neural network can outperform the first proposed model.
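
    The dead-reckoning update behind such an odometric model is the standard differential-drive relation; a minimal sketch with hypothetical parameter values of the kind the paper estimates by least squares:

    ```python
    # Sketch: differential-drive odometry update from left/right encoder
    # increments. Wheel diameter and track width are hypothetical values
    # of the kind estimated by least squares in the paper.
    import math

    WHEEL_DIAMETER = 0.40      # m (per-wheel values could differ in practice)
    TRACK_WIDTH = 1.20         # m, distance between the wheels
    TICKS_PER_REV = 1024

    def update_pose(x, y, theta, dticks_left, dticks_right):
        per_tick = math.pi * WHEEL_DIAMETER / TICKS_PER_REV
        dl, dr = dticks_left * per_tick, dticks_right * per_tick
        d = 0.5 * (dl + dr)                  # distance moved by cart center
        dtheta = (dr - dl) / TRACK_WIDTH     # change in heading
        x += d * math.cos(theta + 0.5 * dtheta)
        y += d * math.sin(theta + 0.5 * dtheta)
        return x, y, theta + dtheta

    pose = (0.0, 0.0, 0.0)
    pose = update_pose(*pose, 510, 530)      # right wheel faster: gentle left turn
    print(pose)
    ```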

  16. GFR prediction from cystatin C and creatinine in children: body cell mass increases accuracy of the estimate

    DEFF Research Database (Denmark)

    Andersen, Trine Borup; Jødal, Lars; Bøgsted, Martin

    ) aged 2-14 years (mean 8.8 years). GFR was 14-147 mL/min/1.73m2 (mean 97 mL/min/1.73m2). BCM was estimated using bioimpedance spectroscopy (Xitron Hydra 4200). Log-transformed data on BCM/CysC, serum creatinine (SCr), body-surface-area (BSA), height x BSA/SCr, serum CysC, weight, sex, age, height, serum....... The present equation also had the highest R2 and the narrowest 95% limits of agreement. CONCLUSION: The new equation predicts GFR with higher accuracy than other equations. Endogenous methods are, however, still not accurate enough to replace exogenous markers when GFR must be determined with high accuracy...

  17. Accuracy improvement in the TDR-based localization of water leaks

    Directory of Open Access Journals (Sweden)

    Andrea Cataldo

    A time domain reflectometry (TDR)-based system for the localization of water leaks has recently been developed by the authors. This system, which employs wire-like sensing elements installed along the underground pipes, has proven immune to the limitations that affect traditional, acoustic leak-detection systems. Starting from the positive results obtained thus far, in this work an improvement of this TDR-based system is proposed. More specifically, we address the possibility of employing a low-cost, water-absorbing sponge placed around the sensing element to enhance the accuracy of leak localization. To this end, laboratory experiments were carried out mimicking a water leakage condition, and two sensing elements (one embedded in a sponge and one without a sponge) were comparatively used to identify the position of the leak through TDR measurements. Results showed that, thanks to the water retention capability of the sponge (which keeps the leaked water more localized), the sensing element embedded in the sponge leads to a higher accuracy in the evaluation of the position of the leak. Keywords: Leak localization, TDR, Time domain reflectometry, Water leaks, Underground water pipes

  18. Accuracy of genomic breeding value prediction for intramuscular fat using different genomic relationship matrices in Hanwoo (Korean cattle).

    Science.gov (United States)

    Choi, Taejeong; Lim, Dajeong; Park, Byoungho; Sharma, Aditi; Kim, Jong-Joo; Kim, Sidong; Lee, Seung Hwan

    2017-07-01

    Intramuscular fat is one of the meat quality traits considered in selection strategies for Hanwoo (Korean cattle). Different methods are used to estimate the breeding values of selection candidates. In the present work we focused on the accuracy of different genomic relationship matrices, as described by Forni et al., and of the pedigree-based relationship matrix. The data set included a total of 778 animals genotyped with the BovineSNP50 BeadChip. Among these 778 animals, 72 were sires of the 706 reference animals and were used as the validation dataset. A single-trait animal model (best linear unbiased prediction and genomic best linear unbiased prediction) was used to estimate breeding values from genomic and pedigree information. The diagonal elements of the pedigree-based coefficients were slightly higher than the genomic relationship matrix (GRM) based coefficients, while the off-diagonal elements were considerably lower for the GRM-based coefficients. The accuracy of breeding values for the pedigree-based relationship matrix (A) was 0.13, while for the GRMs (GOF, G05, and Yang) it was 0.37, 0.45, and 0.38, respectively. Accuracy of the GRMs was thus clearly higher than that of A in this study. Therefore, genomic information will be more beneficial than pedigree information in the Hanwoo breeding program.
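
    Genomic relationship matrices of this kind are typically built from centered genotype matrices; a minimal sketch of the common VanRaden-style construction (only one of the GRM variants compared in the paper, with random genotypes standing in for the BovineSNP50 data):

    ```python
    # Sketch: VanRaden-style genomic relationship matrix
    # G = Z Z' / (2 * sum_j p_j (1 - p_j)), with Z the genotype matrix
    # centered by twice the allele frequencies. Random genotypes stand
    # in for the BovineSNP50 data.
    import numpy as np

    rng = np.random.default_rng(3)
    n_animals, n_snps = 50, 1000
    p_true = rng.uniform(0.1, 0.9, n_snps)
    M = rng.binomial(2, p_true, size=(n_animals, n_snps)).astype(float)  # 0/1/2

    p = M.mean(axis=0) / 2.0                       # estimated allele frequencies
    Z = M - 2.0 * p                                # center each SNP column
    G = (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

    print(np.diag(G).mean())   # averages near 1 for unrelated individuals
    ```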

  19. The Accuracy of Urinalysis in Predicting Intra-Abdominal Injury Following Blunt Traumas.

    Science.gov (United States)

    Sabzghabaei, Anita; Shojaee, Majid; Safari, Saeed; Hatamabadi, Hamid Reza; Shirvani, Reza

    2016-01-01

    In cases of blunt abdominal trauma, predicting possible intra-abdominal injuries is still a challenge for the physicians involved with these patients. This study was therefore designed to evaluate the accuracy of urinalysis in predicting intra-abdominal injuries. Patients aged 15 to 65 years with blunt abdominal trauma who were admitted to emergency departments were enrolled. Abdominopelvic computed tomography (CT) scan with intravenous contrast and urinalysis were requested for all included patients. Demographic data, trauma mechanism, the results of urinalysis, and the results of abdominopelvic CT scan were gathered. Finally, the correlation between the results of the abdominopelvic CT scan and urinalysis was determined. Urinalysis was considered positive in case of at least one positive finding among gross appearance, blood on dipstick, or red blood cell count. 325 patients with blunt abdominal trauma were admitted to the emergency departments (83% male, with a mean age of 32.63±17.48 years). Sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios of urinalysis were 77.9% (95% CI: 69.6-84.4), 58.5% (95% CI: 51.2-65.5), 56% (95% CI: 48.5-63.3), 79.6% (95% CI: 71.8-85.7), 1.27 (95% CI: 1.30-1.57), and 0.25 (95% CI: 0.18-0.36), respectively. The diagnostic value of urinalysis in predicting blunt traumatic intra-abdominal injuries is low, and it seems that it should be considered an adjuvant diagnostic tool, used in conjunction with other sources such as clinical findings and imaging.

  20. Preoperative Measurement of Tibial Resection in Total Knee Arthroplasty Improves Accuracy of Postoperative Limb Alignment Restoration

    Directory of Open Access Journals (Sweden)

    Pei-Hui Wu

    2016-01-01

    Conclusions: Using conventional surgical instruments, preoperative measurement of resection thickness of the tibial plateau on radiographs could improve the accuracy of conventional surgical techniques.

  1. Improving Earth/Prediction Models to Improve Network Processing

    Science.gov (United States)

    Wagner, G. S.

    2017-12-01

    The United States Atomic Energy Detection System (USAEDS) primary seismic network consists of a relatively small number of arrays and three-component stations. The relatively small number of stations in the USAEDS primary network makes it both necessary and feasible to optimize both station and network processing. Station processing improvements include detector tuning efforts that use Receiver Operator Characteristic (ROC) curves to help judiciously set acceptable Type 1 (false) vs. Type 2 (miss) error rates. Other station processing improvements include the use of empirical/historical observations and continuous background noise measurements to compute time-varying, maximum likelihood probability of detection thresholds. The USAEDS network processing software makes extensive use of the azimuth and slowness information provided by frequency-wavenumber analysis at array sites, and polarization analysis at three-component sites. Most of the improvements in USAEDS network processing are due to improvements in the models used to predict azimuth, slowness, and probability of detection. Kriged travel-time, azimuth, and slowness corrections, and their associated uncertainties, are computed using a ground truth database. Improvements in station processing and the use of improved models for azimuth, slowness, and probability of detection have led to significant improvements in USAEDS network processing.

  2. The accuracy of survival time prediction for patients with glioma is improved by measuring mitotic spindle checkpoint gene expression.

    Directory of Open Access Journals (Sweden)

    Li Bie

    Identification of gene expression changes that improve prediction of survival time across all glioma grades would be clinically useful. Four Affymetrix GeneChip datasets from the literature, containing data from 771 glioma samples representing all WHO grades and eight normal brain samples, were used in an ANOVA model to screen for transcript changes that correlated with grade. Observations were confirmed and extended using qPCR assays on RNA derived from 38 additional glioma samples and eight normal samples for which survival data were available. RNA levels of eight major mitotic spindle assembly checkpoint (SAC) genes (BUB1, BUB1B, BUB3, CENPE, MAD1L1, MAD2L1, CDC20, TTK) significantly correlated with glioma grade, and six also significantly correlated with survival time. In particular, the level of BUB1B expression was highly correlated with survival time (p<0.0001), and significantly outperformed all other measured parameters, including two standards: WHO grade and the MIB-1 (Ki-67) labeling index. Measurement of the expression levels of a small set of SAC genes may complement histological grade and other clinical parameters for predicting survival time.

  3. Improving Robustness of Hydrologic Ensemble Predictions Through Probabilistic Pre- and Post-Processing in Sequential Data Assimilation

    Science.gov (United States)

    Wang, S.; Ancell, B. C.; Huang, G. H.; Baetz, B. W.

    2018-03-01

    Data assimilation using the ensemble Kalman filter (EnKF) has been increasingly recognized as a promising tool for probabilistic hydrologic predictions. However, little effort has been made to conduct the pre- and post-processing of assimilation experiments, posing a significant challenge in achieving the best performance of hydrologic predictions. This paper presents a unified data assimilation framework for improving the robustness of hydrologic ensemble predictions. Statistical pre-processing of assimilation experiments is conducted through the factorial design and analysis to identify the best EnKF settings with maximized performance. After the data assimilation operation, statistical post-processing analysis is also performed through the factorial polynomial chaos expansion to efficiently address uncertainties in hydrologic predictions, as well as to explicitly reveal potential interactions among model parameters and their contributions to the predictive accuracy. In addition, the Gaussian anamorphosis is used to establish a seamless bridge between data assimilation and uncertainty quantification of hydrologic predictions. Both synthetic and real data assimilation experiments are carried out to demonstrate feasibility and applicability of the proposed methodology in the Guadalupe River basin, Texas. Results suggest that statistical pre- and post-processing of data assimilation experiments provide meaningful insights into the dynamic behavior of hydrologic systems and enhance robustness of hydrologic ensemble predictions.
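
    At the core of such experiments is the EnKF analysis step; a minimal sketch of the stochastic (perturbed-observation) update for a generic state ensemble, with an illustrative observation operator and noise levels:

    ```python
    # Sketch: stochastic ensemble Kalman filter update step.
    # X: state ensemble (n_state x n_ens); y: observation vector;
    # H: linear observation operator; R: observation-error covariance.
    # All numbers are illustrative.
    import numpy as np

    def enkf_update(X, y, H, R, rng):
        n_ens = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
        P = A @ A.T / (n_ens - 1)                        # sample covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
        # Perturb observations so the updated ensemble keeps correct spread.
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
        return X + K @ (Y - H @ X)

    rng = np.random.default_rng(4)
    X = rng.normal(1.0, 0.5, size=(3, 40))               # 3 states, 40 members
    H = np.array([[1.0, 0.0, 0.0]])                      # observe first state only
    R = np.array([[0.05]])
    X_new = enkf_update(X, np.array([1.3]), H, R, rng)
    print(X.mean(axis=1), "->", X_new.mean(axis=1))
    ```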

  4. Additional measures do not improve the diagnostic accuracy of the Hospital Admission Risk Profile for detecting downstream quality of life in community-dwelling older people presenting to a hospital emergency department.

    Science.gov (United States)

    Grimmer, K; Milanese, S; Beaton, K; Atlas, A

    2014-01-01

    The Hospital Admission Risk Profile (HARP) instrument is commonly used to assess the risk of functional decline when older people are admitted to hospital. HARP has moderate diagnostic accuracy (65%) for downstream decreased scores in activities of daily living. This paper reports the diagnostic accuracy of HARP for downstream quality of life. It also tests whether adding other measures to HARP improves its diagnostic accuracy. One hundred and forty-eight independent, community-dwelling individuals aged 65 years or older were recruited in the emergency department of one large Australian hospital, presenting with a medical problem for which they were discharged without a hospital ward admission. Data including age, sex, primary language, highest level of education, postcode, living status, requiring care for daily activities, using a gait aid, receiving formal community supports, instrumental activities of daily living in the last week, hospitalization and falls in the last 12 months, and mental state were collected at recruitment. HARP scores were derived from a formula that summed scores assigned to age, activities of daily living, and mental state categories. Physical and mental component scores of a quality of life measure were captured by telephone interview at 1 and 3 months after recruitment. HARP scores were moderately accurate at predicting downstream decline in physical quality of life, but did not predict downstream decline in mental quality of life. The addition of other variables to HARP did not improve its diagnostic accuracy for either measure of quality of life. HARP is a poor predictor of quality of life.

  5. Study on the Accuracy Improvement of the Second-Kind Fredholm Integral Equations by Using the Buffa-Christiansen Functions with MLFMA

    Directory of Open Access Journals (Sweden)

    Yue-Qian Wu

    2016-01-01

    Previous works show that the accuracy of second-kind integral equations can be improved dramatically by using the rotated Buffa-Christiansen (BC) functions as the testing functions, and that their accuracy can sometimes even be better than that of first-kind integral equations. When the rotated BC functions are used as the testing functions, the discretization error of the identity operators involved in second-kind integral equations can be suppressed significantly. However, the spherical objects analyzed in those works were relatively small, because the numerical capability of the method of moments (MoM) for solving integral equations with the rotated BC functions is severely limited. Hence, the performance of BC functions for accuracy improvement on electrically large objects has not been studied. In this paper, the multilevel fast multipole algorithm (MLFMA) is employed to accelerate the iterative solution of the magnetic-field integral equation (MFIE). A series of numerical experiments is then performed to study the accuracy improvement of MFIE in perfect electric conductor (PEC) cases with the rotated BC functions as testing functions. Numerical results show that the effect of accuracy improvement from using the rotated BC testing functions differs greatly between curvilinear and plane triangular elements, and falls off when the size of the object is large.

  6. Multi-population genomic prediction using a multi-task Bayesian learning model.

    Science.gov (United States)

    Chen, Liuhong; Li, Changxi; Miller, Stephen; Schenkel, Flavio

    2014-05-03

    Genomic prediction in multiple populations can be viewed as a multi-task learning problem, where the tasks are to derive prediction equations for each population, and learning can be improved by sharing information across populations. The goal of this study was to develop a multi-task Bayesian learning model for multi-population genomic prediction with a strategy to effectively share information across populations. Simulation studies and real data from Holstein and Ayrshire dairy breeds, with phenotypes on five milk production traits, were used to evaluate the proposed multi-task Bayesian learning model and compare it with a single-task model and a simple data pooling method. A multi-task Bayesian learning model was proposed for multi-population genomic prediction. Information was shared across populations through a common set of latent indicator variables, while SNP effects were allowed to vary between populations. Both simulation studies and real data analysis showed the effectiveness of the multi-task model in improving genomic prediction accuracy for the smaller Ayrshire breed. Simulation studies suggested that the multi-task model was most effective when the number of QTL was small (n = 20), with an increase in accuracy of up to 0.09 when QTL effects were lowly correlated between two populations (ρ = 0.2), and up to 0.16 when QTL effects were highly correlated (ρ = 0.8). When QTL genotypes were included for training and validation, the improvements were 0.16 and 0.22, respectively, for the scenarios of low and high correlation of QTL effects between the two populations. When the number of QTL was large (n = 200), the improvement was small, with a maximum of 0.02 when QTL genotypes were not included for genomic prediction. A reduction in accuracy was observed for the simple pooling method when the number of QTL was small and the correlation of QTL effects between the two populations was low. For the real data, the multi-task model achieved an

  7. Influence of radiation on predictive accuracy in numerical simulations of the thermal environment in industrial buildings with buoyancy-driven natural ventilation

    International Nuclear Information System (INIS)

    Meng, Xiaojing; Wang, Yi; Liu, Tiening; Xing, Xiao; Cao, Yingxue; Zhao, Jiangping

    2016-01-01

    Highlights: • The effects of radiation on predictive accuracy in numerical simulations were studied. • A scaled experimental model with a high-temperature heat source was set up. • Simulation results were discussed considering with and without radiation model. • The buoyancy force and the ventilation rate were investigated. - Abstract: This paper investigates the effects of radiation on predictive accuracy in the numerical simulations of industrial buildings. A scaled experimental model with a high-temperature heat source is set up and the buoyancy-driven natural ventilation performance is presented. Besides predicting ventilation performance in an industrial building, the scaled model in this paper is also used to generate data to validate the numerical simulations. The simulation results show good agreement with the experiment data. The effects of radiation on predictive accuracy in the numerical simulations are studied for both pure convection model and combined convection and radiation model. Detailed results are discussed regarding the temperature and velocity distribution, the buoyancy force and the ventilation rate. The temperature and velocity distributions through the middle plane are presented for the pure convection model and the combined convection and radiation model. It is observed that the overall temperature and velocity magnitude predicted by the simulations for pure convection were significantly greater than those for the combined convection and radiation model. In addition, the Grashof number and the ventilation rate are investigated. The results show that the Grashof number and the ventilation rate are greater for the pure convection model than for the combined convection and radiation model.

  8. Merging Real-Time Channel Sensor Networks with Continental-Scale Hydrologic Models: A Data Assimilation Approach for Improving Accuracy in Flood Depth Predictions

    Directory of Open Access Journals (Sweden)

    Amir Javaheri

    2018-01-01

    This study proposes a framework that (i) uses data assimilation as a post-processing technique to increase the accuracy of water depth prediction, (ii) updates streamflow generated by the National Water Model (NWM), and (iii) proposes a scope for updating the initial condition of continental-scale hydrologic models. Flows predicted by the NWM for each stream were converted to water depths using the Height Above Nearest Drainage (HAND) method. The water level measurements from the Iowa Flood Inundation System (a test bed sensor network in this study) were converted to water depths and then assimilated into the HAND model using the ensemble Kalman filter (EnKF). The results showed that after assimilating the water depths using the EnKF, for a flood event during 2015, the normalized root mean square error was reduced by 0.50 m (51%) for training tributaries. Comparison of the updated modeled water stage values with observations at testing locations showed that the proposed methodology was also effective on tributaries with no observations: the overall error was reduced from 0.89 m to 0.44 m for testing tributaries. The updated depths were then converted to streamflow using rating curves generated by the HAND model. The error between updated flows and observations at the United States Geological Survey (USGS) station at Squaw Creek decreased by 35%. For future work, updated streamflows could also be used to dynamically update initial conditions in the continental-scale National Water Model.

  9. Toward accountable land use mapping: Using geocomputation to improve classification accuracy and reveal uncertainty

    NARCIS (Netherlands)

    Beekhuizen, J.; Clarke, K.C.

    2010-01-01

    The classification of satellite imagery into land use/cover maps is a major challenge in the field of remote sensing. This research aimed at improving the classification accuracy while also revealing uncertain areas by employing a geocomputational approach. We computed numerous land use maps by

  10. Prediction of Industrial Electric Energy Consumption in Anhui Province Based on GA-BP Neural Network

    Science.gov (United States)

    Zhang, Jiajing; Yin, Guodong; Ni, Youcong; Chen, Jinlan

    2018-01-01

    In order to improve the prediction accuracy of industrial electric energy consumption, a prediction model based on a genetic algorithm and a neural network is proposed. The model uses the genetic algorithm to optimize the weights and thresholds of a BP neural network, and is applied to predicting industrial electric energy consumption in Anhui Province. Comparative experiments between the GA-BP prediction model and a plain BP neural network model show that the GA-BP model is more accurate while using fewer neurons in the hidden layer.
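
    A minimal sketch of the idea: a simple real-coded genetic algorithm evolving the weights of a tiny feed-forward (BP-style) network against training error. The architecture, GA operators, and data are illustrative stand-ins; a full GA-BP workflow would typically hand the evolved weights to backpropagation for fine-tuning:

    ```python
    # Sketch: genetic algorithm evolving the weights of a small feed-forward
    # network, fitness = negative training MSE. All settings illustrative.
    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.uniform(-1, 1, size=(100, 3))            # stand-in predictors
    y = X @ np.array([0.5, -1.0, 2.0]) + 0.3         # stand-in consumption target

    N_HID = 5
    N_W = 3 * N_HID + N_HID + N_HID + 1              # W1 (3x5), b1, W2, b2

    def forward(w, X):
        W1 = w[:15].reshape(3, N_HID); b1 = w[15:20]
        W2 = w[20:25]; b2 = w[25]
        h = np.tanh(X @ W1 + b1)                     # hidden layer
        return h @ W2 + b2                           # linear output

    def fitness(w):
        return -np.mean((forward(w, X) - y) ** 2)

    pop = rng.normal(0, 1, size=(40, N_W))
    for gen in range(100):
        f = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(f)[-20:]]           # truncation selection
        kids = []
        for _ in range(20):
            a, b = parents[rng.integers(20)], parents[rng.integers(20)]
            mask = rng.random(N_W) < 0.5             # uniform crossover
            kids.append(np.where(mask, a, b) + rng.normal(0, 0.1, N_W))  # mutation
        pop = np.vstack([parents, kids])

    best = pop[np.argmax([fitness(w) for w in pop])]
    print("training MSE:", -fitness(best))
    ```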

  11. Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm

    Science.gov (United States)

    Yang, Pao-Keng

    2011-09-01

    We present a simple algorithm, combining an interpolating method with an iterative calculation, to enhance the resolution of spectral reflectance by removing the spectral broadening effect due to the finite bandwidth of the light-emitting diode (LED) from it. The proposed algorithm can be used to improve the accuracy of a reflective colorimeter using multicolor LEDs as probing light sources and is also applicable to the case when the probing LEDs have different bandwidths in different spectral ranges, to which the powerful deconvolution method cannot be applied.
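
    The abstract does not state the iteration explicitly; one classical scheme consistent with the description is Van Cittert deconvolution, in which the measurement model m = B r (B encoding the LED's spectral broadening) is inverted by repeated residual correction. A sketch under an assumed Gaussian LED lineshape:

    ```python
    # Sketch: Van Cittert-style iterative correction of spectral broadening.
    # B is a convolution matrix built from an assumed Gaussian LED lineshape;
    # the iteration r <- r + (m - B r) sharpens the measured reflectance m.
    # Lineshape width and spectra are illustrative assumptions.
    import numpy as np

    wl = np.arange(400, 701, 5, dtype=float)        # wavelength grid (nm)
    sigma = 12.0                                    # assumed LED bandwidth (nm)
    B = np.exp(-0.5 * ((wl[:, None] - wl[None, :]) / sigma) ** 2)
    B /= B.sum(axis=1, keepdims=True)               # rows sum to 1

    r_true = 0.4 + 0.3 * (wl > 550)                 # step-like true reflectance
    m = B @ r_true                                  # broadened measurement

    r = m.copy()
    for _ in range(50):
        r = r + (m - B @ r)                         # Van Cittert iteration
        r = np.clip(r, 0.0, 1.0)                    # keep physically plausible

    print("max |error| measured:", np.abs(m - r_true).max())
    print("max |error| corrected:", np.abs(r - r_true).max())
    ```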

  12. Correlation of chemical shifts predicted by molecular dynamics simulations for partially disordered proteins

    Energy Technology Data Exchange (ETDEWEB)

    Karp, Jerome M.; Erylimaz, Ertan; Cowburn, David, E-mail: cowburn@cowburnlab.org, E-mail: David.cowburn@einstein.yu.edu [Albert Einstein College of Medicine of Yeshiva University, Department of Biochemistry (United States)

    2015-01-15

    There has been a longstanding interest in being able to accurately predict NMR chemical shifts from structural data. Recent studies have focused on using molecular dynamics (MD) simulation data as input for improved prediction. Here we examine the accuracy of chemical shift prediction for intein systems, which have regions of intrinsic disorder. We find that using MD simulation data as input for chemical shift prediction does not consistently improve prediction accuracy over use of a static X-ray crystal structure. This appears to result from the complex conformational ensemble of the disordered protein segments. We show that using accelerated molecular dynamics (aMD) simulations improves chemical shift prediction, suggesting that methods which better sample the conformational ensemble like aMD are more appropriate tools for use in chemical shift prediction for proteins with disordered regions. Moreover, our study suggests that data accurately reflecting protein dynamics must be used as input for chemical shift prediction in order to correctly predict chemical shifts in systems with disorder.

  13. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    Science.gov (United States)

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques, such as neural networks (NN), classification and regression trees (CART), and support vector machines (SVM), to establish prediction models. The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to estimate the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96% (Type I error rate 12.22%; Type II error rate 7.50%), the prediction accuracy of the LASSO-CART model is 88.75% (Type I error rate 13.61%; Type II error rate 14.17%), and the prediction accuracy of the LASSO-SVM model is 89.79% (Type I error rate 10.00%; Type II error rate 15.83%).
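
    A minimal sketch of the two-stage design (an L1-penalized screener followed by an SVM, scored by fivefold cross-validation); here L1 logistic regression plays the LASSO role for a classification target, and synthetic features stand in for the TEJ financial variables:

    ```python
    # Sketch: LASSO-style variable selection followed by an SVM classifier,
    # evaluated with 5-fold cross validation. Synthetic features stand in
    # for the financial variables; 172 samples mirror the 48 + 124 firms.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=172, n_features=30, n_informative=6,
                               weights=[0.72, 0.28], random_state=0)

    model = make_pipeline(
        StandardScaler(),
        # L1-penalized logistic regression as the LASSO-style screener.
        SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
        SVC(kernel="rbf"),
    )
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print("5-fold accuracy:", acc.mean().round(4))
    ```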

  14. Protein Secondary Structure Prediction Using AutoEncoder Network and Bayes Classifier

    Science.gov (United States)

    Wang, Leilei; Cheng, Jinyong

    2018-03-01

    Protein secondary structure prediction belongs to bioinformatics and is an important research area. In this paper, we propose a new method for protein secondary structure prediction using a Bayes classifier and an autoencoder network. Our experiments cover several algorithmic aspects, including the construction of the model and the setting of its parameters. The data set is the standard CB513 protein data set. Accuracy is assessed by 3-fold cross validation, from which the Q3 accuracy is obtained. The results illustrate that the autoencoder network improves the prediction accuracy of protein secondary structure.

  15. Implementation of genomic prediction in Lolium perenne (L.) breeding populations

    Directory of Open Access Journals (Sweden)

    Nastasiya F Grinberg

    2016-02-01

    Perennial ryegrass (Lolium perenne L.) is one of the most widely grown forage grasses in temperate agriculture. In order to maintain and increase its usage as forage in livestock agriculture, there is a continued need for improvement in biomass yield, quality, disease resistance and seed yield. Genetic gain for traits such as biomass yield has been relatively modest. This has been attributed to its long breeding cycle and the necessity to use population-based breeding methods. Thanks to recent advances in genotyping techniques, there is increasing interest in genomic selection (GS), from which genomically estimated breeding values (GEBV) are derived. In this paper we compare the classical RRBLUP model with state-of-the-art machine learning (ML) techniques that lend themselves easily to use in GS, and demonstrate their application to predicting quantitative traits in a breeding population of L. perenne. Prediction accuracies varied from 0 to 0.59 depending on trait, prediction model and composition of the training population. The BLUP model produced the highest prediction accuracies for most traits and training populations. Forage quality traits had the highest accuracies compared to yield-related traits. There appeared to be no clear pattern in the effect of training population composition on prediction accuracy. The heritability of the forage quality traits was generally higher than that of the yield-related traits, and could partly explain the difference in accuracy. Some population structure was evident in the breeding populations, and probably contributed to the varying effects of training population on the predictions. The average linkage disequilibrium (LD) between adjacent markers ranged from 0.121 to 0.215. Higher marker density and a larger training population closely related to the test population are likely to improve prediction accuracy.

  16. The effect of using genealogy-based haplotypes for genomic prediction.

    Science.gov (United States)

    Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt

    2013-03-06

    Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method, and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees than when fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, the accuracy of prediction was less sensitive to the parameter π when fitting haplotypes than when fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.

  17. Significant improvement of accuracy and precision in the determination of trace rare earths by fluorescence analysis

    International Nuclear Information System (INIS)

    Ozawa, L.; Hersh, H.N.

    1976-01-01

    Most of the rare earths in yttrium, gadolinium and lanthanum oxides emit characteristic fluorescent line spectra under irradiation with photons, electrons and x rays. The sensitivity and selectivity of the rare earth fluorescences are high enough to determine trace amounts (0.01 to 100 ppM) of rare earths. The absolute fluorescent intensities of solids, however, are markedly affected by the synthesis procedure, the level of contamination and crystal perfection, resulting in poor accuracy and low precision for the method (larger than 50 percent error). Special care in preparing the samples is required to obtain good accuracy and precision. It is found that the accuracy and precision of the determination of trace (less than 10 ppM) rare earths by fluorescence analysis improve significantly, while sensitivity is maintained, when the determination is made by comparing the ratio of the fluorescent intensity of the trace rare earths to that of a deliberately added rare earth reference. The variation in the absolute fluorescent intensity remains, but is compensated for by measuring the fluorescent line intensity ratio. Consequently, the determination of trace rare earths (with less than 3 percent error) is easily made by a photoluminescence technique in which the rare earths are excited directly by photons. Accuracy is still maintained when the absolute fluorescent intensity is reduced by 50 percent through contamination by Ni, Fe, Mn or Pb (about 100 ppM). Determination accuracy is also improved for fluorescence analysis by electron excitation and x-ray excitation. For some rare earths, however, accuracy with these techniques is reduced because indirect excitation mechanisms are involved. The excitation mechanisms and the interferences between rare earths are also reported.

  18. Musical Scales in Tone Sequences Improve Temporal Accuracy.

    Science.gov (United States)

    Li, Min S; Di Luca, Massimiliano

    2018-01-01

    Predicting the time of stimulus onset is a key component in perception. Previous investigations of perceived timing have focused on the effect of stimulus properties such as rhythm and temporal irregularity, but the influence of non-temporal properties and their role in predicting stimulus timing has not been exhaustively considered. The present study aims to understand how a non-temporal pattern in a sequence of regularly timed stimuli could improve or bias the detection of temporal deviations. We presented interspersed sequences of 3, 4, 5, and 6 auditory tones where only the timing of the last stimulus could slightly deviate from isochrony. Participants reported whether the last tone was 'earlier' or 'later' relative to the expected regular timing. In two conditions, the tones composing the sequence were either organized into musical scales or they were random tones. In one experiment, all sequences ended with the same tone; in the other experiment, each sequence ended with a different tone. Results indicate higher discriminability of anisochrony with musical scales and with longer sequences, irrespective of the knowledge of the final tone. Such an outcome suggests that the predictability of non-temporal properties, as enabled by the musical scale pattern, can be a factor in determining the sensitivity of time judgments.

  19. Effect of length of measurement period on accuracy of predicted annual heating energy consumption of buildings

    International Nuclear Information System (INIS)

    Cho, Sung-Hwan; Kim, Won-Tae; Tae, Choon-Soeb; Zaheeruddin, M.

    2004-01-01

    This study examined temperature-dependent regression models of energy consumption as a function of the length of the measurement period. The methodology was to construct linear regression models of daily energy consumption from data sets spanning 1 day to 3 months and to compare the annual heating energy consumption predicted by these models with the actual annual heating energy consumption. A commercial building in Daejon was selected, and its energy consumption was measured over a heating season. The results show that energy consumption predicted from a regression model built on 1 day of measurements could be in error by 100% or more. The prediction error decreased to 30% when 1 week of data was used to build the regression model. Likewise, the regression model based on 3 months of measured data predicted the annual energy consumption within 6% of the measured value. These analyses show that the length of the measurement period has a significant impact on the accuracy of the predicted annual energy consumption of buildings.
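
    The study's procedure, fitting a daily energy-versus-outdoor-temperature line on a short window and extrapolating to a full year, can be sketched as below. The temperature series, the linear relationship, and the noise level are all synthetic stand-ins, not the Daejon measurements.

    ```python
    # Fit a daily-energy regression on short windows, extrapolate to a year.
    import numpy as np

    rng = np.random.default_rng(0)
    t_out = rng.uniform(-5.0, 15.0, 365)            # daily mean outdoor temperature [C]
    e_true = 50.0 - 2.5 * t_out                     # assumed "true" daily use [kWh]
    e_meas = e_true + rng.normal(0.0, 4.0, 365)     # metered values with noise

    for days in (7, 30, 90):                        # length of measurement period
        coef = np.polyfit(t_out[:days], e_meas[:days], 1)
        annual_pred = np.polyval(coef, t_out).sum()
        err = 100.0 * abs(annual_pred - e_true.sum()) / e_true.sum()
        print(f"{days:3d} days of data -> annual prediction error {err:4.1f}%")
    ```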

  20. Decadal climate predictions improved by ocean ensemble dispersion filtering

    Science.gov (United States)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls between these two time scales. In recent years, more precise initialization techniques for coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filtering, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean. Plain language summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts; the ocean's memory, due to its heat capacity, holds large potential skill. In recent years, more precise initialization techniques for coupled Earth system models (including atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: applying slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble, and the whole ensemble, rather than a single prediction, is evaluated.
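
    A toy version of the dispersion filter itself is simple: at each filtering interval, every ensemble member's ocean state is relaxed toward the ensemble mean. The relaxation weight `alpha` and the state layout below are assumptions for illustration, not the paper's configuration.

    ```python
    # Toy ensemble dispersion filter: nudge members toward the ensemble mean.
    import numpy as np

    def dispersion_filter(states: np.ndarray, alpha: float = 0.5) -> np.ndarray:
        """Shift each member (rows) toward the mean of all members."""
        mean = states.mean(axis=0, keepdims=True)
        return alpha * states + (1.0 - alpha) * mean

    ensemble = np.random.default_rng(0).normal(15.0, 0.8, size=(10, 4))  # 10 members
    filtered = dispersion_filter(ensemble)
    print("spread before:", ensemble.std(axis=0).round(3))
    print("spread after: ", filtered.std(axis=0).round(3))
    ```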

  1. Improving Odometric Accuracy for an Autonomous Electric Cart

    Directory of Open Access Journals (Sweden)

    Jonay Toledo

    2018-01-01

    Full Text Available In this paper, a study of the odometric system for the autonomous cart Verdino, an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from the cart's movement equations and is used to compute the vehicle position and orientation. The inputs of the system are the odometry encoders, and the model uses the wheel diameters and the distance between wheels as parameters. With this model, a least-squares minimization is performed to obtain the best nominal parameters. The model is then updated to include a real-time wheel diameter measurement, improving the accuracy of the results. A neural network model is also used to learn the odometric model from data. Tests are made using this neural network in several configurations, and the results are compared to the mathematical model, showing that the neural network can outperform the first proposed model.
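
    For reference, a minimal differential-drive odometry update of the kind such a mathematical model describes is sketched below; the encoder resolution, wheel diameter, and wheel separation are placeholder values, not Verdino's parameters.

    ```python
    # Pose update from encoder increments for a differential-drive vehicle.
    import math

    def odometry_step(x, y, theta, ticks_l, ticks_r,
                      ticks_per_rev=1024, wheel_diam=0.40, wheel_base=0.90):
        """Advance the pose (x, y, theta) from left/right encoder increments."""
        per_tick = math.pi * wheel_diam / ticks_per_rev    # metres per encoder tick
        d_l, d_r = ticks_l * per_tick, ticks_r * per_tick  # wheel arc lengths
        d_c = 0.5 * (d_l + d_r)                            # distance of the centre
        d_theta = (d_r - d_l) / wheel_base                 # change in heading
        x += d_c * math.cos(theta + 0.5 * d_theta)
        y += d_c * math.sin(theta + 0.5 * d_theta)
        return x, y, theta + d_theta

    pose = (0.0, 0.0, 0.0)
    pose = odometry_step(*pose, ticks_l=500, ticks_r=520)
    print(pose)
    ```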

  2. Achieving Climate Change Absolute Accuracy in Orbit

    Science.gov (United States)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J.; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système International (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  3. Automatic selection of reference taxa for protein-protein interaction prediction with phylogenetic profiling

    DEFF Research Database (Denmark)

    Simonsen, Martin; Maetschke, S.R.; Ragan, M.A.

    2012-01-01

    Motivation: Phylogenetic profiling methods can achieve good accuracy in predicting protein–protein interactions, especially in prokaryotes. Recent studies have shown that the choice of reference taxa (RT) is critical for accurate prediction, but with more than 2500 fully sequenced taxa publicly available, manual selection of suitable RT sets is impractical. Results: We present three novel methods for automating the selection of RT, using machine learning based on known protein–protein interaction networks. One of these methods in particular, Tree-Based Search, yields greatly improved prediction accuracies. We further show that different methods for constituting phylogenetic profiles often require very different RT sets to support high prediction accuracy.
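
    The core data structure here is the phylogenetic profile: a presence/absence vector over the chosen RT. A minimal sketch, with made-up profiles, of how profile similarity suggests interacting pairs:

    ```python
    # Phylogenetic profiling in miniature: proteins with matching
    # presence/absence patterns across reference taxa are candidate partners.
    import numpy as np

    profiles = np.array([            # rows: proteins, columns: reference taxa
        [1, 1, 0, 1, 0, 1],
        [1, 1, 0, 1, 0, 1],          # identical profile -> strong candidate pair
        [0, 1, 1, 0, 1, 0],
    ])

    def profile_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Fraction of taxa where presence/absence agrees (1 = identical)."""
        return float((a == b).mean())

    for i in range(len(profiles)):
        for j in range(i + 1, len(profiles)):
            sim = profile_similarity(profiles[i], profiles[j])
            print(f"proteins {i},{j}: similarity {sim:.2f}")
    ```

    Which taxon columns to keep is exactly the RT-selection problem the paper automates.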

  4. Improvement of Measurement Accuracy of Coolant Flow in a Test Loop

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jintae; Kim, Jong-Bum; Joung, Chang-Young; Ahn, Sung-Ho; Heo, Sung-Ho; Jang, Seoyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this study, to improve the measurement accuracy of coolant flow in a coolant flow simulator, rejection of external noise is enhanced by adding a ground pattern in the control panel and earthing the signal cables. In addition, a heating unit is added to strengthen the fluctuation signal by heating the coolant, because the source of the signals is heat energy. Experimental results using the improved system show good agreement with the reference flow rate. The measurement error is reduced dramatically compared with the previous measurement accuracy, which will help in analyzing the performance of nuclear fuels. For further work, an out-of-pile test will be carried out by fabricating a test-rig mockup to inspect the feasibility of the developed system. To verify the performance of a newly developed nuclear fuel, irradiation tests need to be carried out in the research reactor, measuring irradiation behavior such as fuel temperature, fission gas release, neutron dose, coolant temperature, and coolant flow rate. In particular, the heat generation rate of nuclear fuels can be measured indirectly by measuring the temperature variation of the coolant that passes the fuel rod, together with its flow rate. However, it is very difficult to measure the flow rate of coolant at the fuel rod owing to the narrow gap between components of the test rig. In nuclear fields, noise analysis using thermocouples in the test rig has been applied to measure the flow velocity of coolant circulating through the test loop.
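
    The noise-analysis principle mentioned at the end, recovering coolant velocity from the transit time of temperature fluctuations between two thermocouples, can be sketched with a cross-correlation on synthetic signals; the sample rate, sensor spacing, and noise levels below are assumed values.

    ```python
    # Transit-time flow estimate: cross-correlate upstream/downstream signals.
    import numpy as np

    rng = np.random.default_rng(0)
    fs, spacing = 1000.0, 0.05          # sample rate [Hz], sensor spacing [m]
    lag_true = 25                       # true delay in samples (25 ms)

    noise = rng.normal(size=5000)
    upstream = noise
    downstream = np.roll(noise, lag_true) + 0.2 * rng.normal(size=5000)

    xcorr = np.correlate(downstream, upstream, mode="full")
    lag = np.argmax(xcorr) - (len(upstream) - 1)   # samples of delay at the peak
    velocity = spacing / (lag / fs)
    print(f"estimated transit lag {lag} samples -> velocity {velocity:.2f} m/s")
    ```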

  5. Diagnostic accuracy of tuberculous lymphadenitis fine needle aspiration biopsy confirmed by PCR as gold standard

    Science.gov (United States)

    D. Suryadi; Delyuzar; Soekimin

    2018-03-01

    Indonesia has the second-highest TB (tuberculosis) burden in the world. Early diagnosis and correct treatment can improve TB control and reduce complications. The PCR test is the gold standard; however, it is quite expensive for routine diagnosis, so an accurate and cheaper diagnostic method such as fine needle aspiration biopsy is needed. The study aims to determine the accuracy of fine needle aspiration biopsy cytology in the diagnosis of tuberculous lymphadenitis. A cross-sectional analytic study was conducted on samples from patients suspected of having tuberculous lymphadenitis. The fine needle aspiration biopsy (FNAB) test was performed and confirmed by the PCR test, and the sensitivity, specificity, accuracy, positive predictive value and negative predictive value of the two methods were compared. Relative to the gold standard, the FNAB test showed a sensitivity of 92.50%, specificity of 96.49%, accuracy of 94.85%, positive predictive value of 94.87% and negative predictive value of 94.83%. We conclude that fine needle aspiration biopsy can be recommended as a cheaper and accurate diagnostic test for tuberculous lymphadenitis.
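
    For readers who want to reproduce the arithmetic, the screening statistics follow directly from a 2x2 confusion table; the counts below are a reconstruction consistent with all five reported values, not the published raw table.

    ```python
    # Screening statistics from a 2x2 confusion table.
    def screening_stats(tp, fp, fn, tn):
        sens = tp / (tp + fn)                    # sensitivity
        spec = tn / (tn + fp)                    # specificity
        ppv = tp / (tp + fp)                     # positive predictive value
        npv = tn / (tn + fn)                     # negative predictive value
        acc = (tp + tn) / (tp + fp + fn + tn)    # overall accuracy
        return sens, spec, ppv, npv, acc

    # Counts chosen so the outputs match the reported 92.50/96.49/94.87/94.83/94.85%.
    sens, spec, ppv, npv, acc = screening_stats(tp=37, fp=2, fn=3, tn=55)
    print(f"sensitivity {sens:.2%}, specificity {spec:.2%}, "
          f"PPV {ppv:.2%}, NPV {npv:.2%}, accuracy {acc:.2%}")
    ```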

  6. The accuracy of body mass prediction for elderly specimens: Implications for paleoanthropology and legal medicine.

    Science.gov (United States)

    Chevalier, Tony; Lefèvre, Philippe; Clarys, Jan Pieter; Beauthier, Jean-Pol

    2016-10-01

    Different practices in paleoanthropology and legal medicine raise questions concerning the robustness of body mass (BM) prediction. Integrating personal identification from body mass estimation with the skeleton is not a classic approach in legal medicine. The originality of our study is the use of an elderly sample in order to push prediction methods to their limits and to discuss the implications for paleoanthropology and legal medicine. The aim is to observe the accuracy of BM prediction in relation to the body mass index (BMI, the index of classification) using five femoral head (FH) methods and one shaft (FSH) method. The sample is composed of 41 dry femurs obtained from dissection for which age (c. 82 years) and gender are known, and weight (c. 59.5 kg) and height were measured upon admission to the body leg service. We show that the estimate of the mean BM of the elderly sample is not significantly different from the real mean BM when the appropriate formula is used for the femoral head diameter. In fact, the best prediction is obtained with the McHenry formula (1992), which was based on a sample with an average mass equivalent to that of our sample. In comparison, external shaft diameters, which are known to be more influenced by mechanical stimuli than femoral head diameters, yield less satisfactory results with the McHenry formula (1992) for shaft diameters. Based on all the methods used and the distinctive selected sample, the overestimation (always observed with the different femoral head methods) can be restricted to 1.1%, and the overestimation observed with the shaft method can be restricted to 7%. However, the estimation of individual BM is much less reliable. The BMI has a strong impact on the accuracy of individual BM prediction, which is unquestionably more reliable for individuals with normal BMI (9.6% vs 16.7% for the best prediction error). In this case, the FH method is also the better predictive method, but not when the total sample is considered (i.e., the FSH method then gives the better prediction).

  7. Comparison of measured and predicted thermal mixing tests using improved finite difference technique

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Rice, J.G.; Kim, J.H.

    1983-01-01

    The numerical diffusion introduced by the use of upwind formulations in the finite difference solution of the flow and energy equations for thermal mixing problems (cold water injection after a small break LOCA in a PWR) was examined. The relative importance of numerical diffusion in the flow equations, compared to its effect on the energy equation, was demonstrated. The flow field equations were solved using both first-order accurate upwind and second-order accurate differencing schemes. The energy equation was treated using the conventional upwind scheme and a mass-weighted skew upwind scheme. Results presented for a simple test case showed that, for thermal mixing problems, numerical diffusion was most significant in the energy equation; its effect in the flow field equations was much less significant. A comparison of predictions using the skew upwind and the conventional upwind schemes with experimental data from a two-dimensional thermal mixing test is presented. The use of the skew upwind scheme showed a significant improvement in the accuracy of the steady-state predicted temperatures. (orig./HP)
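
    The artificial smearing that first-order upwinding introduces is easy to reproduce; the toy one-dimensional advection below (not the paper's solver) transports a sharp temperature front and shows the front spreading over many cells.

    ```python
    # First-order upwind advection of a sharp front, exhibiting numerical diffusion.
    import numpy as np

    nx, c = 100, 0.5                                  # grid points, Courant number u*dt/dx
    T = np.where(np.arange(nx) < 20, 1.0, 0.0)        # initially sharp temperature front

    for _ in range(80):                               # upwind update for u > 0
        T[1:] = T[1:] - c * (T[1:] - T[:-1])          # T[0] acts as the inflow boundary

    smeared = int(np.sum((T > 0.05) & (T < 0.95)))
    print(f"after 80 steps the front is smeared over {smeared} cells")
    ```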

  8. Improving calibration accuracy in gel dosimetry

    International Nuclear Information System (INIS)

    Oldham, M.; McJury, M.; Webb, S.; Baustert, I.B.; Leach, M.O.

    1998-01-01

    A new method of calibrating gel dosimeters (applicable to both Fricke and polyacrylamide gels) is presented which has intrinsically higher accuracy than current methods, and requires less gel. Two test-tubes of gel (inner diameter 2.5 cm, length 20 cm) are irradiated separately with a 10×10 cm² field end-on in a water bath, such that the characteristic depth-dose curve is recorded in the gel. The calibration is then determined by fitting the depth-dose measured in water against the measured change in relaxivity with depth in the gel. Increased accuracy is achieved in this simple depth-dose geometry by averaging the relaxivity at each depth. A large number of calibration data points, each with relatively high accuracy, are obtained. Calibration data over the full range of dose (1.6-10 Gy) are obtained by irradiating one test-tube to 10 Gy at dose maximum (Dmax) and the other to 4.5 Gy at Dmax. The new calibration method is compared with a 'standard method' in which five identical test-tubes of gel were irradiated to different known doses between 2 and 10 Gy. The percentage uncertainties in the slope and intercept of the calibration fit are found to be lower with the new method by factors of about 4 and 10 respectively, when compared with the standard method and with published values. The gel was found to respond linearly within the error bars up to doses of 7 Gy, with a slope of 0.233±0.001 s⁻¹Gy⁻¹ and an intercept of 1.106±0.005 s⁻¹. For higher doses, nonlinear behaviour was observed. (author)
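
    The calibration step itself is a straight-line fit of relaxivity against the water-tank depth-dose. A minimal sketch with illustrative numbers chosen to lie near the reported slope and intercept:

    ```python
    # Linear calibration fit: relaxivity R2 versus dose D, then inversion.
    import numpy as np

    dose = np.array([10.0, 8.9, 7.6, 6.4, 5.3, 4.2, 3.1])      # Gy, water depth-dose
    r2 = np.array([3.44, 3.18, 2.88, 2.60, 2.34, 2.08, 1.83])  # s^-1, mean gel relaxivity

    slope, intercept = np.polyfit(dose, r2, 1)                 # R2 = slope*D + intercept
    print(f"slope {slope:.3f} s^-1 Gy^-1, intercept {intercept:.3f} s^-1")

    dose_of = lambda r: (r - intercept) / slope                # invert to read out dose
    print(f"relaxivity 3.0 s^-1 -> dose {dose_of(3.0):.2f} Gy")
    ```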

  9. Improving rational thermal comfort prediction by using subpopulation characteristics: A case study at Hermitage Amsterdam.

    Science.gov (United States)

    Kramer, Rick; Schellen, Lisje; Schellen, Henk; Kingma, Boris

    2017-01-01

    This study aims to improve the prediction accuracy of the rational standard thermal comfort model, known as the Predicted Mean Vote (PMV) model, by (1) calibrating one of its input variables, metabolic rate, and (2) extending it to explicitly incorporate the running mean outdoor temperature (RMOT), a variable from adaptive thermal comfort. The analysis was performed with survey data (n = 1121) and climate measurements of the indoor and outdoor environment from a year-long case study undertaken at the Hermitage Amsterdam museum in the Netherlands. The PMVs were calculated for 35 survey days using (1) an a priori assumed metabolic rate, (2) a metabolic rate calibrated by fitting the PMVs to the thermal sensation votes (TSVs) of each respondent using an optimization routine, and (3) the PMV model extended with the RMOT. The results show that the calibrated metabolic rate is estimated to be 1.5 Met for this case study, which was predominantly visited by elderly females. However, significant differences in metabolic rate were revealed between adults and the elderly, showing the importance of differentiating between subpopulations. Hence, the standard tabular values, which only differentiate between activities, may be oversimplified for many cases. Moreover, extending the PMV model with the RMOT substantially improves the thermal sensation prediction, although thermal sensation toward extreme cool and warm sensations remains partly underestimated.

  10. Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE

    Science.gov (United States)

    Itai, Akitoshi; Yasukawa, Hiroshi

    This paper proposes a method of background noise estimation based on tensor product expansion with a median and a Monte Carlo simulation. We have previously shown that tensor product expansion with an absolute error method is effective for estimating background noise; however, the conventional method does not always estimate the background noise properly. In this paper, it is shown that the estimation accuracy can be improved by using the proposed methods.

  11. Advances in criticality predictions for EBR-II

    International Nuclear Information System (INIS)

    Schaefer, R.W.; Imel, G.R.

    1994-01-01

    Improvements to startup criticality predictions for the EBR-II reactor have been made. More exact calculational models, methods and data are now used, and better procedures for obtaining experimental data that enter into the prediction are in place. Accuracy improved by more than a factor of two and the largest ECP error observed since the changes is only 18 cents. An experimental method using subcritical counts is also being implemented

  12. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Science.gov (United States)

    Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner

    2013-01-01

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.
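
    A hedged sketch of the kind of equation being fitted: the Daetwyler et al. (2010) form for expected accuracy, scaled by a weighting factor w as described above. The functional form of w and all constants below are illustrative assumptions, not the fitted values from this study.

    ```python
    # Deterministic accuracy curve: Daetwyler-style term times a density weight.
    import math

    def expected_accuracy(n_train, h2, me, n_snp, w_cap=0.9, c=0.08):
        """Average genomic prediction accuracy for a given training design.

        w grows with the log of marker density up to a cap, mimicking the
        behaviour reported above; c and w_cap are made-up constants.
        """
        w = min(w_cap, c * math.log(n_snp))
        return w * math.sqrt(n_train * h2 / (n_train * h2 + me))

    for n_snp in (5_000, 20_000, 600_000):
        acc = expected_accuracy(n_train=5000, h2=0.8, me=1000, n_snp=n_snp)
        print(f"{n_snp:>7d} SNPs -> expected accuracy {acc:.3f}")
    ```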

  14. Improving Accuracy of Dempster-Shafer Theory Based Anomaly Detection Systems

    Directory of Open Access Journals (Sweden)

    Ling Zou

    2014-07-01

    Full Text Available While the Dempster-Shafer theory of evidence has been widely used in anomaly detection, it has some issues. The Dempster-Shafer theory of evidence trusts all evidence equally, which does not hold in a distributed-sensor anomaly detection system (ADS). Moreover, pieces of evidence are sometimes mutually dependent, which can lead to false alerts. We propose an improvement incorporating two algorithms. The feature-selection algorithm employs Gaussian graphical models to discover correlations between candidate features; a group of suitable ADS is selected to perform detection, and the detection results are sent to the fusion engine. The weight-estimation algorithm applies information gain to set a weight for every feature. A weighted Dempster-Shafer combination of the detection results then achieves better accuracy. We evaluate our detection prototype through a set of experiments conducted with the standard benchmark Wisconsin Breast Cancer Dataset and real Internet traffic. Evaluations on the Wisconsin Breast Cancer Dataset show that our prototype can find the correlation among nine features and improve the detection rate without affecting the false positive rate. Evaluations on Internet traffic show that the weight-estimation algorithm can improve the detection performance significantly.
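
    To make the fusion step concrete, here is a minimal sketch of weighted evidence combination on the two-hypothesis frame {normal, anomaly}: each sensor's mass function is first discounted by a trust weight (standing in for the paper's information-gain weights), then combined with Dempster's rule.

    ```python
    # Weighted Dempster-Shafer combination over the frame {normal, anomaly}.
    def discount(m, w):
        """Shift (1 - w) of a sensor's belief onto ignorance ('either')."""
        return {"normal": w * m["normal"], "anomaly": w * m["anomaly"],
                "either": 1.0 - w * (m["normal"] + m["anomaly"])}

    def combine(m1, m2):
        """Dempster's rule for two singletons plus an ignorance mass."""
        conflict = m1["normal"] * m2["anomaly"] + m1["anomaly"] * m2["normal"]
        k = 1.0 - conflict                               # normalisation constant
        return {
            "normal": (m1["normal"] * m2["normal"] + m1["normal"] * m2["either"]
                       + m1["either"] * m2["normal"]) / k,
            "anomaly": (m1["anomaly"] * m2["anomaly"] + m1["anomaly"] * m2["either"]
                        + m1["either"] * m2["anomaly"]) / k,
            "either": m1["either"] * m2["either"] / k,
        }

    s1 = discount({"normal": 0.2, "anomaly": 0.8}, w=0.9)   # trusted sensor
    s2 = discount({"normal": 0.5, "anomaly": 0.5}, w=0.6)   # noisier sensor
    print(combine(s1, s2))
    ```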

  15. Improving ASTER GDEM Accuracy Using Land Use-Based Linear Regression Methods: A Case Study of Lianyungang, East China

    Directory of Open Access Journals (Sweden)

    Xiaoyan Yang

    2018-04-01

    Full Text Available The Advanced Spaceborne Thermal-Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land use-specific errors and the other considering the impact of both land use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; and (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.

  16. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    A method for improving the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. The system is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy of the multi-sensor setup, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, additional motion constraints, or the complicated procedures of traditional vision-based methods. It makes robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor's repeatability was studied experimentally. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
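
    At its core, Kalman-style fusion of two unbiased estimates reduces to inverse-variance weighting; the sketch below, with assumed noise variances rather than the paper's sensor models, shows the fused estimate and its reduced variance.

    ```python
    # Inverse-variance (Kalman-style) fusion of two estimates of one quantity.
    import math

    def fuse(z1, var1, z2, var2):
        """Optimal linear fusion of two unbiased, independent estimates."""
        w1 = var2 / (var1 + var2)              # weight grows as the other
        w2 = var1 / (var1 + var2)              # sensor gets noisier
        fused = w1 * z1 + w2 * z2
        fused_var = (var1 * var2) / (var1 + var2)
        return fused, fused_var

    # Two independent yaw readings (deg): sensor A precise, sensor B coarse.
    yaw, yaw_var = fuse(30.2, 0.05**2, 29.4, 0.5**2)
    print(f"fused yaw {yaw:.2f} deg, std {math.sqrt(yaw_var):.3f} deg")
    ```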

  17. The Accuracy of Urinalysis in Predicting Intra-Abdominal Injury Following Blunt Traumas

    Directory of Open Access Journals (Sweden)

    Anita Sabzghabaei

    2016-01-01

    Full Text Available Introduction: In cases of blunt abdominal trauma, predicting the possible intra-abdominal injuries is still a challenge for the physicians involved with these patients. Therefore, this study was designed to evaluate the accuracy of urinalysis in predicting intra-abdominal injuries. Methods: Patients aged 15 to 65 years with blunt abdominal trauma who were admitted to emergency departments were enrolled. Abdominopelvic computed tomography (CT) scans with intravenous contrast and urinalysis were requested for all included patients. Demographic data, trauma mechanism, the results of urinalysis, and the results of the abdominopelvic CT scan were gathered, and the correlation between the results of the abdominopelvic CT scan and urinalysis was determined. Urinalysis was considered positive in case of at least one positive value in gross appearance, blood in dipstick, or red blood cell count. Results: 325 patients with blunt abdominal trauma were admitted to the emergency departments (83% male) with a mean age of 32.63±17.48 years. The sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios of urinalysis were 77.9% (95% CI: 69.6-84.4), 58.5% (95% CI: 51.2-65.5), 56% (95% CI: 48.5-63.3), 79.6% (95% CI: 71.8-85.7), 1.27 (95% CI: 1.30-1.57), and 0.25 (95% CI: 0.18-0.36), respectively. Conclusion: The diagnostic value of urinalysis in prediction of blunt traumatic intra-abdominal injuries is low, and it seems that it should be considered an adjuvant diagnostic tool, in conjunction with other sources such as clinical findings and imaging.

  18. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    Science.gov (United States)

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimate of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method, in conjunction with a Lagrangian puff model, is proposed to simultaneously improve the model prediction and reconstruct the source term for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertain parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
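
    For orientation, the standard perturbed-observation ensemble Kalman filter analysis step that such schemes build on is sketched below; the state size, observation operator, and error levels are assumptions for illustration, not the paper's setup.

    ```python
    # Perturbed-observation EnKF analysis step on a tiny synthetic state.
    import numpy as np

    rng = np.random.default_rng(0)
    n_ens, n_state = 50, 4
    X = rng.normal(1.0, 0.3, size=(n_state, n_ens))   # forecast ensemble (columns)
    H = np.zeros((1, n_state)); H[0, 0] = 1.0         # observe the first state variable
    R = np.array([[0.05**2]])                         # observation error covariance
    y = np.array([1.4])                               # off-site measurement

    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                        # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                         # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain

    for i in range(n_ens):                            # perturbed-observation update
        y_pert = y + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
        X[:, i] += (K @ (y_pert - H @ X[:, i])).ravel()

    print("analysis mean:", X.mean(axis=1).round(3))
    ```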

  19. Efficient Vessel Tracking with Accuracy Guarantees

    DEFF Research Database (Denmark)

    Redoutey, Martin; Scotti, Eric; Jensen, Christian Søndergaard

    2008-01-01

    Safety and security are top concerns in maritime navigation, particularly as maritime traffic continues to grow and as crew sizes are reduced. The Automatic Identification System (AIS) plays a key role in regard to these concerns. This system, whose objective is in part to identify and locate vessels, relies on frequent position reports; the techniques presented here achieve the required accuracies at lower communication costs. They employ movement predictions that are shared between vessels and the VTS. Empirical studies with a prototype implementation and real vessel data demonstrate that the techniques are capable of significantly improving the AIS.

  20. Improving the Stability and Accuracy of Power Hardware-in-the-Loop Simulation Using Virtual Impedance Method

    Directory of Open Access Journals (Sweden)

    Xiaoming Zha

    2016-11-01

    Full Text Available Power hardware-in-the-loop (PHIL) systems are advanced, real-time platforms for combined software and hardware testing. Two paramount issues in PHIL simulations are closed-loop stability and simulation accuracy. This paper presents a virtual impedance (VI) method for PHIL simulations that improves the simulation's stability and accuracy. Through the establishment of an impedance model for a PHIL simulation circuit, which is composed of a voltage-source converter and a simple network, the stability and accuracy of the PHIL system are analyzed. Then, the proposed VI method is implemented in a digital real-time simulator and used to correct the combined impedance in the impedance model, achieving higher stability and accuracy of the results. The validity of the VI method is verified through the PHIL simulation of two typical PHIL examples.

  1. Screening Characteristics of TIMI Score in Predicting Acute Coronary Syndrome Outcome; a Diagnostic Accuracy Study

    Directory of Open Access Journals (Sweden)

    Mostafa Alavi-Moghaddam

    2017-01-01

    Full Text Available Introduction: In cases with a potential diagnosis of ischemic chest pain, screening high-risk patients for adverse outcomes would be very helpful. The present study was designed to determine the diagnostic accuracy of the thrombolysis in myocardial infarction (TIMI) score in patients with a potential diagnosis of ischemic chest pain. Method: This diagnostic accuracy study evaluated the screening performance characteristics of the TIMI score in predicting the 30-day outcomes of mortality, myocardial infarction (MI), and need for revascularization in patients presenting to the ED with typical chest pain and a diagnosis of unstable angina or non-ST elevation MI. Results: 901 patients with a mean age of 58.17 ± 15.00 years (range 19-90) were studied (52.9% male). The mean TIMI score of the studied patients was 0.97 ± 0.93 (range 0-5), and the most frequent scores were 0 to 2, at 37.2%, 35.3%, and 21.4%, respectively. In total, 170 (18.8%) patients experienced the outcomes evaluated in this study. The total sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios of the TIMI score were 20 (95% CI: 17-24), 99 (95% CI: 97-100), 98 (95% CI: 93-100), 42 (95% CI: 39-46), 58 (95% CI: 14-229), and 1.3 (95% CI: 1.2-1.4), respectively. The area under the ROC curve of this system for prediction of 30-day mortality, MI, and need for revascularization was 0.51 (95% CI: 0.47-0.55), 0.58 (95% CI: 0.54-0.62) and 0.56 (95% CI: 0.52-0.60), respectively. Conclusion: Based on the findings of the present study, it seems that the TIMI score has high specificity in predicting the 30-day adverse outcomes of mortality, MI, and need for revascularization following acute coronary syndrome. However, since its sensitivity, negative predictive value, and negative likelihood ratio are poor, it cannot be used as a proper screening tool for ruling out low-risk patients in the ED.

  2. Accuracy of gastrocnemius muscles forces in walking and running goats predicted by one-element and two-element Hill-type models.

    Science.gov (United States)

    Lee, Sabrina S M; Arnold, Allison S; Miara, Maria de Boef; Biewener, Andrew A; Wakeling, James M

    2013-09-03

    Hill-type models are commonly used to estimate muscle forces during human and animal movement, yet the accuracy of the forces estimated during walking, running, and other tasks remains largely unknown. Further, most Hill-type models assume a single contractile element, despite evidence that faster and slower motor units, which have different activation-deactivation dynamics, may be independently or collectively excited. This study evaluated a novel, two-element Hill-type model with "differential" activation of fast and slow contractile elements. Model performance was assessed using a comprehensive data set (including measures of EMG intensity, fascicle length, and tendon force) collected from the gastrocnemius muscles of goats during locomotor experiments. Muscle forces predicted by the new two-element model were compared to the forces estimated using traditional one-element models and to the forces measured in vivo using tendon buckle transducers. Overall, the two-element model resulted in the best predictions of in vivo gastrocnemius force. The coefficient of determination, r², was up to 26.9% higher and the root mean square error, RMSE, was up to 37.4% lower for the two-element model than for the one-element models tested. All models captured salient features of the measured muscle force during walking, trotting, and galloping (r² = 0.26-0.51), and all exhibited some errors (RMSE = 9.63-32.2% of the maximum in vivo force). These comparisons provide important insight into the accuracy of Hill-type models. The results also show that incorporating fast and slow contractile elements within muscle models can improve estimates of time-varying, whole-muscle force during locomotor tasks. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Protein secondary structure prediction for a single-sequence using hidden semi-Markov models

    Directory of Open Access Journals (Sweden)

    Borodovsky Mark

    2006-03-01

    Full Text Available Abstract Background The accuracy of protein secondary structure prediction has been improving steadily towards the 88% estimated theoretical limit. There are two types of prediction algorithms: single-sequence prediction algorithms imply that information about other (homologous) proteins is not available, while algorithms of the second type imply that information about homologous proteins is available and use it intensively. The single-sequence algorithms could make an important contribution to studies of proteins with no detected homologs; however, the accuracy of protein secondary structure prediction from a single sequence is not as high as when additional evolutionary information is present. Results In this paper, we further refine and extend the hidden semi-Markov model (HSMM) initially considered in the BSPSS algorithm. We introduce an improved residue dependency model by considering the patterns of statistically significant amino acid correlation at structural segment borders. We also derive models that specialize on different sections of the dependency structure and incorporate them into the HSMM. In addition, we implement an iterative training method to refine estimates of HSMM parameters. The three-state-per-residue accuracy and other accuracy measures of the new method, IPSSP, are shown to be comparable or better than those of BSPSS, as well as of PSIPRED tested under the single-sequence condition. Conclusions We have shown that new dependency models and training methods bring further improvements to single-sequence protein secondary structure prediction. The results were obtained under cross-validation conditions using a dataset with no pair of sequences having significant sequence similarity. As new sequences are added to the database, it is possible to augment the dependency structure and obtain even higher accuracy. Current and future advances should contribute to the improvement of function prediction for orphan proteins inscrutable to current similarity-based search methods.

  4. Overlay accuracy fundamentals

    Science.gov (United States)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget, however, systematic accuracy contributions must also be addressed, and imaging overlay must be compared to DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  5. Accuracy of formulas used to predict post-transfusion packed cell volume rise in anemic dogs.

    Science.gov (United States)

    Short, Jacqueline L; Diehl, Shenandoah; Seshadri, Ravi; Serrano, Sergi

    2012-08-01

    To assess the accuracy of published formulas used to guide packed red blood cell (pRBC) transfusions in anemic dogs and to compare the predicted rise in packed cell volume (PCV) to the actual post-transfusion rise in PCV. Prospective observational study from April 2009 through July 2009, at a small animal emergency and specialty hospital. Thirty-one anemic client-owned dogs that received pRBC transfusions for treatment of anemia; no interventions were performed. Four formulas were evaluated to determine their predictive ability with respect to the rise in PCV following transfusion with pRBC. The post-transfusion rise in PCV was compared to the rise calculated with the four formulas, and bias and limits of agreement were investigated using Bland-Altman analyses. The accuracy of existing formulas in predicting the rise in PCV following transfusion varied significantly. Formula 1 (volume to be transfused [VT] [mL] = 1 mL × % PCV rise × kg body weight [BW]) overestimated the expected rise in PCV (mean difference, 6.30), while formula 2 (VT [mL] = 2 mL × % PCV rise × kg BW) underestimated the rise in PCV (mean difference, -3.01). Formula 3 (VT [mL] = 90 mL × kg BW × [(desired PCV - patient PCV)/PCV of donor blood]) and formula 4 (VT [mL] = 1.5 mL × % PCV rise × kg BW) performed well (mean difference 0.23 and 0.09, respectively) in predicting the rise in PCV following pRBC transfusion. Formulas 3 and 4 thus agreed with the actual rise in PCV following pRBC transfusion in anemic dogs. Further research is warranted to determine whether these formulas perform similarly well in other species. © Veterinary Emergency and Critical Care Society 2012.
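
    The four formulas are simple enough to compare directly; the sketch below, transcribed from the study description, evaluates them for a hypothetical 20 kg dog being raised from a PCV of 15% to 25% with donor blood at a PCV of 60% (all assumed values).

    ```python
    # The four transfusion-volume formulas under test (VT in mL).
    def f1(rise, kg): return 1.0 * rise * kg            # overestimated PCV rise
    def f2(rise, kg): return 2.0 * rise * kg            # underestimated PCV rise
    def f3(desired, patient, donor, kg):                # performed well
        return 90.0 * kg * (desired - patient) / donor
    def f4(rise, kg): return 1.5 * rise * kg            # performed well

    kg, patient, desired, donor = 20.0, 15.0, 25.0, 60.0
    rise = desired - patient
    print("formula 1:", f1(rise, kg), "mL")
    print("formula 2:", f2(rise, kg), "mL")
    print("formula 3:", f3(desired, patient, donor, kg), "mL")
    print("formula 4:", f4(rise, kg), "mL")
    ```

    Note that formulas 3 and 4 prescribe the same 300 mL for this example, consistent with the finding that both track the actual rise closely.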

  6. Increased prediction accuracy in wheat breeding trials using a marker × environment interaction genomic selection model.

    Science.gov (United States)

    Lopez-Cruz, Marco; Crossa, Jose; Bonnett, David; Dreisigacker, Susanne; Poland, Jesse; Jannink, Jean-Luc; Singh, Ravi P; Autrique, Enrique; de los Campos, Gustavo

    2015-02-06

    Genomic selection (GS) models use genome-wide genetic information to predict the genetic values of candidates for selection. Originally, these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the single-environment GS model that accommodate G×E using either covariance functions or environmental covariates. In this study, we model G×E using a marker × environment interaction (M×E) GS model; the approach is conceptually simple and can be implemented with existing GS software. We discuss how the model can be implemented using an explicit regression of phenotypes on markers or using covariance structures (a genomic best linear unbiased prediction-type model). We used the M×E model to analyze three CIMMYT wheat data sets (W1, W2, and W3), in which more than 1000 lines were genotyped using genotyping-by-sequencing and evaluated at CIMMYT's research station in Ciudad Obregon, Mexico, under simulated environmental conditions that covered different irrigation levels, sowing dates and planting systems. We compared the M×E model with a stratified (i.e., within-environment) analysis and with a standard (across-environment) GS model that assumes that effects are constant across environments (i.e., ignoring G×E). The prediction accuracy of the M×E model was substantially greater than that of an across-environment analysis that ignores G×E. Depending on the prediction problem, the M×E model had either similar or greater prediction accuracy than the stratified analyses. The M×E model decomposes marker effects and genomic values into components that are stable across environments (main effects) and others that are environment-specific (interactions). Therefore, in principle, the interaction model could shed light on which variants have effects that are stable across environments and which ones are responsible for G×E. The data set and the scripts required to reproduce the analysis are publicly available.

  7. Improving failure prediction accuracy in smart environments

    NARCIS (Netherlands)

    Ozen, S.; Warriach, E.U.; Özcelebi, T.; Lukkien, J.J.; Bellido, F.J.; Vun, N.C.H.; Dolar, C.; Diaz-Sanchez, D.; Ling, W.-K.

    2016-01-01

    Smart Environments (SE) and Internet of Things (IoT) are two concepts that connect consumer electronics (CE) devices to each other and to the Internet domain. This enables various applications in which CE devices work together over a network to achieve the goals of their users.

  8. Validity of Predictive Equations for Resting Energy Expenditure Developed for Obese Patients: Impact of Body Composition Method

    Science.gov (United States)

    Achamrah, Najate; Jésus, Pierre; Grigioni, Sébastien; Rimbert, Agnès; Petit, André; Déchelotte, Pierre; Folope, Vanessa; Coëffier, Moïse

    2018-01-01

    Predictive equations have been developed specifically for obese patients to estimate resting energy expenditure (REE). Body composition (BC) assessment is needed for some of these equations. We assessed the impact of BC methods on the accuracy of specific predictive equations developed for obese patients. REE was measured (mREE) by indirect calorimetry, and BC was assessed by bioelectrical impedance analysis (BIA) and dual-energy X-ray absorptiometry (DXA). mREE and the percentages of accurate predictions (within ±10% of mREE) were compared. The predictive equations were studied in 2588 obese patients. Mean mREE was 1788 ± 6.3 kcal/24 h. Only the Müller (BIA) and Harris & Benedict (HB) equations provided REE estimates with no significant difference from mREE. The Huang, Müller, Horie-Waitzberg, and HB formulas provided accurate predictions most often (>60% of cases). The use of BIA provided better predictions of REE than DXA for the Huang and Müller equations; inversely, the Horie-Waitzberg and Lazzer formulas provided higher accuracy using DXA. Accuracy decreased when the equations were applied to patients with BMI ≥ 40, except for the Horie-Waitzberg and Lazzer (DXA) formulas. The Müller equations based on BIA provided a marked improvement in REE prediction accuracy over equations not based on BC. The value of BC assessment for improving the accuracy of REE predictive equations in obese patients remains to be confirmed. PMID:29320432
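
    As a worked example of the kind of equation compared here, the sketch below implements the Harris & Benedict formula with its commonly cited 1919 coefficients, together with the ±10% accuracy criterion; the patient values and measured REE are illustrative only.

    ```python
    # Harris & Benedict REE prediction plus the +/-10% accuracy criterion.
    def harris_benedict(weight_kg, height_cm, age_y, sex):
        """Resting energy expenditure in kcal/24 h (classic 1919 coefficients)."""
        if sex == "male":
            return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age_y
        return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_y

    def accurate(predicted, measured, tol=0.10):
        """True if the prediction falls within +/-10% of measured REE."""
        return abs(predicted - measured) <= tol * measured

    pred = harris_benedict(110.0, 165.0, 45, "female")   # hypothetical patient
    print(f"HB prediction {pred:.0f} kcal/24h, accurate: {accurate(pred, 1788.0)}")
    ```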

  9. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy for chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized to optimize the predictive model parameters. An autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused predictions is less than that of any single model, so prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz and the Mackey–Glass series. The simulation results show that the proposed method achieves better prediction accuracy.
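
    Structurally, the scheme decomposes the series with a discrete wavelet transform, models each component separately, and recombines the component forecasts before fusion. A minimal round-trip sketch, assuming the PyWavelets package and a stand-in series (the per-component LSSVM and ARIMA models are indicated only in comments):

    ```python
    # Wavelet decomposition/reconstruction skeleton for component-wise modelling.
    import numpy as np
    import pywt  # PyWavelets, assumed available

    t = np.linspace(0.0, 8.0 * np.pi, 512)
    series = np.sin(t) + 0.3 * np.sin(7.0 * t)     # stand-in for a chaotic series

    # Decompose into one approximation and three detail components.
    coeffs = pywt.wavedec(series, "db4", level=3)  # [cA3, cD3, cD2, cD1]

    # In the paper, cA3 would be predicted with LSSVM (parameters tuned by the
    # improved free search algorithm) and each cDk with ARIMA; the component
    # predictions would then be recombined and fused with the Gauss-Markov step.
    reconstructed = pywt.waverec(coeffs, "db4")[: len(series)]
    print("round-trip error:", float(np.abs(reconstructed - series).max()))
    ```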

  10. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    Science.gov (United States)

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no) and peer performance anchor (95%, 55%, or no). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of anchoring effect, but if anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  11. Transthoracic CT-guided biopsy with multiplanar reconstruction image improves diagnostic accuracy of solitary pulmonary nodules

    International Nuclear Information System (INIS)

    Ohno, Yoshiharu; Hatabu, Hiroto; Takenaka, Daisuke; Imai, Masatake; Ohbayashi, Chiho; Sugimura, Kazuro

    2004-01-01

    Objective: To evaluate the utility of multiplanar reconstruction (MPR) images for CT-guided biopsy and to determine the factors influencing diagnostic accuracy and the pneumothorax rate. Materials and methods: 390 patients with 396 pulmonary nodules underwent transthoracic CT-guided aspiration biopsy (TNAB) or transthoracic CT-guided cutting needle core biopsy (TCNB) as follows: 250 solitary pulmonary nodules (SPNs) underwent conventional CT-guided biopsy (conventional method), 81 underwent CT-fluoroscopic biopsy (CT-fluoroscopic method) and 65 underwent conventional CT-guided biopsy in combination with MPR images (MPR method). Success rate, overall diagnostic accuracy, pneumothorax rate and total procedure time were compared across the methods. Factors affecting the diagnostic accuracy and pneumothorax rate of CT-guided biopsy were statistically evaluated. Results: The success rates (TNAB: 100.0%, TCNB: 100.0%) and overall diagnostic accuracies (TNAB: 96.9%, TCNB: 97.0%) of the MPR method were significantly higher than those of the conventional method (TNAB: 87.6 and 82.4%, TCNB: 86.3 and 81.3%) (P<0.05). Diagnostic accuracy was influenced by biopsy method, lesion size, and needle path length (P<0.05). Pneumothorax rate was influenced by pathological diagnostic method, lesion size, number of punctures and FEV1.0% (P<0.05). Conclusion: The use of MPR for CT-guided lung biopsy is useful for improving diagnostic accuracy with no significant increase in pneumothorax rate or total procedure time.

  12. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks

    DEFF Research Database (Denmark)

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie

    2015-01-01

    The area under the ROC curve (AUC) and the Brier score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established, and both pointwise confidence intervals and simultaneous confidence bands are derived.

  13. Predicting watershed post-fire sediment yield with the InVEST sediment retention model: Accuracy and uncertainties

    Science.gov (United States)

    Sankey, Joel B.; McVay, Jason C.; Kreitler, Jason R.; Hawbaker, Todd J.; Vaillant, Nicole; Lowe, Scott

    2015-01-01

    Increased sedimentation following wildland fire can negatively impact water supply and water quality. Understanding how changing fire frequency, extent, and location will affect watersheds and the ecosystem services they supply to communities is of great societal importance in the western USA and throughout the world. In this work we assess the utility of the InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs) Sediment Retention Model to accurately characterize erosion and sedimentation of burned watersheds. InVEST was developed by the Natural Capital Project at Stanford University (Tallis et al., 2014) and is a suite of GIS-based implementations of common process models, engineered for high-end computing to allow the faster simulation of larger landscapes and incorporation into decision-making. The InVEST Sediment Retention Model is based on common soil erosion models (e.g., USLE – Universal Soil Loss Equation); it determines which areas of the landscape contribute the greatest sediment loads to a hydrological network and, conversely, evaluates the ecosystem service of sediment retention on a watershed basis. In this study, we evaluate the accuracy and uncertainties of InVEST predictions of increased sedimentation after fire, using measured post-fire sediment yields available for many watersheds throughout the western USA from an existing, published large database. We show that the model can be parameterized in a relatively simple fashion to predict post-fire sediment yield with accuracy. Our ultimate goal is to use the model to accurately predict variability in post-fire sediment yield at a watershed scale as a function of future wildfire conditions.
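
    As a rough illustration of the model family the InVEST tool builds on, the sketch below evaluates the multiplicative USLE relationship A = R · K · LS · C · P in Python; all factor values are invented for illustration, and the post-fire case simply raises the cover-management factor C, which is only one of several ways a burn could be represented.

```python
def usle(R, K, LS, C, P):
    """Average annual soil loss A (t/ha/yr) as the product of rainfall
    erosivity (R), soil erodibility (K), slope length-steepness (LS),
    cover-management (C), and support-practice (P) factors."""
    return R * K * LS * C * P

# Hypothetical factor values for one hillslope, before and after a burn;
# fire removes cover, which is expressed here as a larger C factor.
pre_fire = usle(R=1500, K=0.3, LS=1.2, C=0.01, P=1.0)
post_fire = usle(R=1500, K=0.3, LS=1.2, C=0.20, P=1.0)
print(pre_fire, post_fire)  # soil loss rises ~20x with the larger C
```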

  14. Genomic selection prediction accuracy in a perennial crop: case study of oil palm (Elaeis guineensis Jacq.).

    Science.gov (United States)

    Cros, David; Denis, Marie; Sánchez, Leopoldo; Cochard, Benoit; Flori, Albert; Durand-Gasselin, Tristan; Nouy, Bruno; Omoré, Alphonse; Pomiès, Virginie; Riou, Virginie; Suryana, Edyana; Bouvet, Jean-Marc

    2015-03-01

    Genomic selection empirically appeared valuable for reciprocal recurrent selection in oil palm as it could account for family effects and Mendelian sampling terms, despite small populations and low marker density. Genomic selection (GS) can increase the genetic gain in plants. In perennial crops, this is expected mainly through shortened breeding cycles and increased selection intensity, which requires sufficient GS accuracy in selection candidates, despite often small training populations. Our objective was to obtain the first empirical estimate of GS accuracy in oil palm (Elaeis guineensis), the major world oil crop. We used two parental populations involved in conventional reciprocal recurrent selection (Deli and Group B) with 131 individuals each, genotyped with 265 SSR. We estimated within-population GS accuracies when predicting breeding values of non-progeny-tested individuals for eight yield traits. We used three methods to sample training sets and five statistical methods to estimate genomic breeding values. The results showed that GS could account for family effects and Mendelian sampling terms in Group B but only for family effects in Deli. Presumably, this difference between populations originated from their contrasting breeding history. The GS accuracy ranged from -0.41 to 0.94 and was positively correlated with the relationship between training and test sets. Training sets optimized with the so-called CDmean criterion gave the highest accuracies, ranging from 0.49 (pulp to fruit ratio in Group B) to 0.94 (fruit weight in Group B). The statistical methods did not affect the accuracy. Finally, Group B could be preselected for progeny tests by applying GS to key yield traits, therefore increasing the selection intensity. Our results should be valuable for breeding programs with small populations, long breeding cycles, or reduced effective size.

  15. Accuracy of liver function tests for predicting adverse maternal and fetal outcomes in women with preeclampsia: a systematic review

    NARCIS (Netherlands)

    Thangaratinam, Shakila; Koopmans, Corine M.; Iyengar, Shalini; Zamora, Javier; Ismail, Khaled M. K.; Mol, Ben W. J.; Khan, Khalid S.

    2011-01-01

    Background. Liver function tests are routinely performed in women as part of a battery of investigations to assess severity at admission and later to guide appropriate management. Objective. To determine the accuracy with which liver function tests predict complications in women with preeclampsia by

  16. Accuracy of liver function tests for predicting adverse maternal and fetal outcomes in women with preeclampsia : a systematic review

    NARCIS (Netherlands)

    Thangaratinam, Shakila; Koopmans, Corine M.; Iyengar, Shalini; Zamora, Javier; Ismail, Khaled M. K.; Mol, Ben W. J.; Khan, Khalid S.

    Background. Liver function tests are routinely performed in women as part of a battery of investigations to assess severity at admission and later to guide appropriate management. Objective. To determine the accuracy with which liver function tests predict complications in women with preeclampsia by

  17. Improved darunavir genotypic mutation score predicting treatment response for patients infected with HIV-1 subtype B and non-subtype B receiving a salvage regimen

    DEFF Research Database (Denmark)

    De Luca, Andrea; Flandre, Philippe; Dunn, David

    2016-01-01

    OBJECTIVES: The objective of this study was to improve the prediction of the impact of HIV-1 protease mutations in different viral subtypes on virological response to darunavir. METHODS: Darunavir-containing treatment change episodes (TCEs) in patients previously failing PIs were selected from...... was derived based on best subset least squares estimation with mutational weights corresponding to regression coefficients. Virological outcome prediction accuracy was compared with that from existing genotypic resistance interpretation systems (GISs) (ANRS 2013, Rega 9.1.0 and HIVdb 7.0). RESULTS: TCEs were...

  18. Improved apparatus for predictive diagnosis of rotator cuff disease

    Science.gov (United States)

    Pillai, Anup; Hall, Brittany N.; Thigpen, Charles A.; Kwartowitz, David M.

    2014-03-01

    Rotator cuff disease impacts over 50% of the population over 60, with reports of incidence as high as 90% within this population, causing pain and possible loss of function. The rotator cuff is composed of muscles and tendons that work in tandem to support the shoulder. Heavy use of these muscles can lead to rotator cuff tears, the most common causes being age-related degeneration and sports injuries, both functions of overuse. Tears range in severity from partial-thickness tears to total rupture. Diagnostic techniques are based on physical assessment, detailed patient history, and medical imaging; X-ray, MRI and ultrasonography are the primary modalities chosen for assessment. The final treatment technique and imaging modality, however, are chosen at the clinician's discretion. Ultrasound has been shown to have good accuracy for identification and measurement of full-thickness and partial-thickness rotator cuff tears. In this study, we report on the progress and improvement of our method of transduction and analysis of in situ measurement of rotator cuff biomechanics. We have improved the ability of the clinician to apply a uniform force to the underlying musculotendinous tissues while simultaneously obtaining the ultrasound image. This measurement protocol, combined with region of interest (ROI) based image processing, will help in developing a predictive diagnostic model for treatment of rotator cuff disease and help clinicians choose the best treatment technique.

  19. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    Full Text Available There has been increasing demand for improved service provisioning in hospital resource management. Hospitals work under strict budget constraints while assuring quality care, and achieving quality care under a budget constraint requires an efficient prediction model. Recently, various time-series-based prediction models have been proposed to manage hospital resources such as ambulance monitoring and emergency care. These models are not efficient because they do not consider the nature of the scenario, such as climate conditions. To address this, artificial intelligence is adopted. The issue with existing prediction models is that training suffers from local optima error, which induces overhead and affects prediction accuracy. To overcome the local minima error, this work presents a patient inflow prediction model based on a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcome shows the proposed model reduces RMSE and MAPE relative to an existing backpropagation-based artificial neural network. The overall outcomes show the proposed prediction model improves the accuracy of prediction, which aids in improving the quality of health care management.
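
    The resilient backpropagation (Rprop) scheme the abstract credits with avoiding local-minimum stalls adapts a per-weight step size from the sign of successive gradients rather than their magnitude. Below is a minimal Python sketch of one common Rprop variant; the parameter values and the toy objective are illustrative, not taken from the paper.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
    """One Rprop update: grow each weight's step while the gradient sign is
    stable, shrink it after a sign flip, and move against the sign only."""
    same_sign = grad * prev_grad
    step = np.where(same_sign > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(same_sign < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(same_sign < 0, 0.0, grad)  # skip the move after a sign flip
    return w - np.sign(grad) * step, grad, step

# Toy demo: minimise (w - 3)^2.
w, prev_g, step = np.array([0.0]), np.array([0.0]), np.array([0.1])
for _ in range(60):
    g = 2 * (w - 3)
    w, prev_g, step = rprop_step(w, g, prev_g, step)
print(w)  # converges near 3
```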

  20. Accuracy of Dolphin visual treatment objective (VTO) prediction software on class III patients treated with maxillary advancement and mandibular setback

    Directory of Open Access Journals (Sweden)

    Robert J. Peterman

    2016-06-01

    Full Text Available Abstract Background Dolphin® visual treatment objective (VTO) prediction software is routinely utilized by orthodontists during the treatment planning of orthognathic cases to help predict post-surgical soft tissue changes. Although surgical soft tissue prediction is considered to be a vital tool, its accuracy is not well understood in two-jaw surgical procedures. The objective of this study was to quantify the accuracy of Dolphin Imaging’s VTO soft tissue prediction software on class III patients treated with maxillary advancement and mandibular setback and to validate the efficacy of the software in such complex cases. Methods This retrospective study analyzed the records of 14 patients treated with comprehensive orthodontics in conjunction with two-jaw orthognathic surgery. Pre- and post-treatment radiographs were traced and superimposed to determine the actual skeletal movements achieved in surgery. This information was then used to simulate surgery in the software and generate a final soft tissue patient profile prediction. Prediction images were then compared to the actual post-treatment profile photos to determine differences. Results Dolphin Imaging’s software was determined to be accurate within an error range of +/− 2 mm in the X-axis at most landmarks. The lower lip predictions were the most inaccurate. Conclusions Clinically, the observed error suggests that the VTO may be used for demonstration and communication with a patient or consulting practitioner. However, Dolphin should not be relied on for precise treatment planning of surgical movements. This program should be used with caution to prevent unrealistic patient expectations and dissatisfaction.

  1. Analysis of prostate cancer localization toward improved diagnostic accuracy of transperineal prostate biopsy

    Directory of Open Access Journals (Sweden)

    Yoshiro Sakamoto

    2014-09-01

    Conclusions: The concordance of prostate cancer between prostatectomy specimens and biopsies is comparatively favorable. According to our study, the diagnostic accuracy of transperineal prostate biopsy can be improved in our institute by including the anterior portion of the Apex-Mid and Mid regions in the 12-core biopsy or 16-core biopsy, such that a 4-core biopsy of the anterior portion is included.

  2. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
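
    The two scalar performance measures compared in the study are easy to state concretely. Below is a small self-contained Python sketch of the Brier score and the concordance (c) index for binary outcomes; the two prediction vectors are fabricated stand-ins for a full covariate model and a propensity-score model, not the study's data.

```python
import numpy as np

def brier_score(y_true, p_hat):
    """Mean squared difference between outcome (0/1) and predicted probability."""
    return float(np.mean((np.asarray(p_hat) - np.asarray(y_true)) ** 2))

def concordance_index(y_true, p_hat):
    """Fraction of (event, non-event) pairs ranked correctly; ties count 0.5."""
    y, p = np.asarray(y_true), np.asarray(p_hat)
    diff = p[y == 1][:, None] - p[y == 0][None, :]
    return float(((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size)

# Hypothetical predictions from two models for the same ten patients.
y = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])
full_model = np.array([.1, .2, .8, .3, .7, .9, .2, .1, .6, .4])
ps_model = np.array([.3, .3, .6, .4, .5, .7, .4, .3, .5, .4])
for name, p in (("full", full_model), ("propensity", ps_model)):
    print(name, brier_score(y, p), concordance_index(y, p))
```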

  3. Four-hour quantitative real-time polymerase chain reaction-based comprehensive chromosome screening and accumulating evidence of accuracy, safety, predictive value, and clinical efficacy.

    Science.gov (United States)

    Treff, Nathan R; Scott, Richard T

    2013-03-15

    Embryonic comprehensive chromosomal euploidy may represent a powerful biomarker to improve the success of IVF. However, there are a number of aneuploidy screening strategies to consider, including different technologic platforms with which to interrogate the embryonic DNA, and different embryonic developmental stages from which DNA can be analyzed. Although there are advantages and disadvantages associated with each strategy, a series of experiments producing evidence of accuracy, safety, clinical predictive value, and clinical efficacy indicate that trophectoderm biopsy and quantitative real-time polymerase chain reaction (qPCR)-based comprehensive chromosome screening (CCS) may represent a useful strategy to improve the success of IVF. This Biomarkers in Reproductive Medicine special issue review summarizes the accumulated experience with the development and clinical application of a 4-hour blastocyst qPCR-based CCS technology. Copyright © 2013 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  4. Improving Accuracy and Simplifying Training in Fingerprinting-Based Indoor Location Algorithms at Room Level

    Directory of Open Access Journals (Sweden)

    Mario Muñoz-Organero

    2016-01-01

    Full Text Available Fingerprinting-based algorithms are popular in indoor location systems based on mobile devices. By comparing the RSSI (Received Signal Strength Indicator) from different radio wave transmitters, such as Wi-Fi access points, with prerecorded fingerprints from located points (using different artificial intelligence algorithms), fingerprinting-based systems can locate unknown points with a resolution of a few meters. However, training the system with already located fingerprints tends to be an expensive task both in time and in resources, especially if large areas are to be considered. Moreover, the decision algorithms tend to be memory- and CPU-intensive in such cases, as does obtaining the estimated location for a new fingerprint. In this paper, we study, propose, and validate a way to select the locations for the training fingerprints which reduces the number of required points while improving the accuracy of the algorithms when locating points at room-level resolution. We present a comparison of different artificial intelligence decision algorithms and select those with better results. We compare our approach with other systems in the literature and draw conclusions about the improvements obtained. Moreover, some techniques such as filtering nonstable access points for improving accuracy are introduced, studied, and validated.
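
    To make the fingerprinting idea concrete, here is a minimal room-level sketch in Python using k-nearest-neighbour matching on RSSI vectors; the access-point readings and room labels are invented, and the paper's contribution (choosing which training locations to record) sits on top of a matcher like this rather than inside it.

```python
import numpy as np
from collections import Counter

# Hypothetical training fingerprints: RSSI (dBm) from three Wi-Fi access
# points, each vector labelled with the room where it was recorded.
train_rssi = np.array([[-40, -70, -80], [-42, -68, -82],   # room A
                       [-75, -45, -60], [-78, -43, -58],   # room B
                       [-65, -62, -41], [-68, -60, -44]])  # room C
train_room = np.array(["A", "A", "B", "B", "C", "C"])

def locate(observed, k=3):
    """Majority vote among the k training fingerprints closest in RSSI space."""
    dist = np.linalg.norm(train_rssi - observed, axis=1)
    nearest = train_room[np.argsort(dist)[:k]]
    return Counter(nearest).most_common(1)[0][0]

print(locate(np.array([-44, -66, -79])))  # -> "A"
```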

  5. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  6. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    Full Text Available For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  7. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
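
    The comparison at the heart of the paper, a fixed 1.33 × eSCT ratio versus a fitted regression, can be sketched in a few lines of Python. The numbers below are fabricated records, and only the eSCT predictor is used; the paper's full model also includes operation type, ASA class, and anesthesia type.

```python
import numpy as np

# Hypothetical cases: estimated surgeon-controlled time (eSCT, minutes)
# and realised total procedure time (TPT, minutes).
esct = np.array([60, 90, 120, 45, 150, 75, 100, 30], dtype=float)
tpt = np.array([85, 118, 162, 66, 195, 104, 130, 47], dtype=float)

pred_fixed = 1.33 * esct                     # fixed-ratio baseline

X = np.column_stack([np.ones_like(esct), esct])
beta, *_ = np.linalg.lstsq(X, tpt, rcond=None)
pred_lr = X @ beta                           # least-squares alternative

for name, pred in (("fixed ratio", pred_fixed), ("regression", pred_lr)):
    print(name, round(float(np.sqrt(np.mean((pred - tpt) ** 2))), 1))  # RMSE
```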

  8. Solving the stability-accuracy-diversity dilemma of recommender systems

    Science.gov (United States)

    Hou, Lei; Liu, Kecheng; Liu, Jianguo; Zhang, Runtong

    2017-02-01

    Recommender systems are of great significance in predicting potentially interesting items based on the target user's historical selections. However, the recommendation list for a specific user has been found to change vastly when the system changes, due to the unstable quantification of item similarities, which is defined as the recommendation stability problem. Improving the similarity stability and recommendation stability is crucial for enhancing the user experience and better understanding user interests. While the stability as well as accuracy of recommendation could be guaranteed by recommending only popular items, studies have been addressing the necessity of diversity, which requires the system to recommend unpopular items. By ranking the similarities in terms of stability and considering only the most stable ones, we present a top-n-stability method based on the Heat Conduction algorithm (denoted as TNS-HC henceforth) for solving the stability-accuracy-diversity dilemma. Experiments on four benchmark data sets indicate that the TNS-HC algorithm could significantly improve recommendation stability and accuracy simultaneously and still retain the high-diversity nature of the Heat Conduction algorithm. Furthermore, we compare the performance of the TNS-HC algorithm with a number of benchmark recommendation algorithms. The result suggests that the TNS-HC algorithm is more efficient in solving the stability-accuracy-diversity triple dilemma of recommender systems.
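
    For readers unfamiliar with the Heat Conduction algorithm underlying TNS-HC, the sketch below implements plain heat-conduction scoring on a toy user-item matrix in Python; the stability-ranking layer that TNS-HC adds on top is not shown, and the data are invented.

```python
import numpy as np

# Hypothetical binary user-item adjacency (rows = users, columns = items).
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

k_user = A.sum(axis=1)  # user degrees
k_item = A.sum(axis=0)  # item degrees

# Heat-conduction item-item matrix:
# W[i, j] = (1 / k_item[i]) * sum_u A[u, i] * A[u, j] / k_user[u]
W = (A.T / k_user) @ A / k_item[:, None]

user = 0
scores = W @ A[user]               # temperature of every item for this user
scores[A[user] == 1] = -np.inf     # do not re-recommend collected items
print(np.argsort(-scores))         # recommendation ranking
```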

  9. Ensemble-based prediction of RNA secondary structures.

    Science.gov (United States)

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between

  10. Thoracic injury rule out criteria and NEXUS chest in predicting the risk of traumatic intra-thoracic injuries: A diagnostic accuracy study.

    Science.gov (United States)

    Safari, Saeed; Radfar, Fatemeh; Baratloo, Alireza

    2018-05-01

    This study aimed to compare the diagnostic accuracy of the NEXUS chest and Thoracic Injury Rule out Criteria (TIRC) models in predicting the risk of intra-thoracic injuries following blunt multiple trauma. In this diagnostic accuracy study, using the 2 mentioned models, blunt multiple trauma patients over the age of 15 years presenting to the emergency department were screened regarding the presence of intra-thoracic injuries detectable via chest x-ray, and the screening performance characteristics of the models were compared. In this study, 3118 patients with a mean (SD) age of 37.4 (16.9) years were studied (57.4% male). Based on TIRC and NEXUS chest, respectively, 1340 (43%) and 1417 (45.4%) patients were deemed in need of radiography. Sensitivity, specificity, and positive and negative predictive values of TIRC were 98.95%, 62.70%, 21.19% and 99.83%; these values were 98.61%, 59.94%, 19.97% and 99.76% for NEXUS chest, respectively. Accuracies of the TIRC and NEXUS chest models were 66.04 (95% CI: 64.34-67.70) and 63.50 (95% CI: 61.78-65.19), respectively. The TIRC and NEXUS chest models have proper and similar sensitivity in predicting blunt traumatic intra-thoracic injuries that are detectable via chest x-ray; however, TIRC had a significantly higher specificity in this regard. Copyright © 2018 Elsevier Ltd. All rights reserved.
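
    The reported screening characteristics all derive from a 2×2 confusion table; the short Python sketch below shows the arithmetic with illustrative counts (not the study's data).

```python
# Illustrative 2x2 screening counts: tp = injured patients flagged for
# radiography, tn = uninjured patients correctly spared it.
tp, fp, fn, tn = 95, 350, 1, 1200

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(sensitivity, specificity, ppv, npv, accuracy)
```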

  11. Improved Modeling and Prediction of Surface Wave Amplitudes

    Science.gov (United States)

    2017-05-31

    AFRL-RV-PS-TR-2017-0162. Improved Modeling and Prediction of Surface Wave Amplitudes. Jeffry L. Stevens, et al., Leidos. Contract number FA9453-14-C-0225.

  12. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    Directory of Open Access Journals (Sweden)

    Francisco J Valverde-Albacete

    Full Text Available The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers.

  13. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    Science.gov (United States)

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby, allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
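
    A minimal sketch of the gene-masking idea in Python follows: a binary genetic algorithm evolves a mask over features, scoring each mask by the accuracy of a simple classifier trained on the unmasked genes. The data, the nearest-centroid classifier, and all GA settings here are stand-ins; the paper integrates the mask with its own classifiers during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical microarray-style data: 200 samples x 50 genes, two classes;
# only the first five genes carry signal.
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)
X[y == 1, :5] += 1.0

def fitness(mask):
    """Nearest-centroid accuracy using only unmasked genes."""
    if mask.sum() == 0:
        return 0.0
    Xm = X[:, mask.astype(bool)]
    c0, c1 = Xm[y == 0].mean(axis=0), Xm[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xm - c1, axis=1) < np.linalg.norm(Xm - c0, axis=1)
    return float((pred == y).mean())

# Minimal binary GA: truncation selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(30, X.shape[1]))
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(-scores)][:10]
    children = []
    for _ in range(len(pop)):
        p1, p2 = parents[rng.integers(0, 10, size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, p1, p2)
        child = child ^ (rng.random(X.shape[1]) < 0.02)  # mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print(int(best.sum()), "of", X.shape[1], "genes kept")
```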

  14. Effect of Trait Heritability, Training Population Size and Marker Density on Genomic Prediction Accuracy Estimation in 22 bi-parental Tropical Maize Populations.

    Science.gov (United States)

    Zhang, Ao; Wang, Hongwu; Beyene, Yoseph; Semagn, Kassa; Liu, Yubo; Cao, Shiliang; Cui, Zhenhai; Ruan, Yanye; Burgueño, Juan; San Vicente, Felix; Olsen, Michael; Prasanna, Boddupalli M; Crossa, José; Yu, Haiqiu; Zhang, Xuecai

    2017-01-01

    Genomic selection is being used increasingly in plant breeding to accelerate genetic gain per unit time. One of the most important applications of genomic selection in maize breeding is to predict and select the best un-phenotyped lines in bi-parental populations based on genomic estimated breeding values. In the present study, 22 bi-parental tropical maize populations genotyped with low density SNPs were used to evaluate the genomic prediction accuracy (rMG) of the six trait-environment combinations under various levels of training population size (TPS) and marker density (MD), and to assess the effect of trait heritability (h2), TPS and MD on rMG estimation. Our results showed that: (1) moderate rMG values were obtained for different trait-environment combinations when 50% of the total genotypes was used as the training population and ~200 SNPs were used for prediction; (2) rMG increased with an increase in h2, TPS and MD; both correlation and variance analyses showed that h2 is the most important factor and MD is the least important factor in rMG estimation for most of the trait-environment combinations; (3) predictions between pairwise half-sib populations showed that the rMG values for all six trait-environment combinations were centered around zero, and 49% of predictions had rMG values above zero; (4) the trend observed in rMG differed from the trend observed in rMG/h, where h is the square root of the heritability of the predicted trait, indicating that both rMG and rMG/h values should be presented in GS studies to show the accuracy of genomic selection and the relative accuracy of genomic selection compared with phenotypic selection, respectively. This study provides useful information to maize breeders to design genomic selection workflow in their breeding programs.
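
    The two quantities the authors recommend reporting are straightforward to compute once genomic estimated breeding values (GEBVs) are in hand; the Python sketch below uses simulated vectors and an assumed heritability purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
gebv = rng.normal(size=100)                 # hypothetical GEBVs of a test set
pheno = 0.5 * gebv + rng.normal(size=100)   # hypothetical observed phenotypes
h2 = 0.6                                    # assumed trait heritability

r_mg = np.corrcoef(gebv, pheno)[0, 1]       # prediction accuracy rMG
print(r_mg, r_mg / np.sqrt(h2))             # rMG and relative accuracy rMG/h
```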

  15. Effect of Trait Heritability, Training Population Size and Marker Density on Genomic Prediction Accuracy Estimation in 22 bi-parental Tropical Maize Populations

    Directory of Open Access Journals (Sweden)

    Ao Zhang

    2017-11-01

    Full Text Available Genomic selection is being used increasingly in plant breeding to accelerate genetic gain per unit time. One of the most important applications of genomic selection in maize breeding is to predict and select the best un-phenotyped lines in bi-parental populations based on genomic estimated breeding values. In the present study, 22 bi-parental tropical maize populations genotyped with low density SNPs were used to evaluate the genomic prediction accuracy (rMG) of the six trait-environment combinations under various levels of training population size (TPS) and marker density (MD), and to assess the effect of trait heritability (h2), TPS and MD on rMG estimation. Our results showed that: (1) moderate rMG values were obtained for different trait-environment combinations when 50% of the total genotypes was used as the training population and ~200 SNPs were used for prediction; (2) rMG increased with an increase in h2, TPS and MD; both correlation and variance analyses showed that h2 is the most important factor and MD is the least important factor in rMG estimation for most of the trait-environment combinations; (3) predictions between pairwise half-sib populations showed that the rMG values for all six trait-environment combinations were centered around zero, and 49% of predictions had rMG values above zero; (4) the trend observed in rMG differed from the trend observed in rMG/h, where h is the square root of the heritability of the predicted trait, indicating that both rMG and rMG/h values should be presented in GS studies to show the accuracy of genomic selection and the relative accuracy of genomic selection compared with phenotypic selection, respectively. This study provides useful information to maize breeders to design genomic selection workflow in their breeding programs.

  16. An efficient optimization method to improve the measuring accuracy of oxygen saturation by using triangular wave optical signal

    Science.gov (United States)

    Li, Gang; Yu, Yue; Zhang, Cui; Lin, Ling

    2017-09-01

    The oxygen saturation is one of the important parameters for evaluating human health. This paper presents an efficient optimization method that can improve the accuracy of oxygen saturation measurement, which employs an optical frequency division triangular wave signal as the excitation signal to obtain the dynamic spectrum and calculate oxygen saturation. Compared with the traditional method, whose measured RMSE (root mean square error) of SpO2 is 0.1705, the proposed method significantly reduced the measured RMSE to 0.0965. It is notable that the accuracy of oxygen saturation measurement has been improved significantly. The method can simplify the circuit and reduce the demands on components. Furthermore, it is a valuable reference for improving the signal to noise ratio of other physiological signals.

  17. Improving contact prediction along three dimensions.

    Directory of Open Access Journals (Sweden)

    Christoph Feinauer

    2014-10-01

    Full Text Available Correlation patterns in multiple sequence alignments of homologous proteins can be exploited to infer information on the three-dimensional structure of their members. The typical pipeline to address this task, which we in this paper refer to as the three dimensions of contact prediction, is to (i) filter and align the raw sequence data representing the evolutionarily related proteins; (ii) choose a predictive model to describe a sequence alignment; (iii) infer the model parameters and interpret them in terms of structural properties, such as an accurate contact map. We show here that all three dimensions are important for overall prediction success. In particular, we show that it is possible to improve significantly along the second dimension by going beyond the pair-wise Potts models from statistical physics, which have hitherto been the focus of the field. These (simple) extensions are motivated by multiple sequence alignments often containing long stretches of gaps which, as a data feature, would be rather untypical for independent samples drawn from a Potts model. Using a large test set of proteins we show that the combined improvements along the three dimensions are as large as any reported to date.

  18. Improving diagnostic accuracy using agent-based distributed data mining system.

    Science.gov (United States)

    Sridhar, S

    2013-09-01

    The use of data mining techniques to improve diagnostic system accuracy is investigated in this paper. Data mining algorithms aim to discover patterns and extract useful knowledge from facts recorded in databases. Generally, expert systems are constructed for automating diagnostic procedures. The learning component uses data mining algorithms to extract expert system rules from the database automatically, and such learning algorithms can assist clinicians in extracting knowledge automatically. As the number and variety of data sources are dramatically increasing, another way to acquire knowledge from databases is to apply various data mining algorithms that extract knowledge from data. As data sets are inherently distributed, the distributed system uses agents to transport the trained classifiers and uses meta-learning to combine the knowledge. Commonsense reasoning is also used in association with distributed data mining to obtain better results. Combining human expert knowledge and data mining knowledge improves the performance of the diagnostic system. This work suggests a framework combining human knowledge and knowledge gained by data mining algorithms on a renal and gallstone data set.

  19. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  20. Text mining improves prediction of protein functional sites.

    Directory of Open Access Journals (Sweden)

    Karin M Verspoor

    Full Text Available We present an approach that integrates protein structure analysis and text mining for protein functional site prediction, called LEAP-FS (Literature Enhanced Automated Prediction of Functional Sites). The structure analysis was carried out using Dynamics Perturbation Analysis (DPA), which predicts functional sites at control points where interactions greatly perturb protein vibrations. The text mining extracts mentions of residues in the literature, and predicts that residues mentioned are functionally important. We assessed the significance of each of these methods by analyzing their performance in finding known functional sites (specifically, small-molecule binding sites and catalytic sites) in about 100,000 publicly available protein structures. The DPA predictions recapitulated many of the functional site annotations and preferentially recovered binding sites annotated as biologically relevant vs. those annotated as potentially spurious. The text-based predictions were also substantially supported by the functional site annotations: compared to other residues, residues mentioned in text were roughly six times more likely to be found in a functional site. The overlap of predictions with annotations improved when the text-based and structure-based methods agreed. Our analysis also yielded new high-quality predictions of many functional site residues that were not catalogued in the curated data sources we inspected. We conclude that both DPA and text mining independently provide valuable high-throughput protein functional site predictions, and that integrating the two methods using LEAP-FS further improves the quality of these predictions.

  1. Text Mining Improves Prediction of Protein Functional Sites

    Science.gov (United States)

    Cohn, Judith D.; Ravikumar, Komandur E.

    2012-01-01

    We present an approach that integrates protein structure analysis and text mining for protein functional site prediction, called LEAP-FS (Literature Enhanced Automated Prediction of Functional Sites). The structure analysis was carried out using Dynamics Perturbation Analysis (DPA), which predicts functional sites at control points where interactions greatly perturb protein vibrations. The text mining extracts mentions of residues in the literature, and predicts that residues mentioned are functionally important. We assessed the significance of each of these methods by analyzing their performance in finding known functional sites (specifically, small-molecule binding sites and catalytic sites) in about 100,000 publicly available protein structures. The DPA predictions recapitulated many of the functional site annotations and preferentially recovered binding sites annotated as biologically relevant vs. those annotated as potentially spurious. The text-based predictions were also substantially supported by the functional site annotations: compared to other residues, residues mentioned in text were roughly six times more likely to be found in a functional site. The overlap of predictions with annotations improved when the text-based and structure-based methods agreed. Our analysis also yielded new high-quality predictions of many functional site residues that were not catalogued in the curated data sources we inspected. We conclude that both DPA and text mining independently provide valuable high-throughput protein functional site predictions, and that integrating the two methods using LEAP-FS further improves the quality of these predictions. PMID:22393388

  2. Comparison of accuracy in predicting emotional instability from MMPI data: fisherian versus contingent probability statistics

    Energy Technology Data Exchange (ETDEWEB)

    Berghausen, P.E. Jr.; Mathews, T.W.

    1987-01-01

    The security plans of nuclear power plants generally require that all personnel who are to have access to protected areas or vital islands be screened for emotional stability. In virtually all instances, the screening involves the administration of one or more psychological tests, usually including the Minnesota Multiphasic Personality Inventory (MMPI). At some plants, all employees receive a structured clinical interview after they have taken the MMPI and results have been obtained. At other plants, only those employees with dirty MMPI results are interviewed. This latter protocol is referred to as interviews by exception. Behaviordyne Psychological Corp. has succeeded in removing some of the uncertainty associated with interview-by-exception protocols by developing an empirically based, predictive equation. This equation permits utility companies to make informed choices regarding the risks they are assuming. A conceptual problem exists with the predictive equation, however. Like most predictive equations currently in use, it is based on Fisherian statistics, involving least-squares analyses. Consequently, Behaviordyne Psychological Corp., in conjunction with T.W. Mathews and Associates, has just developed a second predictive equation, one based on contingent probability statistics. The particular technique used is the multi-contingent analysis of probability systems (MAPS) approach. The present paper presents a comparison of the predictive accuracy of the two equations: the one derived using Fisherian techniques versus the one derived using contingent probability techniques.

  3. Comparison of accuracy in predicting emotional instability from MMPI data: fisherian versus contingent probability statistics

    International Nuclear Information System (INIS)

    Berghausen, P.E. Jr.; Mathews, T.W.

    1987-01-01

    The security plans of nuclear power plants generally require that all personnel who are to have access to protected areas or vital islands be screened for emotional stability. In virtually all instances, the screening involves the administration of one or more psychological tests, usually including the Minnesota Multiphasic Personality Inventory (MMPI). At some plants, all employees receive a structured clinical interview after they have taken the MMPI and results have been obtained. At other plants, only those employees with dirty MMPI results are interviewed. This latter protocol is referred to as interviews by exception. Behaviordyne Psychological Corp. has succeeded in removing some of the uncertainty associated with interview-by-exception protocols by developing an empirically based, predictive equation. This equation permits utility companies to make informed choices regarding the risks they are assuming. A conceptual problem exists with the predictive equation, however. Like most predictive equations currently in use, it is based on Fisherian statistics, involving least-squares analyses. Consequently, Behaviordyne Psychological Corp., in conjunction with T.W. Mathews and Associates, has just developed a second predictive equation, one based on contingent probability statistics. The particular technique used is the multi-contingent analysis of probability systems (MAPS) approach. The present paper presents a comparison of the predictive accuracy of the two equations: the one derived using Fisherian techniques versus the one derived using contingent probability techniques.

  4. Genomic Prediction Within and Across Biparental Families: Means and Variances of Prediction Accuracy and Usefulness of Deterministic Equations

    Directory of Open Access Journals (Sweden)

    Pascal Schopp

    2017-11-01

    Full Text Available A major application of genomic prediction (GP) in plant breeding is the identification of superior inbred lines within families derived from biparental crosses. When models for various traits were trained within related or unrelated biparental families (BPFs), experimental studies found substantial variation in prediction accuracy (PA), but little is known about the underlying factors. We used SNP marker genotypes of inbred lines from either elite germplasm or landraces of maize (Zea mays L.) as parents to generate in silico 300 BPFs of doubled-haploid lines. We analyzed PA within each BPF for 50 simulated polygenic traits, using genomic best linear unbiased prediction (GBLUP) models trained with individuals from either full-sib (FSF), half-sib (HSF), or unrelated families (URF) for various sizes (Ntrain) of the training set and different heritabilities (h2). In addition, we modified two deterministic equations for forecasting PA to account for inbreeding and genetic variance unexplained by the training set. Averaged across traits, PA was high within FSF (0.41–0.97), with large variation only for Ntrain < 50 and h2 < 0.6. For HSF and URF, PA was on average ~40–60% lower and varied substantially among different combinations of BPFs used for model training and prediction as well as different traits. As exemplified by HSF results, PA of across-family GP can be very low if causal variants not segregating in the training set account for a sizeable proportion of the genetic variance among predicted individuals. Deterministic equations accurately forecast the PA expected over many traits, yet cannot capture trait-specific deviations. We conclude that model training within BPFs generally yields stable PA, whereas a high level of uncertainty is encountered in across-family GP. Our study shows the extent of variation in PA that must be at least reckoned with in practice and offers a starting point for the design of training sets composed of multiple BPFs.
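
    The deterministic forecasting equations the authors modify belong to a well-known family; a minimal Python sketch of the classic form r = sqrt(N·h2 / (N·h2 + Me)), Daetwyler-style, with Me the effective number of chromosome segments, is shown below. The paper's extensions for inbreeding and unexplained genetic variance are not reproduced here, and the parameter values are illustrative.

```python
import numpy as np

def expected_accuracy(n_train, h2, m_e):
    """Deterministic forecast of genomic prediction accuracy:
    r = sqrt(N * h2 / (N * h2 + Me))."""
    return np.sqrt(n_train * h2 / (n_train * h2 + m_e))

for n in (50, 200, 1000):  # training set sizes
    print(n, round(float(expected_accuracy(n, h2=0.6, m_e=100)), 2))
```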

  5. An Inventory Controlled Supply Chain Model Based on Improved BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wei He

    2013-01-01

    Full Text Available Inventory control is a key factor for reducing supply chain cost and increasing customer satisfaction. However, prediction of inventory level is a challenging task for managers. As one of the widely used techniques for inventory control, the standard BP neural network has such problems as a low convergence rate and poor prediction accuracy. Aiming at these problems, a new fast-convergent BP neural network model for predicting inventory level is developed in this paper. By adding an error offset, this paper deduces the new chain propagation rule and the new weight formula. This paper also applies the improved BP neural network model to predict the inventory level of an automotive parts company. The results show that the improved algorithm not only significantly outperforms the standard algorithm but also outperforms some other improved BP algorithms in both convergence rate and prediction accuracy.

  6. Prediction of Tubal Ectopic Pregnancy Using Offline Analysis of 3-Dimensional Transvaginal Ultrasonographic Data Sets: An Interobserver and Diagnostic Accuracy Study.

    Science.gov (United States)

    Infante, Fernando; Espada Vaquero, Mercedes; Bignardi, Tommaso; Lu, Chuan; Testa, Antonia C; Fauchon, David; Epstein, Elisabeth; Leone, Francesco P G; Van den Bosch, Thierry; Martins, Wellington P; Condous, George

    2017-12-08

    To assess interobserver reproducibility in detecting tubal ectopic pregnancies by reading data sets from 3-dimensional (3D) transvaginal ultrasonography (TVUS) and comparing it with real-time 2-dimensional (2D) TVUS. Images were initially classified as showing pregnancies of unknown location or tubal ectopic pregnancies on real-time 2D TVUS by an experienced sonologist, who acquired 5 3D volumes. Data sets were analyzed offline by 5 observers who had to classify each case as ectopic pregnancy or pregnancy of unknown location. The interobserver reproducibility was evaluated by the Fleiss κ statistic. The performance of each observer in predicting ectopic pregnancies was compared to that of the experienced sonologist. Women were followed until they were reclassified as follows: (1) failed pregnancy of unknown location; (2) intrauterine pregnancy; (3) ectopic pregnancy; or (4) persistent pregnancy of unknown location. Sixty-one women were included. The agreement between reading offline 3D data sets and the first real-time 2D TVUS was very good (80%-82%; κ = 0.89). The overall interobserver agreement among observers reading offline 3D data sets was moderate (κ = 0.52). The diagnostic performance of experienced observers reading offline 3D data sets had accuracy of 78.3% to 85.0%, sensitivity of 66.7% to 81.3%, specificity of 79.5% to 88.4%, positive predictive value of 57.1% to 72.2%, and negative predictive value of 87.5% to 91.3%, compared to the experienced sonologist's real-time 2D TVUS: accuracy of 94.5%, sensitivity of 94.4%, specificity of 94.5%, positive predictive value of 85.0%, and negative predictive value of 98.1%. The diagnostic accuracy of 3D TVUS by reading offline data sets for predicting ectopic pregnancies is dependent on experience. Reading only static 3D data sets without clinical information does not match the diagnostic performance of real-time 2D TVUS combined with clinical information obtained during the scan. © 2017 by the American
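
    Interobserver agreement of the kind quantified here (five observers, categorical calls) is commonly summarized with the Fleiss κ statistic named in the abstract. Below is a small self-contained Python implementation with fabricated ratings; the case counts are not the study's.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa; counts[subject, category] = number of raters assigning
    that subject to that category (equal rater count per subject)."""
    counts = np.asarray(counts, dtype=float)
    n_subjects = counts.shape[0]
    n_raters = counts[0].sum()
    p_cat = counts.sum(axis=0) / (n_subjects * n_raters)
    p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_i.mean(), (p_cat ** 2).sum()
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 10 cases, 5 observers, two categories
# (ectopic pregnancy vs pregnancy of unknown location).
ratings = np.array([[5, 0], [4, 1], [3, 2], [5, 0], [0, 5],
                    [1, 4], [5, 0], [2, 3], [4, 1], [5, 0]])
print(round(float(fleiss_kappa(ratings)), 2))
```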

  7. Persistency of Prediction Accuracy and Genetic Gain in Synthetic Populations Under Recurrent Genomic Selection

    Directory of Open Access Journals (Sweden)

    Dominik Müller

    2017-03-01

    Full Text Available Recurrent selection (RS) has been used in plant breeding to successively improve synthetic and other multiparental populations. Synthetics are generated from a limited number of parents (Np), but little is known about how Np affects genomic selection (GS) in RS, especially the persistency of prediction accuracy (rg,ĝ) and genetic gain. Synthetics were simulated by intermating Np = 2–32 parent lines from an ancestral population with short- or long-range linkage disequilibrium (LDA) and subjected to multiple cycles of GS. We determined rg,ĝ and genetic gain across 30 cycles for different training set (TS) sizes, marker densities, and generations of recombination before model training. Contributions to rg,ĝ and genetic gain from pedigree relationships, as well as from cosegregation and LDA between QTL and markers, were analyzed via four scenarios differing in (i) the relatedness between TS and selection candidates and (ii) whether selection was based on markers or pedigree records. Persistency of rg,ĝ was high for small Np, where predominantly cosegregation contributed to rg,ĝ, but also for large Np, where LDA replaced cosegregation as the dominant information source. Together with increasing genetic variance, this compensation resulted in relatively constant long- and short-term genetic gain for increasing Np > 4, given long-range LDA in the ancestral population. Although our scenarios suggest that information from pedigree relationships contributed to rg,ĝ for only very few generations in GS, we expect a longer contribution than in pedigree BLUP, because capturing Mendelian sampling by markers reduces selective pressure on pedigree relationships. Larger TS size (NTS) and higher marker density improved persistency of rg,ĝ and hence genetic gain, but additional recombinations could not increase genetic gain.

  8. Study on MPGA-BP of Gravity Dam Deformation Prediction

    Directory of Open Access Journals (Sweden)

    Xiaoyu Wang

    2017-01-01

    Full Text Available Displacement is an important physical quantity in deformation monitoring of hydraulic structures, and its prediction accuracy is the premise of ensuring safe operation. Most existing metaheuristic methods have three problems: (1) falling into local minima easily, (2) slow convergence, and (3) sensitivity to initial values. Resolving these three problems and improving the prediction accuracy necessitate the application of a genetic algorithm-based backpropagation (GA-BP) neural network and a multiple population genetic algorithm (MPGA). A hybrid multiple population genetic algorithm backpropagation (MPGA-BP) neural network algorithm is put forward to optimize deformation prediction from periodic monitoring surveys of hydraulic structures. This hybrid model is employed for analyzing the displacement of a gravity dam in China. The results show the proposed model is superior to an ordinary BP neural network and a statistical regression model in terms of global search, convergence speed, and prediction accuracy.

  9. Effectiveness of blood pressure educational and evaluation program for the improvement of measurement accuracy among nurses.

    Science.gov (United States)

    Rabbia, Franco; Testa, Elisa; Rabbia, Silvia; Praticò, Santina; Colasanto, Claudia; Montersino, Federica; Berra, Elena; Covella, Michele; Fulcheri, Chiara; Di Monaco, Silvia; Buffolo, Fabrizio; Totaro, Silvia; Veglio, Franco

    2013-06-01

    To assess the procedure for measuring blood pressure (BP) among hospital nurses and to assess whether a training program would improve technique and accuracy. 160 nurses from Molinette Hospital were included in the study. The program was based upon theoretical and practical lessons, was one day long, and was held by trained nurses and physicians who practice in the Hypertension Unit. An evaluation of nurses' measuring technique and accuracy was performed before and after the program using a 9-item checklist. Moreover, we calculated the differences between measured and effective BP values before and after the training program. At baseline evaluation, we observed inadequate performance on some points of clinical BP measurement technique; specifically, only 10% of nurses inspected the arm diameter before placing the cuff, 4% measured BP in both arms, 80% placed the head of the stethoscope under the cuff, and 43% did not remove all clothing covering the location of cuff placement and did not have the patient seated comfortably with legs uncrossed and with back and arms supported. After the training we found a significant improvement in technique for all items. We did not observe any significant difference in measurement knowledge between nurses working in different settings such as medical or surgical departments. Periodic education in BP measurement may be required, and this may significantly improve the technique and consequently the accuracy.

  10. An improved distance-to-dose correlation for predicting bladder and rectum dose-volumes in knowledge-based VMAT planning for prostate cancer

    Science.gov (United States)

    Wall, Phillip D. H.; Carver, Robert L.; Fontenot, Jonas D.

    2018-01-01

    The overlap volume histogram (OVH) is an anatomical metric commonly used to quantify the geometric relationship between an organ at risk (OAR) and the target volume when predicting expected dose-volumes in knowledge-based planning (KBP). This work investigated the influence of additional variables contributing to variations in the assumed linear DVH-OVH correlation for the bladder and rectum in VMAT plans of prostate patients, with the goal of increasing the prediction accuracy and achievability of knowledge-based planning methods. VMAT plans were retrospectively generated for 124 prostate patients using multi-criteria optimization. DVHs quantified patient dosimetric data while OVHs quantified patient anatomical information. The DVH-OVH correlations were calculated for fractional bladder and rectum volumes of 30, 50, 65, and 80%. Correlations between potential influencing factors and dose were quantified using the Pearson product-moment correlation coefficient (R). Factors analyzed included the derivative of the OVH, prescribed dose, PTV volume, bladder volume, rectum volume, and in-field OAR volume. Of the selected factors, only the in-field bladder volume (mean R = 0.86) showed a strong correlation with bladder doses. Similarly, only the in-field rectal volume (mean R = 0.76) showed a strong correlation with rectal doses. Therefore, an OVH formalism accounting for in-field OAR volumes was developed to determine the extent to which it improved the DVH-OVH correlation. Including the in-field factor improved the DVH-OVH correlation, with the mean R values over the fractional volumes studied improving from -0.79 to -0.85 and from -0.82 to -0.86 for the bladder and rectum, respectively. A re-planning study was performed on 31 randomly selected database patients to verify the increased accuracy of KBP dose predictions by accounting for bladder and rectum volume within treatment fields. The in-field OVH led to significantly more precise KBP dose predictions.
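
    For readers unfamiliar with the metric, the sketch below shows one common way to compute an OVH from binary masks: the OVH value at distance r is the fraction of OAR voxels whose signed distance to the target surface is at most r. The spherical test volumes and voxel size are invented for illustration, and this is the generic OVH rather than the authors' in-field variant:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def ovh(target_mask, oar_mask, voxel_size, r_values):
    """Overlap volume histogram: fraction of OAR voxels whose signed
    distance to the target surface is <= r (negative = inside target)."""
    d_out = distance_transform_edt(~target_mask, sampling=voxel_size)  # distance outside target
    d_in = distance_transform_edt(target_mask, sampling=voxel_size)    # distance inside target
    signed = np.where(target_mask, -d_in, d_out)
    d_oar = signed[oar_mask]
    return np.array([(d_oar <= r).mean() for r in r_values])

# Toy example: spherical PTV and a displaced spherical "bladder" on a 2 mm grid.
z, y, x = np.ogrid[:60, :60, :60]
ptv = (z - 30) ** 2 + (y - 30) ** 2 + (x - 25) ** 2 < 8 ** 2
oar = (z - 30) ** 2 + (y - 30) ** 2 + (x - 42) ** 2 < 10 ** 2
print(ovh(ptv, oar, voxel_size=(2.0, 2.0, 2.0), r_values=[0, 5, 10, 20]))
```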

  11. Additional measures do not improve the diagnostic accuracy of the Hospital Admission Risk Profile for detecting downstream quality of life in community-dwelling older people presenting to a hospital emergency department

    Directory of Open Access Journals (Sweden)

    Grimmer K

    2014-01-01

    Full Text Available K Grimmer, S Milanese, K Beaton, A Atlas; International Centre for Allied Health Evidence, University of South Australia, Adelaide, SA, Australia. Introduction: The Hospital Admission Risk Profile (HARP) instrument is commonly used to assess risk of functional decline when older people are admitted to hospital. HARP has moderate diagnostic accuracy (65%) for downstream decreased scores in activities of daily living. This paper reports the diagnostic accuracy of HARP for downstream quality of life. It also tests whether adding other measures to HARP improves its diagnostic accuracy. Methods: One hundred and forty-eight independent community-dwelling individuals aged 65 years or older were recruited in the emergency department of one large Australian hospital with a medical problem for which they were discharged without a hospital ward admission. Data, including age, sex, primary language, highest level of education, postcode, living status, requiring care for daily activities, using a gait aid, receiving formal community supports, instrumental activities of daily living in the last week, hospitalization and falls in the last 12 months, and mental state were collected at recruitment. HARP scores were derived from a formula that summed scores assigned to age, activities of daily living, and mental state categories. Physical and mental component scores of a quality of life measure were captured by telephone interview at 1 and 3 months after recruitment. Results: HARP scores are moderately accurate at predicting downstream decline in physical quality of life, but did not predict downstream decline in mental quality of life. The addition of other variables to HARP did not improve its diagnostic accuracy for either measure of quality of life. Conclusion: HARP is a poor predictor of quality of life. Keywords: functional decline, HARP, quality of life, older people

  12. Improving the accuracy of self-assessment of practical clinical skills using video feedback--the importance of including benchmarks.

    Science.gov (United States)

    Hawkins, S C; Osborne, A; Schofield, S J; Pournaras, D J; Chester, J F

    2012-01-01

    Isolated video recording has not been demonstrated to improve self-assessment accuracy. This study examines whether the inclusion of a defined standard benchmark performance, in association with video feedback of a student's own performance, improves the accuracy of student self-assessment of clinical skills. Final year medical students were video recorded performing a standardised suturing task in a simulated environment. After the exercise, the students self-assessed their performance using global rating scales (GRSs). An identical self-assessment process was repeated following video review of their performance. Students were then shown a video-recorded 'benchmark performance', which was specifically developed for the study. This demonstrated the competency levels required to score full marks (30 points). A further self-assessment task was then completed. Students' scores were correlated against expert assessor scores. A total of 31 final year medical students participated. Student self-assessment scores before video feedback demonstrated moderate positive correlation with expert assessor scores (r = 0.48). Following the benchmark performance demonstration, self-assessment scores demonstrated a very strong positive correlation with expert scores (r = 0.83). A benchmark performance in combination with video feedback may therefore significantly improve the accuracy of students' self-assessments.

  13. Accuracy evaluation of Fourier series analysis and singular spectrum analysis for predicting the volume of motorcycle sales in Indonesia

    Science.gov (United States)

    Sasmita, Yoga; Darmawan, Gumgum

    2017-08-01

    This research aims to evaluate the forecasting performance of Fourier Series Analysis (FSA) and Singular Spectrum Analysis (SSA), which are more explorative and do not require parametric assumptions. The methods are applied to predicting the monthly volume of motorcycle sales in Indonesia from January 2005 to December 2016. Both models are suitable for data with seasonal and trend components. Technically, FSA represents the time series as trend and seasonal components at different frequencies, which are difficult to identify in time-domain analysis. With a hidden period of 2.918 ≈ 3 and a significant model order of 3, the FSA model is used to predict the testing data. Meanwhile, SSA has two main processes, decomposition and reconstruction. SSA decomposes the time series data into different components. The reconstruction process starts with grouping the decomposition result based on the similar periods of each component in the trajectory matrix. With the optimum window length (L = 53) and grouping effect (r = 4), SSA predicts the testing data. Forecasting accuracy is evaluated using the Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The results show that for the next 12 months, SSA has MAPE = 13.54 percent, MAE = 61,168.43 and RMSE = 75,244.92, while FSA has MAPE = 28.19 percent, MAE = 119,718.43 and RMSE = 142,511.17. Therefore, the SSA method, which has the better accuracy, should be used to predict the volume of motorcycle sales in the next period.
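
    The three accuracy measures used to compare the methods are simple to compute; a minimal sketch with hypothetical hold-out values (the sales figures below are invented, not the study's data):

```python
import numpy as np

def mape(actual, pred):
    """Mean absolute percentage error, in percent."""
    return 100 * np.mean(np.abs((actual - pred) / actual))

def mae(actual, pred):
    """Mean absolute error, in the units of the series."""
    return np.mean(np.abs(actual - pred))

def rmse(actual, pred):
    """Root mean square error, penalizing large misses more heavily."""
    return np.sqrt(np.mean((actual - pred) ** 2))

# Hypothetical 12-month hold-out (units: motorcycles sold per month).
actual = np.array([510, 495, 530, 560, 540, 525, 515, 550, 570, 545, 520, 500], float) * 1000
forecast = actual + np.array([40, -55, 62, -48, 51, -60, 45, -50, 58, -47, 52, -49], float) * 1000
print(f"MAPE={mape(actual, forecast):.2f}%  MAE={mae(actual, forecast):,.0f}  "
      f"RMSE={rmse(actual, forecast):,.0f}")
```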

  14. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    Science.gov (United States)

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

    Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring systems based on sensor data using sensing technology.
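
    A minimal sketch of the three-stage pipeline, assuming the statsmodels and arch packages are available; the synthetic "deformation" series, the scalar random-walk Kalman filter, and all model orders are stand-ins for the paper's GNSS data and fitted models:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(1)

# Synthetic deformation series: slow drift plus periodic motion plus noise.
n = 500
truth = 0.01 * np.arange(n) + 2 * np.sin(2 * np.pi * np.arange(n) / 100)
obs = truth + 0.8 * rng.standard_normal(n)

# Stage 1: scalar random-walk Kalman filter to denoise the raw observations.
q, r = 1e-3, 0.8 ** 2          # process / measurement noise (tuning assumptions)
x, p = obs[0], 1.0
filtered = np.empty(n)
for k in range(n):
    p += q                      # predict
    kgain = p / (p + r)         # update
    x += kgain * (obs[k] - x)
    p *= (1 - kgain)
    filtered[k] = x

# Stage 2: ARIMA on the denoised series models the conditional mean.
arima = ARIMA(filtered, order=(1, 1, 1)).fit()
mean_fc = arima.forecast(steps=5)

# Stage 3: GARCH(1,1) on the ARIMA residuals models the conditional variance.
garch = arch_model(arima.resid, vol="GARCH", p=1, q=1, rescale=False).fit(disp="off")
var_fc = garch.forecast(horizon=5).variance.values[-1]

print("5-step mean forecast:", mean_fc)
print("5-step variance forecast:", var_fc)
```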

  15. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    Directory of Open Access Journals (Sweden)

    Jingzhou Xin

    2018-01-01

    Full Text Available Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring systems based on sensor data using sensing technology.

  16. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    Science.gov (United States)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers (CTISs) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  17. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy.

    Directory of Open Access Journals (Sweden)

    Kieran A Monaghan

    Full Text Available Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices using the example of the BMWP, which has the best supporting data. A difference in body size between taxa from the respective tolerance classes is a common feature of indicator systems; in some it represents a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g., mesh size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes, with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa, and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e., DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. At any particular site, the net bias is a probabilistic function of the sample data.
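
    As context for the weighted-average indices discussed here, a minimal sketch of a BMWP-style calculation. The family scores below are an illustrative subset only; ASPT (average score per taxon) is shown alongside the raw total because dividing by the number of scoring taxa dampens the richness bias described above:

```python
# BMWP-style index: each indicator family carries a tolerance score;
# the site index is the sum over families found, and ASPT is the mean score.
bmwp_scores = {"Heptageniidae": 10, "Leuctridae": 10, "Gammaridae": 6,
               "Baetidae": 4, "Chironomidae": 2, "Oligochaeta": 1}  # illustrative subset

def bmwp_aspt(families_found):
    scoring = [bmwp_scores[f] for f in families_found if f in bmwp_scores]
    total = sum(scoring)
    return total, total / len(scoring) if scoring else float("nan")

total, aspt = bmwp_aspt(["Heptageniidae", "Baetidae", "Chironomidae"])
print(total, round(aspt, 2))   # 16, 5.33
```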

  18. An index with improved diagnostic accuracy for the diagnosis of Crohn's disease derived from the Lennard-Jones criteria.

    Science.gov (United States)

    Reinisch, S; Schweiger, K; Pablik, E; Collet-Fenetrier, B; Peyrin-Biroulet, L; Alfaro, I; Panés, J; Moayyedi, P; Reinisch, W

    2016-09-01

    The Lennard-Jones criteria are considered the gold standard for diagnosing Crohn's disease (CD) and include the items granuloma, macroscopic discontinuity, transmural inflammation, fibrosis, lymphoid aggregates, and discontinuous inflammation on histology. The criteria have never been subjected to a formal validation process. Our aim was to develop a validated and improved diagnostic index based on the items of the Lennard-Jones criteria. Included were 328 adult patients with long-standing CD (median disease duration 10 years) from three centres, classified as 'established', 'probable' or 'non-CD' by the Lennard-Jones criteria at the time of diagnosis. Controls were patients with ulcerative colitis (n = 170). The performance of each of the six diagnostic items of the Lennard-Jones criteria was modelled by logistic regression, and a new index based on stepwise backward selection and cut-offs was developed. The diagnostic value of the new index was analysed by comparing its sensitivity, specificity and accuracy vs. the Lennard-Jones criteria. By the Lennard-Jones criteria, 49% (n = 162) of CD patients would have been diagnosed as 'non-CD' at the time of diagnosis (sensitivity/specificity/accuracy, 'established' CD: 0.34/0.99/0.67; 'probable' CD: 0.51/0.95/0.73). A new index was derived from granuloma, fibrosis, transmural inflammation and macroscopic discontinuity, but excluded lymphoid aggregates and discontinuous inflammation on histology. Our index provided improved diagnostic accuracy for 'established' and 'probable' CD (sensitivity/specificity/accuracy, 'established' CD: 0.45/1/0.72; 'probable' CD: 0.8/0.85/0.82), including the subgroup of isolated colonic CD ('probable' CD, new index: 0.73/0.85/0.79; Lennard-Jones criteria: 0.43/0.95/0.69). We developed an index based on items of the Lennard-Jones criteria that provides improved diagnostic accuracy for the differential diagnosis between CD and UC. © 2016 John Wiley & Sons Ltd.
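
    A sketch of the general approach, stepwise backward selection over item-level logistic regression, on synthetic CD/UC-style data. All effect sizes, item names, and thresholds are invented for illustration; this is not the authors' fitted index:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical data: six binary diagnostic items (columns) for 300 CD-vs-UC cases.
items = ["granuloma", "fibrosis", "transmural", "discontinuity",
         "lymphoid_aggregates", "histo_discontinuity"]
X = rng.integers(0, 2, size=(300, 6)).astype(float)
beta = np.array([2.5, 1.2, 1.0, 0.8, 0.0, 0.0])        # last two items carry no signal
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta - 1.5))))

# Stepwise backward selection: drop the least significant item until all p < 0.05.
keep = list(range(6))
while True:
    model = sm.Logit(y, sm.add_constant(X[:, keep])).fit(disp=0)
    pvals = model.pvalues[1:]                            # skip the intercept
    worst = int(np.argmax(pvals))
    if pvals[worst] < 0.05 or len(keep) == 1:
        break
    keep.pop(worst)

print("retained items:", [items[i] for i in keep])
pred = (model.predict(sm.add_constant(X[:, keep])) > 0.5).astype(int)
tp = ((pred == 1) & (y == 1)).sum(); tn = ((pred == 0) & (y == 0)).sum()
print("sensitivity", tp / (y == 1).sum(), "specificity", tn / (y == 0).sum(),
      "accuracy", (pred == y).mean())
```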

  19. Accuracy of the actuator disc-RANS approach for predicting the performance and wake of tidal turbines.

    Science.gov (United States)

    Batten, W M J; Harrison, M E; Bahaj, A S

    2013-02-28

    The actuator disc-RANS model has widely been used in wind and tidal energy to predict the wake of a horizontal axis turbine. The model is appropriate where large-scale effects of the turbine on a flow are of interest, for example, when considering environmental impacts, or arrays of devices. The accuracy of the model for modelling the wake of tidal stream turbines has not been demonstrated, and flow predictions presented in the literature for similar modelled scenarios vary significantly. This paper compares the results of the actuator disc-RANS model, where the turbine forces have been derived using a blade-element approach, to experimental data measured in the wake of a scaled turbine. It also compares the results with those of a simpler uniform actuator disc model. The comparisons show that the model is accurate and can predict up to 94 per cent of the variation in the experimental velocity data measured on the centreline of the wake, therefore demonstrating that the actuator disc-RANS model is an accurate approach for modelling a turbine wake, and a conservative approach to predict performance and loads. It can therefore be applied to similar scenarios with confidence.

  20. Early clinical esophageal adenocarcinoma (cT1): Utility of CT in regional nodal metastasis detection and can the clinical accuracy be improved?

    Energy Technology Data Exchange (ETDEWEB)

    Betancourt Cuellar, Sonia L., E-mail: slbetancourt@mdanderson.org; Sabloff, Bradley, E-mail: bsabloff@mdanderson.org; Carter, Brett W., E-mail: bcarter2@mdanderson.org; Benveniste, Marcelo F., E-mail: mfbenveniste@mdanderson.org; Correa, Arlene M., E-mail: amcorrea@mdanderson.org; Maru, Dipen M., E-mail: dmaru@mdanderson.org; Ajani, Jaffer A., E-mail: jajani@mdanderson.org; Erasmus, Jeremy J., E-mail: jerasmus@mdanderson.org; Hofstetter, Wayne L., E-mail: whofstetter@mdanderson.org

    2017-03-15

    Introduction: Treatment of early esophageal cancer depends on the extent of the primary tumor and the presence of regional lymph node metastasis (RNM). A short-axis diameter >10 mm is typically used to detect RNM. However, clinical determination of RNM is inaccurate and can result in inappropriate treatment. The purpose of this study was to evaluate the accuracy of a single linear measurement (short axis >10 mm) of regional nodes on CT in predicting nodal metastasis in patients with early esophageal cancer, and whether using a mean diameter value ((short axis + long axis)/2) together with nodal shape improves cN designation. Methods: CTs of 49 patients with cT1 adenocarcinoma treated with surgical resection alone were reviewed retrospectively. Regional nodes were considered positive for malignancy when round or ovoid, with mean size >5 mm if adjacent to the primary tumor and >7 mm if not adjacent. Results were compared with pN status after esophagectomy. Results: 18/49 patients had pN+ disease at resection. Using a single short-axis diameter >10 mm on CT, nodal metastasis (cN) was positive in 7/49; only 1 of these patients was pN+ at resection (sensitivity 5%, specificity 80%, accuracy 53%). Using the mean size and morphologic criteria, cN was positive in 28/49; 11 of these patients were pN+ at resection (sensitivity 61%, specificity 45%, accuracy 51%). EUS with limited FNA of regional nodes resulted in 16/49 patients with pN+ disease being inappropriately designated as cN0. Conclusions: Evaluation of the size, shape and location of regional lymph nodes on CT improves the sensitivity of cN determination compared with a short-axis measurement alone in patients with cT1 esophageal cancer, although clinical utility is limited.
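
    The reported operating characteristics of the single short-axis criterion follow directly from the counts in the abstract; a small verification (the abstract rounds the resulting percentages):

```python
# 18/49 patients were pN+; 7 were called cN+ by the >10 mm rule, of whom 1 was truly pN+.
tp, fp = 1, 6            # true / false positives among the 7 cN+ calls
fn = 18 - tp             # pN+ patients missed by the criterion
tn = (49 - 18) - fp      # pN- patients correctly called negative

sensitivity = tp / (tp + fn)     # 1/18  ~ 0.056 -> "5%"
specificity = tn / (tn + fp)     # 25/31 ~ 0.806 -> "80%"
accuracy = (tp + tn) / 49        # 26/49 ~ 0.531 -> "53%"
print(sensitivity, specificity, accuracy)
```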

  1. Accuracy of magnetic resonance in identifying traumatic intraarticular knee lesions

    International Nuclear Information System (INIS)

    Vaz, Carlos Eduardo Sanches; Camargo, Olavo Pires de; Santana, Paulo Jose de; Valezi, Antonio Carlos

    2005-01-01

    Purpose: To evaluate the diagnostic accuracy of magnetic resonance imaging of the knee in identifying traumatic intraarticular knee lesions. Method: 300 patients with a clinical diagnosis of traumatic intraarticular knee lesions underwent prearthoscopic magnetic resonance imaging. The sensitivity, specificity, positive predictive value, negative predictive value, likelihood ratio for a positive test, likelihood ratio for a negative test, and accuracy of magnetic resonance imaging were calculated relative to the findings during arthroscopy in the studied structures of the knee (medial meniscus, lateral meniscus, anterior cruciate ligament, posterior cruciate ligament, and articular cartilage). Results: Magnetic resonance imaging produced the following results regarding detection of lesions: medial meniscus: sensitivity 97.5%, specificity 92.9%, positive predictive value 93.9%, negative predictive value 97%, positive likelihood ratio 13.7, negative likelihood ratio 0.02, and accuracy 95.3%; lateral meniscus: sensitivity 91.9%, specificity 93.6%, positive predictive value 92.7%, negative predictive value 92.9%, positive likelihood ratio 14.3, negative likelihood ratio 0.08, and accuracy 93.6%; anterior cruciate ligament: sensitivity 99.0%, specificity 95.9%, positive predictive value 91.9%, negative predictive value 99.5%, positive likelihood ratio 21.5, negative likelihood ratio 0.01, and accuracy 96.6%; posterior cruciate ligament: sensitivity 100%, specificity 99%, positive predictive value 80.0%, negative predictive value 100%, positive likelihood ratio 100, negative likelihood ratio 0.01, and accuracy 99.6%; articular cartilage: sensitivity 76.1%, specificity 94.9%, positive predictive value 94.7%, negative predictive value 76.9%, positive likelihood ratio 14.9, negative likelihood ratio 0.25, and accuracy 84.6%. Conclusion: Magnetic resonance imaging is a satisfactory diagnostic tool for evaluating meniscal and ligamentous lesions of the knee, but it is unable to clearly identify lesions of the articular cartilage.

  2. Accuracy of magnetic resonance in identifying traumatic intraarticular knee lesions

    Directory of Open Access Journals (Sweden)

    Vaz Carlos Eduardo Sanches

    2005-01-01

    Full Text Available PURPOSE: To evaluate the diagnostic accuracy of magnetic resonance imaging of the knee in identifying traumatic intraarticular knee lesions. METHOD: 300 patients with a clinical diagnosis of traumatic intraarticular knee lesions underwent prearthoscopic magnetic resonance imaging. The sensitivity, specificity, positive predictive value, negative predictive value, likelihood ratio for a positive test, likelihood ratio for a negative test, and accuracy of magnetic resonance imaging were calculated relative to the findings during arthroscopy in the studied structures of the knee (medial meniscus, lateral meniscus, anterior cruciate ligament, posterior cruciate ligament, and articular cartilage). RESULTS: Magnetic resonance imaging produced the following results regarding detection of lesions: medial meniscus: sensitivity 97.5%, specificity 92.9%, positive predictive value 93.9%, negative predictive value 97%, positive likelihood ratio 13.7, negative likelihood ratio 0.02, and accuracy 95.3%; lateral meniscus: sensitivity 91.9%, specificity 93.6%, positive predictive value 92.7%, negative predictive value 92.9%, positive likelihood ratio 14.3, negative likelihood ratio 0.08, and accuracy 93.6%; anterior cruciate ligament: sensitivity 99.0%, specificity 95.9%, positive predictive value 91.9%, negative predictive value 99.5%, positive likelihood ratio 21.5, negative likelihood ratio 0.01, and accuracy 96.6%; posterior cruciate ligament: sensitivity 100%, specificity 99%, positive predictive value 80.0%, negative predictive value 100%, positive likelihood ratio 100, negative likelihood ratio 0.01, and accuracy 99.6%; articular cartilage: sensitivity 76.1%, specificity 94.9%, positive predictive value 94.7%, negative predictive value 76.9%, positive likelihood ratio 14.9, negative likelihood ratio 0.25, and accuracy 84.6%. CONCLUSION: Magnetic resonance imaging is a satisfactory diagnostic tool for evaluating meniscal and ligamentous lesions of the knee, but it is unable to clearly identify lesions of the articular cartilage.

  3. Improving density functional tight binding predictions of free energy surfaces for peptide condensation reactions in solution

    Science.gov (United States)

    Kroonblawd, Matthew; Goldman, Nir

    First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability, and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for chemistry that is fast relative to DFT simulation times. (Work performed under Contract DE-AC52-07NA27344.)

  4. Improving Density Functional Tight Binding Predictions of Free Energy Surfaces for Slow Chemical Reactions in Solution

    Science.gov (United States)

    Kroonblawd, Matthew; Goldman, Nir

    2017-06-01

    First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability, and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for reactions that are fast relative to DFT simulation times. (Work performed under Contract DE-AC52-07NA27344.)

  5. Improving Clinical Prediction of Bipolar Spectrum Disorders in Youth

    Directory of Open Access Journals (Sweden)

    Thomas W. Frazier

    2014-03-01

    Full Text Available This report evaluates whether classification tree algorithms (CTA) may improve the identification of individuals at risk for bipolar spectrum disorders (BPSD). Analyses used the Longitudinal Assessment of Manic Symptoms (LAMS) cohort (629 youth, 148 with BPSD and 481 without BPSD). Parent ratings of mania symptoms, stressful life events, parenting stress, and parental history of mania were included as risk factors. Comparable overall accuracy was observed for CTA (75.4%) relative to logistic regression (77.6%). However, CTA showed increased sensitivity (0.28 vs. 0.18) at the expense of slightly decreased specificity and positive predictive power. The advantage of CTA algorithms for clinical decision making is demonstrated by the combinations of predictors most useful for altering the probability of BPSD. The 24% sample probability of BPSD was substantially decreased in youth with low screening and baseline parent ratings of mania, negative parental history of mania, and low levels of stressful life events (2%). High screening plus high baseline parent-rated mania nearly doubled the BPSD probability (46%). Future work will benefit from examining additional, powerful predictors, such as alternative data sources (e.g., clinician ratings, neurocognitive test data); these may increase the clinical utility of CTA models further.
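
    A sketch of the tree-versus-logistic comparison on synthetic risk-factor data shaped loosely like the LAMS predictors (all distributions, coefficients, and the tree depth are invented; only the comparison pattern mirrors the record):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Hypothetical predictors: screening mania rating, baseline mania rating,
# count of stressful life events, parental history of mania (0/1).
n = 629
X = np.column_stack([rng.normal(size=n), rng.normal(size=n),
                     rng.poisson(2, n), rng.integers(0, 2, n)]).astype(float)
risk = 0.9 * X[:, 0] + 0.7 * X[:, 1] + 0.2 * X[:, 2] + 1.0 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-(risk - 2.2))))   # roughly 1 in 4 positive

for name, clf in [("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
                  ("logistic", LogisticRegression())]:
    pred = cross_val_predict(clf, X, y, cv=5)
    tp = ((pred == 1) & (y == 1)).sum(); tn = ((pred == 0) & (y == 0)).sum()
    print(name, "accuracy", round((pred == y).mean(), 3),
          "sensitivity", round(tp / max((y == 1).sum(), 1), 3),
          "specificity", round(tn / max((y == 0).sum(), 1), 3))
```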

  6. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study.

    Science.gov (United States)

    Sallent, A; Vicente, M; Reverté, M M; Lopez, A; Rodríguez-Baeza, A; Pérez-Domínguez, M; Velez, R

    2017-10-01

    To assess the accuracy of patient-specific instruments (PSIs) versus the standard manual technique, and the precision of computer-assisted planning and PSI-guided osteotomies, in pelvic tumour resection. CT scans were obtained from five female cadaveric pelvises. Five osteotomies were designed using Mimics software: sacroiliac, biplanar supra-acetabular, two parallel iliopubic, and ischial. For cases of the left hemipelvis, PSIs were designed to guide standard oscillating saw osteotomies and were manufactured using 3D printing. Osteotomies were performed using the standard manual technique in cases of the right hemipelvis. Post-resection CT scans were quantitatively analysed. Student's t-test and the Mann-Whitney U test were used. Compared with the manual technique, PSI-guided osteotomies improved accuracy by a mean of 9.6 mm. With the manual technique, a substantial share of deviations exceeded 5 mm and 27% (n = 8) exceeded 10 mm; in the PSI cases, the corresponding proportions were 10% (n = 3) and 0% (n = 0). For angular deviation from pre-operative plans, we observed a mean improvement of 7.06°. Cite this article: A. Sallent, M. Vicente, M. M. Reverté, A. Lopez, A. Rodríguez-Baeza, M. Pérez-Domínguez, R. Velez. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study. Bone Joint Res 2017;6:577-583. DOI: 10.1302/2046-3758.610.BJR-2017-0094.R1. © 2017 Sallent et al.

  7. Common and rare variants in the exons and regulatory regions of osteoporosis-related genes improve osteoporotic fracture risk prediction.

    Science.gov (United States)

    Lee, Seung Hun; Kang, Moo Il; Ahn, Seong Hee; Lim, Kyeong-Hye; Lee, Gun Eui; Shin, Eun-Soon; Lee, Jong-Eun; Kim, Beom-Jun; Cho, Eun-Hee; Kim, Sang-Wook; Kim, Tae-Ho; Kim, Hyun-Ju; Yoon, Kun-Ho; Lee, Won Chul; Kim, Ghi Su; Koh, Jung-Min; Kim, Shin-Yoon

    2014-11-01

    Osteoporotic fracture risk is highly heritable, but genome-wide association studies have explained only a small proportion of the heritability to date. Genetic data may improve prediction of fracture risk in osteopenic subjects and assist early intervention and management. The aims were to detect common and rare variants in coding and regulatory regions related to osteoporosis-related traits, and to investigate whether genetic profiling improves the prediction of fracture risk. This cross-sectional study was conducted in three clinical units in Korea. Postmenopausal women with extreme phenotypes (n = 982) were used for the discovery set, and 3895 participants were used for the replication set. We performed targeted resequencing of 198 genes. Genetic risk scores from common variants (GRS-C) and from common and rare variants (GRS-T) were calculated. Nineteen common variants in 17 genes (of the discovered 34 functional variants in 26 genes) and 31 rare variants in five genes (of the discovered 87 functional variants in 15 genes) were associated with one or more osteoporosis-related traits. Accuracy of fracture risk classification was improved in the osteopenic patients by adding GRS-C to fracture risk assessment models (6.8%), suggesting that genetic profiling may improve the prediction of fracture risk in an osteopenic individual.
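
    Genetic risk scores of this kind are typically weighted allele counts; a minimal sketch with hypothetical effect sizes and simulated genotypes, using the record's 19 common and 31 rare variants only as array dimensions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Genotypes coded 0/1/2 copies of the risk allele; effect sizes are invented.
n_subjects, n_common, n_rare = 1000, 19, 31
common = rng.binomial(2, 0.3, (n_subjects, n_common))
rare = rng.binomial(2, 0.01, (n_subjects, n_rare))
beta_common = rng.normal(0.1, 0.05, n_common)
beta_rare = rng.normal(0.4, 0.1, n_rare)         # rarer variants, larger effects

grs_c = common @ beta_common                      # common variants only (GRS-C)
grs_t = grs_c + rare @ beta_rare                  # common plus rare (GRS-T)
print(grs_t[:5])
```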

  8. Modeling of Geometric Error in Linear Guide Way to Improve the Vertical Three-Axis CNC Milling Machine's Accuracy

    Science.gov (United States)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of vertical three-axis CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. Inaccuracy in CNC machines can be caused by geometric errors, which are an important factor during the manufacturing process and the assembly phase, and a prerequisite for building machines with high accuracy. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and their position parameters in the machine tool and arranging them in a mathematical model. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters, and three squareness error parameters. The mathematical model relates the calculated alignment and angular errors of the components supporting the machine motion, namely the linear guideway and linear motion elements. The purpose of this mathematical modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly, and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools can illustrate the relationship between alignment error, position, and angle on the linear guideway of a three-axis vertical milling machine.
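
    Geometric-error models of this kind commonly compose per-axis homogeneous transformation matrices under a small-angle assumption; a minimal sketch of that idea (all error magnitudes invented, and the three squareness errors omitted for brevity, so this covers 18 of the 21 parameters):

```python
import numpy as np

def error_htm(dx, dy, dz, ex, ey, ez):
    """Homogeneous transform for one axis: small translational errors (dx, dy, dz)
    and small-angle rotational errors (ex, ey, ez), to first order."""
    return np.array([[1.0, -ez,  ey, dx],
                     [ ez, 1.0, -ex, dy],
                     [-ey,  ex, 1.0, dz],
                     [0.0, 0.0, 0.0, 1.0]])

# Composing the per-axis error transforms (X, Y, Z carriages) and mapping the
# nominal tool point through them gives the volumetric position error.
Tx = error_htm(5e-3, 2e-3, 1e-3, 1e-5, 2e-5, 3e-5)
Ty = error_htm(1e-3, 4e-3, 2e-3, 2e-5, 1e-5, 1e-5)
Tz = error_htm(2e-3, 1e-3, 6e-3, 3e-5, 1e-5, 2e-5)
tool_nominal = np.array([100.0, 50.0, 20.0, 1.0])   # mm, homogeneous coordinates
tool_actual = Tx @ Ty @ Tz @ tool_nominal
print("volumetric error (mm):", tool_actual[:3] - tool_nominal[:3])
```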

  9. Genomic predictions across Nordic Holstein and Nordic Red using the genomic best linear unbiased prediction model with different genomic relationship matrices.

    Science.gov (United States)

    Zhou, L; Lund, M S; Wang, Y; Su, G

    2014-08-01

    This study investigated genomic predictions across Nordic Holstein and Nordic Red using various genomic relationship matrices. Different sources of information, such as consistencies of linkage disequilibrium (LD) phase and marker effects, were used to construct the genomic relationship matrices (G-matrices) across these two breeds. A single-trait genomic best linear unbiased prediction (GBLUP) model and a two-trait GBLUP model were used for single-breed and two-breed genomic predictions. The data included 5215 Nordic Holstein bulls and 4361 Nordic Red bulls, the latter composed of three populations: Danish Red, Swedish Red and Finnish Ayrshire. The bulls were genotyped with a 50,000-marker SNP chip. Using the two-breed predictions with a joint Nordic Holstein and Nordic Red reference population, accuracies increased slightly for all traits in Nordic Red, but only for some traits in Nordic Holstein. Among the three subpopulations of Nordic Red, accuracies increased more for Danish Red than for Swedish Red and Finnish Ayrshire, because closer genetic relationships exist between Danish Red and Nordic Holstein. Among Danish Red, individuals with higher genomic relationship coefficients with Nordic Holstein showed greater increases in accuracy in the two-breed predictions. Weighting the two-breed G-matrices by LD phase consistencies, marker effects or both did not further improve the accuracies of the two-breed predictions. © 2014 Blackwell Verlag GmbH.
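
    The G-matrices at the core of GBLUP are commonly built with VanRaden's first method; a minimal sketch on simulated genotypes (the weighting variants tried in the record would modify the centring or the denominator, and the resulting G then enters the GBLUP mixed-model equations):

```python
import numpy as np

rng = np.random.default_rng(5)

# VanRaden method 1: G = ZZ' / (2 * sum p(1-p)), with Z the allele-count
# matrix centred by twice the observed allele frequencies.
n_animals, n_snp = 200, 5000
M = rng.binomial(2, 0.3, (n_animals, n_snp)).astype(float)
p = M.mean(axis=0) / 2
Z = M - 2 * p
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))
print(G.shape, G.diagonal().mean())   # diagonal averages near 1 for a homogeneous population
```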

  10. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Science.gov (United States)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods to the modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the resilience of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and operation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have made it possible to reduce considerably the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min⁻¹, and improve machining accuracy.

  11. Efficient first-principles prediction of solid stability: Towards chemical accuracy

    Science.gov (United States)

    Zhang, Yubo; Kitchaev, Daniil A.; Yang, Julia; Chen, Tina; Dacek, Stephen T.; Sarmiento-Pérez, Rafael A.; Marques, Miguel A. L.; Peng, Haowei; Ceder, Gerbrand; Perdew, John P.; Sun, Jianwei

    2018-03-01

    The question of material stability is of fundamental importance to any analysis of system properties in condensed matter physics and materials science. The ability to evaluate chemical stability, i.e., whether a stoichiometry will persist in some chemical environment, and structure selection, i.e., what crystal structure a stoichiometry will adopt, is critical to the prediction of materials synthesis, reactivity and properties. Here, we demonstrate that density functional theory, with the recently developed strongly constrained and appropriately normed (SCAN) functional, has advanced to a point where both facets of the stability problem can be reliably and efficiently predicted for main group compounds, while transition metal compounds are improved but remain a challenge. SCAN therefore offers a robust model for a significant portion of the periodic table, presenting an opportunity for the development of novel materials and the study of fine phase transformations even in largely unexplored systems with little to no experimental data.

  12. Accuracy of cone-beam computed tomography in predicting the diameter of unerupted teeth.

    Science.gov (United States)

    Nguyen, Emerald; Boychuk, Darrell; Orellana, Maria

    2011-08-01

    An accurate prediction of the mesiodistal diameter (MDD) of the erupting permanent teeth is essential in orthodontic diagnosis and treatment planning during the mixed dentition period. Our objective was to test the accuracy and reproducibility of cone-beam computed tomography (CBCT) in predicting the MDD of unerupted teeth. Our secondary objective was to determine the accuracy and reproducibility of 3 viewing methods by using 2 CBCT software programs, InVivoDental (version 4.0; Anatomage, San Jose, Calif) and CBWorks (version 3.0, CyberMed, Seoul, Korea), in measuring the MDD of teeth in models simulating unerupted teeth. CBCT data were collected on the CB MercuRay (Hitachi Medical Corporation, Tokyo, Japan). Models of unerupted teeth (n = 25), created by embedding 25 tooth samples into a polydimethylsiloxane polymer with a similar density to tissues surrounding teeth, were scanned and measured by 2 investigators. Repeated MDD measurements of each sample were made by using 3 CBCT viewing methods: InVivo Section, InVivo Volume Render (both Anatomage), and CBWorks Volume Render (version 3.0, CyberMed). These measurements were then compared with the MDD physically measured by digital calipers before the teeth were embedded and scanned. All 3 of the new methods had mean measurements that were statistically significantly less (P < 0.0001) than those of the physical method, adjusting for investigator and tooth effects. Specifically, InVivo Section measurements were 0.3 mm (95% CI, -0.4 to -0.2) less than the measurements with calipers, InVivo Volume Render measurements were 0.5 mm less (95% CI, -0.6 to -0.4) than those with calipers, and CBWorks Volume Render measurements were 0.4 mm less (95% CI, -0.4 to -0.3) than those with calipers. Overall, there were high correlation values among the 3 viewing methods, indicating that CBCT can be used to measure the MDD of unerupted teeth. The InVivo Section method had the greatest correlation with the calipers. Copyright © 2011 American Association of Orthodontists.

  13. Using spectrotemporal indices to improve the fruit-tree crop classification accuracy

    Science.gov (United States)

    Peña, M. A.; Liao, R.; Brenning, A.

    2017-06-01

    This study assesses the potential of spectrotemporal indices derived from satellite image time series (SITS) to improve the classification accuracy of fruit-tree crops. Six major fruit-tree crop types in the Aconcagua Valley, Chile, were classified by applying various linear discriminant analysis (LDA) techniques on a Landsat-8 time series of nine images corresponding to the 2014-15 growing season. As features we not only used the complete spectral resolution of the SITS, but also all possible normalized difference indices (NDIs) that can be constructed from any two bands of the time series, a novel approach to derive features from SITS. Due to the high dimensionality of this "enhanced" feature set we used the lasso and ridge penalized variants of LDA (PLDA). Although classification accuracies yielded by the standard LDA applied on the full-band SITS were good (misclassification error rate, MER = 0.13), they were further improved by 23% (MER = 0.10) with ridge PLDA using the enhanced feature set. The most important bands to discriminate the crops of interest were mainly concentrated on the first two image dates of the time series, corresponding to the crops' greenup stage. Despite the high predictor weights provided by the red and near infrared bands, typically used to construct greenness spectral indices, other spectral regions were also found important for the discrimination, such as the shortwave infrared band at 2.11-2.19 μm, sensitive to foliar water changes. These findings support the usefulness of spectrotemporal indices in the context of SITS-based crop type classifications, which until now have been mainly constructed by the arithmetic combination of two bands of the same image date in order to derive greenness temporal profiles like those from the normalized difference vegetation index.
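
    Constructing all pairwise normalized difference indices from a band-by-date stack is straightforward; a sketch assuming, for illustration, six bands per each of the nine image dates (the real Landsat-8 stack would have a different feature count):

```python
import numpy as np
from itertools import combinations

def all_ndis(stack):
    """Every normalized difference index NDI(i, j) = (b_i - b_j) / (b_i + b_j)
    from a feature stack of shape (pixels, bands * dates)."""
    ndis = []
    for i, j in combinations(range(stack.shape[1]), 2):
        bi, bj = stack[:, i], stack[:, j]
        ndis.append((bi - bj) / (bi + bj + 1e-9))   # small eps avoids divide-by-zero
    return np.column_stack(ndis)

# Hypothetical reflectance stack: 1000 pixels, 9 dates x 6 bands.
stack = np.random.default_rng(6).uniform(0.01, 0.6, (1000, 9 * 6))
features = all_ndis(stack)
print(features.shape)   # (1000, 1431): C(54, 2) candidate indices for the penalized LDA
```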

  14. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    Energy Technology Data Exchange (ETDEWEB)

    Beckon, William N., E-mail: William_Beckon@fws.gov

    2016-07-15

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).
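
    The lag-selection procedure described here amounts to scanning candidate lags and keeping the one that gives the best regression fit between exposure and tissue concentration; a minimal sketch on synthetic data with a built-in one-year lag (all series and units invented):

```python
import numpy as np
from scipy.stats import pearsonr

def best_lag(exposure, tissue, max_lag):
    """Scan candidate lags; return (R^2, lag) for the lag whose shifted
    regression of tissue concentration on exposure fits best."""
    results = []
    for lag in range(max_lag + 1):
        x = exposure[:len(exposure) - lag] if lag else exposure
        r, _ = pearsonr(x, tissue[lag:])
        results.append((r ** 2, lag))
    return max(results)

rng = np.random.default_rng(7)
water = rng.lognormal(0, 0.4, 400)                        # weekly water Se, hypothetical
fish = 3 * np.roll(water, 52) + rng.normal(0, 0.3, 400)   # ~52-week bioaccumulation lag
fish[:52] = fish[52]                                      # pad the wrapped-around head
r2, lag = best_lag(water, fish, max_lag=104)
print("best lag (weeks):", lag, "R^2:", round(r2, 3))
```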

  15. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    International Nuclear Information System (INIS)

    Beckon, William N.

    2016-01-01

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).

  16. Predicting protein subcellular locations using hierarchical ensemble of Bayesian classifiers based on Markov chains

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2006-06-01

    Full Text Available Abstract Background: The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location of a given protein when only its amino acid sequence is known. Although many efforts have been made to predict subcellular location from sequence information only, further research is needed to improve the accuracy of prediction. Results: A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the prediction accuracy for classes with few training sequences and is therefore useful for datasets with an imbalanced distribution of classes. Conclusion: This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with previously reported approaches in terms of prediction accuracy, as empirical results indicate. The code for the software is available upon request.
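
    A minimal sketch of the building block HensBC combines: a per-class first-order Markov chain over amino acids, scored by log-likelihood, on synthetic sequences. The hierarchical ensemble and recursion are omitted, and the training sequences here are invented stand-ins for real location classes:

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AMINO)}

def transition_matrix(seqs, alpha=1.0):
    """First-order Markov model: Laplace-smoothed transition probabilities."""
    counts = np.full((20, 20), alpha)
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    return sum(np.log(T[IDX[a], IDX[b]]) for a, b in zip(seq, seq[1:]))

rng = np.random.default_rng(8)
def fake_seqs(bias_aa, n=30, length=80):
    # Synthetic sequences enriched in one residue, standing in for real classes.
    pool = list(AMINO + bias_aa * 10)
    return ["".join(rng.choice(pool, length)) for _ in range(n)]

# Train one chain per location class; classify by maximum likelihood
# (uniform class priors assumed, so the Bayes rule reduces to this).
train = {"membrane": fake_seqs("L"), "cytoplasm": fake_seqs("K")}
models = {c: transition_matrix(s) for c, s in train.items()}
query = fake_seqs("L", n=1)[0]
print(max(models, key=lambda c: log_likelihood(query, models[c])))  # -> "membrane"
```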

  17. Monitoring and regulation of learning in medical education: the need for predictive cues.

    Science.gov (United States)

    de Bruin, Anique B H; Dunlosky, John; Cavalcanti, Rodrigo B

    2017-06-01

    Being able to accurately monitor learning activities is a key element of self-regulated learning in all settings, including medical schools. Yet students' ability to monitor their progress is often limited, leading to inefficient use of study time. Interventions that improve the accuracy of students' monitoring can optimise self-regulated learning, leading to higher achievement. This paper reviews findings from cognitive psychology and explores potential applications in medical education, as well as areas for future research. Effective monitoring depends on students' ability to generate information ('cues') that accurately reflects their knowledge and skills. The ability of these cues to predict achievement is referred to as 'cue diagnosticity'. Interventions that improve the ability of students to elicit predictive cues typically fall into two categories: (i) self-generation of cues and (ii) generation of cues that is delayed after self-study. Providing feedback and support is useful when cues are predictive but may be too complex to be readily used. Limited evidence exists about interventions to improve the accuracy of self-monitoring among medical students or trainees. Developing interventions that foster the use of predictive cues can enhance the accuracy of self-monitoring, thereby improving self-study and clinical reasoning. First, insight should be gained into the characteristics of predictive cues used by medical students and trainees. Next, predictive cue prompts should be designed and tested to improve monitoring and regulation of learning. Finally, the use of predictive cues should be explored in relation to teaching and learning clinical reasoning. Improving self-regulated learning is important to help medical students and trainees efficiently acquire the knowledge and skills necessary for clinical practice. Interventions that help students generate and use predictive cues hold the promise of improved self-regulated learning and achievement.

  18. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    Science.gov (United States)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Attribute splitting is a major process in Decision Tree C4.5 classification. However, this process does not have a significant impact on the establishment of the decision tree in terms of removing irrelevant features. This is a major problem in the decision tree classification process, called over-fitting, resulting from noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification models; it is intended to remove irrelevant data in order to improve accuracy. The feature reduction framework is used to simplify high-dimensional data into low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We consider principal component analysis (PCA) for feature reduction, to perform non-correlated feature selection, and the Decision Tree C4.5 algorithm for the classification. From the experiments conducted using an available dataset from the UCI cervical cancer dataset repository, with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that our proposed framework robustly enhances classification accuracy, with an accuracy rate of 90.70%.
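
    A compact version of this framework is easy to assemble with scikit-learn. Note that C4.5 itself is not in scikit-learn, so an entropy-criterion CART tree stands in for it here, and a bundled dataset substitutes for the UCI cervical cancer data; the component count is likewise an assumption:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# PCA decorrelates and reduces the feature space before the tree is grown,
# which is the over-fitting countermeasure the record describes.
X, y = load_breast_cancer(return_X_y=True)   # stand-in for the cervical cancer set
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
baseline = cross_val_score(tree, X, y, cv=10).mean()
with_pca = cross_val_score(
    make_pipeline(StandardScaler(), PCA(n_components=10), tree), X, y, cv=10).mean()
print(f"tree alone: {baseline:.3f}  PCA + tree: {with_pca:.3f}")
```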

  19. Assessing sensor accuracy for non-adjunct use of continuous glucose monitoring.

    Science.gov (United States)

    Kovatchev, Boris P; Patek, Stephen D; Ortiz, Edward Andrew; Breton, Marc D

    2015-03-01

    The level of continuous glucose monitoring (CGM) accuracy needed for insulin dosing using sensor values (i.e., the level of accuracy permitting non-adjunct CGM use) is a topic of ongoing debate. Assessment of this level in clinical experiments is virtually impossible because the magnitude of CGM errors cannot be manipulated and related prospectively to clinical outcomes. A combination of archival data (parallel CGM, insulin pump, self-monitoring of blood glucose [SMBG] records, and meals for 56 pump users with type 1 diabetes) and in silico experiments was used to "replay" real-life treatment scenarios and relate sensor error to glycemic outcomes. Nominal blood glucose (BG) traces were extracted using a mathematical model, yielding 2,082 BG segments each initiated by insulin bolus and confirmed by SMBG. These segments were replayed at seven sensor accuracy levels (mean absolute relative differences [MARDs] of 3-22%) testing six scenarios: insulin dosing using sensor values, threshold, and predictive alarms, each without or with considering CGM trend arrows. In all six scenarios, the occurrence of hypoglycemia (frequency of BG levels ≤50 mg/dL and BG levels ≤39 mg/dL) increased with sensor error, displaying an abrupt slope change at MARD = 10%. Similarly, hyperglycemia (frequency of BG levels ≥250 mg/dL and BG levels ≥400 mg/dL) increased and displayed an abrupt slope change at MARD = 10%. When added to insulin dosing decisions, information from CGM trend arrows, threshold, and predictive alarms resulted in improvement in average glycemia by 1.86, 8.17, and 8.88 mg/dL, respectively. Using CGM for insulin dosing decisions is feasible below a certain level of sensor error, estimated in silico at MARD = 10%. In our experiments, further accuracy improvement did not contribute substantively to better glycemic outcomes.
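
    MARD, the accuracy measure manipulated in these simulations, is a one-line computation; a sketch with hypothetical paired readings:

```python
import numpy as np

def mard(cgm, reference):
    """Mean absolute relative difference between sensor and reference glucose, in percent."""
    cgm, reference = np.asarray(cgm, float), np.asarray(reference, float)
    return 100 * np.mean(np.abs(cgm - reference) / reference)

# Hypothetical paired readings (mg/dL): reference BG vs. sensor values.
ref = np.array([90, 120, 180, 65, 250, 140])
sensor = np.array([97, 111, 196, 60, 233, 151])
print(f"MARD = {mard(sensor, ref):.1f}%")   # ~7.8%: below the 10% threshold found above
```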

  20. Improvement in the Accuracy of Flux Measurement of Radio Sources by Exploiting an Arithmetic Pattern in Photon Bunching Noise

    Energy Technology Data Exchange (ETDEWEB)

    Lieu, Richard [Department of Physics, University of Alabama, Huntsville, AL 35899 (United States)

    2017-07-20

    A hierarchy of statistics of increasing sophistication and accuracy is proposed to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware; rather, it operates at the software level, with the help of high-precision computers, to reprocess the intensity time series of the incident light into a new series with a smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number, the better the performance). The principal application is accuracy improvement in the signal-limited bolometric flux measurement of a radio source.