WorldWideScience

Sample records for method predicts formaldonitrone

  1. Earthquake prediction by Kiana Method

    International Nuclear Information System (INIS)

    Kianoosh, H.; Keypour, H.; Naderzadeh, A.; Motlagh, H.F.

    2005-01-01

    Earthquake prediction has been one of man's earliest desires, and scientists have worked for a long time to predict earthquakes. The results of these efforts can generally be divided into two methods of prediction: 1) statistical methods and 2) empirical methods. In the first, earthquakes are predicted using statistics and probabilities, while the second uses a variety of precursors for earthquake prediction. The latter method is time consuming and more costly; however, neither method has fully satisfied researchers to date. In this paper a new method, entitled the 'Kiana Method', is introduced for earthquake prediction. This method offers more accurate results at lower cost compared to other conventional methods. In the Kiana method, electrical and magnetic precursors are measured in an area. Then the time and the magnitude of a future earthquake are calculated using electrical formulas, in particular those for electrical capacitors. In this method, by daily measurement of electrical resistance in an area, we determine whether or not the area is prone to a future earthquake. If the result is positive, then the occurrence time and the magnitude can be estimated from the measured quantities. This paper explains the procedure and details of this prediction method. (authors)

  2. Machine learning methods for metabolic pathway prediction

    Directory of Open Access Journals (Sweden)

    Karp Peter D

    2010-01-01

    Full Text Available Abstract Background A key challenge in systems biology is the reconstruction of an organism's metabolic network from its genome sequence. One strategy for addressing this problem is to predict which metabolic pathways, from a reference database of known pathways, are present in the organism, based on the annotated genome of the organism. Results To quantitatively validate methods for pathway prediction, we developed a large "gold standard" dataset of 5,610 pathway instances known to be present or absent in curated metabolic pathway databases for six organisms. We defined a collection of 123 pathway features, whose information content we evaluated with respect to the gold standard. Feature data were used as input to an extensive collection of machine learning (ML) methods, including naïve Bayes, decision trees, and logistic regression, together with feature selection and ensemble methods. We compared the ML methods to the previous PathoLogic algorithm for pathway prediction using the gold standard dataset. We found that ML-based prediction methods can match the performance of the PathoLogic algorithm. PathoLogic achieved an accuracy of 91% and an F-measure of 0.786. The ML-based prediction methods achieved accuracy as high as 91.2% and F-measure as high as 0.787. The ML-based methods output a probability for each predicted pathway, whereas PathoLogic does not, which provides more information to the user and facilitates filtering of predicted pathways. Conclusions ML methods for pathway prediction perform as well as existing methods, and have qualitative advantages in terms of extensibility, tunability, and explainability. More advanced prediction methods and/or more sophisticated input features may improve the performance of ML methods. However, pathway prediction performance appears to be limited largely by the ability to correctly match enzymes to the reactions they catalyze based on genome annotations.
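
    As an illustration of the kind of feature-based classifier this record evaluates, here is a minimal Bernoulli naive Bayes sketch (our own toy example, not the authors' code; the binary features and data are invented and far simpler than the 123-feature gold-standard dataset):

    ```python
    # Hypothetical sketch: Bernoulli naive Bayes over binary pathway features
    # (1 = pathway present). Feature values and labels below are invented.
    import math

    def train_nb(X, y, alpha=1.0):
        """Fit per-class Bernoulli feature probabilities with Laplace smoothing."""
        classes = sorted(set(y))
        n_feat = len(X[0])
        prior, cond = {}, {}
        for c in classes:
            rows = [x for x, label in zip(X, y) if label == c]
            prior[c] = len(rows) / len(y)
            cond[c] = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                       for j in range(n_feat)]
        return prior, cond

    def predict_nb(model, x):
        """Return the class with the highest posterior log-probability."""
        prior, cond = model
        best, best_lp = None, -math.inf
        for c in prior:
            lp = math.log(prior[c]) + sum(
                math.log(p if xi else 1 - p) for xi, p in zip(x, cond[c]))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

    # Toy "gold standard": labels 1/0 = pathway present/absent,
    # features are binary evidence flags (e.g. "any enzyme annotated").
    X = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [0, 0, 0], [0, 1, 0], [0, 0, 1]]
    y = [1, 1, 1, 0, 0, 0]
    model = train_nb(X, y)
    print(predict_nb(model, [1, 1, 0]))  # → 1
    ```

    A real evaluation would then compare such probabilistic outputs against the gold standard with accuracy and F-measure, as the abstract describes.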

  3. Machine learning methods for metabolic pathway prediction

    Science.gov (United States)

    2010-01-01

    Background A key challenge in systems biology is the reconstruction of an organism's metabolic network from its genome sequence. One strategy for addressing this problem is to predict which metabolic pathways, from a reference database of known pathways, are present in the organism, based on the annotated genome of the organism. Results To quantitatively validate methods for pathway prediction, we developed a large "gold standard" dataset of 5,610 pathway instances known to be present or absent in curated metabolic pathway databases for six organisms. We defined a collection of 123 pathway features, whose information content we evaluated with respect to the gold standard. Feature data were used as input to an extensive collection of machine learning (ML) methods, including naïve Bayes, decision trees, and logistic regression, together with feature selection and ensemble methods. We compared the ML methods to the previous PathoLogic algorithm for pathway prediction using the gold standard dataset. We found that ML-based prediction methods can match the performance of the PathoLogic algorithm. PathoLogic achieved an accuracy of 91% and an F-measure of 0.786. The ML-based prediction methods achieved accuracy as high as 91.2% and F-measure as high as 0.787. The ML-based methods output a probability for each predicted pathway, whereas PathoLogic does not, which provides more information to the user and facilitates filtering of predicted pathways. Conclusions ML methods for pathway prediction perform as well as existing methods, and have qualitative advantages in terms of extensibility, tunability, and explainability. More advanced prediction methods and/or more sophisticated input features may improve the performance of ML methods. However, pathway prediction performance appears to be limited largely by the ability to correctly match enzymes to the reactions they catalyze based on genome annotations. PMID:20064214

  4. Prediction methods and databases within chemoinformatics

    DEFF Research Database (Denmark)

    Jónsdóttir, Svava Osk; Jørgensen, Flemming Steen; Brunak, Søren

    2005-01-01

    MOTIVATION: To gather information about available databases and chemoinformatics methods for the prediction of properties relevant to the drug discovery and optimization process. RESULTS: We present an overview of the most important databases with 2-dimensional and 3-dimensional structural information...... about drugs and drug candidates, and of databases with relevant properties. Access to experimental data and numerical methods for selecting and utilizing these data is crucial for developing accurate predictive in silico models. Many interesting predictive methods for classifying the suitability...

  5. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    OpenAIRE

    Jerzy Balicki; Piotr Dryja; Waldemar Korłub; Piotr Przybyłek; Maciej Tyszka; Marcin Zadroga; Marcin Zakidalski

    2016-01-01

    Artificial neural networks can be used to predict share investment on the stock market, assess the creditworthiness of clients, or predict banking crises. Moreover, this paper discusses the principles of combining neural network algorithms with evolutionary methods and support vector machines. In addition, reference is made to other methods of artificial intelligence that are used in financial prediction.

  6. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2016-06-01

    Full Text Available Artificial neural networks can be used to predict share investment on the stock market, assess the creditworthiness of clients, or predict banking crises. Moreover, this paper discusses the principles of combining neural network algorithms with evolutionary methods and support vector machines. In addition, reference is made to other methods of artificial intelligence that are used in financial prediction.

  7. Rainfall prediction with backpropagation method

    Science.gov (United States)

    Wahyuni, E. G.; Fauzan, L. M. F.; Abriyani, F.; Muchlis, N. F.; Ulfa, M.

    2018-03-01

    Rainfall is an important factor in many fields, such as aviation and agriculture. Although forecasting is assisted by technology, its accuracy cannot reach 100% and there is still the possibility of error, even though up-to-date rainfall prediction information is needed in these fields. In agriculture, farmers depend heavily on weather conditions, especially rainfall, to obtain abundant, high-quality yields; rainfall is also one of the factors affecting the safety of aircraft. To overcome these problems, a system that can accurately predict rainfall is required. In this research, artificial neural network modeling is applied to rainfall prediction using the backpropagation method. Backpropagation improves performance through repeated training, meaning the weights of the ANN interconnections can approach the values they should have. Further advantages of this method are its adaptive learning process and its multilayer structure, in which weight changes minimize error (fault tolerance), so the method can guarantee good system resilience and consistently work well. The network is designed with 4 input variables, namely air temperature, air humidity, wind speed, and sunshine duration, and 3 output variables, i.e. low, medium, and high rainfall. Based on the research that has been done, the network works properly, as evidenced by the system's precipitation predictions matching the results of manual calculations.
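
    As a hedged illustration of the training loop this record describes — not the authors' network — here is a tiny 4-input, 3-output backpropagation sketch in NumPy; the hidden-layer width, learning rate, and training data are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Layer sizes mirror the paper's design: 4 inputs (temperature, humidity,
    # wind speed, sunshine duration), one hidden layer, 3 outputs
    # (low / medium / high rainfall). Width 8 is an arbitrary choice.
    W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 3)); b2 = np.zeros(3)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(X):
        h = sigmoid(X @ W1 + b1)
        return h, sigmoid(h @ W2 + b2)

    # Invented training rows, inputs scaled to [0, 1]; targets one-hot encode
    # low / medium / high rainfall.
    X = np.array([[0.9, 0.2, 0.3, 0.9],
                  [0.5, 0.6, 0.5, 0.5],
                  [0.3, 0.9, 0.8, 0.1]])
    T = np.eye(3)

    lr = 1.0
    losses = []
    for _ in range(500):
        h, out = forward(X)
        losses.append(float(np.mean((out - T) ** 2)))
        # Backpropagation: chain rule through both sigmoid layers.
        d_out = (out - T) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

    print(losses[0] > losses[-1])  # the repeated weight updates reduce the error
    ```

    The point of the sketch is the weight-update step: each pass propagates the output error backwards and nudges the interconnection weights toward the values they should have, which is the property the abstract emphasizes.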

  8. Development of a regional ensemble prediction method for probabilistic weather prediction

    International Nuclear Information System (INIS)

    Nohara, Daisuke; Tamura, Hidetoshi; Hirakuchi, Hiromaru

    2015-01-01

    A regional ensemble prediction method has been developed to provide probabilistic weather prediction using a numerical weather prediction model. To obtain perturbations consistent with the synoptic weather pattern, both initial and lateral boundary perturbations were given by the differences between the control and ensemble members of the Japan Meteorological Agency (JMA)'s operational one-week ensemble forecast. The method provides multiple ensemble members with a horizontal resolution of 15 km for 48 hours, based on downscaling of the JMA's operational global forecast together with the perturbations. The ensemble prediction was examined for the heavy snowfall event in the Kanto area on January 14, 2013. The results showed that the predictions represent different features of the high-resolution spatiotemporal distribution of precipitation, affected by the intensity and location of the extra-tropical cyclone in each ensemble member. Although the ensemble prediction has model biases in the mean values and variances of some variables, such as wind speed and solar radiation, it has the potential to add probabilistic information to a deterministic prediction. (author)
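
    The perturbation step described in this record can be sketched generically (a toy illustration with 1-D arrays standing in for model fields; real fields are 3-D model states, and interpolation to the 15 km regional grid is omitted):

    ```python
    import numpy as np

    def regional_initial_states(analysis, control, members):
        """Build regional ensemble initial states by adding coarse-ensemble
        differences (member - control) to a high-resolution analysis field.
        Sketch only: interpolation to the regional grid is not shown."""
        return [analysis + (member - control) for member in members]

    # Toy 1-D "fields": one high-resolution analysis, a coarse control run,
    # and two coarse ensemble members (values invented, e.g. temperature in K).
    analysis = np.array([280.0, 281.0, 282.0])
    control = np.array([279.5, 281.2, 282.1])
    members = [np.array([279.0, 281.0, 282.5]),
               np.array([280.0, 281.5, 281.9])]

    states = regional_initial_states(analysis, control, members)
    print(states[0])  # analysis shifted by the first member's perturbation
    ```

    The same member-minus-control differences would be applied at the lateral boundaries, so each regional member stays consistent with its parent synoptic-scale forecast.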

  9. Deep learning methods for protein torsion angle prediction.

    Science.gov (United States)

    Li, Haiou; Hou, Jie; Adhikari, Badri; Lyu, Qiang; Cheng, Jianlin

    2017-09-18

    Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. We designed four different deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN), and a deep recurrent restricted Boltzmann machine (DReRBM), the recurrent variants because protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30°, respectively, on an independent dataset. The MAE of the phi angle is comparable to that of existing methods, while the MAE of the psi angle is 29°, 2° lower than that of existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to that of a state-of-the-art method. Our experiments demonstrate that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
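
    One detail worth making explicit when reproducing MAE figures like the 20-21° and 29-30° quoted above is that torsion angles are periodic, so the error between a prediction and a true angle must be taken modulo 360°. A small sketch (ours, not the authors'):

    ```python
    def angular_mae(pred, true):
        """Mean absolute error for angles in degrees, accounting for
        wrap-around: an error of 359 degrees is really an error of 1 degree."""
        errs = []
        for p, t in zip(pred, true):
            d = abs(p - t) % 360.0
            errs.append(min(d, 360.0 - d))
        return sum(errs) / len(errs)

    # 170 vs -170 is a 20-degree error, not 340; -175 vs 175 is 10 degrees.
    print(angular_mae([170.0, -175.0], [-170.0, 175.0]))  # → 15.0
    ```

    Without the wrap, predictions near ±180° (common for the psi angle in extended conformations) would be penalized absurdly.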

  10. Novel hyperspectral prediction method and apparatus

    Science.gov (United States)

    Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf

    2009-05-01

    Both the power and the challenge of hyperspectral technologies lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and by statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-Cal™ software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction Engine™ module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing and remote sensing.
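
    The matched-filter idea behind combining a measured "signal" with estimated "noise" can be sketched generically (this is the textbook matched-filter regression vector, not the proprietary Hyper-Cal implementation): given the analyte's pure spectral signal s and a noise covariance estimate Σ, the prediction vector is b = Σ⁻¹s / (sᵀΣ⁻¹s), which responds with unit gain to the pure signal while suppressing the noise directions.

    ```python
    import numpy as np

    def matched_filter_vector(signal, noise_cov):
        """Matched-filter regression vector b = S^-1 s / (s^T S^-1 s).
        signal: pure-component spectrum (1-D); noise_cov: noise covariance."""
        w = np.linalg.solve(noise_cov, signal)  # S^-1 s without forming S^-1
        return w / (signal @ w)

    # Toy 3-channel example. With white noise (identity covariance) the
    # matched filter reduces to s / ||s||^2.
    s = np.array([1.0, 2.0, 2.0])
    b = matched_filter_vector(s, np.eye(3))
    print(round(float(b @ s), 6))  # unit response to the pure signal → 1.0
    ```

    Applied per pixel, b @ spectrum is a single dot product, which is what makes prediction at full camera rate feasible on an FPGA.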

  11. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    Science.gov (United States)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge, the others being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity; the FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset dates of significant earthquakes, the assumption being that each earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes coincide with planetary triggers (Moon conjunct or opposite the Sun, Moon conjunct or opposite the North or South Nodes). In order to test the improvement of the method we used all +8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds, and examined the earthquake hit rates (for a window of 3 days, i.e. ±1 day of the target date) and for <6.5R. The successes are counted for each of the 86 earthquake seeds and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is the planetary trigger date prior to the earthquake. We observe no improvement only when a planetary trigger coincided with the earthquake date, in which case the FDL method coincides with the MFDL. Based on the MFDL method we present a prediction method capable of predicting global events or localized earthquakes, and we discuss the accuracy of the method as far as its prediction and location parts are concerned. We show example calendar-style predictions for global events as well as for the Greek region using
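
    As a rough sketch of the date-generation step — our reading of the abstract, with the number sequences assumed to be used as day offsets from the seed date, and the undefined "Dual" sequence omitted — candidate dates from a seed might be produced like this:

    ```python
    # Hypothetical illustration only: the abstract does not specify the exact
    # construction, so the offsets-in-days assumption is ours.
    from datetime import date, timedelta

    def fib_lucas(n):
        """First n Fibonacci numbers (1, 1, 2, 3, 5, ...) and Lucas numbers
        (1, 3, 4, 7, 11, ...)."""
        def seq(a, b):
            out = []
            while len(out) < n:
                out.append(b)
                a, b = b, a + b
            return out
        return seq(0, 1), seq(2, 1)

    def candidate_dates(seed, n=8):
        """FDL-style candidate dates: seed date plus each distinct offset."""
        fib, luc = fib_lucas(n)
        offsets = sorted(set(fib + luc))
        return [seed + timedelta(days=k) for k in offsets]

    print(candidate_dates(date(2000, 1, 1), n=5))
    ```

    A hit-rate test like the one described would then count how many of these candidate dates fall within ±1 day of a subsequent significant event.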

  12. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-03-16

    Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To date, many computational methods have been proposed for this purpose, but they suffer from the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17% of these predictions being correct, and it achieves 49.22% correct single top-ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind
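
    The path-based scoring idea can be illustrated with a small sketch in the spirit of DASPfind (not its actual scoring function): score a drug-target pair by summing, over all simple paths up to a given length in the heterogeneous graph, the product of edge weights, with longer paths damped. The graph, weights, and damping factor below are invented.

    ```python
    def simple_paths(graph, src, dst, max_len):
        """Enumerate simple paths from src to dst with at most max_len edges."""
        stack = [(src, [src])]
        while stack:
            node, path = stack.pop()
            if node == dst and len(path) > 1:
                yield path
                continue
            if len(path) - 1 >= max_len:
                continue
            for nxt in graph.get(node, {}):
                if nxt not in path:
                    stack.append((nxt, path + [nxt]))

    def pair_score(graph, drug, target, max_len=3, damping=0.5):
        """Sum path weights; each extra edge beyond the first halves the weight."""
        score = 0.0
        for path in simple_paths(graph, drug, target, max_len):
            w = 1.0
            for a, b in zip(path, path[1:]):
                w *= graph[a][b]
            score += w * damping ** (len(path) - 2)
        return score

    # Tiny toy graph: drug-drug similarity (0.8), known DTIs (1.0),
    # target-target similarity (0.6). All values invented.
    g = {
        "drug1": {"drug2": 0.8, "targetA": 1.0},
        "drug2": {"drug1": 0.8, "targetB": 1.0},
        "targetA": {"drug1": 1.0, "targetB": 0.6},
        "targetB": {"drug2": 1.0, "targetA": 0.6},
    }
    print(round(pair_score(g, "drug1", "targetB"), 3))  # → 0.7
    ```

    Because scores accumulate over similarity edges, a drug with no known targets can still reach a target through drug-drug similarity paths, which matches the abstract's claim about drugs with few or no known targets.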

  13. Prediction methods environmental-effect reporting

    International Nuclear Information System (INIS)

    Jonker, R.J.; Koester, H.W.

    1987-12-01

    This report provides a survey of prediction methods which can be applied to the calculation of emissions in nuclear-reactor accidents, in the framework of environmental-effect reports (Dutch m.e.r.) or risk analyses. Emissions during normal operation are also important for the m.e.r.; these can be derived from the measured emissions of power plants in operation, and data concerning the latter are reported. The report consists of an introduction to reactor technology, including a description of some reactor types, the corresponding fuel cycle and dismantling scenarios; a discussion of risk analyses for nuclear power plants and the physical processes which can play a role during accidents; a discussion of the prediction methods to be employed and the expected developments in this area; and some background information. (author). 145 refs.; 21 figs.; 20 tabs

  14. The trajectory prediction of spacecraft by grey method

    International Nuclear Information System (INIS)

    Wang, Qiyue; Wang, Zhongyu; Zhang, Zili; Wang, Yanqing; Zhou, Weihu

    2016-01-01

    The real-time and high-precision trajectory prediction of a moving object is a core technology in the field of aerospace engineering, and real-time monitoring and tracking technologies are also significant guarantees for aerospace equipment. A dynamic trajectory prediction method called the grey dynamic filter (GDF), which combines dynamic measurement theory and grey system theory, is proposed. GDF can use the coordinates of the current period to extrapolate the coordinates of the following period, and at the same time it keeps the measured coordinates current through its metabolism model. In this paper the optimal model length of GDF is first selected to improve the prediction accuracy. Then simulations for uniformly accelerated motion and variably accelerated motion are conducted. The simulation results indicate that the mean composite position error of the GDF prediction is one-fifth of that of the Kalman filter (KF). Using a spacecraft landing experiment, the prediction accuracy of GDF is compared with the KF method and the primitive grey method (GM). The results show that the motion trajectory of the spacecraft predicted by GDF is much closer to the actual trajectory than with the other two methods. The mean composite position error calculated by GDF is one-eighth of that of KF and one-fifth of that of GM, respectively. (paper)
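
    The grey-system building block underlying methods like this is the classic GM(1,1) model: accumulate the series, fit the whitened differential equation by least squares, and extrapolate. A generic GM(1,1) sketch (the standard textbook model, not the paper's GDF, which adds the dynamic filtering and metabolism steps):

    ```python
    import numpy as np

    def gm11_forecast(x, steps=1):
        """Standard GM(1,1) grey model: fit on series x, forecast ahead."""
        x = np.asarray(x, dtype=float)
        x1 = np.cumsum(x)                      # accumulated generating operation
        z = 0.5 * (x1[:-1] + x1[1:])           # background (mean) sequence
        B = np.column_stack([-z, np.ones_like(z)])
        a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]

        def x1_hat(k):                         # fitted accumulated series
            return (x[0] - b / a) * np.exp(-a * k) + b / a

        n = len(x)
        # De-accumulate to recover forecasts of the original series.
        return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

    # Toy example with an invented growing series.
    print(gm11_forecast([2, 4, 8, 16], steps=1))
    ```

    In a metabolic (rolling) variant like the one the abstract mentions, the oldest observation is dropped and the newest appended before each refit, so the model tracks the latest measured coordinates.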

  15. Methods and techniques for prediction of environmental impact

    International Nuclear Information System (INIS)

    1992-04-01

    Environmental impact assessment (EIA) is the procedure that helps decision makers understand the environmental implications of their decisions. The prediction of environmental effects or impacts is an extremely important part of the EIA procedure, and improvements in existing capabilities are needed. Considerable attention is paid within environmental impact assessment and in handbooks on EIA to methods for identifying and evaluating environmental impacts. However, little attention is given to the distribution of information on impact prediction methods. Quantitative and qualitative methods for the prediction of environmental impacts appear to be the two basic approaches for incorporating environmental concerns into the decision-making process. Depending on the nature of the proposed activity and the environment likely to be affected, a combination of both quantitative and qualitative methods is used. Within environmental impact assessment, the accuracy of methods for the prediction of environmental impacts is of major importance, since it provides for sound and well-balanced decision making. Pertinent and effective action to deal with the problems of environmental protection, the rational use of natural resources and sustainable development is only possible given objective methods and techniques for the prediction of environmental impact. Therefore, the Senior Advisers to ECE Governments on Environmental and Water Problems decided to set up a task force, with the USSR as lead country, on methods and techniques for the prediction of environmental impacts, in order to undertake a study to review and analyse existing methodological approaches and to elaborate recommendations to ECE Governments. The work of the task force was completed in 1990 and the resulting report, with all relevant background material, was approved by the Senior Advisers to ECE Governments on Environmental and Water Problems in 1991. The present report reflects the situation, state of

  16. Connecting clinical and actuarial prediction with rule-based methods

    NARCIS (Netherlands)

    Fokkema, M.; Smits, N.; Kelderman, H.; Penninx, B.W.J.H.

    2015-01-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction

  17. Prediction Methods for Blood Glucose Concentration

    DEFF Research Database (Denmark)

    “Recent Results on Glucose–Insulin Predictions by Means of a State Observer for Time-Delay Systems” by Pasquale Palumbo et al. introduces a prediction model which in real time predicts the insulin concentration in blood which in turn is used in a control system. The method is tested in simulation...... EEG signals to predict upcoming hypoglycemic situations in real-time by employing artificial neural networks. The results of a 30-day long clinical study with the implanted device and the developed algorithm are presented. The chapter “Meta-Learning Based Blood Glucose Predictor for Diabetic......, but the insulin amount is chosen using factors that account for this expectation. The increasing availability of more accurate continuous blood glucose measurement (CGM) systems is attracting much interest to the possibilities of explicit prediction of future BG values. Against this background, in 2014 a two...

  18. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for the prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network using four different technical indicators, which are based on the raw data and the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is an algorithm with back propagation of the error. The main advantage of the developed system is the self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  19. New prediction methods for collaborative filtering

    Directory of Open Access Journals (Sweden)

    Hasan BULUT

    2016-05-01

    Full Text Available Companies, in particular e-commerce companies, aim to increase customer satisfaction, and hence their profits, using recommender systems. Recommender systems are widely used nowadays and provide strategic advantages to the companies that use them. These systems consist of different stages. In the first stage, the similarities between the active user and other users are computed using the user-product ratings matrix. Then, the neighbors of the active user are found from these similarities. In the prediction calculation stage, the similarities computed in the first stage are used to generate the weight vector of the closest neighbors; each neighbor affects the prediction value by the corresponding value of the weight vector. In this study, we developed two new methods for the prediction calculation stage, which is the last stage of collaborative filtering. The performance of these methods is measured with evaluation metrics used in the literature and compared with other studies in this field.
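
    The baseline prediction-calculation stage described above — a similarity-weighted average over the active user's neighbors — can be sketched as follows (the standard formulation; the paper's two new methods are not reproduced here, and the example numbers are invented):

    ```python
    import math

    def cosine_sim(u, v):
        """Cosine similarity between two users' rating vectors
        (use 0 for unrated items)."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    def predict_rating(sims, ratings):
        """Weighted-average prediction: each neighbor contributes its rating
        for the target item, weighted by its similarity to the active user."""
        den = sum(abs(s) for s in sims)
        return sum(s * r for s, r in zip(sims, ratings)) / den if den else 0.0

    # Three neighbors with similarities 0.9, 0.5, 0.1 rated the item 5, 3, 1.
    print(round(predict_rating([0.9, 0.5, 0.1], [5, 3, 1]), 2))  # → 4.07
    ```

    Variations on this stage — how the weight vector is formed from the similarities — are exactly where methods like the two proposed in the paper differ.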

  20. A method for predicting monthly rainfall patterns

    International Nuclear Information System (INIS)

    Njau, E.C.

    1987-11-01

    A brief survey is made of previous methods that have been used to predict rainfall trends or drought spells in different parts of the earth. The basic methodologies or theoretical strategies used in these methods are compared with the contents of a recent theory of Sun-Weather/Climate links (Njau, 1985a; 1985b; 1986; 1987a; 1987b; 1987c) which points towards the possibility of practical climatic predictions. It is shown that not only is the theoretical basis of each of these methodologies or strategies fully incorporated into the above-named theory, but also that this theory may be used to develop a technique by which future monthly rainfall patterns can be predicted in finer detail. We describe the latter technique and then illustrate its workability by means of predictions made of monthly rainfall patterns at some East African meteorological stations. (author). 43 refs, 11 figs, 2 tabs

  1. Predicting Metabolic Syndrome Using the Random Forest Method

    Directory of Open Access Journals (Sweden)

    Apilak Worachartcheewan

    2015-01-01

    Full Text Available Aims. This study proposes a computational method for determining the prevalence of metabolic syndrome (MS) and for predicting its occurrence using the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) criteria. The Random Forest (RF) method is also applied to identify significant health parameters. Materials and Methods. We used data from 5,646 adults aged 18–78 years residing in Bangkok who had received an annual health check-up in 2008. MS was identified using the NCEP ATP III criteria. The RF method was applied to predict the occurrence of MS and to identify important health parameters surrounding this disorder. Results. The overall prevalence of MS was 23.70% (34.32% for males and 17.74% for females). RF accuracy for predicting MS in an adult Thai population was 98.11%. Further, based on RF, triglyceride levels were the most important health parameter associated with MS. Conclusion. RF was shown to predict MS in an adult Thai population with an accuracy >98%, and triglyceride levels were identified as the most informative variable associated with MS. Therefore, using RF to predict MS may be potentially beneficial in identifying MS status and in preventing the development of diabetes mellitus and cardiovascular diseases.
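
    The NCEP ATP III rule used to label the data can itself be written as a short function. The sketch below uses the commonly cited ATP III cut-offs (these are our assumption, not values taken from the paper; in particular, waist-circumference cut-offs vary by population, and some versions use a fasting-glucose threshold of 100 rather than 110 mg/dL):

    ```python
    def has_metabolic_syndrome(waist_cm, tg, hdl, sbp, dbp, glucose, male):
        """NCEP ATP III rule: metabolic syndrome if >= 3 of 5 criteria are met.
        Thresholds are the commonly cited ones; consult the guideline itself."""
        criteria = [
            waist_cm > (102 if male else 88),   # central obesity, cm
            tg >= 150,                          # triglycerides, mg/dL
            hdl < (40 if male else 50),         # low HDL cholesterol, mg/dL
            sbp >= 130 or dbp >= 85,            # elevated blood pressure, mmHg
            glucose >= 110,                     # fasting glucose, mg/dL
        ]
        return sum(criteria) >= 3

    # Invented example: 4 of 5 criteria met, so MS-positive.
    print(has_metabolic_syndrome(105, 180, 35, 135, 80, 95, male=True))  # → True
    ```

    Labels produced by such a rule are what the random forest is trained to reproduce from the raw health-check parameters, which is why triglycerides (one of the five criteria) unsurprisingly ranks as the most informative variable.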

  2. An Approximate Method for Pitch-Damping Prediction

    National Research Council Canada - National Science Library

    Danberg, James

    2003-01-01

    ...) method for predicting the pitch-damping coefficients has been employed. The CFD method provides important details necessary to derive the correlation functions that are unavailable from the current experimental database...

  3. Epitope prediction methods

    DEFF Research Database (Denmark)

    Karosiene, Edita

    Analysis. The chapter provides detailed explanations on how to use different methods for T cell epitope discovery research, explaining how input should be given as well as how to interpret the output. In the last chapter, I present the results of a bioinformatics analysis of epitopes from the yellow fever...... peptide-MHC interactions. Furthermore, using yellow fever virus epitopes, we demonstrated the power of the %Rank score when compared with the binding affinity score of MHC prediction methods, suggesting that this score should be considered to be used for selecting potential T cell epitopes. In summary...... immune responses. Therefore, it is of great importance to be able to identify peptides that bind to MHC molecules, in order to understand the nature of immune responses and discover T cell epitopes useful for designing new vaccines and immunotherapies. MHC molecules in humans, referred to as human...

  4. Connecting clinical and actuarial prediction with rule-based methods.

    Science.gov (United States)

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
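
    The sequential, few-cue style of decision tool described above can be sketched as a fast and frugal tree (the cues, thresholds, and labels below are invented for illustration and are not the rules RuleFit derived from the Penninx et al. data):

    ```python
    # Hypothetical fast-and-frugal tree: cues are checked one at a time, and a
    # positive test exits immediately, so later cues often never get evaluated.

    def fast_frugal_tree(cues, patient):
        """cues: list of (name, test, exit_label); falls through to a default."""
        for name, test, exit_label in cues:
            if test(patient):
                return exit_label
        return "favorable"  # default exit at the final leaf

    tree = [
        ("high baseline severity", lambda p: p["severity"] > 20, "poor course"),
        ("long symptom duration", lambda p: p["duration_months"] > 24, "poor course"),
    ]

    # This patient exits at the first cue; the second is never evaluated.
    print(fast_frugal_tree(tree, {"severity": 25, "duration_months": 3}))
    ```

    This early-exit behavior is exactly why such trees need fewer cue evaluations per prediction than a main effects model, which always requires every predictor.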

  5. FREEZING AND THAWING TIME PREDICTION METHODS OF FOODS II: NUMERICAL METHODS

    Directory of Open Access Journals (Sweden)

    Yahya TÜLEK

    1999-03-01

    Full Text Available Freezing is an excellent method for the preservation of foods. If the freezing, thawing, and frozen-storage processes are carried out correctly, the original characteristics of the foods can remain almost unchanged over extended periods of time. It is very important to determine the freezing and thawing times of foods, as they strongly influence both the quality of the food material and the productivity and economy of the process. A simple and practical mathematical model should require only a small number of process parameters and physical properties in its calculations, but it is difficult to combine all of these qualities in one prediction method. For this reason, various freezing and thawing time prediction methods have been proposed in the literature, and research is ongoing.
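
One classical closed-form prediction method of the kind surveyed here is Plank's equation, which estimates freezing time from a handful of thermal properties. A sketch for an infinite slab, with illustrative (not measured) property values:

```python
# Plank's equation, a classical closed-form freezing-time predictor,
# sketched for an infinite slab (shape factors P = 1/2, R = 1/8).
# The property values in the example are illustrative, not measured data.

def plank_freezing_time(rho, latent_heat, t_freeze, t_medium,
                        thickness, h, k, P=0.5, R=0.125):
    """Freezing time [s] of a slab.

    rho          density of the food [kg/m^3]
    latent_heat  latent heat of fusion [J/kg]
    t_freeze     initial freezing temperature [deg C]
    t_medium     freezing-medium temperature [deg C]
    thickness    full slab thickness [m]
    h            surface heat-transfer coefficient [W/(m^2 K)]
    k            thermal conductivity of the frozen layer [W/(m K)]
    """
    dT = t_freeze - t_medium
    return rho * latent_heat / dT * (P * thickness / h + R * thickness ** 2 / k)

# Hypothetical lean-meat slab, 5 cm thick, frozen in a -30 deg C air blast:
t_freeze_s = plank_freezing_time(rho=1050.0, latent_heat=250e3, t_freeze=-1.0,
                                 t_medium=-30.0, thickness=0.05, h=25.0, k=1.5)
```

For these values the predicted time is on the order of three hours; doubling the thickness more than doubles the time because of the quadratic conduction term.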

  6. Investigation into Methods for Predicting Connection Temperatures

    Directory of Open Access Journals (Sweden)

    K. Anderson

    2009-01-01

    Full Text Available The mechanical response of connections in fire is largely based on material strength degradation and the interactions between the various components of the connection. In order to predict connection performance in fire, temperature profiles must initially be established in order to evaluate the material strength degradation over time. This paper examines two current methods for predicting connection temperatures: The percentage method, where connection temperatures are calculated as a percentage of the adjacent beam lower-flange, mid-span temperatures; and the lumped capacitance method, based on the lumped mass of the connection. Results from the percentage method do not correlate well with experimental results, whereas the lumped capacitance method shows much better agreement with average connection temperatures. A 3D finite element heat transfer model was also created in Abaqus, and showed good correlation with experimental results. 
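
The lumped capacitance method described above can be sketched in a few lines: the whole connection is treated as a single mass whose temperature relaxes toward the fire gas temperature. This is an illustrative explicit time-stepping scheme with made-up parameter values, not the paper's calibrated model, and it assumes a constant gas temperature rather than a real fire curve:

```python
# Explicit time-stepping sketch of the lumped capacitance method: the whole
# connection is one lumped mass, valid when the Biot number h*L/k is small.
# Parameter values are illustrative; a constant gas temperature stands in
# for a real fire curve.

def lumped_connection_temperature(t_gas, t0, h, area, mass, c_p, dt, steps):
    """Integrate dT/dt = h*A*(T_gas - T) / (m*c_p) and return the history."""
    temps = [t0]
    for _ in range(steps):
        t = temps[-1]
        temps.append(t + h * area * (t_gas - t) / (mass * c_p) * dt)
    return temps

# Ten minutes of heating for a hypothetical 10 kg steel connection:
history = lumped_connection_temperature(t_gas=800.0, t0=20.0, h=25.0,
                                        area=0.1, mass=10.0, c_p=600.0,
                                        dt=1.0, steps=600)
```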

  7. Different protein-protein interface patterns predicted by different machine learning methods.

    Science.gov (United States)

    Wang, Wei; Yang, Yongxiao; Yin, Jianxin; Gong, Xinqi

    2017-11-22

    Different types of protein-protein interactions produce different protein-protein interface patterns, and different machine learning methods are suited to different types of data. Does it follow that different machine learning methods prefer different interface patterns for prediction? Here, four machine learning methods were employed to predict protein-protein interface residue pairs on different interface patterns. The performances of the methods differed across protein types, suggesting that different machine learning methods tend to predict different protein-protein interface patterns. We used ANOVA and variable selection to support this result. Our proposed method, which takes advantage of the strengths of the individual methods, also achieved good prediction results compared with the single methods. Beyond the prediction of protein-protein interactions, this idea can be extended to other research areas such as protein structure prediction and design.

  8. Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method

    Energy Technology Data Exchange (ETDEWEB)

    Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng; Pota, Hemanshu; Gadh, Rajit

    2016-05-02

    This paper proposes a new hybrid method for super short-term solar power prediction. Solar output power usually has complex, nonstationary, and nonlinear characteristics due to the intermittent and time-varying behavior of solar radiance. In addition, solar power dynamics are fast and nearly inertialess. An accurate super short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one-step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates the discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), with the RNN architecture based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform decomposes the solar power time series into a set of better-behaved constitutive series for prediction. The ARMA model is employed as a linear predictor, while NARX is used as a nonlinear pattern-recognition tool to estimate and compensate for the error of the wavelet-ARMA prediction. The proposed method is applied to data captured from UCLA solar PV panels, and the results are compared with some of the most common and recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in prediction precision.

  9. A highly accurate predictive-adaptive method for lithium-ion battery remaining discharge energy prediction in electric vehicle applications

    International Nuclear Information System (INIS)

    Liu, Guangming; Ouyang, Minggao; Lu, Languang; Li, Jianqiu; Hua, Jianfeng

    2015-01-01

    Highlights: • An energy prediction (EP) method is introduced for battery E_RDE determination. • EP determines E_RDE through coupled prediction of future states, parameters, and output. • The PAEP combines parameter adaptation and prediction to update model parameters. • The PAEP provides improved E_RDE accuracy compared with DC and other EP methods. - Abstract: In order to estimate the remaining driving range (RDR) of electric vehicles, the remaining discharge energy (E_RDE) of the applied battery system needs to be precisely predicted. Strongly affected by the load profiles, the available E_RDE varies largely in real-world applications and requires specific determination. However, the commonly used direct calculation (DC) method may introduce energy prediction errors by relating E_RDE directly to the current state of charge (SOC). To enhance E_RDE accuracy, this paper presents a battery energy prediction (EP) method based on predictive control theory, in which a coupled prediction of future battery state variation, battery model parameter change, and voltage response is carried out over the E_RDE prediction horizon, and E_RDE is subsequently accumulated and optimized in real time. Three EP approaches with different model-parameter updating routes are introduced, and the predictive-adaptive energy prediction (PAEP) method, which combines real-time parameter identification with future parameter prediction, offers the best potential. Based on a large-format lithium-ion battery, the performance of the different E_RDE calculation methods is compared under various dynamic profiles. Results imply that the EP methods provide much better accuracy than the traditional DC method; the PAEP reduces the E_RDE error by more than 90% and keeps the relative energy prediction error under 2%, making it a proper choice for online E_RDE prediction. The correlation of SOC estimation and E_RDE calculation is then discussed to illustrate the
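
The contrast between the direct calculation (DC) and energy prediction (EP) styles can be sketched as follows. The OCV curve, internal resistance, and all numbers are illustrative stand-ins, and the sketch omits the online parameter adaptation that distinguishes the PAEP; it only shows why accumulating predicted terminal voltage times current over the discharge horizon differs from scaling nominal energy by SOC:

```python
# Sketch contrasting DC- and EP-style E_RDE estimates. The OCV curve,
# internal resistance, and all numbers are illustrative; the real PAEP
# also adapts model parameters online, which is omitted here.

def e_rde_direct(soc, capacity_ah, nominal_voltage):
    """DC style: remaining energy [Wh] from the current SOC alone."""
    return soc * capacity_ah * nominal_voltage

def ocv(soc):
    """Hypothetical linear open-circuit-voltage curve [V]."""
    return 3.0 + 0.7 * soc

def e_rde_predictive(soc, capacity_ah, r_int, current_profile, dt_h):
    """EP style: accumulate predicted terminal voltage * current over the
    discharge horizon, updating the SOC at every step."""
    energy_wh = 0.0
    for i in current_profile:            # discharge current [A]
        if soc <= 0.0:
            break
        v_term = ocv(soc) - r_int * i    # simple OCV-R battery model
        energy_wh += v_term * i * dt_h
        soc -= i * dt_h / capacity_ah
    return energy_wh

e_dc = e_rde_direct(0.5, 10.0, 3.35)
e_ep = e_rde_predictive(0.5, 10.0, 0.01, [10.0] * 10, 0.05)
```

Because the predictive estimate accounts for the voltage drop under load, it comes out lower than the direct SOC-based estimate for the same state of charge.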

  10. Improvement of gas entrainment prediction method. Introduction of surface tension effect

    International Nuclear Information System (INIS)

    Ito, Kei; Sakai, Takaaki; Ohshima, Hiroyuki; Uchibori, Akihiro; Eguchi, Yuzuru; Monji, Hideaki; Xu, Yongze

    2010-01-01

    A gas entrainment (GE) prediction method has been developed to establish design criteria for the large-scale sodium-cooled fast reactor (JSFR) systems. The prototype of the GE prediction method was already confirmed to give reasonable gas core lengths by simple calculation procedures. However, for simplification, the surface tension effects were neglected. In this paper, the evaluation accuracy of gas core lengths is improved by introducing the surface tension effects into the prototype GE prediction method. First, the mechanical balance between gravitational, centrifugal, and surface tension forces is considered. Then, the shape of a gas core tip is approximated by a quadratic function. Finally, using the approximated gas core shape, the authors determine the gas core length satisfying the mechanical balance. This improved GE prediction method is validated by analyzing the gas core lengths observed in simple experiments. Results show that the analytical gas core lengths calculated by the improved GE prediction method become shorter in comparison to the prototype GE prediction method, and are in good agreement with the experimental data. In addition, the experimental data under different temperature and surfactant concentration conditions are reproduced by the improved GE prediction method. (author)

  11. Method for Predicting Thermal Buckling in Rails

    Science.gov (United States)

    2018-01-01

    A method is proposed herein for predicting the onset of thermal buckling in rails in such a way as to provide a means of avoiding this type of potentially devastating failure. The method consists of the development of a thermomechanical model of rail...

  12. Motor degradation prediction methods

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, J.R.; Kelly, J.F.; Delzingaro, M.J.

    1996-12-01

    Motor Operated Valve (MOV) squirrel cage AC motor rotors are susceptible to degradation under certain conditions. Premature failure can result due to high humidity/temperature environments, high running load conditions, extended periods at locked rotor conditions (i.e. > 15 seconds) or exceeding the motor's duty cycle by frequent starts or multiple valve stroking. Exposure to high heat and moisture due to packing leaks, pressure seal ring leakage or other causes can significantly accelerate the degradation. ComEd and Liberty Technologies have worked together to provide and validate a non-intrusive method using motor power diagnostics to evaluate MOV rotor condition and predict failure. These techniques have provided a quick, low radiation dose method to evaluate inaccessible motors, identify degradation and allow scheduled replacement of motors prior to catastrophic failures.

  13. Motor degradation prediction methods

    International Nuclear Information System (INIS)

    Arnold, J.R.; Kelly, J.F.; Delzingaro, M.J.

    1996-01-01

    Motor Operated Valve (MOV) squirrel cage AC motor rotors are susceptible to degradation under certain conditions. Premature failure can result due to high humidity/temperature environments, high running load conditions, extended periods at locked rotor conditions (i.e. > 15 seconds) or exceeding the motor's duty cycle by frequent starts or multiple valve stroking. Exposure to high heat and moisture due to packing leaks, pressure seal ring leakage or other causes can significantly accelerate the degradation. ComEd and Liberty Technologies have worked together to provide and validate a non-intrusive method using motor power diagnostics to evaluate MOV rotor condition and predict failure. These techniques have provided a quick, low radiation dose method to evaluate inaccessible motors, identify degradation and allow scheduled replacement of motors prior to catastrophic failures

  14. Evaluation and comparison of mammalian subcellular localization prediction methods

    Directory of Open Access Journals (Sweden)

    Fink J Lynn

    2006-12-01

    Full Text Available Abstract Background Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole genome sequencing projects have been finished and many resulting protein sequences are still lacking detailed functional information. In order to address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently based on the sequences that are presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER, peroxisome, and lysosome. The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE with a bias towards the subcellular localization underrepresented in SwissProt was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion No individual method had a sufficient level of sensitivity across both evaluation sets that would enable reliable application to hypothetical proteins. All methods showed lower performance on the LOCATE
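
The per-method scores used in this comparison can be computed from confusion counts. A minimal sketch; the random-chance baseline shown is one simple choice, and the paper's exact formulation may differ:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for one subcellular location, from confusion counts."""
    return tp / (tp + fn), tn / (tn + fp)

def random_baseline(assign_prob):
    """Expected scores of a predictor that assigns the location at random
    with probability assign_prob -- one simple chance baseline; the paper's
    exact formulation may differ."""
    return assign_prob, 1.0 - assign_prob
```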

  15. Performance prediction method for a multi-stage Knudsen pump

    Science.gov (United States)

    Kugimoto, K.; Hirota, Y.; Kizaki, Y.; Yamaguchi, H.; Niimi, T.

    2017-12-01

    In this study, a novel method to predict the performance of a multi-stage Knudsen pump is proposed. The performance prediction is carried out numerically in two steps, with the assistance of a simple experimental result. In the first step, the performance of a single-stage Knudsen pump was measured experimentally under various pressure conditions, and the relationship of the mass flow rate to the average pressure between the inlet and outlet of the pump and the pressure difference between them was obtained. In the second step, the performance of a multi-stage pump was analyzed with a one-dimensional model derived from the mass conservation law. The performances predicted by the 1D model for 1-stage, 2-stage, 3-stage, and 4-stage pumps were validated against the experimental results for the corresponding number of stages. It was concluded that the proposed prediction method works properly.
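
The two-step procedure can be illustrated with a toy one-dimensional series model: a single-stage characteristic (here a hypothetical linear fit standing in for the measured map) is inverted stage by stage, and the common mass flow rate of the series is found by bisection on the outlet pressure:

```python
# One-dimensional series model of a multi-stage Knudsen pump, in the spirit
# of the paper's second step. The single-stage characteristic below is a
# hypothetical linear fit (thermal-creep term minus back-flow term) standing
# in for the experimentally measured map; pressures in Pa, mdot in kg/s.

A, B = 1.0e-9, 2.0e-8   # illustrative fit coefficients

def stage_dp(p_stage, mdot):
    """Invert mdot = A*p - B*dp for the pressure rise of one stage.
    The average pressure is approximated by the stage inlet pressure."""
    return (A * p_stage - mdot) / B

def solve_multistage(p_in, p_out, n_stages, iters=200):
    """Common series mass flow rate by bisection: march pressures from p_in
    for a candidate mdot and compare the outlet with the target p_out."""
    def outlet(mdot):
        p = p_in
        for _ in range(n_stages):
            p += stage_dp(p, mdot)
        return p
    lo, hi = 0.0, A * p_in        # bounds: stalled flow .. zero-pressure-rise flow
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if outlet(mid) > p_out:   # pressure rise too large -> mdot too low
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```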

  16. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.

  17. Ensemble method for dengue prediction.

    Science.gov (United States)

    Buczak, Anna L; Baugher, Benjamin; Moniz, Linda J; Bagley, Thomas; Babin, Steven M; Guven, Erhan

    2018-01-01

    In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for transmission season and peak height for Iquitos, Peru.
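
Of the component-model families listed, the additive seasonal Holt-Winters model is the most self-contained to sketch. A minimal one-step-ahead implementation with illustrative (untuned) smoothing constants, not the Challenge entry itself:

```python
# Minimal additive Holt-Winters one-step-ahead forecaster, a sketch of the
# second component-model family described above (no wavelet smoothing;
# smoothing constants are illustrative, not tuned as in the Challenge entry).

def holt_winters_additive(y, season_len, alpha=0.3, beta=0.05, gamma=0.2):
    """Return one-step-ahead forecasts for y[season_len:], plus the forecast
    for the step after the series ends. Requires len(y) >= 2 * season_len
    for the simple level/trend/seasonal initialization used here."""
    m = season_len
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / m ** 2
    season = [y[i] - level for i in range(m)]
    forecasts = []
    for t in range(m, len(y)):
        forecasts.append(level + trend + season[t % m])
        prev_level = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]
    forecasts.append(level + trend + season[len(y) % m])
    return forecasts
```

On a noise-free series with a linear trend and a fixed seasonal pattern, the forecasts converge toward the true next value as the smoothing updates absorb the initialization error.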

  18. Ensemble method for dengue prediction.

    Directory of Open Access Journals (Sweden)

    Anna L Buczak

    Full Text Available In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for transmission season and peak height for Iquitos, Peru.

  19. Development of motion image prediction method using principal component analysis

    International Nuclear Information System (INIS)

    Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma

    2012-01-01

    Respiratory motion limits the accuracy of the area irradiated during lung cancer radiation therapy. Many methods have been introduced to minimize the irradiation of healthy tissue due to lung tumor motion. The purpose of this research is to develop an algorithm that improves image-guided radiation therapy by predicting motion images. We predict the motion images using principal component analysis (PCA) and the multi-channel singular spectral analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. The implementation of this method in real time is believed to be significant for higher-level tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)

  20. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail; Soufan, Othman; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.

    2016-01-01

    DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind can be accessed online at: http://www.cbrc.kaust.edu.sa/daspfind.

  1. An assessment on epitope prediction methods for protozoa genomes

    Directory of Open Access Journals (Sweden)

    Resende Daniela M

    2012-11-01

    Full Text Available Abstract Background Epitope prediction using computational methods represents one of the most promising approaches to vaccine development. Reduction of time and cost, together with the availability of completely sequenced genomes, are key points and highly motivating regarding the use of reverse vaccinology. Parasites of the genus Leishmania are widely spread and are the etiologic agents of leishmaniasis. Currently, there is no efficient vaccine against this pathogen and the drug treatment is highly toxic. The lack of sufficiently large datasets of experimentally validated parasite epitopes represents a serious limitation, especially for trypanosomatid genomes. In this work we highlight the predictive performances of several algorithms that were evaluated through the development of a MySQL database built with the purpose of: a) evaluating individual algorithms' prediction performances, and their combination, for CD8+ T cell epitopes, B-cell epitopes, and subcellular localization, by means of AUC (Area Under Curve) performance and a threshold-dependent method that employs a confusion matrix; b) integrating data from experimentally validated and in silico predicted epitopes; and c) integrating the subcellular localization predictions and experimental data. NetCTL, NetMHC, BepiPred, BCPred12, and AAP12 algorithms were used for in silico epitope prediction, and WoLF PSORT, Sigcleave, and TargetP for in silico subcellular localization prediction against trypanosomatid genomes. Results A database-driven epitope prediction method was developed with built-in functions that were capable of: a) removing experimental data redundancy; b) parsing algorithm predictions and storing experimentally validated and predicted data; and c) evaluating algorithm performances. Results show that better performance is achieved when the combined prediction is considered.
This is particularly true for B cell epitope predictors, where the combined prediction of AAP12 and BCPred12 reached an AUC value
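
The threshold-free AUC evaluation mentioned above can be computed directly from predictor scores via the Mann-Whitney statistic, without building a confusion matrix. A minimal sketch:

```python
def auc(scores_pos, scores_neg):
    """Threshold-free AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs the predictor ranks correctly, counting
    ties as half."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.0 means every true epitope outscores every non-epitope; 0.5 is chance level.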

  2. Fast Prediction Method for Steady-State Heat Convection

    KAUST Repository

    Wáng, Yì

    2012-03-14

    A reduced model by proper orthogonal decomposition (POD) and Galerkin projection methods for steady-state heat convection is established on a nonuniform grid. It was verified by thousands of examples that the results are in good agreement with the results obtained from the finite volume method. This model can also predict the cases where model parameters far exceed the sample scope. Moreover, the calculation time needed by the model is much shorter than that needed for the finite volume method. Thus, the nonuniform POD-Galerkin projection method exhibits high accuracy, good suitability, and fast computation. It has universal significance for accurate and fast prediction. Also, the methodology can be applied to more complex modeling in chemical engineering and technology, such as reaction and turbulence. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Link Prediction Methods and Their Accuracy for Different Social Networks and Network Metrics

    Directory of Open Access Journals (Sweden)

    Fei Gao

    2015-01-01

    Full Text Available Currently, we are experiencing a rapid growth of the number of social-based online systems. The availability of the vast amounts of data gathered in those systems brings new challenges that we face when trying to analyse it. One of the intensively researched topics is the prediction of social connections between users. Although a lot of effort has been made to develop new prediction approaches, the existing methods are not comprehensively analysed. In this paper we investigate the correlation between network metrics and accuracy of different prediction methods. We selected six time-stamped real-world social networks and ten most widely used link prediction methods. The results of the experiments show that the performance of some methods has a strong correlation with certain network metrics. We managed to distinguish “prediction friendly” networks, for which most of the prediction methods give good performance, as well as “prediction unfriendly” networks, for which most of the methods result in high prediction error. Correlation analysis between network metrics and prediction accuracy of prediction methods may form the basis of a metalearning system where based on network characteristics it will be able to recommend the right prediction method for a given network.
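
Two of the most widely used link prediction scores referred to above, common neighbours and Adamic-Adar, are easy to sketch on an undirected edge list:

```python
import math

# Two classical link prediction scores, computed on an undirected edge list.
# The toy graph in the usage below is made up for illustration.

def neighbors(edges):
    """Adjacency sets from an undirected edge list."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    return nbrs

def common_neighbors(nbrs, u, v):
    """Number of shared neighbours of u and v."""
    return len(nbrs[u] & nbrs[v])

def adamic_adar(nbrs, u, v):
    """Shared neighbours weighted by 1/log(degree): rare neighbours count
    more (degrees must exceed 1 for the logarithm to be positive)."""
    return sum(1.0 / math.log(len(nbrs[w])) for w in nbrs[u] & nbrs[v])
```

Candidate node pairs are then ranked by score, and the top-ranked non-edges are predicted as future links.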

  4. Predictive ability of machine learning methods for massive crop yield prediction

    Directory of Open Access Journals (Sweden)

    Alberto Gonzalez-Sanchez

    2014-04-01

    Full Text Available An important issue for agricultural planning purposes is accurate yield estimation for the numerous crops involved in the planning. Machine learning (ML) is an essential approach for achieving practical and effective solutions for this problem. Many comparisons of ML methods for yield prediction have been made, seeking the most accurate technique. Generally, the number of evaluated crops and techniques is too low and does not provide enough information for agricultural planning purposes. This paper compares the predictive accuracy of ML and linear regression techniques for crop yield prediction in ten crop datasets. Multiple linear regression, M5-Prime regression trees, perceptron multilayer neural networks, support vector regression, and k-nearest neighbor methods were ranked. Four accuracy metrics were used to validate the models: the root mean square error (RMSE), root relative square error (RRSE), normalized mean absolute error (MAE), and correlation factor (R). Real data from an irrigation zone of Mexico were used for building the models. Models were tested with samples of two consecutive years. The results show that the M5-Prime and k-nearest neighbor techniques obtain the lowest average RMSE errors (5.14 and 4.91), the lowest RRSE errors (79.46% and 79.78%), the lowest average MAE errors (18.12% and 19.42%), and the highest average correlation factors (0.41 and 0.42). Since M5-Prime achieves the largest number of crop yield models with the lowest errors, it is a very suitable tool for massive crop yield prediction in agricultural planning.
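
The four validation metrics can be sketched as follows; note that the definition of the normalized MAE used here (MAE divided by the mean of the actual values) is an assumption, as normalizations vary between papers:

```python
def regression_metrics(actual, predicted):
    """RMSE, RRSE, normalized MAE, and correlation factor R for one model.
    The MAE normalization (division by the mean of the actuals) is an
    assumed convention."""
    n = len(actual)
    mean_a = sum(actual) / n
    mean_p = sum(predicted) / n
    sq_err = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    rmse = (sq_err / n) ** 0.5
    rrse = (sq_err / sum((a - mean_a) ** 2 for a in actual)) ** 0.5
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n / mean_a
    cov = sum((a - mean_a) * (p - mean_p) for a, p in zip(actual, predicted))
    sd_a = sum((a - mean_a) ** 2 for a in actual) ** 0.5
    sd_p = sum((p - mean_p) ** 2 for p in predicted) ** 0.5
    return rmse, rrse, mae, cov / (sd_a * sd_p)
```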

  5. Predicting chaos in memristive oscillator via harmonic balance method.

    Science.gov (United States)

    Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai

    2012-12-01

    This paper studies the possible chaotic behaviors in a memristive oscillator with cubic nonlinearities via the harmonic balance method, also known as the describing function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system into a Lur'e model and present the prediction of the existence of chaotic behaviors. To ensure the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.

  6. A Method for Driving Route Predictions Based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Ning Ye

    2015-01-01

    Full Text Available We present a driving route prediction method based on the Hidden Markov Model (HMM). This method can accurately predict a vehicle's entire route as early in a trip's lifetime as possible, without requiring origins and destinations as input beforehand. Firstly, we propose the route recommendation system architecture, in which route predictions play an important role. Secondly, we define a road network model, normalize each driving route in the rectangular coordinate system, and build the HMM in preparation for route predictions, using a training-set extension method based on K-means++ and the add-one (Laplace) smoothing technique. Thirdly, we present the route prediction algorithm. Finally, experimental results demonstrating the effectiveness of the HMM-based route predictions are shown.
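
The add-one (Laplace) smoothing step for the transition probabilities can be sketched as follows; the segment identifiers and example routes are made up for illustration:

```python
# Sketch of the transition-probability part of an HMM route predictor:
# estimate P(next segment | current segment) from observed routes with
# add-one (Laplace) smoothing, then rank candidate next segments.
# Segment ids and routes are made up for illustration.

def transition_probs(routes, segments):
    """Smoothed transition matrix from a list of routes (segment sequences)."""
    counts = {s: {t: 0 for t in segments} for s in segments}
    for route in routes:
        for cur, nxt in zip(route, route[1:]):
            counts[cur][nxt] += 1
    probs = {}
    for s in segments:
        total = sum(counts[s].values()) + len(segments)  # add-one smoothing
        probs[s] = {t: (counts[s][t] + 1) / total for t in segments}
    return probs

def predict_next(probs, current):
    """Most probable next segment given the current one."""
    return max(probs[current], key=probs[current].get)
```

Smoothing guarantees every transition keeps a small nonzero probability, so routes never seen in the training set are not ruled out entirely.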

  7. Soft Computing Methods for Disulfide Connectivity Prediction.

    Science.gov (United States)

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.

  8. Comparison of four statistical and machine learning methods for crash severity prediction.

    Science.gov (United States)

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity, or the severity of crashes expected to occur in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, namely Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; development of a crash-costs-based approach for comparing crash severity prediction methods; and investigation of the effects of data clustering methods, namely K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States was obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated on the training/estimation subset, and the correct prediction rate for each crash severity level, the overall correct prediction rate and a proposed crash-costs-based accuracy measure were obtained for the validation subset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM had the next best performances and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC improved MNL and RF but weakened NNC. The overall correct prediction rate gave almost the exact opposite ranking to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method.
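
The abstract does not specify the crash-costs-based accuracy measure; the sketch below shows one plausible form of such a measure (the cost values and the crediting rule are assumptions): each crash is weighted by the cost of its true severity level, and a method is credited with the cost of the crashes it classifies correctly.

```python
def cost_weighted_accuracy(y_true, y_pred, costs):
    """Share of total crash cost attached to correctly predicted crashes.
    costs maps a severity level to its (hypothetical) monetary cost."""
    total = sum(costs[t] for t in y_true)
    credited = sum(costs[t] for t, p in zip(y_true, y_pred) if t == p)
    return credited / total

# Severity levels 0-2 with made-up costs; the one severe crash dominates
score = cost_weighted_accuracy([2, 0, 1], [2, 0, 0], {0: 1.0, 1: 10.0, 2: 100.0})
```

Under this weighting, misclassifying one severe crash hurts far more than misclassifying several minor ones, which is why the ranking can invert relative to the plain correct prediction rate.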

  9. Generic methods for aero-engine exhaust emission prediction

    NARCIS (Netherlands)

    Shakariyants, S.A.

    2008-01-01

    In the thesis, generic methods have been developed for aero-engine combustor performance, combustion chemistry, as well as airplane aerodynamics, airplane and engine performance. These methods specifically aim to support diverse emission prediction studies coupled with airplane and engine

  10. Supplementary Material for: DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-01-01

Abstract Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, which necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To date, many computational methods have been proposed for this purpose, but they suffer from a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, with 46.17% of these predictions being correct, and it achieves 49.22% correct single top-ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs for drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of the experimental verifications necessary in the process of drug discovery.
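
Scoring a drug–target pair by simple paths in a combined similarity/interaction graph, the general idea described above, can be sketched as follows. This is a hedged illustration, not the published DASPfind algorithm: the graph encoding, the edge weights and the sum-of-path-products scoring rule are assumptions.

```python
def path_scores(graph, source, target, max_len=3):
    """Score a node pair by summing, over all simple paths from source
    to target of length <= max_len, the product of edge weights.
    graph: {node: {neighbor: weight}}."""
    scores = []

    def dfs(node, seen, prod, depth):
        if node == target:
            scores.append(prod)
            return
        if depth == max_len:
            return
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                dfs(nxt, seen | {nxt}, prod * w, depth + 1)

    dfs(source, {source}, 1.0, 0)
    return sum(scores)

# Tiny made-up graph: drug-drug similarity edge d1-d2, known DTIs to t1
graph = {
    "d1": {"d2": 0.5, "t1": 1.0},
    "d2": {"d1": 0.5, "t1": 1.0},
    "t1": {"d1": 1.0, "d2": 1.0},
}
score = path_scores(graph, "d1", "t1", max_len=2)  # paths: d1-t1 and d1-d2-t1
```

The indirect path through the similar drug d2 still contributes to the score, which is why such path-based scoring can rank targets for drugs with few or no known interactions.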

  11. Assessment of a method for the prediction of mandibular rotation.

    Science.gov (United States)

    Lee, R S; Daniel, F J; Swartz, M; Baumrind, S; Korn, E L

    1987-05-01

    A new method to predict mandibular rotation developed by Skieller and co-workers on a sample of 21 implant subjects with extreme growth patterns has been tested against an alternative sample of 25 implant patients with generally similar mean values, but with less extreme facial patterns. The method, which had been highly successful in retrospectively predicting changes in the sample of extreme subjects, was much less successful in predicting individual patterns of mandibular rotation in the new, less extreme sample. The observation of a large difference in the strength of the predictions for these two samples, even though their mean values were quite similar, should serve to increase our awareness of the complexity of the problem of predicting growth patterns in individual cases.

  12. Drug-Target Interactions: Prediction Methods and Applications.

    Science.gov (United States)

    Anusuya, Shanmugam; Kesherwani, Manish; Priya, K Vishnu; Vimala, Antonydhason; Shanmugam, Gnanendra; Velmurugan, Devadasan; Gromiha, M Michael

    2018-01-01

Identifying the interactions between drugs and target proteins is a key step in drug discovery. This not only aids in understanding disease mechanisms, but also helps to identify unexpected therapeutic activity or adverse side effects of drugs. Hence, drug-target interaction prediction has become an essential tool in the field of drug repurposing. The availability of heterogeneous biological data on known drug-target interactions has enabled many researchers to develop various computational methods to decipher unknown drug-target interactions. This review provides an overview of these computational methods for predicting drug-target interactions, along with available webservers and databases for drug-target interactions. Further, the applicability of drug-target interactions for identifying lead compounds in various diseases is outlined.

  13. Real-time prediction of respiratory motion based on local regression methods

    International Nuclear Information System (INIS)

    Ruan, D; Fessler, J A; Balter, J M

    2007-01-01

Recent developments in modulation techniques enable conformal delivery of radiation doses to small, localized target volumes. One of the challenges in using these techniques is real-time tracking and prediction of target motion, which is necessary to accommodate system latencies. For image-guided-radiotherapy systems, it is also desirable to minimize sampling rates to reduce imaging dose. This study focuses on predicting respiratory motion, which can significantly affect lung tumours. Predicting respiratory motion in real time is challenging, due to the complexity of breathing patterns and the many sources of variability. We propose a prediction method based on local regression. There are three major ingredients in this approach: (1) forming an augmented state space to capture system dynamics, (2) local regression in the augmented space to train the predictor from previous observation data, exploiting the semi-periodicity of respiratory motion, and (3) local weighting adjustment to incorporate fading temporal correlations. To evaluate prediction accuracy, we computed the root mean square error between predicted and observed tumour positions for ten patients. For comparison, we also applied commonly used predictive methods, namely linear prediction, neural networks and Kalman filtering, to the same data. The proposed method reduced the prediction error for all imaging rates and latency lengths, particularly for long prediction lengths.
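
The augmented-state idea in ingredients (1)-(2) can be illustrated with a minimal local predictor: embed the signal in a delay space, find past states similar to the current one, and average the values that followed them. This is a sketch of the general approach, not the authors' algorithm; the embedding dimension, neighbour count and inverse-distance weighting are arbitrary illustrative choices.

```python
import numpy as np

def predict_next(signal, dim=3, k=5):
    """One-step-ahead prediction by local regression in a delay-embedded
    (augmented) state space."""
    x = np.asarray(signal, dtype=float)
    # Each row is an augmented state (x[t-dim+1], ..., x[t])
    states = np.array([x[i:i + dim] for i in range(len(x) - dim)])
    targets = x[dim:]                      # value that followed each state
    query = x[-dim:]                       # current augmented state
    d = np.linalg.norm(states - query, axis=1)
    idx = np.argsort(d)[:k]                # k nearest past states
    w = 1.0 / (d[idx] + 1e-9)              # closer neighbours weigh more
    return float(np.dot(w, targets[idx]) / w.sum())

# Semi-periodic toy "breathing" trace
signal = np.sin(0.1 * np.arange(200))
pred = predict_next(signal)
```

Because the trace is semi-periodic, states from earlier breathing cycles resemble the current state, and their successors are good guesses for the next sample.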

  14. Methods for early prediction of lactation flow in Holstein heifers

    Directory of Open Access Journals (Sweden)

    Vesna Gantner

    2010-12-01

Full Text Available The aim of this research was to define methods for early prediction (based on the first milk control record) of lactation flow in Holstein heifers, and to choose the optimal one in terms of prediction fit and application simplicity. A total of 304,569 daily yield records, automatically recorded on 1,136 first-lactation Holstein cows from March 2003 to August 2008, were included in the analysis. According to the test date, calving date, age at first calving, lactation stage at which the first milk control occurred, and the average milk yield in the first 25 (T1) and 25th-45th (T2) lactation days, measuring month-calving month-age-production-time-period subgroups were formed. The parameters of the analysed nonlinear and linear methods were estimated for each defined subgroup. As model evaluation measures, the adjusted coefficient of determination and the average and standard deviation of the error were used. In terms of total variance explained (R2adj), the nonlinear Wood's method was superior to the linear ones (Wilmink's, Ali-Schaeffer's and Guo-Swalve's methods) in both time-period subgroups (T1: 97.5% of variability explained; T2: 98.1% of variability explained). Regarding the evaluation measures based on prediction error (eavg±eSD), the lowest average error of daily milk yield prediction (less than 0.005 kg/day), as well as of lactation milk yield prediction (less than 50 kg/lactation in the T1 subgroup and less than 30 kg/lactation in the T2 subgroup), was obtained when Wood's nonlinear prediction method was applied. The obtained results indicate that the estimated Wood's regression parameters could be used in routine work for early prediction of Holstein heifers' lactation flow.
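
Wood's method models daily yield as y(t) = a·t^b·e^(−c·t). A minimal sketch of fitting it by log-linear least squares is shown below; the linearization is a standard textbook device, assumed here rather than taken from the study, which estimates parameters per subgroup.

```python
import numpy as np

def wood(t, a, b, c):
    """Wood's lactation curve: daily yield at day t in milk."""
    return a * t**b * np.exp(-c * t)

def fit_wood(days, yields):
    """Fit Wood's curve by log-linear least squares:
    ln y = ln a + b ln t - c t."""
    t = np.asarray(days, float)
    y = np.asarray(yields, float)
    A = np.column_stack([np.ones_like(t), np.log(t), -t])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]

# Synthetic check: recover known parameters from noise-free yields
days = np.arange(5, 306, 10)
a, b, c = fit_wood(days, wood(days, 15.0, 0.2, 0.004))
```

With noise-free synthetic yields the linearized fit recovers the generating parameters exactly, which makes it a convenient sanity check before applying the model to real milk records.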

  15. An ensemble method for predicting subnuclear localizations from primary protein structures.

    Directory of Open Access Journals (Sweden)

    Guo Sheng Han

Full Text Available BACKGROUND: Predicting protein subnuclear localization is a challenging problem. Some previous works based on non-sequence information, including Gene Ontology annotations and kernel fusion, have respective limitations. The aim of this work is twofold: one is to propose a novel individual feature extraction method; the other is to develop an ensemble method to improve prediction performance using the comprehensive information represented in the high-dimensional feature vector obtained by 11 feature extraction methods. METHODOLOGY/PRINCIPAL FINDINGS: A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It only considers those feature extraction methods based on amino acid classifications and physicochemical properties. In order to speed up our system, an automatic search method for the kernel parameter is used. The prediction performance of our method is evaluated on four datasets: the Lei dataset, a multi-localization dataset, the SNL9 dataset and a new independent dataset. The overall prediction accuracy for 6 localizations on the Lei dataset is 75.2% and that for 9 localizations on the SNL9 dataset is 72.1% in leave-one-out cross validation; accuracies are 71.7% for the multi-localization dataset and 69.8% for the new independent dataset, respectively. Comparisons with existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of our classification model are further confirmed by permutation analysis. CONCLUSIONS: It can be concluded that our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method.

  16. Available Prediction Methods for Corrosion under Insulation (CUI): A Review

    Directory of Open Access Journals (Sweden)

    Burhani Nurul Rawaida Ain

    2014-07-01

Full Text Available Corrosion under insulation (CUI) is an increasingly important issue for piping in industry, especially in petrochemical and chemical plants, because it can lead to unexpected catastrophic failures. Therefore, attention to the maintenance and prediction of CUI occurrence, particularly of corrosion rates, has grown in recent years. In this study, a literature review was carried out on determining corrosion rates between the external surface of piping and its insulation using various prediction models and methods. The available prediction models and methods are presented as references for future research. However, most of the available prediction methods are based only on local industrial data, which may differ according to plant location, environment, temperature and many other factors that affect the reliability of the developed models. Thus, such models or methods are more reliable when supported by laboratory testing or simulation that includes the factors promoting CUI, such as environment temperature, insulation type and operating temperature.

  17. Prediction of Human Phenotype Ontology terms by means of hierarchical ensemble methods.

    Science.gov (United States)

    Notaro, Marco; Schubach, Max; Robinson, Peter N; Valentini, Giorgio

    2017-10-12

The prediction of human gene-abnormal phenotype associations is a fundamental step toward the discovery of novel genes associated with human disorders, especially when no genes are known to be associated with a specific disease. In this context the Human Phenotype Ontology (HPO) provides a standard categorization of the abnormalities associated with human diseases. While the problem of the prediction of gene-disease associations has been widely investigated, the related problem of gene-phenotypic feature (i.e., HPO term) associations has been largely overlooked, even though for most human genes no HPO term associations are known, and despite the increasing application of the HPO to relevant medical problems. Moreover, most of the methods proposed in the literature are not able to capture the hierarchical relationships between HPO terms, resulting in inconsistent and relatively inaccurate predictions. We present two hierarchical ensemble methods that we formally prove to provide biologically consistent predictions according to the hierarchical structure of the HPO. The modular structure of the proposed methods, which consists of a "flat" learning first step and a hierarchical combination of the predictions in the second step, allows the predictions of virtually any flat learning method to be enhanced. The experimental results show that hierarchical ensemble methods are able to predict novel associations between genes and abnormal phenotypes with results that are competitive with state-of-the-art algorithms and with a significant reduction in computational complexity. Hierarchical ensembles are efficient computational methods that guarantee biologically meaningful predictions obeying the true path rule, and they can be used as a tool to improve and make consistent the HPO term predictions of virtually any flat learning method. The implementation of the proposed methods is available as an R package from the CRAN repository.

  18. Short-term prediction method of wind speed series based on fractal interpolation

    International Nuclear Information System (INIS)

    Xiu, Chunbo; Wang, Tiantian; Tian, Meng; Li, Yanqing; Cheng, Yi

    2014-01-01

Highlights: • An improved fractal interpolation prediction method is proposed. • The chaos optimization algorithm is used to obtain the iterated function system. • Fractal extrapolated interpolation prediction of wind speed series is performed. - Abstract: In order to improve prediction performance for wind speed series, rescaled range analysis is used to analyze the fractal characteristics of the series. An improved fractal interpolation prediction method is proposed to predict wind speed series whose Hurst exponents are close to 1. An optimization function is designed, composed of the interpolation error and constraint terms on the vertical scaling factors of the fractal interpolation iterated function system. The chaos optimization algorithm is used to optimize this function to resolve the optimal vertical scaling factors. Exploiting self-similarity and scale invariance, fractal extrapolated interpolation prediction can be performed by extending the fractal characteristic from an internal interval to an external interval. Simulation results show that the fractal interpolation prediction method obtains better prediction results than other methods for wind speed series with fractal characteristics, and that the prediction performance of the proposed method can be improved further because the fractal characteristic of its iterated function system is similar to that of the predicted wind speed series.
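
The rescaled range analysis used above to test whether a series has a Hurst exponent close to 1 can be sketched as follows; the window sizes and the averaging over non-overlapping windows are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64)):
    """Estimate the Hurst exponent by rescaled range (R/S) analysis:
    the slope of log(R/S) versus log(window size)."""
    x = np.asarray(series, float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())      # cumulative deviations
            r = z.max() - z.min()            # range of the deviations
            s = w.std()                      # window standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]   # slope = Hurst exponent

# A strongly trending (persistent) series should give H close to 1
h = hurst_rs(np.arange(512.0))
```

Series with estimated H near 1 are the persistent, fractal-looking ones for which the paper's fractal interpolation predictor is intended.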

  19. In silico toxicology: computational methods for the prediction of chemical toxicity

    KAUST Repository

    Raies, Arwa B.; Bajic, Vladimir B.

    2016-01-01

Determining the toxicity of chemicals is necessary to identify their harmful effects on humans, animals, plants, or the environment. It is also one of the main steps in drug design. Animal models have been used for a long time for toxicity testing. However, in vivo animal tests are constrained by time, ethical considerations, and financial burden. Therefore, computational methods for estimating the toxicity of chemicals are considered useful. In silico toxicology is one type of toxicity assessment that uses computational methods to analyze, simulate, visualize, or predict the toxicity of chemicals. In silico toxicology aims to complement existing toxicity tests to predict toxicity, prioritize chemicals, guide toxicity tests, and minimize late-stage failures in drug design. There are various methods for generating models to predict toxicity endpoints. We provide a comprehensive overview, explain, and compare the strengths and weaknesses of the existing modeling methods and algorithms for toxicity prediction with a particular (but not exclusive) emphasis on computational tools that can implement these methods and refer to expert systems that deploy the prediction models. Finally, we briefly review a number of new research directions in in silico toxicology and provide recommendations for designing in silico models.

  1. Mechatronics technology in predictive maintenance method

    Science.gov (United States)

    Majid, Nurul Afiqah A.; Muthalif, Asan G. A.

    2017-11-01

This paper presents recent mechatronics technology that can help to implement predictive maintenance by combining intelligent and predictive maintenance instruments. The Vibration Fault Simulation System (VFSS) is an example of such a mechatronics system. The focus of this study is predictive monitoring of critical machines by detecting vibration. Vibration measurement is often used as the key indicator of the state of a machine. This paper shows the choice of an appropriate strategy for the vibration diagnostics of mechanical systems, especially rotating machines, to recognize failure during the working process. Vibration signature analysis is implemented to detect faults in rotating machinery, including imbalance, mechanical looseness, bent shaft, misalignment, missing blade, bearing fault, balancing mass and critical speed. In order to perform vibration signature analysis for rotating machinery faults, studies have been made on how mechatronics technology is used in predictive maintenance methods. A Vibration Faults Simulation Rig (VFSR) is designed to simulate and understand fault signatures. These techniques are based on processing vibration data in the frequency domain. LabVIEW-based spectrum analyzer software is developed to acquire and extract the frequency content of fault signals. The system is successfully tested on the characteristic vibration fault signatures that occur in rotating machinery.

  2. Combining gene prediction methods to improve metagenomic gene annotation

    Directory of Open Access Journals (Sweden)

    Rosen Gail L

    2011-01-01

Full Text Available Abstract Background Traditional gene annotation methods rely on characteristics that may not be available in short reads generated from next generation technology, resulting in suboptimal performance for metagenomic (environmental) samples. Therefore, in recent years, new programs have been developed that optimize performance on short reads. In this work, we benchmark three metagenomic gene prediction programs and combine their predictions to improve metagenomic read gene annotation. Results We not only analyze the programs' performance at different read lengths, as in similar studies, but also separate different types of reads, including intra- and intergenic regions, for analysis. The main deficiencies are in the algorithms' ability to predict non-coding regions and gene edges, resulting in more false positives and false negatives than desired. In fact, the specificities of the algorithms are notably worse than the sensitivities. By combining the programs' predictions, we show significant improvement in specificity at minimal cost to sensitivity, resulting in a 4% improvement in accuracy for 100 bp reads and a ~1% improvement in accuracy for reads of 200 bp and above. To correctly annotate the start and stop of genes, we find that a consensus of all the predictors performs best for shorter read lengths, while unanimous agreement is better for longer read lengths, boosting annotation accuracy by 1-8%. We also demonstrate use of the classifier combinations on a real dataset. Conclusions To optimize performance for both prediction and annotation accuracy, we conclude that the consensus of all methods (or a majority vote) is best for reads of 400 bp and shorter, while using the intersection of GeneMark and Orphelia predictions is best for reads of 500 bp and longer. We demonstrate that most methods predict over 80% coding (including partially coding) reads on a real human gut sample sequenced by Illumina technology.
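
The two combination rules from the conclusions, majority vote for shorter reads and unanimous agreement for longer reads, can be sketched for per-read coding/non-coding calls; the boolean encoding of predictor outputs is an assumption for illustration.

```python
def combine(calls, mode="consensus"):
    """Combine boolean coding/non-coding calls from several predictors.
    'consensus'  = majority vote (ties count as coding);
    'unanimous'  = all predictors must agree the read is coding."""
    votes = sum(calls)
    if mode == "consensus":
        return votes * 2 >= len(calls)
    return votes == len(calls)

# Two of three predictors call the read coding
majority = combine([True, True, False])             # True under majority vote
strict = combine([True, True, False], "unanimous")  # False: not unanimous
```

The unanimous rule trades sensitivity for specificity, which matches the paper's finding that stricter agreement pays off only once reads are long enough for each predictor to be individually reliable.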

  3. Seminal quality prediction using data mining methods.

    Science.gov (United States)

    Sahoo, Anoop J; Kumar, Yugal

    2014-01-01

Nowadays, new classes of diseases have come into existence, known as lifestyle diseases. The main reasons behind these diseases are changes in people's lifestyles, such as alcohol drinking, smoking, food habits etc. A review of the various lifestyle diseases shows that fertility rates (sperm quantity) in men have decreased considerably over the last two decades. Lifestyle factors as well as environmental factors are mainly responsible for the change in semen quality. The objective of this paper is to identify the lifestyle and environmental features that affect seminal quality, and also the fertility rate in men, using data mining methods. Five artificial intelligence techniques, namely Multilayer Perceptron (MLP), Decision Tree (DT), Naive Bayes (Kernel), Support Vector Machine plus Particle Swarm Optimization (SVM+PSO) and Support Vector Machine (SVM), have been applied to a fertility dataset to evaluate seminal quality and to predict whether a person is normal or has an altered fertility rate. Eight feature selection techniques, namely support vector machine (SVM), neural network (NN), evolutionary logistic regression (LR), support vector machine plus particle swarm optimization (SVM+PSO), principal component analysis (PCA), the chi-square test, correlation and T-test methods, have been used to identify the features that most affect seminal quality. These techniques are applied to a fertility dataset that contains 100 instances with nine attributes and two classes. The experimental results show that SVM+PSO provides a higher accuracy and area under the curve (AUC) rate (94% and 0.932) than Multilayer Perceptron (MLP) (92% and 0.728), Support Vector Machines (91% and 0.758), Naive Bayes (Kernel) (89% and 0.850) and Decision Tree (89% and 0.735) for some of the seminal parameters. This paper also focuses on the feature selection process, i.e. how to select the features that are most important for the prediction of seminal quality.

  4. Prediction of Protein–Protein Interactions by Evidence Combining Methods

    Directory of Open Access Journals (Sweden)

    Ji-Wei Chang

    2016-11-01

Full Text Available Most cellular functions depend on proteins' physical interactions with partner proteins. Sketching a map of protein–protein interactions (PPIs) is therefore an important first step towards understanding the basics of cell functions. Several experimental techniques operating in vivo or in vitro have made significant contributions to screening large numbers of protein interaction partners, especially high-throughput experimental methods. However, computational approaches for PPI prediction, supported by the rapid accumulation of data generated from experimental techniques, 3D structure determination and genome sequencing, have boosted the map sketching of PPIs. In this review, we shed light on in silico PPI prediction methods that integrate evidence from multiple sources, including evolutionary relationships, function annotation, sequence/structure features, network topology and text mining. These methods are developed for the integration of multi-dimensional evidence, for designing strategies to predict novel interactions, and for keeping results consistent as prediction coverage and accuracy increase.

  5. Force prediction in cold rolling mills by polynomial methods

    Directory of Open Access Journals (Sweden)

    Nicu ROMAN

    2007-12-01

Full Text Available A method for steel and aluminium strip thickness control is presented, including a new predictive rolling force estimation technique based on a statistical model built with polynomial methods.
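
A minimal sketch of the polynomial idea follows, with invented reduction/force calibration data (not from the paper): fit a low-degree polynomial to measured rolling forces, then evaluate it predictively at a new operating point.

```python
import numpy as np

# Hypothetical calibration data: rolling force measured at several
# fractional thickness reductions (values invented for illustration)
reduction = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
force_kn = np.array([210.0, 430.0, 665.0, 915.0, 1180.0])

coeffs = np.polyfit(reduction, force_kn, deg=2)  # quadratic force model
predicted = float(np.polyval(coeffs, 0.18))      # force at 18% reduction
```

In a thickness control loop such a cheap statistical model can supply a force setpoint ahead of the pass, with the degree and calibration range chosen to match the mill's operating window.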

  6. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only its sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as solutions to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first-principle methods without database information; (b) first-principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein structure prediction.

  7. Validity of a manual soft tissue profile prediction method following mandibular setback osteotomy.

    Science.gov (United States)

    Kolokitha, Olga-Elpis

    2007-10-01

The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery, and to compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism, taken at the end of pre-surgical orthodontics and approximately one year after surgery, were used. To test the validity of the manual method, the prediction tracings were compared to the actual post-operative tracings. The Dentofacial Planner software was used to develop the computerized post-surgical prediction tracings. Both manual and computerized prediction printouts were analyzed using the cephalometric system PORDIOS. Statistical analysis was performed by means of t-tests. Comparison between the manual prediction tracings and the actual post-operative profiles showed that the manual method results in more convex soft tissue profiles: the upper lip was found in a more prominent position, upper lip thickness was increased, and the mandible and lower lip were found in a less posterior position than in the actual profiles. Comparison between the computerized and manual prediction methods showed that in the manual method upper lip thickness was increased, the upper lip was found in a more anterior position, and the lower anterior facial height was increased compared to the computerized prediction method. Cephalometric simulation of the post-operative soft tissue profile following orthodontic-surgical management of mandibular prognathism imposes certain limitations related to the methods employed. However, both manual and computerized prediction methods remain useful tools for patient communication.

  8. River Flow Prediction Using the Nearest Neighbor Probabilistic Ensemble Method

    Directory of Open Access Journals (Sweden)

    H. Sanikhani

    2016-02-01

Full Text Available Introduction: In recent years, researchers have become interested in probabilistic forecasting of hydrologic variables such as river flow. A probabilistic approach aims at quantifying prediction reliability through a probability distribution function or a prediction interval for the unknown future value. Evaluating the uncertainty associated with a forecast is seen as fundamental information, not only to correctly assess the prediction, but also to compare forecasts from different methods and to evaluate actions and decisions conditionally on the expected values. Several probabilistic approaches have been proposed in the literature, including (1) methods that use resampling techniques to assess parameter and model uncertainty, such as the Metropolis algorithm or the Generalized Likelihood Uncertainty Estimation (GLUE) methodology for an application to runoff prediction, (2) methods based on processing the forecast errors of past data to produce the probability distributions of future values, and (3) methods that evaluate how the uncertainty propagates from the rainfall forecast to the river discharge prediction, such as the Bayesian forecasting system. Materials and Methods: In this study, two different probabilistic methods are used for river flow prediction, and the uncertainty related to the forecast is quantified. One approach is based on linear predictors; in the other, nearest neighbors are used. The nonlinear probabilistic ensemble can be used for nonlinear time series analysis using locally linear predictors, while the NNPE utilizes a method adapted for one-step-ahead nearest neighbor prediction. In this regard, daily river discharge (twelve years) of the Dizaj and Mashin stations, in the Baranduz-Chay basin in West Azerbaijan province and the Zard-River basin in Khouzestan province respectively, was used. The first six years of data were used for fitting the models, the next three years for calibration, and the remaining three years for testing the models.

  9. A Prediction Method of Airport Noise Based on Hybrid Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Tao XU

    2014-05-01

    Full Text Available Using monitoring history data to build and train a prediction model for airport noise has become a common method in recent years. However, single models built in different ways vary in storage requirements, efficiency, and accuracy. In order to predict noise accurately in the complex environment around an airport, this paper presents a prediction method based on hybrid ensemble learning. The proposed method ensembles three algorithms: an artificial neural network as an active learner, nearest neighbor as a passive learner, and nonlinear regression as a synthesized learner. The experimental results show that the three learners can meet forecast demands in on-line, near-line, and off-line settings, respectively, and that prediction accuracy is improved by integrating the three learners' results.
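The integration step above, combining a "passive" instance-based learner with a fitted model and averaging their outputs, can be sketched as follows. This is a simplified stand-in (a 1-nearest-neighbor learner and ordinary least squares instead of the paper's neural network and nonlinear regression; the weights are arbitrary assumptions):

```python
import numpy as np

def nn_predict(Xtr, ytr, Xte):
    # Passive learner: 1-nearest neighbor returns the target of the closest sample.
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    return ytr[d.argmin(axis=1)]

def linreg_predict(Xtr, ytr, Xte):
    # Fitted-model learner stand-in: least squares with a bias column.
    A = np.c_[Xtr, np.ones(len(Xtr))]
    w, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.c_[Xte, np.ones(len(Xte))] @ w

def ensemble_predict(Xtr, ytr, Xte, weights=(0.5, 0.5)):
    # Synthesized learner: weighted average of the base learners' outputs.
    preds = [nn_predict(Xtr, ytr, Xte), linreg_predict(Xtr, ytr, Xte)]
    return np.average(preds, axis=0, weights=weights)

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(40, 2))          # hypothetical features, e.g. (distance, thrust)
y = 80 - 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 1, 40)  # synthetic noise level (dB)
pred = ensemble_predict(X, y, np.array([[5.0, 5.0]]))
print(pred)
```

Averaging dampens the individual learners' errors, which is the basic mechanism behind the accuracy improvement the abstract reports.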

  10. Ensemble approach combining multiple methods improves human transcription start site prediction

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-11-30

    Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques, and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.

  11. The energetic cost of walking: a comparison of predictive methods.

    Directory of Open Access Journals (Sweden)

    Patricia Ann Kramer

    Full Text Available BACKGROUND: The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. METHODOLOGY/PRINCIPAL FINDINGS: We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches was assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. CONCLUSION: Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. 
Although we

  12. The energetic cost of walking: a comparison of predictive methods.

    Science.gov (United States)

    Kramer, Patricia Ann; Sylvester, Adam D

    2011-01-01

    The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. 
Although we used modern humans as our model organism, these results can be extended

  13. What Predicts Use of Learning-Centered, Interactive Engagement Methods?

    Science.gov (United States)

    Madson, Laura; Trafimow, David; Gray, Tara; Gutowitz, Michael

    2014-01-01

    What makes some faculty members more likely to use interactive engagement methods than others? We use the theory of reasoned action to predict faculty members' use of interactive engagement methods. Results indicate that faculty members' beliefs about the personal positive consequences of using these methods (e.g., "Using interactive…

  14. A dynamic particle filter-support vector regression method for reliability prediction

    International Nuclear Information System (INIS)

    Wei, Zhao; Tao, Tao; ZhuoShu, Ding; Zio, Enrico

    2013-01-01

    Support vector regression (SVR) has been applied to time series prediction, and some works have demonstrated the feasibility of its use to forecast system reliability. For accurate reliability forecasting, the selection of SVR's parameters is important. Existing research on SVR parameter selection divides the example dataset into training and test subsets and tunes the parameters on the training data. However, these fixed parameters can lead to poor prediction capabilities if the data of the test subset differ significantly from those of training. In contrast, the novel method proposed in this paper uses particle filtering to estimate the SVR model parameters according to the whole measurement sequence up to the last observation instance. By treating the SVR training model as the observation equation of a particle filter, our method allows updating the SVR model parameters dynamically when a new observation arrives. Because of the adaptability of the parameters to the dynamic data pattern, the new PF–SVR method has superior prediction performance over standard SVR. Four application results show that PF–SVR is more robust than SVR to a decrease in the number of training data and to changes in the initial SVR parameter values. Also, even if there are trends in the test data different from those in the training data, the method can capture the changes, correct the SVR parameters and obtain good predictions. -- Highlights: •A dynamic PF–SVR method is proposed to predict system reliability. •The method can adjust the SVR parameters according to changes in the data. •The method is robust to the size of the training data and initial parameter values. •Some cases based on both artificial and real data are studied. •PF–SVR shows superior prediction performance over standard SVR
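The core mechanism, treating a predictive model as the observation equation of a particle filter so its parameters update with each new observation, can be sketched generically. A toy linear model stands in for the trained SVR here, and the noise levels and jitter are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, y_obs, x_in, predict, obs_std=0.1, jitter=0.02):
    """One particle-filter step: jitter the parameter particles, reweight by the
    likelihood of the new observation under the predictive model, then resample."""
    particles = particles + rng.normal(0, jitter, size=particles.shape)
    resid = y_obs - predict(particles, x_in)
    w = np.exp(-0.5 * (resid / obs_std) ** 2)   # Gaussian observation likelihood
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # multinomial resampling
    return particles[idx]

# Toy "observation equation" standing in for the trained SVR: y = theta * x.
predict = lambda theta, x: theta * x

true_theta = 0.8
xs = rng.uniform(0.5, 2.0, 60)
ys = true_theta * xs + rng.normal(0, 0.05, 60)

particles = rng.uniform(0.0, 1.5, 500)        # prior over the model parameter
for x, y in zip(xs, ys):
    particles = pf_step(particles, y, x, predict)

print(round(particles.mean(), 3))             # should settle close to 0.8
```

Because the parameter estimate is refreshed at every observation, a drift in the data pattern would pull the particle cloud toward the new regime, which is the adaptability the abstract highlights.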

  15. Hybrid methods for airframe noise numerical prediction

    Energy Technology Data Exchange (ETDEWEB)

    Terracol, M.; Manoha, E.; Herrero, C.; Labourasse, E.; Redonnet, S. [ONERA, Department of CFD and Aeroacoustics, BP 72, Chatillon (France); Sagaut, P. [Laboratoire de Modelisation en Mecanique - UPMC/CNRS, Paris (France)

    2005-07-01

    This paper describes some significant steps made towards the numerical simulation of the noise radiated by the high-lift devices of a plane. Since the full numerical simulation of such a configuration is still out of reach for present supercomputers, hybrid strategies have been developed to reduce the overall cost of such simulations. The proposed strategy relies on the coupling of unsteady nearfield CFD with an acoustic propagation solver based on the resolution of the Euler equations for midfield propagation in an inhomogeneous field, and the use of an integral solver for farfield acoustic predictions. In the first part of this paper, this CFD/CAA coupling strategy is presented. In particular, the numerical method used in the propagation solver is detailed, and two applications of this coupling method to the numerical prediction of the aerodynamic noise of an airfoil are presented. Then, a hybrid RANS/LES method is proposed in order to perform unsteady simulations of complex noise sources. This method allows for a significant reduction of the cost of such a simulation by considerably reducing the extent of the LES zone. The method is described, and some results of the numerical simulation of the three-dimensional unsteady flow in the slat cove of a high-lift profile are presented. While these results remain very difficult to validate against experiments on similar configurations, they represent, up to now, the first 3D computations of this kind of flow. (orig.)

  16. Prediction methods and databases within chemoinformatics: emphasis on drugs and drug candidates

    DEFF Research Database (Denmark)

    Jonsdottir, Svava Osk; Jorgensen, FS; Brunak, Søren

    2005-01-01

    MOTIVATION: To gather information about available databases and chemoinformatics methods for prediction of properties relevant to the drug discovery and optimization process. RESULTS: We present an overview of the most important databases with 2-dimensional and 3-dimensional structural information about drugs and drug candidates, and of databases with relevant properties. Access to experimental data and numerical methods for selecting and utilizing these data is crucial for developing accurate predictive in silico models. Many interesting predictive methods for classifying the suitability of chemical compounds as potential drugs, as well as for predicting their physico-chemical and ADMET properties, have been proposed in recent years. These methods are discussed, and some possible future directions in this rapidly developing field are described.

  17. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    Directory of Open Access Journals (Sweden)

    Nielsen Morten

    2009-07-01

    Full Text Available Abstract Background Estimation of the reliability of specific real-value predictions is nontrivial and the efficacy of this is often questionable. It is important to know whether a given prediction can be trusted, and therefore the best methods associate a prediction with a reliability score or index. For discrete qualitative predictions, the reliability is conventionally estimated as the difference between the output scores of the selected classes. Such an approach is not feasible for methods that predict a biological feature as a single real value rather than a classification. As a solution to this challenge, we have implemented a method that predicts the relative surface accessibility of an amino acid and simultaneously predicts the reliability of each prediction, in the form of a Z-score. Results An ensemble of artificial neural networks has been trained on a set of experimentally solved protein structures to predict the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures, where reliabilities are obtained by post-processing the output. Conclusion The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best publicly available method, Real-SPINE. Both methods associate a reliability score with the individual predictions. However, our implementation of reliability scores in the form of a Z-score is shown to be the more informative measure for discriminating good predictions from bad ones over the entire range from completely buried to fully exposed amino acids. This is evident when comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. 
For this subset, values of 0
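One simple way to attach a Z-score-style reliability to an ensemble's real-valued prediction is to relate the ensemble mean to the disagreement among members. This is an illustrative assumption, not the paper's exact scoring formula:

```python
import numpy as np

def predict_with_reliability(ensemble_outputs):
    """Combine per-network outputs for one residue into a single prediction
    and a Z-score-style reliability: tight agreement -> high score."""
    out = np.asarray(ensemble_outputs, dtype=float)
    spread = out.std(ddof=1)
    z = out.mean() / spread if spread > 0 else float("inf")
    return out.mean(), z

# Hypothetical network outputs (relative surface accessibility) for two residues:
agree    = [0.71, 0.69, 0.70, 0.72, 0.70]   # the ensemble agrees
disagree = [0.10, 0.85, 0.40, 0.95, 0.20]   # the ensemble disagrees
p1, z1 = predict_with_reliability(agree)
p2, z2 = predict_with_reliability(disagree)
print(z1 > z2)  # True: the consistent residue gets the higher reliability
```

Sorting predictions by such a score and keeping the top fraction is exactly the kind of filtering the evaluation on the upper 20% of predictions describes.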

  18. Comparison of selected methods of prediction of wine exports and imports

    Directory of Open Access Journals (Sweden)

    Radka Šperková

    2008-01-01

    Full Text Available For prediction of future events, a number of methods are usable in managerial practice. The decision on which of them should be used in a particular situation depends not only on the amount and quality of input information, but also on subjective managerial judgement. This paper performs a practical application and subsequent comparison of the results of two selected methods: a statistical method and a deductive method. Both methods were used for predicting wine exports and imports in (from) the Czech Republic. The prediction was made in 2003 for the economic years 2003/2004, 2004/2005, 2005/2006, and 2006/2007, and was then compared with the real values of the given indicators. Within the deductive method, the most important factors of the external environment were characterized; in the authors' opinion, the most important influence was the integration of the Czech Republic into the EU on 1 May 2004. By contrast, the statistical method of time-series analysis did not account for the integration, which follows from its principle: statistics only calculates from past data and cannot incorporate the influence of irregular future conditions such as EU integration. Because of this, the prediction based on the deductive method was more optimistic and more precise in terms of its difference from the real development in the given field.

  19. A Novel Grey Wave Method for Predicting Total Chinese Trade Volume

    Directory of Open Access Journals (Sweden)

    Kedong Yin

    2017-12-01

    Full Text Available The total trade volume of a country is an important way of appraising its international trade situation. A prediction based on trade volume helps enterprises arrange production efficiently and promotes the sustainability of international trade. Because the total Chinese trade volume fluctuates over time, this paper proposes a Grey wave forecasting model with a Hodrick–Prescott filter (HP filter) to forecast it. This novel model first parses the time series into a long-term trend and a short-term cycle. Second, the model uses a general GM(1,1) to predict the trend term and the Grey wave forecasting model to predict the cycle term. Empirical analysis shows that the improved Grey wave prediction method provides a much more accurate forecast than the basic Grey wave prediction method, achieving better prediction results than the autoregressive moving average (ARMA) model.
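The GM(1,1) model used for the trend term is compact enough to sketch: accumulate the series, fit the grey differential equation by least squares, and extrapolate. The toy series below is an assumption for illustration, not trade data from the paper:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Grey model GM(1,1): fit on the accumulated series, forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])               # mean sequence of x1
    B = np.c_[-z1, np.ones(len(z1))]
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey development coefficients
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response of x1
    x0_hat = np.r_[x1_hat[0], np.diff(x1_hat)]         # inverse accumulation
    return x0_hat[-steps:]

# Quasi-exponential toy series: the model should extrapolate the growth trend.
series = [100, 108, 118, 130, 143]
fc = gm11_forecast(series, steps=2)
print(fc)
```

The HP-filter step in the paper would first split the raw series, so that GM(1,1) only sees the smooth trend while the wave model handles the cycle.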

  20. Methods for predicting isochronous stress-strain curves

    International Nuclear Information System (INIS)

    Kiyoshige, Masanori; Shimizu, Shigeki; Satoh, Keisuke.

    1976-01-01

    Isochronous stress-strain curves show the relation between stress and total strain at a given temperature with time as a parameter; they are drawn up from creep test results at various stress levels at a fixed temperature. The concept of isochronous stress-strain curves was proposed by McVetty in the 1930s and has been used for the design of aero-engines. Recently, the high-temperature characteristics of materials have been presented as isochronous stress-strain curves in the design guides for nuclear energy equipment and structures used in the high-temperature creep region. It is prescribed that these curves be used as the criteria for determining design stress intensity or as the data for analyzing the superposed effects of creep and fatigue. For isochronous stress-strain curves used in the design of nuclear energy equipment with very long service lives, it is impractical to determine the curves directly from the results of long-time creep tests; accordingly, a method of predicting long-time stress-strain curves from short-time creep test results must be established. The method proposed by the authors, which uses creep constitutive equations taking the first and second creep stages into account, and the method using the Larson-Miller parameter were studied, and both methods were found to be reliable for the prediction. (Kako, I.)
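The Larson-Miller parameter mentioned above is the standard device for trading test time against temperature: P = T(C + log10 t), with C commonly taken near 20. A minimal sketch (the constant and the example temperatures are illustrative assumptions):

```python
import math

def larson_miller(temp_K, time_h, C=20.0):
    """Larson-Miller parameter P = T * (C + log10 t), T in kelvin, t in hours."""
    return temp_K * (C + math.log10(time_h))

def equivalent_time(temp_K, P, C=20.0):
    """Invert the parameter to estimate the time at another temperature
    that corresponds to the same creep state."""
    return 10 ** (P / temp_K - C)

# A short high-temperature test maps to a much longer time at service temperature:
P = larson_miller(900.0, 100.0)       # 100 h test at 900 K
t_service = equivalent_time(850.0, P) # equivalent time at 850 K (longer than 100 h)
print(P, t_service)
```

This time-temperature extrapolation is what lets short creep tests stand in for the decades-long service lives of nuclear components.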

  1. Analysis of the uranium price predicted to 24 months, implementing neural networks and the Monte Carlo method like predictive tools

    International Nuclear Information System (INIS)

    Esquivel E, J.; Ramirez S, J. R.; Palacios H, J. C.

    2011-11-01

    The present work shows predicted prices of uranium using a neural network. Predicting the financial indexes of an energy resource allows establishing budgetary measures as well as the cost of the resource over the medium term. Uranium is one of the main energy-generating fuels and, as such, its price figures prominently in financial analyses; hence predictive methods are used to obtain an outline of its expected financial behaviour over a certain period. In this study, two methodologies are used for predicting the uranium price: the Monte Carlo method and neural networks. These methods allow predicting monthly cost indexes for a two-year period, starting from the second bimester of 2011. For the prediction, uranium prices registered since 2005 are used. (Author)

  2. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. 
The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  3. An Influence Function Method for Predicting Store Aerodynamic Characteristics during Weapon Separation,

    Science.gov (United States)

    1981-05-14

    Grumman Aerospace Corp., Bethpage, NY. Authors (per garbled OCR of the scanned report header): R. Meyer, A. Cenko, S. Yards. Recoverable fragment of the abstract: "...extended to their logical conclusion one is led quite naturally to consideration of an 'Influence Function Method' for predicting store aerodynamic characteristics..."

  4. Novel Methods for Drug-Target Interaction Prediction using Graph Mining

    KAUST Repository

    Ba Alawi, Wail

    2016-08-31

    The problem of developing drugs that can be used to cure diseases is important and requires a careful approach. Since pursuing the wrong candidate drug for a particular disease could be very costly in terms of time and money, there is a strong interest in minimizing such risks. Drug repositioning has become a hot topic of research, as it helps reduce these risks significantly at the early stages of drug development by reusing an approved drug for the treatment of a different disease. Still, finding a new use for a drug is non-trivial, as it is necessary to find strong supporting evidence that the proposed new uses are plausible. Many computational approaches have been developed to narrow the list of possible candidate drug-target interactions (DTIs) before any experiments are done. However, many of these approaches suffer from unacceptable levels of false positives. We developed two novel methods based on graph mining of networks of drugs and targets. The first method (DASPfind) finds all non-cyclic paths that connect a drug and a target and, using a function that we define, calculates a score from all the paths. This score describes our confidence that the DTI is correct. We show that DASPfind significantly outperforms other state-of-the-art methods in predicting the top-ranked target for each drug. We demonstrate the utility of DASPfind by predicting 15 novel DTIs over a set of ion channel proteins, and confirming 12 of these 15 DTIs through experimental evidence reported in the literature and online drug databases. The second method (DASPfind+) modifies DASPfind in order to increase the confidence and reliability of the resulting predictions. Based on the structure of the drug-target interaction (DTI) networks, we introduce an optimization scheme that incrementally alters the network structure locally for each drug to achieve more robust top-1 ranked predictions. 
Moreover, we explored effects of several similarity measures between the targets on the prediction
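The path-scoring idea, summing contributions from all non-cyclic paths between a drug and a target, with longer paths contributing less, can be sketched with a depth-first enumeration. The network, node names, decay factor, and length cap below are hypothetical illustrations, not DASPfind's published scoring function:

```python
def dti_score(graph, drug, target, max_len=4, alpha=0.5):
    """Score a drug-target pair by summing alpha**length over every simple
    (non-cyclic) path from drug to target, up to max_len edges."""
    def walk(node, visited, length):
        if length > max_len:
            return 0.0
        if node == target:
            return alpha ** length
        total = 0.0
        for nxt in graph.get(node, ()):
            if nxt not in visited:
                total += walk(nxt, visited | {nxt}, length + 1)
        return total
    return walk(drug, {drug}, 0)

# Tiny hypothetical network: drugs D1, D2 and targets T1, T2, with similarity edges.
net = {
    "D1": ["T1", "D2"],
    "D2": ["D1", "T1", "T2"],
    "T1": ["D1", "D2"],
    "T2": ["D2"],
}
print(dti_score(net, "D1", "T2"))  # 0.375: two indirect routes through the network
```

Because every path contributes, a target reachable through many short routes outranks one connected by a single long route, which is what drives the top-ranked-target predictions.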

  5. A method for uncertainty quantification in the life prediction of gas turbine components

    Energy Technology Data Exchange (ETDEWEB)

    Lodeby, K.; Isaksson, O.; Jaervstraat, N. [Volvo Aero Corporation, Trolhaettan (Sweden)

    1998-12-31

    A failure in an aircraft jet engine can have severe consequences that cannot be accepted, and high requirements are therefore placed on engine reliability. Consequently, assessing the reliability of the life predictions used in design and maintenance is important. To assess the validity of the predicted life, a method is developed to quantify the contribution to the total uncertainty in the life prediction from different uncertainty sources. The method is a structured approach for uncertainty quantification that uses a generic description of the life prediction process. It is based on an approximate error propagation theory combined with a unified treatment of random and systematic errors. The result is an approximate statistical distribution for the predicted life. The method is applied to life predictions for three different jet engine components. The total uncertainty was of a reasonable order of magnitude, and a good qualitative picture of the distribution of the uncertainty contributions from the different sources was obtained. The relative importance of the uncertainty sources differs between the three components and is also highly dependent on the methods and assumptions used in the life prediction. Advantages and disadvantages of the method are discussed. (orig.) 11 refs.
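First-order error propagation of the kind mentioned above combines each input's uncertainty through the model's sensitivities: var(f) ≈ Σ (∂f/∂x_i)² σ_i². A minimal numerical sketch (the power-law life model and the input values are hypothetical, not the paper's engine model):

```python
import math

def propagate_uncertainty(f, x, sigma, rel_h=1e-6):
    """First-order (Gaussian) error propagation: var(f) ≈ Σ (∂f/∂x_i)² σ_i²,
    with partial derivatives estimated by central differences."""
    var = 0.0
    for i, s in enumerate(sigma):
        h = rel_h * max(1.0, abs(x[i]))      # step scaled to the input's magnitude
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)
        var += (dfdx * s) ** 2
    return math.sqrt(var)

# Hypothetical power-law life model: N = C / stress**4, with uncertain C and stress.
life = lambda p: p[0] / p[1] ** 4
sigma_life = propagate_uncertainty(life, [1e12, 100.0], [5e10, 3.0])
print(sigma_life)
```

Inspecting the individual (∂f/∂x_i · σ_i)² terms is what yields the per-source uncertainty breakdown that the abstract reports for the three components.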

  6. Improving protein function prediction methods with integrated literature data

    Directory of Open Access Journals (Sweden)

    Gabow Aaron P

    2008-04-01

    Full Text Available Abstract Background Determining the function of uncharacterized proteins is a major challenge in the post-genomic era due to the problem's complexity and scale. Identifying a protein's function contributes to an understanding of its role in the involved pathways, its suitability as a drug target, and its potential for protein modifications. Several graph-theoretic approaches predict unidentified functions of proteins by using the functional annotations of better-characterized proteins in protein-protein interaction networks. We systematically consider the use of literature co-occurrence data, introduce a new method for quantifying the reliability of co-occurrence, and test how performance differs across species. We also quantify changes in performance as the prediction algorithms annotate with increased specificity. Results We find that including information on the co-occurrence of proteins within an abstract greatly boosts performance of the Functional Flow graph-theoretic function prediction algorithm in yeast, fly and worm. This increase in performance is not simply due to the presence of additional edges, since supplementing protein-protein interactions with co-occurrence data outperforms supplementing with a comparably sized genetic interaction dataset. Through the combination of protein-protein interactions and co-occurrence data, the neighborhood around unknown proteins is quickly connected to well-characterized nodes, which global prediction algorithms can exploit. Our method for quantifying co-occurrence reliability shows superior performance to the other methods, particularly at threshold values around 10%, which yield the best trade-off between coverage and accuracy. In contrast, the traditional way of asserting co-occurrence when at least one abstract mentions both proteins proves to be the worst method for generating co-occurrence data, introducing too many false positives. Annotating the functions with greater specificity is harder

  7. Machine learning methods to predict child posttraumatic stress: a proof of concept study.

    Science.gov (United States)

    Saxe, Glenn N; Ma, Sisi; Ren, Jiwen; Aliferis, Constantin

    2017-07-10

    The care of traumatized children would benefit significantly from accurate predictive models for Posttraumatic Stress Disorder (PTSD), using information available around the time of trauma. Machine Learning (ML) computational methods have yielded strong results in recent applications across many diseases and data types, yet they have not been previously applied to childhood PTSD. Since these methods have not been applied to this complex and debilitating disorder, there is a great deal that remains to be learned about their application. The first step is to prove the concept: Can ML methods - as applied in other fields - produce predictive classification models for childhood PTSD? Additionally, we seek to determine if specific variables can be identified - from the aforementioned predictive classification models - with putative causal relations to PTSD. ML predictive classification methods - with causal discovery feature selection - were applied to a data set of 163 children hospitalized with an injury and PTSD was determined three months after hospital discharge. At the time of hospitalization, 105 risk factor variables were collected spanning a range of biopsychosocial domains. Seven percent of subjects had a high level of PTSD symptoms. A predictive classification model was discovered with significant predictive accuracy. A predictive model constructed based on subsets of potentially causally relevant features achieves similar predictivity compared to the best predictive model constructed with all variables. Causal Discovery feature selection methods identified 58 variables of which 10 were identified as most stable. In this first proof-of-concept application of ML methods to predict childhood Posttraumatic Stress we were able to determine both predictive classification models for childhood PTSD and identify several causal variables. 
This set of techniques has great potential for enhancing the methodological toolkit in the field and future studies should seek to

  8. Life prediction methods for the combined creep-fatigue endurance

    International Nuclear Information System (INIS)

    Wareing, J.; Lloyd, G.J.

    1980-09-01

    The basis and current status of development of the various approaches to the prediction of the combined creep-fatigue endurance are reviewed. It is concluded that an inadequate materials data base makes it difficult to draw sensible conclusions about the prediction capabilities of each of the available methods. Correlation with data for stainless steel 304 and 316 is presented. (U.K.)
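Among the prediction approaches such a review covers, the simplest and most widely used is linear damage summation, adding a cycle fraction for fatigue to a time fraction for creep and flagging failure when the sum reaches one. The sketch below illustrates that general criterion with made-up numbers; it is not a method or data from this report:

```python
def creep_fatigue_damage(n_cycles, Nf, hold_time_h, t_rupture_h):
    """Linear damage summation: D = n/Nf + t/tr.
    D >= 1 flags predicted creep-fatigue failure."""
    fatigue = n_cycles / Nf          # cycle fraction of the pure-fatigue life
    creep = hold_time_h / t_rupture_h  # time fraction of the creep-rupture life
    return fatigue + creep

# Hypothetical duty: 2000 cycles (pure-fatigue life 10000) plus 500 h at hold
# (creep-rupture life 1000 h at that stress and temperature).
D = creep_fatigue_damage(n_cycles=2000, Nf=10000, hold_time_h=500, t_rupture_h=1000)
print(D)  # fatigue 0.2 + creep 0.5
```

Real design codes bend the D = 1 line into an interaction diagram for materials like 304 and 316 stainless steel, which is exactly where an adequate materials data base becomes critical.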

  9. Ensemble approach combining multiple methods improves human transcription start site prediction.

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-01-01

    The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.

  10. Prediction of protein post-translational modifications: main trends and methods

    Science.gov (United States)

    Sobolev, B. N.; Veselovsky, A. V.; Poroikov, V. V.

    2014-02-01

    The review summarizes main trends in the development of methods for the prediction of protein post-translational modifications (PTMs) by considering the three most common types of PTMs — phosphorylation, acetylation and glycosylation. Considerable attention is given to general characteristics of regulatory interactions associated with PTMs. Different approaches to the prediction of PTMs are analyzed. Most of the methods are based only on the analysis of the neighbouring environment of modification sites. The related software is characterized by relatively low accuracy of PTM predictions, which may be due both to the incompleteness of training data and the features of PTM regulation. Advantages and limitations of the phylogenetic approach are considered. The prediction of PTMs using data on regulatory interactions, including the modular organization of interacting proteins, is a promising field, provided that a more carefully selected training data will be used. The bibliography includes 145 references.
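Since most predictors described above work from the neighbouring environment of a modification site, the shared first step is extracting fixed-width sequence windows around candidate residues. A minimal sketch (the window size, padding character, and example sequence are illustrative assumptions):

```python
def site_windows(sequence, residues="ST", half=4, pad="X"):
    """Enumerate candidate modification sites (e.g. Ser/Thr for phosphorylation)
    and their local sequence windows, the usual input features for predictors."""
    padded = pad * half + sequence + pad * half   # pad so edge sites get full windows
    for i, aa in enumerate(sequence):
        if aa in residues:
            yield i, padded[i:i + 2 * half + 1]   # window centred on residue i

seq = "MKSAPSTYDE"
windows = dict(site_windows(seq))
for pos, win in windows.items():
    print(pos, win)
```

Each window would then be encoded (one-hot, physico-chemical properties, etc.) and fed to whatever classifier a given tool trains, which is why the review stresses the quality of the training data more than the windowing itself.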

  11. Fast Prediction Method for Steady-State Heat Convection

    KAUST Repository

    Wáng, Yì; Yu, Bo; Sun, Shuyu

    2012-01-01

    , the nonuniform POD-Galerkin projection method exhibits high accuracy, good suitability, and fast computation. It has universal significance for accurate and fast prediction. Also, the methodology can be applied to more complex modeling in chemical engineering

  12. An ensemble method to predict target genes and pathways in uveal melanoma

    Directory of Open Access Journals (Sweden)

    Wei Chao

    2018-04-01

    This work proposes to predict target genes and pathways for uveal melanoma (UM) based on an ensemble method and pathway analyses. Methods: The ensemble method integrated a correlation method (Pearson correlation coefficient, PCC), a causal inference method (IDA) and a regression method (Lasso) utilizing the Borda count election method. Subsequently, to validate the performance of the PIL method, comparisons between the confirmed database and predicted miRNA targets were performed. Ultimately, pathway enrichment analysis was conducted on target genes in the top 1,000 miRNA-mRNA interactions to identify target pathways for UM patients. Results: Thirty-eight of the predicted interactions matched the confirmed interactions, indicating that the ensemble method is a suitable and feasible approach to predict miRNA targets. We obtained 50 seed miRNA-mRNA interactions of UM patients and extracted target genes from these interactions, such as ASPG, BSDC1 and C4BP. The 601 target genes in the top 1,000 miRNA-mRNA interactions were enriched in 12 target pathways, of which Phototransduction was the most significant. Conclusion: The target genes and pathways might provide a new way to reveal the molecular mechanism of UM and aid targeted treatment and prevention of this malignant tumor.
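    The Borda count election step used to merge the three rankings can be sketched in a few lines; the candidate miRNA-mRNA interactions and the three method rankings below are invented for illustration.

    ```python
    # Hedged sketch of Borda-count rank aggregation, the election step named
    # in the abstract. Candidate names and the per-method rankings are made up.
    def borda_aggregate(rankings):
        """Combine several rankings of the same candidates by Borda count.

        rankings: list of lists, each an ordering of candidate names (best
        first). A candidate in position i of an n-item ranking earns n - i
        points; the point totals decide the consensus order.
        """
        scores = {}
        for ranking in rankings:
            n = len(ranking)
            for i, cand in enumerate(ranking):
                scores[cand] = scores.get(cand, 0) + (n - i)
        return sorted(scores, key=lambda c: -scores[c])

    # Toy candidate interactions ranked by three hypothetical methods
    # (correlation, causal inference, regression), best first.
    pcc_rank   = ["miR1-A", "miR2-B", "miR3-C"]
    ida_rank   = ["miR2-B", "miR1-A", "miR3-C"]
    lasso_rank = ["miR1-A", "miR3-C", "miR2-B"]

    consensus = borda_aggregate([pcc_rank, ida_rank, lasso_rank])
    ```

    In practice the aggregated list would be cut at a top-L threshold (the paper uses the top 1,000 interactions) before pathway enrichment.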

  13. Bayesian Methods for Predicting the Shape of Chinese Yam in Terms of Key Diameters

    Directory of Open Access Journals (Sweden)

    Mitsunori Kayano

    2017-01-01

    This paper proposes Bayesian methods for the shape estimation of Chinese yam (Dioscorea opposita) using a few key diameters of the yam. Shape prediction of yam is applicable to determining the optimal cutoff positions of a yam for producing seed yams. Our Bayesian method, which is a combination of a Bayesian estimation model and a predictive model, enables automatic, rapid, and low-cost processing of yam. After the construction of the proposed models using a sample data set in Japan, the models provide whole-shape prediction of a yam based on only a few key diameters. The Bayesian method performed well on the shape prediction in terms of minimizing the mean squared error between the measured shape and the prediction. In particular, a multiple regression method with key diameters at two fixed positions attained the highest performance for shape prediction. We have developed automatic, rapid, and low-cost yam-processing machines based on the Bayesian estimation model and predictive model. Development of such shape prediction approaches, including our Bayesian method, can be a valuable aid in reducing the cost and time of food processing.

  14. Prediction Methods for Blood Glucose Concentration

    DEFF Research Database (Denmark)

    -day workshop on the design, use and evaluation of prediction methods for blood glucose concentration was held at the Johannes Kepler University Linz, Austria. One intention of the workshop was to bring together experts working in various fields on the same topic, in order to shed light from different angles...... discussions which allowed to receive direct feedback from the point of view of different disciplines. This book is based on the contributions of that workshop and is intended to convey an overview of the different aspects involved in the prediction. The individual chapters are based on the presentations given...... in the process of writing this book: All authors for their individual contributions, all reviewers of the book chapters, Daniela Hummer for the entire organization of the workshop, Boris Tasevski for helping with the typesetting, Florian Reiterer for his help editing the book, as well as Oliver Jackson and Karin...

  15. Benchmarking pKa prediction methods for Lys115 in acetoacetate decarboxylase.

    Science.gov (United States)

    Liu, Yuli; Patel, Anand H G; Burger, Steven K; Ayers, Paul W

    2017-05-01

    Three different pKa prediction methods were used to calculate the pKa of Lys115 in acetoacetate decarboxylase (AADase): the empirical method PROPKA, the multiconformation continuum electrostatics (MCCE) method, and the molecular dynamics/thermodynamic integration (MD/TI) method with implicit solvent. As expected, accurate pKa prediction of Lys115 depends on the protonation patterns of other ionizable groups, especially the nearby Glu76. However, since the prediction methods do not explicitly sample the protonation patterns of nearby residues, this must be done manually. When Glu76 is deprotonated, all three methods give an incorrect pKa value for Lys115. If protonated Glu76 is used in an MD/TI calculation, the pKa of Lys115 is predicted to be 5.3, which agrees well with the experimental value of 5.9. This result agrees with previous site-directed mutagenesis studies, where the mutation of Glu76 (negatively charged when deprotonated) to Gln (neutral) causes no change in Km, suggesting that Glu76 has no effect on the pKa shift of Lys115. Thus, we postulate that the pKa of Glu76 is also shifted so that Glu76 is protonated (neutral) in AADase. Graphical abstract: Simulated abundances of protonated species as pH is varied.

  16. A deep learning-based multi-model ensemble method for cancer prediction.

    Science.gov (United States)

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no single method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
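    The stacking structure described above can be sketched with a toy meta-learner: five base classifiers each emit a cancer probability, and a meta-model learns how to combine them. As a stand-in for the paper's deep network, this sketch trains a simple logistic meta-model; all sample values and labels are invented.

    ```python
    # Toy stacking ensemble: base-classifier probabilities -> logistic
    # meta-model. This substitutes logistic regression for the paper's deep
    # learning ensembler and uses invented data.
    import math

    # 8 samples (rows) x 5 base-classifier probability outputs (cols),
    # with true labels (1 = cancer).
    X = [[0.9, 0.8, 0.7, 0.9, 0.6], [0.2, 0.1, 0.3, 0.2, 0.4],
         [0.8, 0.9, 0.6, 0.7, 0.8], [0.1, 0.2, 0.2, 0.3, 0.1],
         [0.7, 0.6, 0.8, 0.6, 0.7], [0.3, 0.2, 0.1, 0.2, 0.3],
         [0.9, 0.7, 0.9, 0.8, 0.9], [0.2, 0.3, 0.2, 0.1, 0.2]]
    y = [1, 0, 1, 0, 1, 0, 1, 0]

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # Stochastic gradient descent on the log loss of the meta-model.
    w, b, lr = [0.0] * 5, 0.0, 0.5
    for _ in range(200):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err

    preds = [int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5)
             for xi in X]
    accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
    ```

    On this trivially separable toy set the meta-model recovers all labels; the point is only the data flow (base outputs in, ensembled prediction out), not the model class.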

  17. Towards a unified fatigue life prediction method for marine structures

    CERN Document Server

    Cui, Weicheng; Wang, Fang

    2014-01-01

    In order to apply the damage tolerance design philosophy to the design of marine structures, accurate prediction of fatigue crack growth under service conditions is required. More and more people have now realized that only a fatigue life prediction method based on fatigue crack propagation (FCP) theory has the potential to explain the various fatigue phenomena observed. In this book, the issues leading towards the development of a unified fatigue life prediction (UFLP) method based on FCP theory are addressed. Based on the philosophy of the UFLP method, the current inconsistency between fatigue design and inspection of marine structures could be resolved. This book presents the state-of-the-art and recent advances, including those by the authors, in fatigue studies. It is designed to lead future directions and to provide a useful tool in many practical applications. It is addressed to engineers, naval architects, research staff, professionals and graduates engaged in fatigue prevention design and survey ...

  18. Bicycle Frame Prediction Techniques with Fuzzy Logic Method

    Directory of Open Access Journals (Sweden)

    Rafiuddin Syam

    2015-03-01

    In general, an appropriately sized bike frame gives comfort to the rider while biking. This study aims to predict suitable bike frame sizes with a simulated fuzzy logic system. The testing method used is simulation: the fuzzy logic is implemented and tested in the Matlab language. The Mamdani fuzzy system uses three input variables and one output variable, with triangular membership functions for both inputs and outputs. The controller is designed as a Mamdani type with max-min composition, and defuzzification uses the centre-of-gravity method. The results showed that height, inseam and crank size generate a frame size appropriate to the rider's comfort. Height has a range between 142 cm and 201 cm, inseam between 64 cm and 97 cm, and crank size between 175 mm and 180 mm. The simulated frame sizes range between 13 inches and 22 inches. Using fuzzy logic, a bicycle frame size suitable for the rider can thus be predicted.
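    The Mamdani pieces named in the abstract (triangular membership functions, max-min composition, centre-of-gravity defuzzification) can be sketched as below. The membership ranges, the single height input, and the two-rule base are illustrative assumptions, not the paper's calibration.

    ```python
    # Minimal Mamdani fuzzy sketch: one input (rider height), one output
    # (frame size). Ranges loosely follow the abstract; rules are invented.
    def tri(x, a, b, c):
        """Triangular membership peaking at b; flat shoulder when a == b or b == c."""
        left = (x - a) / (b - a) if b > a else (1.0 if x >= a else 0.0)
        right = (c - x) / (c - b) if c > b else (1.0 if x <= c else 0.0)
        return max(0.0, min(left, right, 1.0))

    def centroid(xs, mus):
        """Centre-of-gravity defuzzification over sampled output points."""
        den = sum(mus)
        return sum(x * m for x, m in zip(xs, mus)) / den if den else 0.0

    # Fuzzify rider height (cm) into 'short'/'tall'; the output frame size
    # (inches) has fuzzy sets 'small' and 'large' on the 13-22 inch range.
    height = 180.0
    mu_short = tri(height, 142.0, 142.0, 172.0)   # taller -> less 'short'
    mu_tall  = tri(height, 152.0, 201.0, 201.0)   # taller -> more 'tall'

    xs = [13.0 + 0.5 * i for i in range(19)]             # sample 13..22 inches
    mus = [max(min(mu_short, tri(x, 13.0, 13.0, 17.0)),  # rule: short -> small
               min(mu_tall,  tri(x, 17.0, 22.0, 22.0)))  # rule: tall  -> large
           for x in xs]
    frame_size = centroid(xs, mus)
    ```

    The `min` inside each rule is the Mamdani implication, the outer `max` is the rule aggregation, and `centroid` turns the aggregated fuzzy output into a crisp frame size.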


  20. Predictive probability methods for interim monitoring in clinical trials with longitudinal outcomes.

    Science.gov (United States)

    Zhou, Ming; Tang, Qi; Lang, Lixin; Xing, Jun; Tatsuoka, Kay

    2018-04-17

    In clinical research and development, interim monitoring is critical for better decision-making and minimizing the risk of exposing patients to possibly ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from only completers may not be most efficient, and data from on-going subjects can be utilized to improve efficiency. On the other hand, leveraging information from on-going subjects could allow an interim analysis to be conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed-form formulas for predictive probabilities, including Bayesian predictive probability, predictive power, and conditional power, and also give closed-form solutions for the predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss their analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than those using information from completers only. To illustrate their practical application for longitudinal data, we analyze 2 real data examples from clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
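    For intuition, here is a minimal sketch of Bayesian predictive probability for a simple binary endpoint. The paper's closed forms are for longitudinal outcomes; this beta-binomial case, with invented interim numbers and a uniform prior, only illustrates the concept of projecting interim data forward to a final success criterion.

    ```python
    # Beta-binomial predictive probability of trial success at an interim
    # look. All trial numbers below are hypothetical.
    from math import comb, lgamma, exp

    def beta_binom_pmf(k, m, a, b):
        """P(k future responders out of m) under a Beta(a, b) posterior."""
        logp = (lgamma(a + b) - lgamma(a) - lgamma(b)
                + lgamma(a + k) + lgamma(b + m - k) - lgamma(a + b + m))
        return comb(m, k) * exp(logp)

    def predictive_probability(x, n, m, success_threshold, a0=1.0, b0=1.0):
        """Probability, given x responders among n interim patients, that the
        final responder count (x plus k future responders out of m still to
        enroll) reaches the success threshold."""
        a, b = a0 + x, b0 + n - x          # Beta posterior after interim data
        return sum(beta_binom_pmf(k, m, a, b)
                   for k in range(m + 1) if x + k >= success_threshold)

    # Interim look: 12 responders in 20 patients, 20 still to enroll; the
    # trial is declared a success with at least 24 of 40 responders.
    pp = predictive_probability(x=12, n=20, m=20, success_threshold=24)
    ```

    A futility rule would stop the trial when `pp` falls below a pre-specified boundary; the paper's contribution is deriving analogous closed forms and boundaries when the interim data are longitudinal.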

  1. Computational predictive methods for fracture and fatigue

    Science.gov (United States)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-09-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  2. Using deuterated PAH amendments to validate chemical extraction methods to predict PAH bioavailability in soils

    International Nuclear Information System (INIS)

    Gomez-Eyles, Jose L.; Collins, Chris D.; Hodson, Mark E.

    2011-01-01

    Validating chemical methods to predict bioavailable fractions of polycyclic aromatic hydrocarbons (PAHs) by comparison with accumulation bioassays is problematic. Concentrations accumulated in soil organisms not only depend on the bioavailable fraction but also on contaminant properties. A historically contaminated soil was freshly spiked with deuterated PAHs (dPAHs). dPAHs have a similar fate to their respective undeuterated analogues, so chemical methods that give good indications of bioavailability should extract the fresh more readily available dPAHs and historic more recalcitrant PAHs in similar proportions to those in which they are accumulated in the tissues of test organisms. Cyclodextrin and butanol extractions predicted the bioavailable fraction for earthworms (Eisenia fetida) and plants (Lolium multiflorum) better than the exhaustive extraction. The PAHs accumulated by earthworms had a larger dPAH:PAH ratio than that predicted by chemical methods. The isotope ratio method described here provides an effective way of evaluating other chemical methods to predict bioavailability. - Research highlights: → Isotope ratios can be used to evaluate chemical methods to predict bioavailability. → Chemical methods predicted bioavailability better than exhaustive extractions. → Bioavailability to earthworms was still far from that predicted by chemical methods. - A novel method using isotope ratios to assess the ability of chemical methods to predict PAH bioavailability to soil biota.
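    The isotope-ratio check described above can be sketched as simple arithmetic: an extraction is judged by whether it recovers freshly spiked deuterated PAHs and historic PAHs in the same proportion as organisms accumulate them. All concentrations below are invented for illustration.

    ```python
    # Back-of-envelope sketch of the dPAH:PAH ratio comparison. The
    # extraction whose ratio sits closest to the tissue ratio is the better
    # bioavailability predictor. Values are hypothetical.
    def d_ratio(d_conc, h_conc):
        """dPAH:PAH concentration ratio (same units for both)."""
        return d_conc / h_conc

    # Hypothetical concentrations (mg/kg): fresh deuterated spike vs
    # historic, more recalcitrant PAHs.
    earthworm_ratio    = d_ratio(0.30, 0.10)   # ratio accumulated in tissue
    cyclodextrin_ratio = d_ratio(0.25, 0.10)   # mild, non-exhaustive extraction
    exhaustive_ratio   = d_ratio(0.28, 0.20)   # total extraction

    best = min([("cyclodextrin", cyclodextrin_ratio),
                ("exhaustive", exhaustive_ratio)],
               key=lambda kv: abs(kv[1] - earthworm_ratio))
    ```

    In this toy example the mild extraction over-represents the fresh spike in the same way the earthworm tissue does, mirroring the paper's finding that non-exhaustive extractions track bioavailability better than exhaustive ones.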


  4. Improving Allergen Prediction in Main Crops Using a Weighted Integrative Method.

    Science.gov (United States)

    Li, Jing; Wang, Jing; Li, Jing

    2017-12-01

    As a public health problem, food allergy is frequently caused by food allergy proteins, which trigger a type-I hypersensitivity reaction in the immune system of atopic individuals. The food allergens in our daily lives come mainly from crops including rice, wheat, soybean and maize. However, allergens in these main crops are far from fully uncovered. Although some bioinformatics tools or methods for predicting the potential allergenicity of proteins have been proposed, each method has its limitations. In this paper, we built a novel algorithm, PREALW, which integrates PREAL, the FAO/WHO criteria and a motif-based method by a weighted average score, to combine the advantages of the different methods. Our results illustrate that PREALW performs significantly better in crop allergen prediction. This integrative allergen prediction algorithm could be useful for critical food safety matters. PREALW can be accessed at http://lilab.life.sjtu.edu.cn:8080/prealw.
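    The weighted-average integration can be sketched generically as below. The component scores, the weights, and the decision threshold are all invented for illustration, not PREALW's trained values.

    ```python
    # Generic weighted-score integration of three allergenicity predictors.
    # Scores, weights, and threshold are hypothetical.
    def weighted_score(scores, weights):
        """Weighted average of per-method allergenicity scores in [0, 1]."""
        assert len(scores) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
        return sum(s * w for s, w in zip(scores, weights))

    # Component scores for one candidate protein from three predictors:
    # a machine-learning score (PREAL-like), a FAO/WHO sequence-identity
    # criterion (0 or 1), and a motif-based score.
    scores = [0.82, 1.0, 0.65]
    weights = [0.5, 0.3, 0.2]           # in practice tuned on training data
    combined = weighted_score(scores, weights)
    is_allergen = combined >= 0.5       # illustrative decision threshold
    ```

    The weights let a stronger component dominate without discarding the others, which is the stated motivation for combining methods with complementary limitations.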

  5. Evaluation of mathematical methods for predicting optimum dose of gamma radiation in sugarcane (Saccharum sp.)

    International Nuclear Information System (INIS)

    Wu, K.K.; Siddiqui, S.H.; Heinz, D.J.; Ladd, S.L.

    1978-01-01

    Two mathematical methods - the reversed logarithmic method and the regression method - were used to compare the predicted and the observed optimum gamma radiation dose (OD50) in vegetative propagules of sugarcane. The reversed logarithmic method, usually used in sexually propagated crops, showed the largest difference between the predicted and observed optimum dose. The regression method resulted in a better prediction of the observed values and is suggested as a better method for the prediction of optimum dose for vegetatively propagated crops. (author)

  6. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  7. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    Science.gov (United States)

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where a method for selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and then predictions were made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative strategy to GoF-optimal was the one using a prediction base of 5 years. The GoF-optimal approach can be used as a selection criterion in order to find an adequate base of prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
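    The selection loop behind a goodness-of-fit criterion can be roughly sketched as follows. This toy uses an ordinary least-squares linear trend and the mean squared residual as the fit score, which is a simplification of the paper's linear and log-linear Poisson models, and the annual case counts are invented.

    ```python
    # Toy base-selection loop: try every candidate length of recent history,
    # score the fit, keep the best, then project 5 years ahead.
    def fit_line(xs, ys):
        """Ordinary least-squares line y = a + b * x."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        return my - b * mx, b

    def gof_optimal_base(years, counts, min_base=3):
        """Return (mse, base_length, intercept, slope) for the best-fitting
        suffix of the historical series."""
        best = None
        for k in range(min_base, len(years) + 1):
            xs, ys = years[-k:], counts[-k:]
            a, b = fit_line(xs, ys)
            mse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / k
            if best is None or mse < best[0]:
                best = (mse, k, a, b)
        return best

    years = list(range(2000, 2010))
    # An old flat regime followed by a recent linear rise, so a short base
    # capturing only the recent trend should fit (and extrapolate) best.
    counts = [100, 101, 99, 100, 102, 120, 140, 160, 180, 200]
    mse, base, a, b = gof_optimal_base(years, counts)
    forecast_2014 = a + b * 2014
    ```

    The paper's method additionally validates the choice ex post with prediction-interval coverage and discrepancy ratios; this sketch only shows why a data-driven base can beat a fixed 10- or 20-year window when the trend has recently changed.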

  8. Creep-fatigue life prediction method using Diercks equation for Cr-Mo steel

    International Nuclear Information System (INIS)

    Sonoya, Keiji; Nonaka, Isamu; Kitagawa, Masaki

    1990-01-01

    When creep-fatigue life data for a material do not exist, a simple method to predict creep-fatigue life properties is necessary. A method to predict the creep-fatigue life properties of Cr-Mo steels is proposed on the basis of the D. Diercks equation, which correlates the creep-fatigue lives of SUS 304 steels under various temperatures, strain ranges, strain rates and hold times. The accuracy of the proposed method was compared with that of the existing methods. The following results were obtained. (1) The fatigue strength and creep rupture strength of Cr-Mo steel are different from those of SUS 304 steel. Therefore, in order to apply the Diercks equation to creep-fatigue prediction for Cr-Mo steel, the difference in fatigue strength was corrected by the fatigue life ratio of the two steels, and the difference in creep rupture strength was corrected by the equivalent temperature corresponding to equal strength of the two steels. (2) Creep-fatigue life can be predicted by the modified Diercks equation within a factor of 2, which is nearly as precise as the strain range partitioning method. The required test and analysis procedure of this method is not as complicated as that of the strain range partitioning method. (author)

  9. Predicting metabolic syndrome using decision tree and support vector machine methods

    Directory of Open Access Journals (Sweden)

    Farzaneh Karimi-Alavijeh

    2016-06-01

    BACKGROUND: Metabolic syndrome, which underlies the increased prevalence of cardiovascular disease and type 2 diabetes, is considered a group of metabolic abnormalities including central obesity, hypertriglyceridemia, glucose intolerance, hypertension, and dyslipidemia. Recently, artificial intelligence based health-care systems have been highly regarded because of their success in diagnosis, prediction, and choice of treatment. This study employs machine learning techniques to predict metabolic syndrome. METHODS: This study aims to employ decision tree and support vector machine (SVM) methods to predict the 7-year incidence of metabolic syndrome. This is a practical study in which data from 2107 participants of the Isfahan Cohort Study were utilized. Subjects without metabolic syndrome according to the ATPIII criteria were selected. The features used in this data set include: gender, age, weight, body mass index, waist circumference, waist-to-hip ratio, hip circumference, physical activity, smoking, hypertension, antihypertensive medication use, systolic blood pressure (BP), diastolic BP, fasting blood sugar, 2-hour blood glucose, triglycerides (TGs), total cholesterol, low-density lipoprotein, high-density lipoprotein cholesterol, mean corpuscular volume, and mean corpuscular hemoglobin. Metabolic syndrome was diagnosed based on the ATPIII criteria, and the two methods of decision tree and SVM were selected to predict it. The criteria of sensitivity, specificity and accuracy were used for validation. RESULTS: The SVM and decision tree methods were examined according to the criteria of sensitivity, specificity and accuracy. Sensitivity, specificity and accuracy were 0.774 (0.758), 0.74 (0.72) and 0.757 (0.739) for the SVM (decision tree) method. CONCLUSION: The results show that the SVM method is more efficient than the decision tree in terms of sensitivity, specificity and accuracy. The results of the decision tree method show that the TG is the most
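    The validation criteria quoted above follow directly from a confusion matrix. Below is a generic sketch with made-up labels and predictions, not the study's data.

    ```python
    # Sensitivity, specificity and accuracy from a binary confusion matrix.
    # Labels and predictions are invented toy values.
    def confusion_metrics(y_true, y_pred):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        sensitivity = tp / (tp + fn)        # true-positive rate
        specificity = tn / (tn + fp)        # true-negative rate
        accuracy = (tp + tn) / len(y_true)
        return sensitivity, specificity, accuracy

    # Toy labels: 1 = developed metabolic syndrome within 7 years.
    y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
    y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]  # e.g. one classifier's output
    sens, spec, acc = confusion_metrics(y_true, y_pred)
    ```

    Computing all three jointly matters here because, as in the study's comparison, one classifier can win on sensitivity while losing on specificity.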

  10. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    Science.gov (United States)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

    Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques. We can note that clustering information plays an important role in solving the link prediction problem. In the previous literature, the node clustering coefficient appears frequently in many link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish the different clustering abilities of a node with respect to different node pairs. In this paper, we shift our focus from nodes to links, and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, especially achieving remarkable improvements on food web, hamster friendship and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
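    As background for the clustering-information idea, here is a sketch of a node-clustering-based common-neighbor score (in the spirit of the baselines the ALC coefficient is designed to improve): each common neighbor of a candidate pair contributes its node clustering coefficient. The toy graph is invented, and this is not the paper's ALC formula.

    ```python
    # Node-clustering-based link-prediction score: sum the clustering
    # coefficients of the common neighbors of a candidate node pair.
    from itertools import combinations

    def clustering_coefficient(adj, z):
        """Fraction of neighbor pairs of z that are themselves linked."""
        nbrs = adj[z]
        k = len(nbrs)
        if k < 2:
            return 0.0
        links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        return 2.0 * links / (k * (k - 1))

    def cn_clustering_score(adj, x, y):
        """Score for the candidate link (x, y)."""
        return sum(clustering_coefficient(adj, z) for z in adj[x] & adj[y])

    # Toy undirected graph as adjacency sets (edges stored both ways).
    adj = {
        1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3, 5}, 5: {4},
    }
    score_2_4 = cn_clustering_score(adj, 2, 4)   # candidate link (2, 4)
    ```

    The limitation the paper targets is visible here: each common neighbor contributes one global per-node coefficient, regardless of which node pair is being scored, whereas the ALC coefficient assigns clustering information to links asymmetrically.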

  11. Prediction of polymer flooding performance using an analytical method

    International Nuclear Information System (INIS)

    Tan Czek Hoong; Mariyamni Awang; Foo Kok Wai

    2001-01-01

    The study investigated the applicability of an analytical method developed by El-Khatib to polymer flooding. Results from the simulator UTCHEM and from experiments were compared with the El-Khatib prediction method. In general, by assuming constant-viscosity polymer injection, the method gave much higher recovery values than the simulation runs and the experiments. A modification of the method gave better correlation, albeit only for oil production. Investigation is continuing on modifying the method so that a better overall fit can be obtained for polymer flooding. (Author)

  12. Development of an integrated method for long-term water quality prediction using seasonal climate forecast

    Directory of Open Access Journals (Sweden)

    J. Cho

    2016-10-01

    The APEC Climate Center (APCC) produces climate prediction information utilizing a multi-climate model ensemble (MME) technique. In this study, four different downscaling methods, in accordance with the degree of utilizing the seasonal climate prediction information, were developed in order to improve predictability and to refine the spatial scale. These methods include: (1) the Simple Bias Correction (SBC) method, which directly uses APCC's dynamic prediction data with a 3 to 6 month lead time; (2) the Moving Window Regression (MWR) method, which indirectly utilizes dynamic prediction data; (3) the Climate Index Regression (CIR) method, which predominantly uses observation-based climate indices; and (4) the Integrated Time Regression (ITR) method, which uses predictors selected from both CIR and MWR. Then, a sampling-based temporal downscaling was conducted using the Mahalanobis distance method in order to create daily weather inputs to the Soil and Water Assessment Tool (SWAT) model. Long-term predictability of water quality within the Wecheon watershed of the Nakdong River Basin was evaluated. According to the Korean Ministry of Environment's Provisions of Water Quality Prediction and Response Measures, modeling-based predictability was evaluated by using 3-month lead prediction data issued in February, May, August, and November as model input to SWAT. Finally, an integrated approach, which takes into account various climate information and downscaling methods for water quality prediction, was presented. This integrated approach can be used to prevent potential problems caused by extreme climate in advance.
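    Of the four methods, the Simple Bias Correction idea is the easiest to sketch: estimate the model's mean offset from observations over a hindcast period, then remove that offset from new forecasts. The additive form and all values below are illustrative assumptions, not the study's implementation.

    ```python
    # Additive simple bias correction for a seasonal forecast variable.
    # Hindcast, observations, and forecast values are invented.
    def mean(xs):
        return sum(xs) / len(xs)

    def simple_bias_correction(hindcast, observed, forecast):
        """Shift forecasts by the mean hindcast-minus-observation offset."""
        bias = mean(hindcast) - mean(observed)
        return [f - bias for f in forecast]

    # Monthly precipitation (mm): this toy model runs 10 mm wet on average
    # over the overlap period, so forecasts are shifted down by 10 mm.
    hindcast = [110, 95, 130, 105]
    observed = [100, 85, 120, 95]
    forecast = [125, 90]
    corrected = simple_bias_correction(hindcast, observed, forecast)
    ```

    The regression-based methods in the study (MWR, CIR, ITR) replace this constant shift with fitted relationships between predictors and the target variable.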

  13. The Comparison Study of Short-Term Prediction Methods to Enhance the Model Predictive Controller Applied to Microgrid Energy Management

    Directory of Open Access Journals (Sweden)

    César Hernández-Hernández

    2017-06-01

    Electricity load forecasting, optimal power system operation and energy management play key roles that can bring significant operational advantages to microgrids. This paper studies how methods based on time series and neural networks can be used to predict energy demand and production, allowing them to be combined with model predictive control. Comparisons of different prediction methods and different optimum energy distribution scenarios are provided, permitting us to determine when short-term energy prediction models should be used. The proposed prediction models, combined with the model predictive control strategy, appear to be a promising solution for energy management in microgrids. The controller has the task of managing the purchase and sale of electricity to the power grid, maximizing the use of renewable energy sources and managing the use of the energy storage system. Simulations were performed under different weather conditions of solar irradiation. The obtained results are encouraging for future practical implementation.

  14. PatchSurfers: Two methods for local molecular property-based binding ligand prediction.

    Science.gov (United States)

    Shin, Woong-Hee; Bures, Mark Gregory; Kihara, Daisuke

    2016-01-15

    Protein function prediction is an active area of research in computational biology. Function prediction can help biologists make hypotheses for characterization of genes and help interpret biological assays, and thus is a productive area for collaboration between experimental and computational biologists. Among various function prediction methods, predicting binding ligand molecules for a target protein is an important class because ligand binding events for a protein are usually closely intertwined with the protein's biological function, and also because predicted binding ligands can often be directly tested by biochemical assays. Binding ligand prediction methods can be classified into two types: those which are based on protein-protein (or pocket-pocket) comparison, and those that compare a target pocket directly to ligands. Recently, our group proposed two computational binding ligand prediction methods, Patch-Surfer, which is a pocket-pocket comparison method, and PL-PatchSurfer, which compares a pocket to ligand molecules. The two programs apply surface patch-based descriptions to calculate similarity or complementarity between molecules. A surface patch is characterized by physicochemical properties such as shape, hydrophobicity, and electrostatic potentials. These properties on the surface are represented using three-dimensional Zernike descriptors (3DZD), which are based on a series expansion of a three-dimensional function. Utilizing 3DZD for describing the physicochemical properties has two main advantages: (1) rotational invariance and (2) fast comparison. Here, we introduce Patch-Surfer and PL-PatchSurfer with an emphasis on PL-PatchSurfer, which was developed more recently. Illustrative examples of PL-PatchSurfer performance on binding ligand prediction as well as virtual drug screening are also provided. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods

    OpenAIRE

    Shan, Min

    2017-01-01

    With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted on churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods for defining models of churn prediction and to apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...
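Forward selection as described can be sketched generically: greedily add whichever feature most improves a scoring function, and stop when nothing helps. The feature names and the scorer below are invented placeholders; in the thesis the score would be the cross-validated performance of the supervised classifier.

```python
def forward_selection(features, score):
    """Greedy forward selection: repeatedly add the feature that most
    improves score(subset); stop as soon as no addition helps."""
    selected = []
    best = score(selected)
    remaining = list(features)
    while remaining:
        candidate = max(remaining, key=lambda f: score(selected + [f]))
        s = score(selected + [candidate])
        if s <= best:
            break
        selected.append(candidate)
        remaining.remove(candidate)
        best = s
    return selected

# Hypothetical scorer: each feature has a known marginal gain and
# larger subsets are penalized (standing in for cross-validated AUC).
gains = {"tenure": 0.30, "visits": 0.25, "age": 0.05}
score = lambda subset: sum(gains[f] for f in subset) - 0.04 * len(subset) ** 2
chosen = forward_selection(list(gains), score)
```

With these toy numbers the procedure keeps the two informative features and stops before adding the weak one, because its gain no longer outweighs the subset-size penalty.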

  16. [Predictive methods versus clinical titration for the initiation of lithium therapy. A systematic review].

    Science.gov (United States)

    Geeraerts, I; Sienaert, P

    2013-01-01

    When lithium is administered, the clinician needs to know when the lithium in the patient’s blood has reached a therapeutic level. At the initiation of treatment the level is usually achieved gradually through the application of the titration method. In order to increase the efficacy of this procedure several methods for dosing lithium and for predicting lithium levels have been developed. To conduct a systematic review of the publications relating to the various methods for dosing lithium or predicting lithium levels at the initiation of therapy. We searched Medline systematically for articles published in English, French or Dutch between 1966 and April 2012 which described or studied a method for dosing lithium or for predicting the lithium level reached following a specific dosage. We screened the reference lists of relevant articles in order to locate additional papers. We found 38 lithium prediction methods, in addition to the clinical titration method. These methods can be divided into two categories: the ‘a priori’ methods and the ‘test-dose’ methods, the latter requiring the administration of a test dose of lithium. The lithium prediction methods generally achieve a therapeutic blood level faster than the clinical titration method, but none of the methods achieves convincing results. On the basis of our review, we propose that the titration method should be used as the standard method in clinical practice.

  17. Predicting metabolic syndrome using decision tree and support vector machine methods.

    Science.gov (United States)

    Karimi-Alavijeh, Farzaneh; Jalili, Saeed; Sadeghi, Masoumeh

    2016-05-01

    Metabolic syndrome, which underlies the increased prevalence of cardiovascular disease and Type 2 diabetes, is considered a group of metabolic abnormalities including central obesity, hypertriglyceridemia, glucose intolerance, hypertension, and dyslipidemia. Recently, artificial intelligence-based health-care systems have been highly regarded because of their success in diagnosis, prediction, and choice of treatment. This study employs machine learning techniques to predict metabolic syndrome: specifically, it aims to employ decision tree and support vector machine (SVM) methods to predict the 7-year incidence of metabolic syndrome. Data from 2107 participants of the Isfahan Cohort Study were utilized. The subjects without metabolic syndrome according to the ATPIII criteria were selected. The features used in this data set include: gender, age, weight, body mass index, waist circumference, waist-to-hip ratio, hip circumference, physical activity, smoking, hypertension, antihypertensive medication use, systolic blood pressure (BP), diastolic BP, fasting blood sugar, 2-hour blood glucose, triglycerides (TGs), total cholesterol, low-density lipoprotein, high-density lipoprotein cholesterol, mean corpuscular volume, and mean corpuscular hemoglobin. Metabolic syndrome was diagnosed based on ATPIII criteria, and the two methods of decision tree and SVM were used to predict it. The criteria of sensitivity, specificity, and accuracy were used for validation: these were 0.774 (0.758), 0.74 (0.72), and 0.757 (0.739), respectively, for the SVM (decision tree) method. The results show that the SVM method is more efficient than the decision tree in terms of sensitivity, specificity, and accuracy. The results of the decision tree method show that TG is the most important feature in predicting metabolic syndrome.
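The decision-tree finding that triglycerides are the most informative feature can be illustrated with a one-feature "decision stump", the single split at the root of such a tree. All numbers below are invented for illustration and are not taken from the Isfahan cohort.

```python
def best_stump(values, labels):
    """One-split decision stump: choose the threshold on a single
    feature that maximizes accuracy (predict positive when >= t)."""
    pairs = sorted(zip(values, labels))
    best_t, best_acc = None, -1.0
    for t, _ in pairs:
        acc = sum((v >= t) == lab for v, lab in pairs) / len(pairs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Invented triglyceride values (mg/dL) and 7-year outcomes.
tg = [90, 110, 130, 160, 180, 210]
developed_ms = [False, False, False, True, True, True]
threshold, acc = best_stump(tg, developed_ms)
```

A full decision tree repeats this threshold search recursively over all features; the feature chosen at the root is the one the abstract reports as "most important".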

  18. The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network

    Science.gov (United States)

    Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.

    2017-05-01

    The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is a key problem of precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-treated based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by IGS (International Global Navigation Satellite System Service) to realize the short-term prediction experiment, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, the GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-term SCB prediction and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.

  19. Water hammer prediction and control: the Green's function method

    Science.gov (United States)

    Xuan, Li-Jun; Mao, Feng; Wu, Jie-Zhi

    2012-04-01

    By Green's function method we show that the water hammer (WH) can be analytically predicted for both laminar and turbulent flows (for the latter, with an eddy viscosity depending solely on the space coordinates), and thus its hazardous effect can be rationally controlled and minimized. To this end, we generalize a laminar water hammer equation of Wang et al. (J. Hydrodynamics, B2, 51, 1995) to include arbitrary initial condition and variable viscosity, and obtain its solution by Green's function method. The predicted characteristic WH behaviors by the solutions are in excellent agreement with both direct numerical simulation of the original governing equations and, by adjusting the eddy viscosity coefficient, experimentally measured turbulent flow data. Optimal WH control principle is thereby constructed and demonstrated.

  20. A data-driven prediction method for fast-slow systems

    Science.gov (United States)

    Groth, Andreas; Chekroun, Mickael; Kondrashov, Dmitri; Ghil, Michael

    2016-04-01

    In this work, we present a prediction method for processes that exhibit a mixture of variability on slow and fast scales. The method relies on combining empirical model reduction (EMR) with singular spectrum analysis (SSA). EMR is a data-driven methodology for constructing stochastic low-dimensional models that account for nonlinearity and serial correlation in the estimated noise, while SSA provides a decomposition of the complex dynamics into low-order components that capture spatio-temporal behavior on different time scales. Our study focuses on the data-driven modeling of partial observations from dynamical systems that exhibit power spectra with broad peaks. The main result in this talk is that the combination of SSA pre-filtering with EMR modeling improves, under certain circumstances, the modeling and prediction skill of such a system, as compared to a standard EMR prediction based on raw data. Specifically, it is the separation into "fast" and "slow" temporal scales by the SSA pre-filtering that achieves the improvement. We show, in particular, that the resulting EMR-SSA emulators help predict intermittent behavior such as rapid transitions between specific regions of the system's phase space. This capability of the EMR-SSA prediction will be demonstrated on two low-dimensional models: the Rössler system and a Lotka-Volterra model for interspecies competition. In either case, the chaotic dynamics is produced through a Shilnikov-type mechanism, and we argue that the latter seems to be an important ingredient for the good prediction skills of EMR-SSA emulators. Shilnikov-type behavior has been shown to arise in various complex geophysical fluid models, such as baroclinic quasi-geostrophic flows in the mid-latitude atmosphere and wind-driven double-gyre ocean circulation models. This pervasiveness of the Shilnikov mechanism of fast-slow transition opens interesting perspectives for the extension of the proposed EMR-SSA approach to more realistic situations.

  1. Development of laboratory acceleration test method for service life prediction of concrete structures

    International Nuclear Information System (INIS)

    Cho, M. S.; Song, Y. C.; Bang, K. S.; Lee, J. S.; Kim, D. K.

    1999-01-01

    Service life prediction of nuclear power plants depends on the operating history of structures, field inspections and tests, and the development of laboratory acceleration tests, their analysis methods, and predictive models. In this study, a laboratory acceleration test method for service life prediction of concrete structures and the application of experimental test results are introduced. This study is concerned with the environmental conditions of concrete structures and aims to develop acceleration test methods for durability factors of concrete structures, e.g., carbonation, sulfate attack, freeze-thaw cycles, and shrinkage-expansion.

  2. Analysis of deep learning methods for blind protein contact prediction in CASP12.

    Science.gov (United States)

    Wang, Sheng; Sun, Siqi; Xu, Jinbo

    2018-03-01

    Here we present the results of protein contact prediction achieved in CASP12 by our RaptorX-Contact server, which is an early implementation of our deep learning method for contact prediction. On a set of 38 free-modeling target domains with a median family size of around 58 effective sequences, our server obtained an average top L/5 long- and medium-range contact accuracy of 47% and 44%, respectively (L = length). A complete implementation has an average accuracy of 59% and 57%, respectively. Our deep learning method formulates contact prediction as a pixel-level image labeling problem and simultaneously predicts all residue pairs of a protein using a combination of two deep residual neural networks, taking as input the residue conservation information, predicted secondary structure and solvent accessibility, contact potential, and coevolution information. Our approach differs from existing methods mainly in (1) formulating contact prediction as a pixel-level image labeling problem instead of an image-level classification problem; (2) simultaneously predicting all contacts of an individual protein to make effective use of contact occurrence patterns; and (3) integrating both one-dimensional and two-dimensional deep convolutional neural networks to effectively learn complex sequence-structure relationship including high-order residue correlation. This paper discusses the RaptorX-Contact pipeline, both contact prediction and contact-based folding results, and finally the strength and weakness of our method. © 2017 Wiley Periodicals, Inc.

  3. A novel method for improved accuracy of transcription factor binding site prediction

    KAUST Repository

    Khamis, Abdullah M.; Motwalli, Olaa Amin; Oliva, Romina; Jankovic, Boris R.; Medvedeva, Yulia; Ashoor, Haitham; Essack, Magbubah; Gao, Xin; Bajic, Vladimir B.

    2018-01-01

    Identifying transcription factor (TF) binding sites (TFBSs) is important in the computational inference of gene regulation. Widely used computational methods of TFBS prediction based on position weight matrices (PWMs) usually have high false positive rates. Moreover, computational studies of transcription regulation in eukaryotes frequently require numerous PWM models of TFBSs due to a large number of TFs involved. To overcome these problems we developed DRAF, a novel method for TFBS prediction that requires only 14 prediction models for 232 human TFs, while at the same time significantly improves prediction accuracy. DRAF models use more features than PWM models, as they combine information from TFBS sequences and physicochemical properties of TF DNA-binding domains into machine learning models. Evaluation of DRAF on 98 human ChIP-seq datasets shows on average 1.54-, 1.96- and 5.19-fold reduction of false positives at the same sensitivities compared to models from HOCOMOCO, TRANSFAC and DeepBind, respectively. This observation suggests that one can efficiently replace the PWM models for TFBS prediction by a small number of DRAF models that significantly improve prediction accuracy. The DRAF method is implemented in a web tool and in a stand-alone software freely available at http://cbrc.kaust.edu.sa/DRAF.
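For contrast with DRAF, the PWM baseline it improves upon can be sketched as a log-odds scan over a sequence. The motif, probabilities, and sequence below are hypothetical; real scans use matrices from databases such as HOCOMOCO or TRANSFAC and a score threshold, which is where the high false-positive rates arise.

```python
import math

def pwm_scan(seq, pwm, background=0.25):
    """Score every window of a DNA sequence against a position weight
    matrix (log2-odds vs. a uniform background); return the best hit."""
    w = len(pwm)
    best_pos, best_score = -1, float("-inf")
    for i in range(len(seq) - w + 1):
        s = sum(math.log2(col[seq[i + j]] / background)
                for j, col in enumerate(pwm))
        if s > best_score:
            best_pos, best_score = i, s
    return best_pos, best_score

# Hypothetical 3-bp motif that strongly prefers "TGA".
pwm = [{"A": 0.01, "C": 0.01, "G": 0.01, "T": 0.97},
       {"A": 0.01, "C": 0.01, "G": 0.97, "T": 0.01},
       {"A": 0.97, "C": 0.01, "G": 0.01, "T": 0.01}]
pos, score = pwm_scan("CCTGACC", pwm)
```

DRAF's contribution, per the abstract, is to replace this per-position independence assumption with machine-learned models that also use physicochemical properties of the TF DNA-binding domain.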

  5. Some new results on correlation-preserving factor scores prediction methods

    NARCIS (Netherlands)

    Ten Berge, J.M.F.; Krijnen, W.P.; Wansbeek, T.J.; Shapiro, A.

    1999-01-01

    Anderson and Rubin and McDonald have proposed a correlation-preserving method of factor scores prediction which minimizes the trace of a residual covariance matrix for variables. Green has proposed a correlation-preserving method which minimizes the trace of a residual covariance matrix for factors.

  6. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in prediction of time series is proposed. Apart from the conventional criterion of minimizing RMS error, the method also minimizes the error on the distribution of singularities, evaluated through the local Hoelder estimates and its probability density spectrum. Predictions of two simulated and one real time series have been done using kernel principal component regression (KPCR) and model parameters of KPCR have been selected employing the proposed as well as the conventional method. Results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)

  7. Assessment of NASA and RAE viscous-inviscid interaction methods for predicting transonic flow over nozzle afterbodies

    Science.gov (United States)

    Putnam, L. E.; Hodges, J.

    1983-01-01

    The Langley Research Center of the National Aeronautics and Space Administration and the Royal Aircraft Establishment have undertaken a cooperative program to conduct an assessment of their patched viscous-inviscid interaction methods for predicting the transonic flow over nozzle afterbodies. The assessment was made by comparing the predictions of the two methods with experimental pressure distributions and boattail pressure drag for several convergent circular-arc nozzle configurations. Comparisons of the predictions of the two methods with the experimental data showed that both methods provided good predictions of the flow characteristics of nozzles with attached boundary layer flow. The RAE method also provided reasonable predictions of the pressure distributions and drag for the nozzles investigated that had separated boundary layers. The NASA method provided good predictions of the pressure distribution on separated flow nozzles that had relatively thin boundary layers. However, the NASA method was in poor agreement with experiment for separated nozzles with thick boundary layers due primarily to deficiencies in the method used to predict the separation location.

  8. Hybrid robust predictive optimization method of power system dispatch

    Science.gov (United States)

    Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY

    2011-08-02

    A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule different assets in order to achieve global optimization and maintain the system normal operation.

  9. Orthology prediction methods: a quality assessment using curated protein families.

    Science.gov (United States)

    Trachana, Kalliopi; Larsson, Tomas A; Powell, Sean; Chen, Wei-Hua; Doerks, Tobias; Muller, Jean; Bork, Peer

    2011-10-01

    The increasing number of sequenced genomes has prompted the development of several automated orthology prediction methods. Tests to evaluate the accuracy of predictions and to explore biases caused by biological and technical factors are therefore required. We used 70 manually curated families to analyze the performance of five public methods in Metazoa. We analyzed the strengths and weaknesses of the methods and quantified the impact of biological and technical challenges. From the latter part of the analysis, genome annotation emerged as the largest single influencer, affecting up to 30% of the performance. Generally, most methods did well in assigning orthologous groups, but they failed to assign the exact number of genes for half of the groups. The publicly available benchmark set (http://eggnog.embl.de/orthobench/) should facilitate the improvement of current orthology assignment protocols, which is of utmost importance for many fields of biology and should be tackled by a broad scientific community. Copyright © 2011 WILEY Periodicals, Inc.

  10. Prediction of intestinal absorption and blood-brain barrier penetration by computational methods.

    Science.gov (United States)

    Clark, D E

    2001-09-01

    This review surveys the computational methods that have been developed with the aim of identifying drug candidates likely to fail later on the road to market. The specifications for such computational methods are outlined, including factors such as speed, interpretability, robustness and accuracy. Then, computational filters aimed at predicting "drug-likeness" in a general sense are discussed before methods for the prediction of more specific properties--intestinal absorption and blood-brain barrier penetration--are reviewed. Directions for future research are discussed and, in concluding, the impact of these methods on the drug discovery process, both now and in the future, is briefly considered.
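One widely known example of the general "drug-likeness" filters this review discusses is Lipinski's rule of five, which flags compounds likely to have poor oral absorption. The sketch below implements the common form of the rule; the example property values are invented.

```python
def lipinski_pass(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski 'rule of five' drug-likeness filter: accept a compound
    unless it violates more than one of the four criteria."""
    violations = sum([mol_weight > 500,   # molecular weight (Da)
                      logp > 5,           # octanol-water partition coeff.
                      h_donors > 5,       # hydrogen-bond donors
                      h_acceptors > 10])  # hydrogen-bond acceptors
    return violations <= 1

# Invented property values for two hypothetical candidates.
drug_like = lipinski_pass(350.0, 2.1, 2, 5)
too_greasy = lipinski_pass(650.0, 6.5, 1, 4)
```

Filters of this kind satisfy the review's stated specifications well: they are fast, interpretable, and robust, at the cost of being coarse compared to trained absorption models.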

  11. Methods, apparatus and system for notification of predictable memory failure

    Energy Technology Data Exchange (ETDEWEB)

    Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2017-01-03

    A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
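The claimed steps can be sketched in a few lines. The probability model, weights, and threshold below are illustrative assumptions; the abstract specifies the steps but not a particular formula.

```python
import math

def failure_signal(correctable_error_rate, temperature_c,
                   rate_weight=0.02, temp_weight=0.01, threshold=0.5):
    """Map observed memory conditions to a failure probability and
    emit a notification signal when the threshold is exceeded.

    The condition weighting and the exponential mapping are purely
    illustrative stand-ins for the unspecified probability model.
    """
    score = (rate_weight * correctable_error_rate
             + temp_weight * max(temperature_c - 60.0, 0.0))
    probability = 1.0 - math.exp(-score)   # squash score into [0, 1)
    return probability, probability > threshold

healthy = failure_signal(correctable_error_rate=1, temperature_c=50)
degraded = failure_signal(correctable_error_rate=50, temperature_c=90)
```

In a real system the returned signal would trigger proactive measures such as page retirement or checkpointing before the predicted failure occurs.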

  12. A prediction method based on grey system theory in equipment condition based maintenance

    International Nuclear Information System (INIS)

    Yan, Shengyuan; Zhang, Hongguo; Zhang, Zhijian; Peng, Minjun; Yang, Ming

    2007-01-01

    Grey prediction is a modeling method based on historical or present, known or indefinite information, which can be used for forecasting the development of the eigenvalues of the targeted equipment system and setting up the model by using less information. In this paper, the postulates of grey system theory, including grey generating, the sorts of grey generating, and the grey forecasting model, are introduced first. The concrete application process, which includes grey prediction modeling, grey prediction, error calculation, and the equal-dimension and new-information approach, is introduced next. The so-called 'Equal Dimension and New Information' (EDNI) technology of grey system theory is adopted in an application case, aiming at improving the accuracy of prediction without increasing the amount of calculation by replacing old data with new ones. The proposed method provides an effective new way to handle the growth of equally spaced eigenvalue data, enabling short-interval and real-time prediction. The method was verified by the vibration prediction of an induced draft fan of a boiler at the Yantai Power Station in China, and the results show that the proposed method based on grey system theory is simple and provides high accuracy in prediction, making it valuable for condition monitoring and safe production management. (authors)
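The grey forecasting model behind such methods, GM(1,1), admits a compact closed-form fit. The sketch below implements the standard GM(1,1) procedure (accumulated generating operation, least-squares fit of the grey differential equation, inverse accumulation), independent of the paper's EDNI refinement; the example series is invented.

```python
import math

def gm11_forecast(x0, steps=1):
    """Grey GM(1,1) forecast: accumulate the series (AGO), fit the
    grey differential equation x0(k) + a*z(k) = b by least squares,
    then forecast the cumulative series and difference back (IAGO)."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # AGO
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # mean sequence
    y = x0[1:]
    # Closed-form 2x2 normal equations for [a, b], design rows [-z_k, 1].
    m, sz, sy = len(z), sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # fitted cumulative value at 0-based index k
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

# Invented vibration eigenvalue series; forecast the next value.
preds = gm11_forecast([1.0, 2.0, 4.0, 8.0], steps=1)
```

The EDNI refinement described in the abstract would, after each new observation, append it to the series and drop the oldest value before refitting, keeping the model dimension constant.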

  13. Reliable B cell epitope predictions: impacts of method development and improved benchmarking

    DEFF Research Database (Denmark)

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole

    2012-01-01

    biomedical applications such as rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping...... evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that, given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performances, suggesting these tools to be a powerful asset in rational epitope discovery. The updated version

  14. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    Science.gov (United States)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October-January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March-June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March-June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968-99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.

  15. Prediction of critical heat flux in fuel assemblies using a CHF table method

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Tae Hyun; Hwang, Dae Hyun; Bang, Je Geon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Baek, Won Pil; Chang, Soon Heung [Korea Advance Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    A CHF table method has been assessed in this study for rod bundle CHF predictions. At the conceptual design stage for a new reactor, a general critical heat flux (CHF) prediction method with a wide applicable range and reasonable accuracy is essential to the thermal-hydraulic design and safety analysis. In many aspects, a CHF table method (i.e., the use of a round tube CHF table with appropriate bundle correction factors) can be a promising way to fulfill this need. So the assessment of the CHF table method has been performed with the bundle CHF data relevant to pressurized water reactors (PWRs). For comparison purposes, W-3R and EPRI-1 were also applied to the same data base. Data analysis has been conducted with the subchannel code COBRA-IV-I. The CHF table method shows the best predictions based on the direct substitution method. Improvements of the bundle correction factors, especially for the spacer grid and cold wall effects, are desirable for better predictions. Though the present assessment is somewhat limited in both fuel geometries and operating conditions, the CHF table method clearly shows potential to be a general CHF predictor. 8 refs., 3 figs., 3 tabs. (Author)
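The core of a CHF table method, interpolating a round-tube look-up table at the local thermal-hydraulic conditions, can be sketched as bilinear interpolation. The tiny two-by-two table below is invented for illustration (real tables span pressure, mass flux, and quality, and the result is then multiplied by bundle correction factors such as those for spacer grids and cold walls).

```python
def chf_lookup(table, pressures, qualities, p, x):
    """Bilinearly interpolate a round-tube CHF look-up table at
    pressure p and quality x."""
    def bracket(grid, v):
        for i in range(len(grid) - 1):
            if grid[i] <= v <= grid[i + 1]:
                return i
        raise ValueError("value outside table range")
    i, j = bracket(pressures, p), bracket(qualities, x)
    fp = (p - pressures[i]) / (pressures[i + 1] - pressures[i])
    fx = (x - qualities[j]) / (qualities[j + 1] - qualities[j])
    return ((1 - fp) * (1 - fx) * table[i][j]
            + fp * (1 - fx) * table[i + 1][j]
            + (1 - fp) * fx * table[i][j + 1]
            + fp * fx * table[i + 1][j + 1])

# Invented coarse table: CHF (kW/m^2) vs. pressure (MPa) and quality.
pressures = [10.0, 15.0]
qualities = [0.0, 0.2]
table = [[4000.0, 3000.0],
         [3500.0, 2500.0]]
chf = chf_lookup(table, pressures, qualities, 12.5, 0.1)
```

In the direct substitution method mentioned in the abstract, local conditions computed by a subchannel code such as COBRA-IV-I would supply the lookup point at each axial location.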

  17. Genomic prediction based on data from three layer lines: a comparison between linear methods

    NARCIS (Netherlands)

    Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.

    2014-01-01

    Background The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we

  18. Method for Predicting Solubilities of Solids in Mixed Solvents

    DEFF Research Database (Denmark)

    Ellegaard, Martin Dela; Abildskov, Jens; O'Connell, J. P.

    2009-01-01

    A method is presented for predicting solubilities of solid solutes in mixed solvents, based on excess Henry's law constants. The basis is statistical mechanical fluctuation solution theory for composition derivatives of solute/solvent infinite dilution activity coefficients. Suitable approximatio...

  19. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    Science.gov (United States)

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only within confined ranges of density, and there is no specific computational method for calculating thermal conductivity across wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods, and the effectiveness of the models was evaluated and compared. Results: The results show that the computational method is a usable method for predicting thermal conductivity at low densities. However, its efficiency is considerably reduced in the mid-range of density, meaning that this model cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is eliminated at higher densities, which makes the problem nonlinear. Therefore, analytical approaches are not able to predict thermal conductivity in wide ranges of density. Instead, a nonlinear approach such as an ANN is a valuable method for this purpose.

  20. Available Prediction Methods for Corrosion under Insulation (CUI): A Review

    OpenAIRE

    Burhani Nurul Rawaida Ain; Muhammad Masdi; Ismail Mokhtar Che

    2014-01-01

    Corrosion under insulation (CUI) is an increasingly important issue for piping in industries, especially petrochemical and chemical plants, because it can lead to unexpected catastrophic failures. Therefore, attention towards maintenance and the prediction of CUI occurrence, particularly of corrosion rates, has grown in recent years. In this study, a literature review in determining the corrosion rates by using various prediction models and method of the corrosion occurrence between the external su...

  1. A Practical Radiosity Method for Predicting Transmission Loss in Urban Environments

    Directory of Open Access Journals (Sweden)

    Liang Ming

    2004-01-01

    Full Text Available The ability to predict transmission loss or field strength distribution is crucial for determining coverage in planning personal communication systems. This paper presents a practical method to accurately predict the entire average transmission loss distribution in complicated urban environments. The method uses a 3D propagation model based on radiosity and a simplified city information database including surfaces of roads and building groups. Narrowband validation measurements with line-of-sight (LOS) and non-line-of-sight (NLOS) cases at 1800 MHz give excellent agreement in urban environments.

  2. Large-scale validation of methods for cytotoxic T-lymphocyte epitope prediction

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Lundegaard, Claus; Lamberth, K.

    2007-01-01

    BACKGROUND: Reliable predictions of Cytotoxic T lymphocyte (CTL) epitopes are essential for rational vaccine design. Most importantly, they can minimize the experimental effort needed to identify epitopes. NetCTL is a web-based tool designed for predicting human CTL epitopes in any given protein....... of the other methods achieved a sensitivity of 0.64. The NetCTL-1.2 method is available at http://www.cbs.dtu.dk/services/NetCTL.All used datasets are available at http://www.cbs.dtu.dk/suppl/immunology/CTL-1.2.php....

  3. Application of artificial intelligence methods for prediction of steel mechanical properties

    Directory of Open Access Journals (Sweden)

    Z. Jančíková

    2008-10-01

    Full Text Available The target of the contribution is to outline the possibilities of applying artificial neural networks to the prediction of mechanical steel properties after heat treatment and to judge their prospective use in this field. The resulting models enable the prediction of final mechanical material properties on the basis of the decisive parameters influencing these properties. By combining artificial intelligence methods with mathematical-physical analysis methods, it will be possible to create facilities for designing a system for the continuous rationalization of existing and newly developing industrial technologies.

  4. A comparison of methods for cascade prediction

    OpenAIRE

    Guo, Ruocheng; Shakarian, Paulo

    2016-01-01

    Information cascades exist on a wide variety of platforms on the Internet. A very important real-world problem is to identify which information cascades can go viral. A system addressing this problem can be used in a variety of applications including public health, marketing and counter-terrorism. A cascade can be considered a compound of the social network and the time series. However, in the related literature where methods for solving the cascade prediction problem were proposed, the experimen...

  5. Preface to the Focus Issue: Chaos Detection Methods and Predictability

    International Nuclear Information System (INIS)

    Gottwald, Georg A.; Skokos, Charalampos

    2014-01-01

    This Focus Issue presents a collection of papers originating from the workshop Methods of Chaos Detection and Predictability: Theory and Applications held at the Max Planck Institute for the Physics of Complex Systems in Dresden, June 17–21, 2013. The main aim of this interdisciplinary workshop was to review comprehensively the theory and numerical implementation of the existing methods of chaos detection and predictability, as well as to report recent applications of these techniques to different scientific fields. The collection of twelve papers in this Focus Issue represents the wide range of applications, spanning mathematics, physics, astronomy, particle accelerator physics, meteorology and medical research. This Preface surveys the papers of this Issue

  6. Preface to the Focus Issue: chaos detection methods and predictability.

    Science.gov (United States)

    Gottwald, Georg A; Skokos, Charalampos

    2014-06-01

    This Focus Issue presents a collection of papers originating from the workshop Methods of Chaos Detection and Predictability: Theory and Applications held at the Max Planck Institute for the Physics of Complex Systems in Dresden, June 17-21, 2013. The main aim of this interdisciplinary workshop was to review comprehensively the theory and numerical implementation of the existing methods of chaos detection and predictability, as well as to report recent applications of these techniques to different scientific fields. The collection of twelve papers in this Focus Issue represents the wide range of applications, spanning mathematics, physics, astronomy, particle accelerator physics, meteorology and medical research. This Preface surveys the papers of this Issue.

  7. Risk prediction, safety analysis and quantitative probability methods - a caveat

    International Nuclear Information System (INIS)

    Critchley, O.H.

    1976-01-01

    Views are expressed on the use of quantitative techniques for the determination of value judgements in nuclear safety assessments, hazard evaluation, and risk prediction. Caution is urged when attempts are made to quantify value judgements in the field of nuclear safety. Criteria are given for the meaningful application of reliability methods, but doubts are expressed about their application to safety analysis, risk prediction and design guidance for experimental or prototype plant. Doubts are also expressed about some concomitant methods of population dose evaluation. The complexities of new designs of nuclear power plants make the problem of safety assessment more difficult, but some possible approaches are suggested as alternatives to the quantitative techniques criticized. (U.K.)

  8. Prediction of pKa values using the PM6 semiempirical method

    Directory of Open Access Journals (Sweden)

    Jimmy C. Kromann

    2016-08-01

    Full Text Available The PM6 semiempirical method and the dispersion and hydrogen bond-corrected PM6-D3H+ method are used together with the SMD and COSMO continuum solvation models to predict pKa values of pyridines, alcohols, phenols, benzoic acids, carboxylic acids, and amines using isodesmic reactions, and the results are compared to published ab initio results. The pKa values of pyridines, alcohols, phenols, and benzoic acids considered in this study can generally be predicted with PM6 and ab initio methods to within the same overall accuracy, with average mean absolute differences (MADs) of 0.6–0.7 pH units. For carboxylic acids, the accuracy (0.7–1.0 pH units) is also comparable to ab initio results if a single outlier is removed. For primary, secondary, and tertiary amines the accuracy is, respectively, similar (0.5–0.6), slightly worse (0.5–1.0), and worse (1.0–2.5), provided that di- and tri-ethylamine are used as reference molecules for secondary and tertiary amines. When applied to a drug-like molecule where an empirical pKa predictor exhibits a large (4.9 pH unit) error, we find that the errors for PM6-based predictions are roughly the same in magnitude but opposite in sign. As a result, most of the PM6-based methods predict the correct protonation state at physiological pH, while the empirical predictor does not. The computational cost is around 2–5 min per conformer per core processor, making PM6-based pKa prediction computationally efficient enough to be used for high-throughput screening using on the order of 100 core processors.
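    The isodesmic-reaction approach described above reduces pKa prediction to a reference pKa plus a computed proton-exchange free energy. A minimal sketch of that relation, with purely illustrative numbers (the reference pKa and exchange free energy below are not taken from the paper):

    ```python
    import math

    # Isodesmic-reaction pKa estimate: the unknown acid AH is compared with a
    # reference acid RefH of known pKa via the proton-exchange reaction
    #   AH + Ref-  ->  A- + RefH
    # pKa(AH) = pKa(RefH) + dG_exchange / (R * T * ln(10))
    # All numeric values below are illustrative assumptions.

    R = 1.987204e-3  # gas constant, kcal/(mol*K)
    T = 298.15       # temperature, K

    def isodesmic_pka(pka_ref, dg_exchange_kcal):
        """Predict a pKa from a reference pKa and the exchange free energy."""
        return pka_ref + dg_exchange_kcal / (R * T * math.log(10))

    # Example: reference pKa 5.2, computed exchange free energy +1.36 kcal/mol
    print(round(isodesmic_pka(5.2, 1.36), 2))
    ```

    A positive exchange free energy means AH is a weaker acid than the reference, so its predicted pKa is higher.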

  9. Method for estimating capacity and predicting remaining useful life of lithium-ion battery

    International Nuclear Information System (INIS)

    Hu, Chao; Jain, Gaurav; Tamirisa, Prabhakar; Gorka, Tom

    2014-01-01

    Highlights: • We develop an integrated method for the capacity estimation and RUL prediction. • A state projection scheme is derived for capacity estimation. • The Gauss–Hermite particle filter technique is used for the RUL prediction. • Results with 10 years’ continuous cycling data verify the effectiveness of the method. - Abstract: Reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as of high importance by a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure Li-ion batteries in these devices operate reliably, it is important to be able to assess the capacity of the Li-ion battery and predict the remaining useful life (RUL) throughout the whole lifetime. This paper presents an integrated method for the capacity estimation and RUL prediction of Li-ion batteries used in implantable medical devices. A state projection scheme from the authors’ previous study is used for the capacity estimation. Then, based on the capacity estimates, the Gauss–Hermite particle filter technique is used to project the capacity fade to the end-of-service (EOS) value (or the failure limit) for the RUL prediction. Results of a 10-year continuous cycling test on Li-ion prismatic cells in the lab suggest that the proposed method achieves good accuracy in the capacity estimation and captures the uncertainty in the RUL prediction. Post-explant weekly cycling data obtained from field cells with 4–7 implant years further verify the effectiveness of the proposed method in the capacity estimation.
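    The projection step above extrapolates the capacity fade to the end-of-service limit. As a rough sketch of that idea only, assuming a linear fade trend and synthetic data (the paper's state projection scheme and Gauss–Hermite particle filter are considerably more elaborate):

    ```python
    # Simplified RUL sketch: fit a linear capacity-fade trend to capacity
    # estimates and extrapolate to the end-of-service (EOS) limit.
    # Data and the linear model are illustrative assumptions.

    def fit_line(xs, ys):
        """Ordinary least squares for y = a + b*x (closed form)."""
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        a = my - b * mx
        return a, b

    def predict_rul(cycles, capacities, eos_limit):
        """Cycles remaining until the fitted trend crosses eos_limit."""
        a, b = fit_line(cycles, capacities)
        if b >= 0:
            return float("inf")  # no fade trend detected
        eos_cycle = (eos_limit - a) / b
        return eos_cycle - cycles[-1]

    # Capacity estimates (Ah) at known cycle counts, fading ~0.0001 Ah/cycle
    cycles = [0, 100, 200, 300, 400]
    caps = [1.00, 0.99, 0.98, 0.97, 0.96]
    print(predict_rul(cycles, caps, eos_limit=0.80))  # cycles to 0.80 Ah
    ```

    A particle filter replaces the single fitted line with a distribution over fade trajectories, which is what lets the actual method quantify RUL uncertainty.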

  10. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    Science.gov (United States)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and polar motion (PM). In this study, a detailed comparison was made between real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method and simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The obtained results show that the proposed method provides better accuracy at different prediction lengths.

  11. Application of the backstepping method to the prediction of increase or decrease of infected population.

    Science.gov (United States)

    Kuniya, Toshikazu; Sano, Hideki

    2016-05-10

    In mathematical epidemiology, age-structured epidemic models have usually been formulated as boundary-value problems of partial differential equations. On the other hand, in engineering, the backstepping method has recently been developed and widely studied by many authors. Using the backstepping method, we obtained a boundary feedback control which plays the role of the threshold criterion for predicting the increase or decrease of the newly infected population. Under the assumption that the period of infectiousness is the same for all infected individuals (that is, the recovery rate is given by the Dirac delta function multiplied by a sufficiently large positive constant), the prediction method simplifies to a comparison of the numbers of reported cases at the current and previous time steps. Our prediction method was applied to the reported cases per sentinel of influenza in Japan from 2006 to 2015, and its accuracy was 0.81 (404 correct predictions out of 500). This was higher than that of ARIMA models with different orders of the autoregressive part, differencing and moving-average process. In addition, a proposed method for estimating the number of reported cases, consistent with our prediction method, outperformed the best-fitted ARIMA model, ARIMA(1,1,0), in the sense of mean square error. Our prediction method based on the backstepping method can thus be simplified to a comparison of the numbers of reported cases at the current and previous time steps. In spite of its simplicity, it can provide a good prediction of the spread of influenza in Japan.
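    The simplified criterion described above, comparing the current and previous reported case counts, can be sketched directly; the weekly counts below are invented for illustration:

    ```python
    # Predict an increase ("up") in newly infected population when the
    # current reported case count exceeds the previous one, a decrease
    # ("down") otherwise, then score each prediction against what actually
    # happened in the following week. Case counts are illustrative.

    def predict_trend(cases):
        """Return per-week up/down predictions and their accuracy."""
        preds, actual = [], []
        for t in range(1, len(cases) - 1):
            preds.append("up" if cases[t] > cases[t - 1] else "down")
            actual.append("up" if cases[t + 1] > cases[t] else "down")
        correct = sum(p == a for p, a in zip(preds, actual))
        return preds, correct / len(preds)

    weekly_cases = [3, 5, 9, 14, 12, 8, 5, 6, 9]
    preds, accuracy = predict_trend(weekly_cases)
    print(preds, round(accuracy, 2))
    ```

    On this toy series the rule is right 5 times out of 7, illustrating how such a simple comparison can still track the epidemic trend.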

  12. Polyadenylation site prediction using PolyA-iEP method.

    Science.gov (United States)

    Kavakiotis, Ioannis; Tzanis, George; Vlahavas, Ioannis

    2014-01-01

    This chapter presents a method called PolyA-iEP that has been developed for the prediction of polyadenylation sites. More precisely, PolyA-iEP is a method that recognizes mRNA 3' ends which contain polyadenylation sites. It is a modular system consisting of two main components: the first exploits the advantages of emerging patterns, and the second is a distance-based scoring method. The outputs of the two components are finally combined by a classifier. The final results reach very high sensitivity and specificity.

  13. Customer churn prediction using a hybrid method and censored data

    Directory of Open Access Journals (Sweden)

    Reza Tavakkoli-Moghaddam

    2013-05-01

    Full Text Available Customers are believed to be the main part of any organization's assets, and customer retention as well as customer churn management are important responsibilities of organizations. In today's competitive environment, organizations must do their best to retain their existing customers, since attracting new customers costs significantly more than taking care of existing ones. In this paper, we present a hybrid method based on a neural network and Cox regression analysis, where the neural network is used to handle outlier data and the Cox regression method is implemented for the prediction of future events. The proposed model has been implemented on some data and the results are compared based on five criteria: prediction accuracy, type I and type II errors, root mean square error and mean absolute deviation. The preliminary results indicate that the proposed model performs better than alternative methods.
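    The five comparison criteria listed above can be sketched for binary churn labels and predicted churn probabilities; the labels, probabilities and the 0.5 decision threshold below are illustrative assumptions, not the paper's data:

    ```python
    # Evaluate churn predictions with the five criteria from the abstract:
    # accuracy, type I error, type II error, RMSE, and MAD.
    # y_true holds binary labels (1 = churn); y_prob holds predicted
    # churn probabilities. All values are toy data.

    def evaluate(y_true, y_prob, threshold=0.5):
        y_pred = [1 if p >= threshold else 0 for p in y_prob]
        n = len(y_true)
        accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n
        # Type I error: predicting churn for a customer who stays
        type1 = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred)) / n
        # Type II error: missing a customer who actually churns
        type2 = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred)) / n
        rmse = (sum((t - p) ** 2 for t, p in zip(y_true, y_prob)) / n) ** 0.5
        mad = sum(abs(t - p) for t, p in zip(y_true, y_prob)) / n
        return accuracy, type1, type2, rmse, mad

    y_true = [1, 0, 1, 0, 0, 1, 0, 1]
    y_prob = [0.9, 0.2, 0.4, 0.1, 0.6, 0.8, 0.3, 0.7]
    acc, t1, t2, rmse, mad = evaluate(y_true, y_prob)
    print(acc, t1, t2, round(rmse, 3), round(mad, 3))
    ```

    Reporting type I and type II errors separately matters in churn work because the business cost of losing a churner is usually higher than the cost of a retention offer sent to a loyal customer.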

  14. Predicting human splicing branchpoints by combining sequence-derived features and multi-label learning methods.

    Science.gov (United States)

    Zhang, Wen; Zhu, Xiaopeng; Fu, Yu; Tsuji, Junko; Weng, Zhiping

    2017-12-01

    Alternative splicing is a critical process in gene expression that removes introns and joins exons, and splicing branchpoints are indicators of alternative splicing. Wet experiments have identified a great number of human splicing branchpoints, but many branchpoints are still unknown. In order to guide wet experiments, we develop computational methods to predict human splicing branchpoints. Considering the fact that an intron may have multiple branchpoints, we formulate branchpoint prediction as a multi-label learning problem and attempt to predict branchpoint sites from intron sequences. First, we investigate a variety of intron sequence-derived features, such as the sparse profile, dinucleotide profile, position weight matrix profile, Markov motif profile and polypyrimidine tract profile. Second, we consider several multi-label learning methods: partial least squares regression, canonical correlation analysis and regularized canonical correlation analysis, and use them as the basic classification engines. Third, we propose two ensemble learning schemes which integrate different features and different classifiers to build ensemble learning systems for branchpoint prediction. One is a genetic algorithm-based weighted average ensemble method; the other is a logistic regression-based ensemble method. In the computational experiments, the two ensemble learning methods outperform benchmark branchpoint prediction methods and produce high-accuracy results on the benchmark dataset.
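    The weighted-average ensemble scheme can be sketched as a per-position weighted mean of base-classifier probabilities. The paper tunes the weights with a genetic algorithm; the fixed weights and the scores below are illustrative values only:

    ```python
    # Weighted-average ensemble: combine per-position branchpoint
    # probabilities from several base classifiers with a weight per model.
    # All probabilities and weights are illustrative assumptions.

    def weighted_ensemble(prob_lists, weights):
        """Combine per-position probabilities from several models."""
        total = sum(weights)
        n = len(prob_lists[0])
        return [
            sum(w * probs[i] for w, probs in zip(weights, prob_lists)) / total
            for i in range(n)
        ]

    # Three base classifiers scoring five candidate intron positions
    pls = [0.10, 0.80, 0.30, 0.60, 0.05]
    cca = [0.20, 0.70, 0.40, 0.50, 0.10]
    rcca = [0.15, 0.90, 0.20, 0.70, 0.05]
    combined = weighted_ensemble([pls, cca, rcca], weights=[0.5, 0.2, 0.3])
    print([round(p, 3) for p in combined])
    ```

    A genetic algorithm would search over the weight vector to maximize a validation metric; the combination step itself stays exactly this simple.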

  15. Validity of a Manual Soft Tissue Profile Prediction Method Following Mandibular Setback Osteotomy

    OpenAIRE

    Kolokitha, Olga-Elpis

    2007-01-01

    Objectives The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery and compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism taken at the end of pre-surgical orthodontics and approximately one year after surgery were used. Methods To test the validity of the manu...

  16. A GPS Satellite Clock Offset Prediction Method Based on Fitting Clock Offset Rates Data

    Directory of Open Access Journals (Sweden)

    WANG Fuhong

    2016-12-01

    Full Text Available A satellite atomic clock offset prediction method based on fitting and modeling clock offset rate data is proposed. This method builds a quadratic model or a linear model combined with periodic terms to fit the time series of clock offset rates, and computes the trend coefficients of the model by best estimation. The constant coefficient of the model is obtained directly from the clock offset precisely estimated at the initial prediction epoch. The clock offsets in the rapid ephemeris (IGR) provided by IGS are used as modeling data sets to perform experiments for different types of GPS satellite clocks. The results show that the clock prediction accuracies of the proposed method for 3, 6, 12 and 24 h reach 0.43, 0.58, 0.90 and 1.47 ns respectively, outperforming the traditional prediction method based on fitting original clock offsets by 69.3%, 61.8%, 50.5% and 37.2%. Compared with the IGU real-time clock products provided by IGS, the prediction accuracies of the new method improve by about 15.7%, 23.7%, 27.4% and 34.4% respectively.
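    The core idea, fitting a trend model to clock offset rates, anchoring the constant term with the precisely estimated initial offset, and integrating forward, can be sketched with a linear rate model. The paper also uses quadratic and periodic terms; all data below are synthetic:

    ```python
    # Sketch: fit rate(t) = b0 + b1*t to clock offset rates, then predict
    # the offset at time t by integrating the rate model from the initial
    # epoch t0, starting from the precisely estimated offset at t0.
    # Units and values are illustrative (rates in ns/h, offsets in ns).

    def fit_line(ts, rates):
        """Closed-form least squares for rate(t) = b0 + b1*t."""
        n = len(ts)
        mt = sum(ts) / n
        mr = sum(rates) / n
        b1 = sum((t - mt) * (r - mr) for t, r in zip(ts, rates)) / \
             sum((t - mt) ** 2 for t in ts)
        b0 = mr - b1 * mt
        return b0, b1

    def predict_offset(offset0, t0, t, b0, b1):
        """Integrate the fitted rate model from t0 to t, starting at offset0."""
        return offset0 + b0 * (t - t0) + 0.5 * b1 * (t ** 2 - t0 ** 2)

    # Synthetic offset rates sampled hourly, drifting linearly
    ts = [0, 1, 2, 3, 4, 5]
    rates = [2.0, 2.1, 2.2, 2.3, 2.4, 2.5]
    b0, b1 = fit_line(ts, rates)
    offset_24h = predict_offset(offset0=100.0, t0=5, t=24, b0=b0, b1=b1)
    print(round(offset_24h, 2))
    ```

    Anchoring the constant term to the measured offset at the prediction epoch, rather than fitting it, is what distinguishes this scheme from fitting the offsets directly.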

  17. Empirical Flutter Prediction Method.

    Science.gov (United States)

    1988-03-05

    been used in this way to discover species or subspecies of animals, and to discover different types of voter or consumer requiring different persuasions...respect to behavior or performance or response variables. Once this were done, corresponding clusters might be sought among descriptive or predictive or...

  18. Prediction of Human Drug Targets and Their Interactions Using Machine Learning Methods: Current and Future Perspectives.

    Science.gov (United States)

    Nath, Abhigyan; Kumari, Priyanka; Chaube, Radha

    2018-01-01

    Identification of drug targets and drug target interactions are important steps in the drug-discovery pipeline. Successful computational prediction methods can reduce the cost and time demanded by the experimental methods. Knowledge of putative drug targets and their interactions can be very useful for drug repurposing. Supervised machine learning methods have been very useful in drug target prediction and in prediction of drug target interactions. Here, we describe the details for developing prediction models using supervised learning techniques for human drug target prediction and their interactions.

  19. Unbiased and non-supervised learning methods for disruption prediction at JET

    International Nuclear Information System (INIS)

    Murari, A.; Vega, J.; Ratta, G.A.; Vagliasindi, G.; Johnson, M.F.; Hong, S.H.

    2009-01-01

    The importance of predicting the occurrence of disruptions is going to increase significantly in the next generation of tokamak devices. The expected energy content of ITER plasmas, for example, is such that disruptions could have a significant detrimental impact on various parts of the device, ranging from erosion of plasma-facing components to structural damage. Early detection of disruptions is therefore needed with ever-increasing urgency. In this paper, the results of a series of methods to predict disruptions at JET are reported. The main objective of the investigation is to determine how early before a disruption it is possible to perform acceptable predictions on the basis of the raw data, keeping the number of 'ad hoc' hypotheses to a minimum. Therefore, the chosen learning techniques have the common characteristic of requiring a minimum number of assumptions. Classification and Regression Trees (CART) is a supervised but completely unbiased and nonlinear method, since it simply constructs the best classification tree by working directly on the input data. A series of unsupervised techniques, mainly K-means and hierarchical clustering, have also been tested, to investigate to what extent they can autonomously distinguish between disruptive and non-disruptive groups of discharges. All these independent methods indicate that, in general, prediction with a success rate above 80% can be achieved no earlier than 180 ms before the disruption. The agreement between various completely independent methods increases the confidence in the results, which are also confirmed by a visual inspection of the data performed with pseudo Grand Tour algorithms.

  20. The Use of Data Mining Methods to Predict the Result of Infertility Treatment Using the IVF ET Method

    Directory of Open Access Journals (Sweden)

    Malinowski Paweł

    2014-12-01

    Full Text Available The IVF ET method is a scientifically recognized infertility treatment method. The problem, however, is this method's unsatisfactory efficiency. This calls for a more thorough analysis of the information available in the treatment process, in order to detect the factors that have an effect on the results, as well as to effectively predict the result of treatment. Classical statistical methods have proven to be inadequate for this issue. Only the use of modern data mining methods gives hope for a more effective analysis of the collected data. This work provides an overview of the new methods used for the analysis of data on infertility treatment, and formulates a proposal for further directions of research into increasing the efficiency of predicting the result of the treatment process.

  1. A computational method to predict fluid-structure interaction of pressure relief valves

    Energy Technology Data Exchange (ETDEWEB)

    Kang, S. K.; Lee, D. H.; Park, S. K.; Hong, S. R. [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    An effective CFD (computational fluid dynamics) method to predict important performance parameters, such as blowdown and chattering, for pressure relief valves in NPPs is provided in the present study. To calculate the valve motion, a 6DOF (six degrees of freedom) model is used. A chimera overset grid method is utilized in this study to eliminate the grid remeshing problem when the disk moves. Further, CFD-Fastran, which is developed by CFD-RC for compressible flow analysis, is applied to a 1' safety valve. The prediction results confirm the applicability of the presented method.

  2. A method of predicting the reliability of CDM coil insulation

    International Nuclear Information System (INIS)

    Kytasty, A.; Ogle, C.; Arrendale, H.

    1992-01-01

    This paper presents a method of predicting the reliability of the Collider Dipole Magnet (CDM) coil insulation design. The method proposes a probabilistic treatment of electrical test data, stress analysis, material properties variability and loading uncertainties to give the reliability estimate. The approach taken to predict reliability of design related failure modes of the CDM is to form analytical models of the various possible failure modes and their related mechanisms or causes, and then statistically assess the contributions of the various contributing variables. The probability of the failure mode occurring is interpreted as the number of times one would expect certain extreme situations to combine and randomly occur. One of the more complex failure modes of the CDM will be used to illustrate this methodology
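    The interpretation above, a failure probability as the chance that extreme conditions randomly combine, can be illustrated with a stress-strength Monte Carlo sketch. The normal distributions and their parameters are illustrative assumptions, not CDM design values:

    ```python
    import random

    # Stress-strength interference sketch: failure occurs on a trial when a
    # randomly drawn stress exceeds a randomly drawn insulation strength.
    # The failure probability is the fraction of such trials. All
    # distribution parameters are illustrative placeholders.

    def failure_probability(n_trials, seed=1):
        rng = random.Random(seed)
        failures = 0
        for _ in range(n_trials):
            strength = rng.gauss(10.0, 1.0)  # e.g. dielectric strength
            stress = rng.gauss(6.0, 1.5)     # e.g. applied electrical stress
            if stress > strength:
                failures += 1
        return failures / n_trials

    p_fail = failure_probability(200_000)
    print(p_fail)
    ```

    In practice each failure mode gets its own stress and strength models, built from electrical test data and stress analysis, and the per-mode probabilities are combined into the overall reliability estimate.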

  3. Anisotropic Elastoplastic Damage Mechanics Method to Predict Fatigue Life of the Structure

    Directory of Open Access Journals (Sweden)

    Hualiang Wan

    2016-01-01

    Full Text Available A new damage mechanics method is proposed to predict the low-cycle fatigue life of metallic structures under multiaxial loading. A microstructure mechanical model is proposed to simulate anisotropic elastoplastic damage evolution. As the micromodel depends on only a few material parameters, the present method is very concise and suitable for engineering application. The material parameters in the damage evolution equation are determined from fatigue experimental data of standard specimens. Through further development on the ANSYS platform, an anisotropic elastoplastic damage mechanics finite element method is developed. The fatigue crack propagation life of a satellite structure is predicted using the present method, and the computational results agree with the experimental data very well.

  4. A novel method for predicting the power outputs of wave energy converters

    Science.gov (United States)

    Wang, Yingguang

    2018-03-01

    This paper focuses on realistically predicting the power outputs of wave energy converters operating in shallow water nonlinear waves. A heaving two-body point absorber is utilized as a specific calculation example, and the generated power of the point absorber has been predicted by using a novel method (a nonlinear simulation method) that incorporates a second order random wave model into a nonlinear dynamic filter. It is demonstrated that the second order random wave model in this article can be utilized to generate irregular waves with realistic crest-trough asymmetries, and consequently, more accurate generated power can be predicted by subsequently solving the nonlinear dynamic filter equation with the nonlinearly simulated second order waves as inputs. The research findings demonstrate that the novel nonlinear simulation method in this article can be utilized as a robust tool for ocean engineers in their design, analysis and optimization of wave energy converters.

  5. RSARF: Prediction of residue solvent accessibility from protein sequence using random forest method

    KAUST Repository

    Ganesan, Pugalenthi; Kandaswamy, Krishna Kumar Umar; Chou, Kuochen; Vivekanandan, Saravanan; Kolatkar, Prasanna R.

    2012-01-01

    Prediction of protein structure from its amino acid sequence is still a challenging problem. A complete physicochemical understanding of protein folding is essential for accurate structure prediction. Knowledge of residue solvent accessibility gives useful insights into protein structure prediction and function prediction. In this work, we propose a random forest method, RSARF, to predict residue accessible surface area from protein sequence information. The training and testing were performed using 120 proteins containing 22006 residues. For each residue, the buried or exposed state was computed using five thresholds (0%, 5%, 10%, 25%, and 50%). The prediction accuracies for the 0%, 5%, 10%, 25%, and 50% thresholds are 72.9%, 78.25%, 78.12%, 77.57% and 72.07% respectively. Further, comparison of RSARF with other methods using a benchmark dataset containing 20 proteins shows that our approach is useful for prediction of residue solvent accessibility from protein sequence without using structural information. The RSARF program, datasets and supplementary data are available at http://caps.ncbs.res.in/download/pugal/RSARF/.
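    The buried/exposed encoding at the five thresholds can be sketched in a few lines; the relative solvent accessibility values below are invented for illustration:

    ```python
    # Label residues buried ('B') or exposed ('E') at each of the five
    # relative solvent accessibility (RSA) thresholds used by the study.
    # The RSA values below are illustrative, not from the RSARF dataset.

    THRESHOLDS = [0.0, 0.05, 0.10, 0.25, 0.50]

    def label_residues(rsa_values, threshold):
        """Return 'E' (exposed) or 'B' (buried) per residue at one threshold."""
        return ["E" if rsa > threshold else "B" for rsa in rsa_values]

    # Relative solvent accessibility for a short stretch of residues
    rsa = [0.00, 0.03, 0.12, 0.40, 0.70]
    for thr in THRESHOLDS:
        print(thr, "".join(label_residues(rsa, thr)))
    ```

    Each threshold defines a separate binary classification task, which is why the study reports five separate accuracies.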

  6. A NEW METHOD FOR PREDICTING SURVIVAL AND ESTIMATING UNCERTAINTY IN TRAUMA PATIENTS

    Directory of Open Access Journals (Sweden)

    V. G. Schetinin

    2017-01-01

    Full Text Available The Trauma and Injury Severity Score (TRISS) is the current “gold” standard for screening a patient’s condition for purposes of predicting survival probability. More than 40 years of TRISS practice have revealed a number of problems, particularly (1) unexplained fluctuation of predicted values caused by aggregation of screening tests, and (2) low accuracy of uncertainty interval estimations. We developed a new method, made available to practitioners as a web calculator, to reduce the negative effect of the factors given above. The method involves the Bayesian methodology of statistical inference which, being computationally expensive, in theory provides the most accurate predictions. We implemented and tested this approach on a data set including 571,148 patients registered in the US National Trauma Data Bank (NTDB) with 1–20 injuries. These patients were distributed over the following categories: (1) 174,647 with 1 injury, (2) 381,137 with 2–10 injuries, and (3) 15,364 with 11–20 injuries. Survival rates in each category were 0.977, 0.953, and 0.831, respectively. The proposed method improved prediction accuracy by 0.04%, 0.36%, and 3.64% (p-value <0.05) in categories 1, 2, and 3, respectively. Hosmer-Lemeshow statistics showed a significant improvement in the new model’s calibration. The uncertainty 2σ intervals were reduced from 0.628 to 0.569 for patients of the second category and from 1.227 to 0.930 for patients of the third category, both with p-value <0.005. The new method shows a statistically significant improvement (p-value <0.05) in the accuracy of predicting survival and estimating uncertainty intervals. The largest improvement was achieved for patients with 11–20 injuries. The method is available to practitioners as a web calculator at http://www.traumacalc.org.
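    A TRISS-style prediction is a logistic function of physiological and anatomical scores. A minimal sketch follows, with illustrative placeholder coefficients (not the published TRISS weights, and not the paper's Bayesian model):

    ```python
    import math

    # TRISS-style survival probability: a logistic function of the revised
    # trauma score (RTS), injury severity score (ISS) and an age index.
    # The default coefficients are illustrative placeholders only.

    def survival_probability(rts, iss, age_index, b0=-1.25, b_rts=0.95,
                             b_iss=-0.08, b_age=-1.9):
        b = b0 + b_rts * rts + b_iss * iss + b_age * age_index
        return 1.0 / (1.0 + math.exp(-b))

    # A patient with RTS 7.84, ISS 10, under 55 years old (age index 0)
    p = survival_probability(7.84, 10, 0)
    print(round(p, 3))
    ```

    The Bayesian approach described above replaces the fixed coefficients with posterior distributions, which is what yields honest uncertainty intervals around the predicted survival probability.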

  7. Modification of an Existing In vitro Method to Predict Relative Bioavailable Arsenic in Soils

    Science.gov (United States)

    The soil matrix can sequester arsenic (As) and reduce exposure from soil ingestion. In vivo dosing studies and in vitro gastrointestinal (IVG) methods have been used to predict relative bioavailable (RBA) As. Originally, the Ohio State University (OSU-IVG) method predicted R...

  8. Interior Noise Prediction of the Automobile Based on Hybrid FE-SEA Method

    Directory of Open Access Journals (Sweden)

    S. M. Chen

    2011-01-01

    A hybrid FE-SEA model of the automobile was created. The modal density was calculated using analytical and finite element methods; the damping loss factors of the structural and acoustic cavity subsystems were also calculated analytically; the coupling loss factors between structure and structure, and between structure and acoustic cavity, were both calculated. Four different kinds of excitations, including road excitations, engine mount excitations, sound radiation excitations of the engine, and wind excitations, act on the body of the automobile when it is running on the road. All the excitations were calculated using virtual prototype technology, computational fluid dynamics (CFD), and experiments carried out in the design and development stage. The interior noise of the automobile was predicted and verified at a speed of 120 km/h. The predicted and tested overall SPLs of the interior noise were 73.79 and 74.44 dB(A), respectively. The comparison also shows that the prediction precision is satisfactory, and the effectiveness and reliability of the hybrid FE-SEA model of the automobile are verified.
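The overall SPL figures quoted above are energetic combinations of individual band or path contributions. A minimal sketch of that standard dB summation (the input levels are illustrative, not the paper's data):

```python
import math

def overall_spl(levels_db):
    """Combine sound pressure levels energetically:
    L_total = 10 * log10(sum_i 10^(L_i / 10))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))
```

For example, two equal 70 dB contributions combine to about 73 dB, not 140 dB, which is why a prediction 0.65 dB below a measurement is considered close agreement.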

  9. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper-bound to the true predictive distribution. As the global minimum of this upper-bound exists, the problem is reduced to seeking an approximation to the true predictive distribution…

  10. Predicting proteasomal cleavage sites: a comparison of available methods

    DEFF Research Database (Denmark)

    Saxova, P.; Buus, S.; Brunak, Søren

    2003-01-01

    The C-terminal, in particular, of CTL epitopes is cleaved precisely by the proteasome, whereas the N-terminal is produced with an extension and later trimmed by peptidases in the cytoplasm and in the endoplasmic reticulum. Recently, three publicly available methods have been developed for prediction of the specificity…

  11. The Dissolved Oxygen Prediction Method Based on Neural Network

    Directory of Open Access Journals (Sweden)

    Zhong Xiao

    2017-01-01

    Dissolved oxygen (DO) is oxygen dissolved in water and an important factor in aquaculture. A BP neural network method combining the purelin, logsig, and tansig activation functions is proposed for predicting dissolved oxygen in aquaculture. The input layer, hidden layer, and output layer are introduced in detail, including the weight adjustment process. Breeding data from three ponds over 10 consecutive days were used for the experiments; the ponds are located in Beihai, Guangxi, a traditional aquaculture base in southern China. The data of the first 7 days are used for training, and the data of the last 3 days are used for testing. Compared with common prediction models, namely curve fitting (CF), autoregression (AR), the grey model (GM), and support vector machines (SVM), the experimental results show that the prediction accuracy of the neural network is the highest, with all predicted values within the 5% error limit, which meets the needs of practical applications; AR, GM, SVM, and CF follow in that order. The prediction model can help improve the water quality monitoring level of aquaculture, preventing the deterioration of water quality and outbreaks of disease.
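A minimal sketch of the forward pass of such a BP network, with a tansig hidden layer and purelin output (logsig is included since the record names it). The weights below are placeholders; the real model's weights come from backpropagation training on the 7-day pond data:

```python
import math

def tansig(x):
    return math.tanh(x)

def logsig(x):
    return 1.0 / (1.0 + math.exp(-x))

def purelin(x):
    return x

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: tansig hidden layer, purelin output neuron."""
    hidden = [tansig(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return purelin(sum(w * h for w, h in zip(w_out, hidden)) + b_out)
```

Training would adjust `w_hidden`, `b_hidden`, `w_out`, and `b_out` by gradient descent on the prediction error over the 7 training days.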

  12. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    International Nuclear Information System (INIS)

    Ernst, Floris; Schweikard, Achim

    2008-01-01

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)
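The MULIN internals are not spelled out in the abstract, but one of the baselines it is compared against, the normalized LMS (nLMS) one-step-ahead predictor, is easy to sketch (the order and step size below are illustrative choices, not the paper's settings):

```python
def nlms_predict(signal, order=3, mu=0.5, eps=1e-6):
    """Normalized LMS one-step-ahead prediction of a 1-D signal.

    One of the baseline predictors MULIN is compared against; MULIN
    itself instead builds linear expansions of the prediction error.
    """
    w = [0.0] * order
    preds = []
    for t in range(order, len(signal)):
        x = signal[t - order:t]                     # most recent samples
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        preds.append(y_hat)
        e = signal[t] - y_hat                       # prediction error
        norm = sum(xi * xi for xi in x) + eps       # normalization term
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
    return preds
```

On a constant signal the filter weights converge quickly, with the residual error shrinking geometrically at each step.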

  13. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    Energy Technology Data Exchange (ETDEWEB)

    Ernst, Floris; Schweikard, Achim [University of Luebeck, Institute for Robotics and Cognitive Systems, Luebeck (Germany)

    2008-06-15

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)

  14. Novel computational methods to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-12-01

    Computational drug repurposing aims at finding new medical uses for existing drugs. The identification of novel drug-target interactions (DTIs) can be a useful part of such a task. Computational determination of DTIs is a convenient strategy for systematic screening of a large number of drugs in the attempt to identify new DTIs at low cost and with reasonable accuracy. This necessitates the development of accurate computational methods that can help focus follow-up experimental validation on a smaller number of highly likely targets for a drug. Although many methods have been proposed for computational DTI prediction, they suffer from high false-positive prediction rates or they do not predict the effect that drugs exert on targets in DTIs. In this report, we first present a comprehensive review of the recent progress in the field of DTI prediction from data-centric and algorithm-centric perspectives. The aim is to provide a comprehensive review of computational methods for identifying DTIs, which could help in constructing more reliable methods. Then, we present DDR, an efficient method to predict the existence of DTIs. DDR achieves significantly more accurate results compared to the other state-of-the-art methods. As supported by independent evidence, we verified as correct 22 out of the top 25 DDR DTI predictions. This validation proves the practical utility of DDR, suggesting that DDR can be used as an efficient method to identify correct DTIs. Finally, we present the DDR-FE method, which predicts the effect types of a drug on its target. On different representative datasets, under various test setups, and using different performance measures, we show that DDR-FE achieves extremely good performance. Using blind test data, we verified as correct 2,300 out of 3,076 DTI effects predicted by DDR-FE. This suggests that DDR-FE can be used as an efficient method to identify correct effects of a drug on its target.

  15. A prediction method of natural gas hydrate formation in deepwater gas well and its application

    Directory of Open Access Journals (Sweden)

    Yanli Guo

    2016-09-01

    To prevent the deposition of natural gas hydrate in deepwater gas wells, the hydrate formation area in the wellbore must be predicted. Herein, by comparing four methods of predicting the temperature in the pipe against field data and five methods of predicting hydrate formation against experimental data, a method based on OLGA & PVTsim for predicting the hydrate formation area in the wellbore was proposed. Hydrate formation under the conditions of steady production, throttling, and shut-in was then predicted using this method with data from a well in the South China Sea. The results indicate that the hydrate formation area decreases with increasing gas production, inhibitor concentration, and insulation thickness, and increases with increasing thermal conductivity of the insulation material and shutdown time. The throttling effect causes a plunge in temperature and pressure in the wellbore, leading to an increase in the hydrate formation area.

  16. Statistical Analysis of a Method to Predict Drug-Polymer Miscibility

    DEFF Research Database (Denmark)

    Knopp, Matthias Manne; Olesen, Niels Erik; Huang, Yanbin

    2016-01-01

    In this study, a method proposed to predict drug-polymer miscibility from differential scanning calorimetry measurements was subjected to statistical analysis. The method is relatively fast and inexpensive and has gained popularity as a result of the increasing interest in the formulation of drug… as provided in this study. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association J Pharm Sci.

  17. Prediction of MHC class II binding affinity using SMM-align, a novel stabilization matrix alignment method.

    Science.gov (United States)

    Nielsen, Morten; Lundegaard, Claus; Lund, Ole

    2007-07-04

    Antigen presenting cells (APCs) sample the extracellular space and present peptides from here to T helper cells, which can be activated if the peptides are of foreign origin. The peptides are presented on the surface of the cells in complex with major histocompatibility class II (MHC II) molecules. Identification of peptides that bind MHC II molecules is thus a key step in rational vaccine design, and methods for accurate prediction of peptide:MHC interactions play a central role in epitope discovery. The MHC class II binding groove is open at both ends, making the correct alignment of a peptide in the binding groove a crucial part of identifying the core of an MHC class II binding motif. Here, we present a novel stabilization matrix alignment method, SMM-align, that allows for direct prediction of peptide:MHC binding affinities. The predictive performance of the method is validated on a large MHC class II benchmark data set covering 14 HLA-DR (human MHC) and three mouse H2-IA alleles. The predictive performance of the SMM-align method was demonstrated to be superior to that of the Gibbs sampler, TEPITOPE, SVRMHC, and MHCpred methods. Cross validation between peptide data sets obtained from different sources demonstrated that direct incorporation of peptide length potentially results in over-fitting of the binding prediction method. Focusing on amino terminal peptide flanking residues (PFR), we demonstrate a consistent gain in predictive performance by favoring binding registers with a minimum PFR length of two amino acids. Visualizing the binding motif as obtained by the SMM-align and TEPITOPE methods highlights a series of fundamental discrepancies between the two predicted motifs. For the DRB1*1302 allele, for instance, the TEPITOPE method favors basic amino acids at most anchor positions, whereas the SMM-align method identifies a preference for hydrophobic or neutral amino acids at the anchors. The SMM-align method was shown to outperform other…

  18. Prediction of MHC class II binding affinity using SMM-align, a novel stabilization matrix alignment method

    Directory of Open Access Journals (Sweden)

    Lund Ole

    2007-07-01

    Background: Antigen presenting cells (APCs) sample the extracellular space and present peptides from here to T helper cells, which can be activated if the peptides are of foreign origin. The peptides are presented on the surface of the cells in complex with major histocompatibility class II (MHC II) molecules. Identification of peptides that bind MHC II molecules is thus a key step in rational vaccine design, and methods for accurate prediction of peptide:MHC interactions play a central role in epitope discovery. The MHC class II binding groove is open at both ends, making the correct alignment of a peptide in the binding groove a crucial part of identifying the core of an MHC class II binding motif. Here, we present a novel stabilization matrix alignment method, SMM-align, that allows for direct prediction of peptide:MHC binding affinities. The predictive performance of the method is validated on a large MHC class II benchmark data set covering 14 HLA-DR (human MHC) and three mouse H2-IA alleles. Results: The predictive performance of the SMM-align method was demonstrated to be superior to that of the Gibbs sampler, TEPITOPE, SVRMHC, and MHCpred methods. Cross validation between peptide data sets obtained from different sources demonstrated that direct incorporation of peptide length potentially results in over-fitting of the binding prediction method. Focusing on amino terminal peptide flanking residues (PFR), we demonstrate a consistent gain in predictive performance by favoring binding registers with a minimum PFR length of two amino acids. Visualizing the binding motif as obtained by the SMM-align and TEPITOPE methods highlights a series of fundamental discrepancies between the two predicted motifs. For the DRB1*1302 allele, for instance, the TEPITOPE method favors basic amino acids at most anchor positions, whereas the SMM-align method identifies a preference for hydrophobic or neutral amino acids at the anchors.
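A toy sketch of the register-scanning idea behind SMM-align: slide a 9-mer core along the peptide, score it with a position-specific matrix, and favor registers leaving at least two N-terminal peptide flanking residues. The matrix and bonus value below are made up for illustration; they are not the trained SMM weights:

```python
def best_core(peptide, pssm, core_len=9, min_pfr=2, pfr_bonus=0.5):
    """Pick the best-scoring 9-mer core register of a peptide.

    pssm[pos] maps amino acid -> score at that core position.
    Registers with at least `min_pfr` N-terminal flanking residues
    receive a small bonus, mimicking the PFR preference in the record.
    """
    best = None
    for start in range(len(peptide) - core_len + 1):
        core = peptide[start:start + core_len]
        score = sum(pssm[i].get(aa, 0.0) for i, aa in enumerate(core))
        if start >= min_pfr:          # N-terminal PFR length = start
            score += pfr_bonus
        if best is None or score > best[0]:
            best = (score, start, core)
    return best
```

With a matrix rewarding lysine at core position 1, the register that both places K first and leaves two flanking residues wins.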

  19. Variable importance and prediction methods for longitudinal problems with missing variables.

    Directory of Open Access Journals (Sweden)

    Iván Díaz

    We present prediction and variable importance (VIM) methods for longitudinal data sets containing continuous and binary exposures subject to missingness. We demonstrate the use of these methods for prognosis of medical outcomes of severe trauma patients, a field in which current medical practice involves rules of thumb and scoring methods that use only a few variables and ignore the dynamic and high-dimensional nature of trauma recovery. Well-principled prediction and VIM methods can provide a tool to make care decisions informed by the patient's high-dimensional physiological and clinical history. Our VIM parameters are analogous to slope coefficients in adjusted regressions, but are not dependent on a specific statistical model, nor do they require a certain functional form of the prediction regression to be estimated. In addition, they can be causally interpreted under causal and statistical assumptions as the expected outcome under time-specific clinical interventions, related to changes in the mean of the outcome if each individual experiences a specified change in the variable (keeping other variables in the model fixed). Better yet, the targeted MLE used is doubly robust and locally efficient. Because the proposed VIM does not constrain the prediction model fit, we use a very flexible ensemble learner (the SuperLearner), which returns a linear combination of a list of user-given algorithms. Not only is such a prediction algorithm intuitively appealing, it has theoretical justification as being asymptotically equivalent to the oracle selector. The results of the analysis show effects whose size and significance would not have been found using a parametric approach (such as stepwise regression or LASSO). In addition, the procedure is even more compelling as the predictor on which it is based showed significant improvements in cross-validated fit, for instance the area under the curve (AUC) for a receiver-operator curve (ROC). Thus, given that (1) our VIM…

  20. A study on the fatigue life prediction of tire belt-layers using probabilistic method

    International Nuclear Information System (INIS)

    Lee, Dong Woo; Park, Jong Sang; Lee, Tae Won; Kim, Seong Rae; Sung, Ki Deug; Huh, Sun Chul

    2013-01-01

    Tire belt separation failure is caused by internal cracks generated in the No. 1 and No. 2 belt layers and by their growth, and belt failure seriously affects tire endurance. Therefore, to improve tire endurance, it is necessary to analyze tire crack growth behavior and predict fatigue life. Generally, the prediction of tire endurance is performed experimentally using a tire test machine, but such experiments are costly and time-consuming. In this paper, to predict tire fatigue life, we applied a deterministic fracture mechanics approach based on finite element analysis. A probabilistic analysis method based on statistics, using Monte Carlo simulation, is also presented. Both methods include a global-local finite element analysis to provide the detail necessary to model an internal crack explicitly and calculate the J-integral for tire life prediction.

  1. Electronic structure prediction via data-mining the empirical pseudopotential method

    Energy Technology Data Exchange (ETDEWEB)

    Zenasni, H; Aourag, H [LEPM, URMER, Departement of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Broderick, S R; Rajan, K [Department of Materials Science and Engineering, Iowa State University, Ames, Iowa 50011-2230 (United States)

    2010-01-15

    We introduce a new approach for accelerating the calculation of the electronic structure of new materials by utilizing the empirical pseudopotential method combined with data mining tools. Combining data mining with the empirical pseudopotential method allows us to convert an empirical approach into a predictive approach. Here we consider tetrahedrally bonded III-V Bi semiconductors, and through the prediction of form factors based on basic elemental properties we can model the band structure and charge density for these semiconductors, for which limited results exist. This work represents a unique approach to modeling the electronic structure of a material which may be used to identify new promising semiconductors, and is one of the few efforts utilizing data mining at an electronic level. (Abstract Copyright [2010], Wiley Periodicals, Inc.)

  2. Prediction of MHC class II binding affinity using SMM-align, a novel stabilization matrix alignment method

    DEFF Research Database (Denmark)

    Nielsen, Morten; Lundegaard, Claus; Lund, Ole

    2007-01-01

    the correct alignment of a peptide in the binding groove a crucial part of identifying the core of an MHC class II binding motif. Here, we present a novel stabilization matrix alignment method, SMM-align, that allows for direct prediction of peptide:MHC binding affinities. The predictive performance of the method is validated on a large MHC class II benchmark data set covering 14 HLA-DR (human MHC) and three mouse H2-IA alleles. RESULTS: The predictive performance of the SMM-align method was demonstrated to be superior to that of the Gibbs sampler, TEPITOPE, SVRMHC, and MHCpred methods. Cross validation between peptide data sets obtained from different sources demonstrated that direct incorporation of peptide length potentially results in over-fitting of the binding prediction method. Focusing on amino terminal peptide flanking residues (PFR), we demonstrate a consistent gain in predictive performance…

  3. Prediction method abstracts

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-31

    This conference was held December 4--8, 1994 in Asilomar, California. The purpose of this meeting was to provide a forum for exchange of state-of-the-art information concerning the prediction of protein structure. Attention is focused on the following: comparative modeling; sequence to fold assignment; and ab initio folding.

  4. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm used to optimize the predictive model parameters, and an autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictive values of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused results is less than that of any single model, so the prediction accuracy is improved. The simulation results are compared on two typical chaotic time series, the Lorenz time series and the Mackey–Glass time series, and show that the method in this paper achieves better prediction accuracy.
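The fusion step can be sketched as a minimum-variance (Gauss-Markov) combination of two unbiased predictors: each model is weighted inversely to its error variance, which is what guarantees the fused error variance is no larger than either model's. A minimal sketch with illustrative inputs:

```python
def gauss_markov_fuse(preds_a, preds_b, var_a, var_b):
    """Minimum-variance fusion of two unbiased predictor outputs.

    Weights are inversely proportional to each model's error variance;
    the fused variance var_a*var_b/(var_a+var_b) is <= min(var_a, var_b).
    """
    wa = var_b / (var_a + var_b)
    wb = var_a / (var_a + var_b)
    fused = [wa * a + wb * b for a, b in zip(preds_a, preds_b)]
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var
```

With equal variances this reduces to a plain average; an unreliable model is automatically down-weighted.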

  5. Critical assessment of methods of protein structure prediction (CASP) - round x

    KAUST Repository

    Moult, John; Fidelis, Krzysztof; Kryshtafovych, Andriy; Schwede, Torsten; Tramontano, Anna

    2013-01-01

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the tenth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. The 10 CASP experiments span almost 20 years of progress in the field of protein structure modeling, and there have been enormous advances in methods and model accuracy in that period. Notable in this round is the first sustained improvement of models with refinement methods, using molecular dynamics. For the first time, we tested the ability of modeling methods to make use of sparse experimental three-dimensional contact information, such as may be obtained from new experimental techniques, with encouraging results. On the other hand, new contact prediction methods, though holding considerable promise, have yet to make an impact in CASP testing. The nature of CASP targets has been changing in recent CASPs, reflecting shifts in experimental structural biology, with more irregular structures, more multi-domain and multi-subunit structures, and less standard versions of known folds. When allowance is made for these factors, we continue to see steady progress in the overall accuracy of models, particularly resulting from improvement of non-template regions.

  6. Critical assessment of methods of protein structure prediction (CASP) - round x

    KAUST Repository

    Moult, John

    2013-12-17

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the tenth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. The 10 CASP experiments span almost 20 years of progress in the field of protein structure modeling, and there have been enormous advances in methods and model accuracy in that period. Notable in this round is the first sustained improvement of models with refinement methods, using molecular dynamics. For the first time, we tested the ability of modeling methods to make use of sparse experimental three-dimensional contact information, such as may be obtained from new experimental techniques, with encouraging results. On the other hand, new contact prediction methods, though holding considerable promise, have yet to make an impact in CASP testing. The nature of CASP targets has been changing in recent CASPs, reflecting shifts in experimental structural biology, with more irregular structures, more multi-domain and multi-subunit structures, and less standard versions of known folds. When allowance is made for these factors, we continue to see steady progress in the overall accuracy of models, particularly resulting from improvement of non-template regions.

  7. Prediction of periodically correlated processes by wavelet transform and multivariate methods with applications to climatological data

    Science.gov (United States)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2015-05-01

    This article studies the prediction of periodically correlated processes using wavelet transform and multivariate methods, with applications to climatological data. Periodically correlated processes can be reformulated as multivariate stationary processes. Considering this fact, two new prediction methods are proposed. In the first method, we use stepwise regression between the principal components of the multivariate stationary process and past wavelet coefficients of the process to obtain a prediction. In the second method, we propose its multivariate version without a priori principal component analysis. We also study a generalization of the prediction methods that handles a deterministic trend using exponential smoothing. Finally, we illustrate the performance of the proposed methods on simulated and real climatological data (ozone amounts, river flows, solar radiation, and sea levels) compared with the multivariate autoregressive model. The proposed methods give good results, as expected.

  8. HomPPI: a class of sequence homology based protein-protein interface prediction methods

    Directory of Open Access Journals (Sweden)

    Dobbs Drena

    2011-06-01

    Background: Although homology-based methods are among the most widely used for predicting the structure and function of proteins, whether interface sequence conservation can be effectively exploited in predicting protein-protein interfaces has been a subject of debate. Results: We studied more than 300,000 pairwise alignments of protein sequences from structurally characterized protein complexes, including both obligate and transient complexes. We identified the sequence similarity criteria required for accurate homology-based inference of interface residues in a query protein sequence. Based on these analyses, we developed HomPPI, a class of sequence homology-based methods for predicting protein-protein interface residues. We present two variants of HomPPI: (i) NPS-HomPPI (non-partner-specific HomPPI), which can be used to predict interface residues of a query protein in the absence of knowledge of the interaction partner; and (ii) PS-HomPPI (partner-specific HomPPI), which can be used to predict the interface residues of a query protein with a specific target protein. Our experiments on a benchmark dataset of obligate homodimeric complexes show that NPS-HomPPI can reliably predict protein-protein interface residues in a given protein, with an average correlation coefficient (CC) of 0.76, sensitivity of 0.83, and specificity of 0.78, when sequence homologs of the query protein can be reliably identified. NPS-HomPPI also reliably predicts the interface residues of intrinsically disordered proteins. Our experiments suggest that NPS-HomPPI is competitive with several state-of-the-art interface prediction servers, including those that exploit the structure of the query proteins. The partner-specific classifier, PS-HomPPI, can, on a large dataset of transient complexes, predict the interface residues of a query protein with a specific target, with a CC of 0.65, sensitivity of 0.69, and specificity of 0.70, when homologs of…

  9. Methods of developing core collections based on the predicted genotypic value of rice (Oryza sativa L.).

    Science.gov (United States)

    Li, C T; Shi, C H; Wu, J G; Xu, H M; Zhang, H Z; Ren, Y L

    2004-04-01

    The selection of an appropriate sampling strategy and clustering method is important in the construction of core collections based on predicted genotypic values, in order to retain the greatest degree of genetic diversity of the initial collection. In this study, methods of developing rice core collections were evaluated based on the predicted genotypic values of 992 rice varieties with 13 quantitative traits. The genotypic values of the traits were predicted by the adjusted unbiased prediction (AUP) method. Based on the predicted genotypic values, Mahalanobis distances were calculated and employed to measure the genetic similarities among the rice varieties. Six hierarchical clustering methods, including the single linkage, median linkage, centroid, unweighted pair-group average, weighted pair-group average, and flexible-beta methods, were combined with random, preferred, and deviation sampling to develop 18 core collections of rice germplasm. The results show that the deviation sampling strategy, in combination with the unweighted pair-group average method of hierarchical clustering, retains the greatest degree of genetic diversity of the initial collection. The core collections sampled using predicted genotypic values had more genetic diversity than those based on phenotypic values.
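A minimal sketch of the Mahalanobis distance used above, written out for two traits so the covariance inverse can be done by hand (the trait vectors and covariance below are illustrative, not the rice data):

```python
def mahalanobis_2d(x, y, cov):
    """Mahalanobis distance between two 2-D trait vectors.

    cov is a 2x2 covariance matrix; with cov = identity this reduces
    to the ordinary Euclidean distance.
    """
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))   # 2x2 inverse
    dx = (x[0] - y[0], x[1] - y[1])
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return q ** 0.5
```

Dividing out the covariance is what lets traits on different scales, or correlated traits, contribute comparably to the similarity measure that the clustering methods consume.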

  10. Machine learning-based methods for prediction of linear B-cell epitopes.

    Science.gov (United States)

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction assists immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention and treatment strategies, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction remains unsatisfactory. Fortunately, owing to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noted that a combination of selected propensity scales and statistics of epitope residues with machine learning-based tools forms a general way of constructing linear B-cell epitope prediction systems. It is also observed from most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we have introduced the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is illustrated in detail.
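
    The propensity-scale component of such systems can be illustrated with a sliding-window average over a residue scale. The scale values, window length, and sequence below are invented for illustration and are not taken from the chapter:

```python
# A made-up hydrophilicity-like propensity scale (illustrative only).
SCALE = {
    'A': 0.1, 'R': 1.0, 'N': 0.8, 'D': 1.0, 'C': -0.5,
    'Q': 0.7, 'E': 1.0, 'G': 0.3, 'H': 0.4, 'I': -0.9,
    'L': -0.9, 'K': 1.1, 'M': -0.6, 'F': -0.8, 'P': 0.6,
    'S': 0.5, 'T': 0.4, 'W': -0.7, 'Y': -0.3, 'V': -0.8,
}

def window_scores(sequence, window=7):
    """Average propensity over each window; high-scoring windows are
    flagged as candidate linear epitope regions."""
    scores = []
    for i in range(len(sequence) - window + 1):
        chunk = sequence[i:i + window]
        scores.append(sum(SCALE[aa] for aa in chunk) / window)
    return scores

scores = window_scores("MKKLRDEEDAKVILFW")
best = max(range(len(scores)), key=scores.__getitem__)  # start of top window
```

    In the systems surveyed by the chapter, such propensity profiles are typically one feature among many fed to an SVM rather than a predictor on their own.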

  11. Geometry optimization method versus predictive ability in QSPR modeling for ionic liquids

    Science.gov (United States)

    Rybinska, Anna; Sosnowska, Anita; Barycki, Maciej; Puzyn, Tomasz

    2016-02-01

    Computational techniques, such as Quantitative Structure-Property Relationship (QSPR) modeling, are very useful in predicting physicochemical properties of various chemicals. Building QSPR models requires calculating molecular descriptors and the proper choice of the geometry optimization method, which will be dedicated to specific structure of tested compounds. Herein, we examine the influence of the ionic liquids' (ILs) geometry optimization methods on the predictive ability of QSPR models by comparing three models. The models were developed based on the same experimental data on density collected for 66 ionic liquids, but with employing molecular descriptors calculated from molecular geometries optimized at three different levels of the theory, namely: (1) semi-empirical (PM7), (2) ab initio (HF/6-311+G*) and (3) density functional theory (B3LYP/6-311+G*). The model in which the descriptors were calculated by using ab initio HF/6-311+G* method indicated the best predictivity capabilities ({{Q}}_{{EXT}}2 = 0.87). However, PM7-based model has comparable values of quality parameters ({{Q}}_{{EXT}}2 = 0.84). Obtained results indicate that semi-empirical methods (faster and less expensive regarding CPU time) can be successfully employed to geometry optimization in QSPR studies for ionic liquids.

  12. A maintenance time prediction method considering ergonomics through virtual reality simulation.

    Science.gov (United States)

    Zhou, Dong; Zhou, Xin-Xin; Guo, Zi-Yue; Lv, Chuan

    2016-01-01

    Maintenance time is a critical quantitative index in maintainability prediction. An efficient maintenance time measurement methodology plays an important role in the early stage of maintainability design. However, the traditional way of measuring maintenance time ignores the differences between line production and maintenance actions. This paper proposes a corrective MOD method that considers several important ergonomics factors to predict maintenance time. With the help of the DELMIA analysis tools, the influence coefficients of several factors are discussed to correct the MOD value, and designers can measure maintenance time by calculating the sum of the corrective MOD times of each maintenance therblig. Finally, a case study is introduced: by maintaining the virtual prototype of an APU motor starter in DELMIA, the designer obtains the actual maintenance time with the proposed method, and the result verifies the effectiveness and accuracy of the method.
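
    A minimal sketch of the corrective-MOD summation described above. The therbligs and influence coefficients are hypothetical; only the 0.129 s-per-MOD conversion is standard to the MOD technique:

```python
# One MOD unit corresponds to 0.129 s in the MOD (modular arrangement of
# predetermined time standards) technique. The therbligs and ergonomics
# coefficients below are illustrative, not taken from the paper.
MOD_SECONDS = 0.129

def corrected_maintenance_time(therbligs):
    """Sum corrective MOD times: each therblig's base MOD value scaled by
    the product of its ergonomics influence coefficients."""
    total_mod = 0.0
    for t in therbligs:
        coeff = 1.0
        for c in t["coefficients"]:     # e.g. posture, visibility, space
            coeff *= c
        total_mod += t["mod"] * coeff
    return total_mod * MOD_SECONDS

task = [
    {"mod": 5, "coefficients": [1.2, 1.1]},   # reach in a confined space
    {"mod": 3, "coefficients": [1.0]},        # simple grasp
    {"mod": 8, "coefficients": [1.3]},        # move with poor visibility
]
seconds = corrected_maintenance_time(task)
```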

  13. Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik

    1993-01-01

    Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed ...

  14. A critical pressure based panel method for prediction of unsteady loading of marine propellers under cavitation

    International Nuclear Information System (INIS)

    Liu, P.; Bose, N.; Colbourne, B.

    2002-01-01

    A simple numerical procedure is established and implemented into a time domain panel method to predict hydrodynamic performance of marine propellers with sheet cavitation. This paper describes the numerical formulations and procedures to construct this integration. Predicted hydrodynamic loads were compared with both a previous numerical model and experimental measurements for a propeller in steady flow. The current method gives a substantial improvement in thrust and torque coefficient prediction over a previous numerical method at low cavitation numbers of less than 2.0, where severe cavitation occurs. Predicted pressure coefficient distributions are also presented. (author)

  15. A New Hybrid Method for Improving the Performance of Myocardial Infarction Prediction

    Directory of Open Access Journals (Sweden)

    Hojatollah Hamidi

    2016-06-01

    Full Text Available Abstract Introduction: Myocardial Infarction, also known as heart attack, normally occurs due to causes such as smoking, family history, diabetes, and so on. It is recognized as one of the leading causes of death in the world. Therefore, the present study aimed to evaluate the performance of classification models in predicting Myocardial Infarction, using a feature selection method that includes Forward Selection and a Genetic Algorithm. Materials & Methods: The Myocardial Infarction data set used in this study contains information related to 519 visitors to Shahid Madani Specialized Hospital of Khorramabad, Iran. This data set includes 33 features. The proposed method includes a hybrid feature selection method to enhance the performance of classification algorithms. The first step of this method selects features using Forward Selection. In the second step, the selected features are given to a genetic algorithm, in order to select the best features. Classification algorithms including AdaBoost, Naïve Bayes, the J48 decision tree, and SimpleCART are applied to the data set with the selected features to predict Myocardial Infarction. Results: The best results were achieved after applying the proposed feature selection method; they were obtained via the SimpleCART and J48 algorithms, with accuracies of 96.53% and 96.34%, respectively. Conclusion: Based on the results, the performance of the classification algorithms is improved. Thus, applying the proposed feature selection method along with classification algorithms can be considered a reliable approach to predicting Myocardial Infarction.
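
    The forward-selection stage of such a hybrid method can be sketched as a greedy loop over features. The scoring function below is a toy stand-in for the classifier-based evaluation used in the study, and the genetic-algorithm refinement stage is omitted:

```python
def forward_select(features, score, k):
    """Greedy forward selection: repeatedly add the feature that most
    improves score(subset) until k features are chosen.

    score maps a tuple of feature names to a number (higher is better),
    e.g. cross-validated classifier accuracy in the real pipeline.
    """
    chosen = []
    while len(chosen) < k:
        best_feat, best_score = None, float('-inf')
        for f in features:
            if f in chosen:
                continue
            s = score(tuple(chosen) + (f,))
            if s > best_score:
                best_feat, best_score = f, s
        chosen.append(best_feat)
    return chosen

# Toy score: reward a known "good" pair, penalize subset size.
def toy_score(subset):
    bonus = 2.0 * ('chol' in subset) + 1.5 * ('bp' in subset)
    return bonus - 0.1 * len(subset)

picked = forward_select(['age', 'bp', 'chol', 'smoking'], toy_score, 2)
```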

  16. A comparison of methods to predict historical daily streamflow time series in the southeastern United States

    Science.gov (United States)

    Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.

    2015-01-01

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB).

  17. Lattice gas methods for predicting intrinsic permeability of porous media

    Energy Technology Data Exchange (ETDEWEB)

    Santos, L.O.E.; Philippi, P.C. [Santa Catarina Univ., Florianopolis, SC (Brazil). Dept. de Engenharia Mecanica. Lab. de Propriedades Termofisicas e Meios Porosos)]. E-mail: emerich@lmpt.ufsc.br; philippi@lmpt.ufsc.br; Damiani, M.C. [Engineering Simulation and Scientific Software (ESSS), Florianopolis, SC (Brazil). Parque Tecnologico]. E-mail: damiani@lmpt.ufsc.br

    2000-07-01

    This paper presents a method for predicting the intrinsic permeability of porous media based on Lattice Gas Cellular Automata methods. Two methods are presented. The first is based on a Boolean model (LGA). The second is the lattice Boltzmann method (LB), based on the Boltzmann relaxation equation. LGA is a relatively recent method developed to perform hydrodynamic calculations. The method, in its simplest form, consists of a regular lattice populated with particles that hop from site to site in discrete time steps in a process called propagation. After propagation, the particles in each site interact with each other in a process called collision, in which the number of particles and momentum are conserved. An exclusion principle is imposed in order to achieve better computational efficiency. Despite its simplicity, this model evolves in agreement with the Navier-Stokes equation for low Mach numbers. LB methods were recently developed for the numerical integration of the Navier-Stokes equation based on the discrete Boltzmann transport equation. Derived from LGA, LB is a powerful alternative to the standard methods in computational fluid dynamics. In recent years, it has received much attention and has been used in several applications such as simulations of flows through porous media, turbulent flows and multiphase flows. It is important to emphasize some aspects that make Lattice Gas Cellular Automata methods very attractive for simulating flows through porous media. In fact, boundary conditions in flows through complex geometric structures are very easy to describe in simulations using these methods. In LGA methods, simulations are performed with integers, requiring less resident memory, and Boolean arithmetic reduces running time. The two methods are used to simulate flows through several Brazilian petroleum reservoir rocks, leading to intrinsic permeability predictions. Simulations are compared with experimental results. (author)

  18. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231

  19. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction.

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M

    2013-04-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.

  20. Fatigue Life Prediction of High Modulus Asphalt Concrete Based on the Local Stress-Strain Method

    Directory of Open Access Journals (Sweden)

    Mulian Zheng

    2017-03-01

    Full Text Available Previously published studies have proposed fatigue life prediction models for dense-graded asphalt pavement based on the flexural fatigue test. This study focused on the fatigue life prediction of High Modulus Asphalt Concrete (HMAC) pavement using the local stress-strain method and the direct tension fatigue test. First, the direct tension fatigue test at various strain levels was conducted on HMAC prism samples cut from plate specimens. Afterwards, their true stress-strain loop curves were obtained and modified to develop the strain-fatigue life equation. Then the nominal strain of the HMAC course, determined using the finite element method, was converted into local strain using the Neuber method. Finally, based on the established fatigue equation and the converted local strain, a method to predict the pavement fatigue crack initiation life was proposed, and the fatigue life of a typical HMAC overlay pavement which runs a risk of bottom-up cracking was predicted and validated. Results show that the proposed method was able to produce satisfactory crack initiation life predictions.
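
    The Neuber conversion from nominal to local response can be sketched as solving Neuber's rule together with a Ramberg-Osgood stress-strain curve. The material constants below are illustrative placeholders, not the HMAC values fitted in the paper:

```python
def neuber_local_stress(S_nominal, Kt, E, K_prime, n_prime):
    """Solve Neuber's rule  sigma * eps = (Kt * S)^2 / E  together with the
    Ramberg-Osgood curve  eps = sigma/E + (sigma/K')^(1/n')  by bisection."""
    target = (Kt * S_nominal) ** 2 / E

    def f(sigma):
        eps = sigma / E + (sigma / K_prime) ** (1.0 / n_prime)
        return sigma * eps - target

    # The elastic solution Kt*S bounds the local stress from above.
    lo, hi = 1e-6, Kt * S_nominal
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative values (MPa): nominal stress 80, concentration factor 2.5,
# stiffness 10000 (a stand-in mix modulus), K' = 600, n' = 0.2.
sigma = neuber_local_stress(80.0, 2.5, 10000.0, 600.0, 0.2)
```

    The local strain then follows from the Ramberg-Osgood curve at the returned stress, and is the quantity entered into the strain-fatigue life equation.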

  1. Prediction of surface tension of binary mixtures with the parachor method

    Directory of Open Access Journals (Sweden)

    Němec Tomáš

    2015-01-01

    Full Text Available The parachor method for the estimation of the surface tension of binary mixtures is modified by considering temperature-dependent values of the parachor parameters. The temperature dependence is calculated by a least-squares fit of pure-solvent surface tension data to the binary parachor equation, utilizing the Peng-Robinson equation of state for the calculation of equilibrium densities. Very good agreement between experimental binary surface tension data and the predictions of the modified parachor method is found for mixtures of carbon dioxide with butane, benzene, and cyclohexane, respectively. The surface tension is also predicted for three refrigerant mixtures, i.e. propane, isobutane, and chlorodifluoromethane, each with carbon dioxide.
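
    The underlying Macleod-Sugden (parachor) correlation for a pure substance can be sketched as follows. The parachor and density values are rough illustrative numbers for n-butane near ambient temperature, not data from the paper, and the temperature-dependent binary extension is not reproduced here:

```python
def surface_tension_parachor(P, rho_liq, rho_vap, M):
    """Macleod-Sugden correlation: sigma^(1/4) = P * (rho_l - rho_v) / M.

    P: parachor; rho_liq, rho_vap in g/cm^3; M in g/mol;
    returns sigma in mN/m (dyn/cm).
    """
    return (P * (rho_liq - rho_vap) / M) ** 4

# Illustrative numbers roughly representative of n-butane:
# parachor ~190, M = 58.12 g/mol, liquid ~0.57 g/cm^3, vapor ~0.005 g/cm^3.
sigma = surface_tension_parachor(190.0, 0.57, 0.005, 58.12)
```

    For a mixture, the right-hand side becomes a mole-fraction-weighted sum of component parachors over the liquid and vapor phases, which is the binary parachor equation the paper fits.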

  2. Maximum Likelihood Method for Predicting Environmental Conditions from Assemblage Composition: The R Package bio.infer

    Directory of Open Access Journals (Sweden)

    Lester L. Yuan

    2007-06-01

    Full Text Available This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from only biological data, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.
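
    The maximum-likelihood idea can be sketched outside R as well: assume each taxon's occurrence probability is a response curve over the environmental gradient, then pick the gradient value that maximizes the likelihood of the observed assemblage. Everything below (taxa, curve parameters, observations) is hypothetical; bio.infer's actual models are richer:

```python
import math

# Hypothetical taxon response curves: probability of observing each taxon
# as a Gaussian function of an environmental gradient x (e.g. temperature).
TAXA = {
    'taxonA': (12.0, 3.0, 0.9),   # (optimum, tolerance, max probability)
    'taxonB': (20.0, 4.0, 0.8),
    'taxonC': (27.0, 3.5, 0.7),
}

def occur_prob(x, opt, tol, pmax):
    return pmax * math.exp(-0.5 * ((x - opt) / tol) ** 2)

def ml_estimate(observed, grid):
    """Maximum-likelihood environmental value given presence/absence data."""
    def loglik(x):
        ll = 0.0
        for taxon, (opt, tol, pmax) in TAXA.items():
            p = min(max(occur_prob(x, opt, tol, pmax), 1e-9), 1 - 1e-9)
            ll += math.log(p) if observed[taxon] else math.log(1 - p)
        return ll
    return max(grid, key=loglik)

obs = {'taxonA': False, 'taxonB': True, 'taxonC': False}
x_hat = ml_estimate(obs, [i * 0.1 for i in range(0, 400)])
```

    With only taxonB present, the estimate lands near taxonB's optimum, nudged away from the optima of the absent taxa; this is the inference that weighted-averaging methods approximate less rigorously.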

  3. VAN method of short-term earthquake prediction shows promise

    Science.gov (United States)

    Uyeda, Seiya

    Although optimism prevailed in the 1970s, the present consensus on earthquake prediction appears to be quite pessimistic. However, short-term prediction based on geoelectric potential monitoring has stood the test of time in Greece for more than a decade [Varotsos and Kulhanek, 1993; Lighthill, 1996]. The method used is called the VAN method. The geoelectric potential changes constantly due to causes such as magnetotelluric effects, lightning, rainfall, leakage from manmade sources, and electrochemical instabilities of electrodes. All of this noise must be eliminated before preseismic signals are identified, if they exist at all. The VAN group apparently accomplished this task for the first time. They installed multiple short (100-200 m) dipoles with different lengths in both north-south and east-west directions and long (1-10 km) dipoles in appropriate orientations at their stations (one of their mega-stations, Ioannina, for example, now has 137 dipoles in operation) and found that practically all of the noise could be eliminated by applying a set of criteria to the data.

  4. Predicting human height by Victorian and genomic methods.

    Science.gov (United States)

    Aulchenko, Yurii S; Struchalin, Maksim V; Belonogova, Nadezhda M; Axenovich, Tatiana I; Weedon, Michael N; Hofman, Albert; Uitterlinden, Andre G; Kayser, Manfred; Oostra, Ben A; van Duijn, Cornelia M; Janssens, A Cecile J W; Borodin, Pavel M

    2009-08-01

    In the Victorian era, Sir Francis Galton showed that 'when dealing with the transmission of stature from parents to children, the average height of the two parents, ... is all we need care to know about them' (1886). One hundred and twenty-two years after Galton's work was published, 54 loci showing strong statistical evidence for association to human height were described, providing us with potential genomic means of human height prediction. In a population-based study of 5748 people, we find that a 54-loci genomic profile explained 4-6% of the sex- and age-adjusted height variance, and had limited ability to discriminate tall/short people, as characterized by the area under the receiver-operating characteristic curve (AUC). In a family-based study of 550 people, with both parents having height measurements, we find that the Galtonian mid-parental prediction method explained 40% of the sex- and age-adjusted height variance, and showed high discriminative accuracy. We have also explored how much variance a genomic profile should explain to reach certain AUC values. For highly heritable traits such as height, we conclude that in applications in which parental phenotypic information is available (eg, medicine), the Victorian Galton's method will long stay unsurpassed, in terms of both discriminative accuracy and costs. For less heritable traits, and in situations in which parental information is not available (eg, forensics), genomic methods may provide an alternative, given that the variants determining an essential proportion of the trait's variation can be identified.
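
    One common textbook rendering of Galton's mid-parental method: rescale the mother's height, average it with the father's, and shrink the mid-parent deviation from the population mean. The population mean, the 1.08 female rescaling, and the 2/3 regression coefficient below are illustrative defaults in the Galtonian tradition, not parameters fitted in this study:

```python
def galton_predicted_height(father_cm, mother_cm, population_mean_cm=170.0,
                            regression=2.0 / 3.0, female_factor=1.08):
    """Textbook rendering of Galton's mid-parental prediction: rescale the
    mother's height, average with the father's, then regress the mid-parent
    deviation from the population mean toward mediocrity."""
    midparent = (father_cm + mother_cm * female_factor) / 2.0
    return population_mean_cm + regression * (midparent - population_mean_cm)

pred = galton_predicted_height(180.0, 165.0)
```

    The appeal the abstract points to is clear from the sketch: the method needs only two parental measurements, no genotyping.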

  5. NetMHCpan, a method for MHC class I binding prediction beyond humans

    DEFF Research Database (Denmark)

    Hoof, Ilka; Peters, B; Sidney, J

    2009-01-01

    molecules. We show that the NetMHCpan-2.0 method can accurately predict binding to uncharacterized HLA molecules, including HLA-C and HLA-G. Moreover, NetMHCpan-2.0 is demonstrated to accurately predict peptide binding to chimpanzee and macaque MHC class I molecules. The power of NetMHCpan-2.0 to guide...

  6. Studies of the Raman Spectra of Cyclic and Acyclic Molecules: Combination and Prediction Spectrum Methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taijin; Assary, Rajeev S.; Marshall, Christopher L.; Gosztola, David J.; Curtiss, Larry A.; Stair, Peter C.

    2012-04-02

    A combination of Raman spectroscopy and density functional methods was employed to investigate the spectral features of selected molecules: furfural, 5-hydroxymethyl furfural (HMF), methanol, acetone, acetic acid, and levulinic acid. The computed and measured spectra are in excellent agreement, consistent with previous studies. Using the combination and prediction spectrum method (CPSM), we were able to predict the important spectral features of two platform chemicals, HMF and levulinic acid. The results have shown that CPSM is a useful alternative method for predicting vibrational spectra of complex molecules in the biomass transformation process.

  7. Long-Term Prediction of Satellite Orbit Using Analytical Method

    Directory of Open Access Journals (Sweden)

    Jae-Cheol Yoon

    1997-12-01

    Full Text Available A long-term prediction algorithm for geostationary orbits was developed using the analytical method. The perturbation force models include the geopotential up to fifth order and degree, luni-solar gravitation, and solar radiation pressure. All of the perturbation effects were analyzed as secular variations, short-period variations, and long-period variations of the equinoctial elements, namely the semi-major axis, eccentricity vector, inclination vector, and mean longitude of the satellite. Results of the analytical orbit propagator were compared with those of the Cowell orbit propagator for KOREASAT. The comparison indicated that the analytical solution could predict the semi-major axis with an accuracy of better than ~35 meters over a period of 3 months.

  8. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    Science.gov (United States)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model the final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression response surface equation (RSE). Data obtained from the operations of a major airline for a passenger transport aircraft type at Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.

  9. NetMHCcons: a consensus method for the major histocompatibility complex class I predictions

    DEFF Research Database (Denmark)

    Karosiene, Edita; Lundegaard, Claus; Lund, Ole

    2012-01-01

    A key role in cell-mediated immunity is dedicated to the major histocompatibility complex (MHC) molecules that bind peptides for presentation on the cell surface. Several in silico methods capable of predicting peptide binding to MHC class I have been developed. The accuracy of these methods depe...... at www.cbs.dtu.dk/services/NetMHCcons, and allows the user in an automatic manner to obtain the most accurate predictions for any given MHC molecule....

  10. A summary of methods of predicting reliability life of nuclear equipment with small samples

    International Nuclear Information System (INIS)

    Liao Weixian

    2000-03-01

    Some nuclear equipment is manufactured in small batches, e.g., 1-3 sets. Its service life may be very difficult to determine experimentally for economic and technical reasons. A method combining theoretical analysis with material tests to predict the life of equipment is put forward, based on the fact that equipment consists of parts or elements made of different materials. The whole life of an equipment part consists of the crack forming life (i.e., the fatigue life or the damage accumulation life) and the crack extension life. Methods of predicting machine life have been systematically summarized, with emphasis on those that use theoretical analysis to substitute for large-scale prototype experiments. Meanwhile, methods and steps of predicting reliability life are described, taking into consideration the randomness of various variables and parameters in engineering. Finally, the latest advances and trends in machine life prediction are discussed.

  11. Method for predicting peptide detection in mass spectrometry

    Science.gov (United States)

    Kangas, Lars [West Richland, WA; Smith, Richard D [Richland, WA; Petritis, Konstantinos [Richland, WA

    2010-07-13

    A method of predicting whether a peptide present in a biological sample will be detected by analysis with a mass spectrometer. The method uses at least one mass spectrometer to perform repeated analysis of a sample containing peptides from proteins with known amino acids. The method then generates a data set of peptides identified as contained within the sample by the repeated analysis. The method then calculates the probability that a specific peptide in the data set was detected in the repeated analysis. The method then creates a plurality of vectors, where each vector has a plurality of dimensions, and each dimension represents a property of one or more of the amino acids present in each peptide and adjacent peptides in the data set. Using these vectors, the method then generates an algorithm from the plurality of vectors and the calculated probabilities that specific peptides in the data set were detected in the repeated analysis. The algorithm is thus capable of calculating the probability that a hypothetical peptide represented as a vector will be detected by a mass spectrometry based proteomic platform, given that the peptide is present in a sample introduced into a mass spectrometer.
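
    The vector-plus-probability idea can be sketched as follows. The amino-acid property tables, the three features, and the logistic weights are invented for illustration; the patented method trains its model on probabilities derived from repeated mass-spectrometry analyses:

```python
import math

# Toy amino-acid properties (hydrophobicity-like and residue-mass-like
# values); both tables are illustrative stand-ins.
HYDRO = {'A': 1.8, 'L': 3.8, 'K': -3.9, 'E': -3.5, 'G': -0.4, 'S': -0.8}
MASS = {'A': 71.0, 'L': 113.1, 'K': 128.2, 'E': 129.1, 'G': 57.0, 'S': 87.1}

def peptide_vector(peptide):
    """Map a peptide to a small feature vector: length, mean hydrophobicity,
    and total residue mass (a stand-in for the patent's property vectors)."""
    n = len(peptide)
    return [
        float(n),
        sum(HYDRO[aa] for aa in peptide) / n,
        sum(MASS[aa] for aa in peptide),
    ]

def detection_probability(vector, weights, bias):
    """Logistic model: probability that the peptide would be observed,
    given that it is present in the sample."""
    z = bias + sum(w * x for w, x in zip(weights, vector))
    return 1.0 / (1.0 + math.exp(-z))

p = detection_probability(peptide_vector("ALKGES"), [0.05, 0.4, -0.002], 0.1)
```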

  12. A Meta-Path-Based Prediction Method for Human miRNA-Target Association

    Directory of Open Access Journals (Sweden)

    Jiawei Luo

    2016-01-01

    Full Text Available MicroRNAs (miRNAs) are short noncoding RNAs that play important roles in regulating gene expression, and perturbed miRNAs are often associated with development and tumorigenesis, as they have effects on their target mRNAs. Predicting potential miRNA-target associations from multiple types of genomic data is a considerable problem in bioinformatics research. However, most of the existing methods did not fully use the experimentally validated miRNA-mRNA interactions. Here, we developed RMLM and RMLMSe to predict the relationship between miRNAs and their targets. RMLM and RMLMSe are global approaches, as they can reconstruct the missing associations for all miRNA-target pairs simultaneously, and RMLMSe demonstrates that the integration of sequence information can improve the performance of RMLM. In RMLM, we use the RM measure to evaluate the relatedness between a miRNA and its target based on different meta-paths; logistic regression and the MLE method are employed to estimate the weights of the different meta-paths. In RMLMSe, sequence information is utilized to improve the performance of RMLM. Here, we carry out fivefold cross validation and pathway enrichment analysis to demonstrate the performance of our methods. The fivefold experiments show that our methods have higher AUC scores compared with other methods and that the integration of sequence information can improve the performance of miRNA-target association prediction.

  13. Predicting and explaining inflammation in Crohn's disease patients using predictive analytics methods and electronic medical record data.

    Science.gov (United States)

    Reddy, Bhargava K; Delen, Dursun; Agrawal, Rupesh K

    2018-01-01

    Crohn's disease is among the chronic inflammatory bowel diseases that impact the gastrointestinal tract. Understanding and predicting the severity of inflammation in real-time settings is critical to disease management. Extant literature has primarily focused on studies that are conducted in clinical trial settings to investigate the impact of a drug treatment on the remission status of the disease. This research proposes an analytics methodology where three different types of prediction models are developed to predict and to explain the severity of inflammation in patients diagnosed with Crohn's disease. The results show that machine-learning-based analytic methods such as gradient boosting machines can predict the inflammation severity with a very high accuracy (area under the curve = 92.82%), followed by regularized regression and logistic regression. According to the findings, a combination of baseline laboratory parameters, patient demographic characteristics, and disease location are among the strongest predictors of inflammation severity in Crohn's disease patients.
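
    The discrimination metric reported above (area under the ROC curve) can be computed directly from predicted scores and labels via the Mann-Whitney statistic; the scores and labels below are hypothetical:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive case outranks a randomly
    chosen negative case (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities of severe inflammation:
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
value = auc(scores, labels)
```

    An AUC of 0.9282, as reported for the gradient boosting model, means a randomly chosen severe-inflammation patient receives a higher predicted score than a randomly chosen non-severe patient about 93% of the time.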

  14. Online sequential condition prediction method of natural circulation systems based on EOS-ELM and phase space reconstruction

    International Nuclear Information System (INIS)

    Chen, Hanying; Gao, Puzhen; Tan, Sichao; Tang, Jiguo; Yuan, Hongsheng

    2017-01-01

    Highlights: •An online condition prediction method for natural circulation systems in NPPs was proposed based on EOS-ELM. •The proposed online prediction method was validated using experimental data. •The training speed of the proposed method is significantly fast. •The proposed method achieves good accuracy over a wide parameter range. -- Abstract: Natural circulation designs are widely used in the passive safety systems of advanced nuclear power reactors. Irregular and chaotic flow oscillations are often observed in boiling natural circulation systems, so it is difficult for operators to monitor and predict the condition of these systems. An online condition forecasting method for natural circulation systems is proposed in this study as an assisting technique for plant operators. The proposed prediction approach is based on the Ensemble of Online Sequential Extreme Learning Machine (EOS-ELM) and phase space reconstruction. Online Sequential Extreme Learning Machine (OS-ELM) is an online sequential learning neural network algorithm, and EOS-ELM is its ensemble variant. The proposed condition prediction method can be initiated with a small chunk of monitoring data and updated with newly arrived data at very high speed during online prediction. Simulation experiments were conducted on data from two natural circulation loops to validate the performance of the proposed method. The simulation results show that the proposed prediction model successfully recognizes different types of flow oscillations and accurately forecasts the trend of monitored plant variables. The influence of the number of hidden nodes and neural network inputs on prediction performance was studied, and the proposed model achieves good accuracy over a wide parameter range. Moreover, the comparison results show that the proposed condition prediction method has much faster online learning speed and better prediction accuracy than a conventional neural network model.
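
    The phase-space reconstruction step that turns a monitored time series into neural-network inputs can be sketched as a delay embedding; the embedding dimension and delay below are illustrative choices, not the paper's settings.

```python
# Delay embedding sketch: each input vector stacks lagged samples of the
# monitored variable; the target is the next sample (one-step-ahead).

def delay_embed(series, dim=3, tau=2):
    """Return (inputs, targets): each input is [x[t], x[t-tau], x[t-2*tau], ...]."""
    vectors, targets = [], []
    start = (dim - 1) * tau
    for t in range(start, len(series) - 1):
        vectors.append([series[t - k * tau] for k in range(dim)])
        targets.append(series[t + 1])
    return vectors, targets

series = [0.0, 0.5, 0.9, 1.0, 0.7, 0.2, -0.3, -0.8, -1.0, -0.9]
X, y = delay_embed(series)
print(len(X), len(X[0]))  # → 5 3
```

    The resulting (X, y) pairs are exactly what a sequential learner such as OS-ELM would consume chunk by chunk.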

  15. DomPep--a general method for predicting modular domain-mediated protein-protein interactions.

    Directory of Open Access Journals (Sweden)

    Lei Li

    Full Text Available Protein-protein interactions (PPIs) are frequently mediated by the binding of a modular domain in one protein to a short, linear peptide motif in its partner. The advent of proteomic methods such as peptide and protein arrays has led to the accumulation of a wealth of interaction data for modular interaction domains. Although several computational programs have been developed to predict modular domain-mediated PPI events, they are often restricted to a given domain type. We describe DomPep, a method that can potentially be used to predict PPIs mediated by any modular domains. DomPep combines proteomic data with sequence information to achieve high accuracy and high coverage in PPI prediction. Proteomic binding data were employed to determine a simple yet novel parameter, Ligand-Binding Similarity, which, in turn, is used to calibrate Domain Sequence Identity and Position-Weighted-Matrix distance, two parameters used in constructing prediction models. Moreover, DomPep can be used to predict PPIs both for domains with experimental binding data and for those without. Using the PDZ and SH2 domain families as test cases, we show that DomPep can predict PPIs with accuracies superior to existing methods. To evaluate DomPep as a discovery tool, we deployed DomPep to identify interactions mediated by three human PDZ domains. Subsequent in-solution binding assays validated the high accuracy of DomPep in predicting authentic PPIs at the proteome scale. Because DomPep makes use of only interaction data and the primary sequence of a domain, it can be readily expanded to include other types of modular domains.

  16. A Simple Microsoft Excel Method to Predict Antibiotic Outbreaks and Underutilization.

    Science.gov (United States)

    Miglis, Cristina; Rhodes, Nathaniel J; Avedissian, Sean N; Zembower, Teresa R; Postelnick, Michael; Wunderink, Richard G; Sutton, Sarah H; Scheetz, Marc H

    2017-07-01

    Benchmarking strategies are needed to promote the appropriate use of antibiotics. We have adapted a simple regressive method in Microsoft Excel that is easily implementable and creates predictive indices. This method trends consumption over time and can identify periods of over- and underuse at the hospital level. Infect Control Hosp Epidemiol 2017;38:860-862.
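
    A rough sketch of the idea, assuming a plain least-squares time trend of the kind Excel's SLOPE/INTERCEPT functions would produce: consumption is regressed on time, and months falling outside a 2-sigma band around the trend are flagged as over- or underuse. The data and the band width are illustrative, not from the paper.

```python
# Fit a linear time trend to monthly consumption and flag large residuals.

def linear_trend(y):
    """Ordinary least-squares slope and intercept of y against 0..n-1."""
    n = len(y)
    xs = range(n)
    mx, my = sum(xs) / n, sum(y) / n
    slope = sum((x - mx) * (v - my) for x, v in zip(xs, y)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

use = [100, 102, 104, 103, 105, 140, 106, 108, 70, 110, 111, 113]  # DDDs/month
slope, intercept = linear_trend(use)
resid = [v - (intercept + slope * x) for x, v in enumerate(use)]
sd = (sum(r * r for r in resid) / (len(resid) - 2)) ** 0.5  # residual std. error
flags = ["over" if r > 2 * sd else "under" if r < -2 * sd else "ok" for r in resid]
```

    Here month 5 (a spike to 140) is flagged as overuse and month 8 (a dip to 70) as underuse.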

  17. RANDOM FUNCTIONS AND INTERVAL METHOD FOR PREDICTING THE RESIDUAL RESOURCE OF BUILDING STRUCTURES

    Directory of Open Access Journals (Sweden)

    Shmelev Gennadiy Dmitrievich

    2017-11-01

    Full Text Available Subject: the possibility of using random functions and the interval prediction method for estimating the residual life of building structures in buildings that are currently in use. Research objectives: coordination of the ranges of values used to develop predictions with the random functions that characterize the processes being predicted. Materials and methods: this research used the method of random functions and the method of interval prediction. Results: in the course of this work, the basic properties of random functions, including the properties of families of random functions, are studied. The coordination of time-varying impacts and loads on building structures is considered from the viewpoint of their influence on structures and the representation of the structures' behavior in the form of random functions. Several models of random functions are proposed for predicting individual parameters of structures. For each of the proposed models, its scope of application is defined. The article notes that the considered forecasting approach has been used many times at various sites. In addition, the available results allowed the authors to develop a methodology for assessing the technical condition and residual life of building structures in facilities currently in use. Conclusions: we studied the possibility of using random functions and processes for forecasting the residual service lives of structures in buildings and engineering constructions. We considered the possibility of using an interval forecasting approach to estimate changes in the defining parameters of building structures and their technical condition. A comprehensive technique for forecasting the residual life of building structures using the interval approach is proposed.

  18. A Method to Predict the Structure and Stability of RNA/RNA Complexes.

    Science.gov (United States)

    Xu, Xiaojun; Chen, Shi-Jie

    2016-01-01

    RNA/RNA interactions are essential for genomic RNA dimerization and regulation of gene expression. Intermolecular loop-loop base pairing is a widespread and functionally important tertiary structure motif in RNA machinery. However, computational prediction of intermolecular loop-loop base pairing is challenged by the entropy and free energy calculation due to the conformational constraint and the intermolecular interactions. In this chapter, we describe a recently developed statistical mechanics-based method for the prediction of RNA/RNA complex structures and stabilities. The method is based on the virtual bond RNA folding model (Vfold). The main emphasis in the method is placed on the evaluation of the entropy and free energy for the loops, especially tertiary kissing loops. The method also uses recursive partition function calculations and two-step screening algorithm for large, complicated structures of RNA/RNA complexes. As case studies, we use the HIV-1 Mal dimer and the siRNA/HIV-1 mutant (T4) to illustrate the method.

  19. Prediction of IRI in short and long terms for flexible pavements: ANN and GMDH methods

    NARCIS (Netherlands)

    Ziari, H.; Sobhani, J.; Ayoubinejad, J.; Hartmann, Timo

    2015-01-01

    Prediction of pavement condition is one of the most important issues in pavement management systems. In this paper, capabilities of artificial neural networks (ANNs) and group method of data handling (GMDH) methods in predicting flexible pavement conditions were analysed in three levels: in 1 year,

  20. A noise level prediction method based on electro-mechanical frequency response function for capacitors.

    Science.gov (United States)

    Zhu, Lingyu; Ji, Shengchang; Shen, Qi; Liu, Yuan; Li, Jinyu; Liu, Hao

    2013-01-01

    The capacitors in high-voltage direct-current (HVDC) converter stations radiate considerable audible noise, which can exceed 100 dB. Existing noise level prediction methods are not satisfactory. In this paper, a new noise level prediction method is proposed based on a frequency response function that considers both the electrical and mechanical characteristics of capacitors. The electro-mechanical frequency response function (EMFRF) is defined as the frequency-domain quotient of the vibration response and the squared capacitor voltage, and it is obtained from an impulse current experiment. Under given excitations, the vibration response of the capacitor tank is the product of the EMFRF and the square of the given capacitor voltage in the frequency domain, and the radiated audible noise is calculated by structure-acoustic coupling formulas. The noise level under the same excitations was also measured in the laboratory, and the results are compared with the prediction. The comparison proves that the noise prediction method is effective.
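
    The defining relation can be sketched numerically: with synthetic signals and an assumed flat EMFRF, the function is identified as the frequency-domain quotient of response and squared voltage, then reused to predict the response to a new excitation. This is only an illustration of the relation, not the paper's measurement procedure.

```python
import math, cmath

# EMFRF(k) = R(k) / U2(k): quotient of the vibration-response spectrum and
# the squared-voltage spectrum, identified from a (synthetic) test signal.

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 64
u = [math.cos(2 * math.pi * 5 * n / N) for n in range(N)]  # test voltage
resp = [0.5 * v ** 2 for v in u]                           # "measured" vibration

U2, R = dft([v ** 2 for v in u]), dft(resp)
emfrf = [r / s if abs(s) > 1e-9 else 0.0 for r, s in zip(R, U2)]

# Predicted vibration spectrum for a doubled excitation voltage:
pred = [h * c for h, c in zip(emfrf, dft([(2 * v) ** 2 for v in u]))]
```

    Doubling the voltage quadruples the squared-voltage spectrum, so the predicted response spectrum is four times the original, as the frequency-domain product implies.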

  1. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    International Nuclear Information System (INIS)

    Nedic, Vladimir; Despotovic, Danijela; Cvetanovic, Slobodan; Despotovic, Milan; Babic, Sasa

    2014-01-01

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. It is therefore very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level L_eq in the given time period. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to other statistical methods in traffic noise level prediction. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user-friendly software package. • The results are compared with classical statistical methods. • The results show the much better predictive capabilities of the ANN model.
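
    As a small supporting example for the output variable, the equivalent noise level L_eq can be computed from sound levels measured over equal time intervals by the standard energy average (this is the textbook formula, not code from the paper).

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level of equally weighted dB samples."""
    mean_energy = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

print(round(leq([70, 70, 70]), 1))  # → 70.0 (a constant level is unchanged)
print(round(leq([60, 80]), 1))      # → 77.0 (the louder interval dominates)
```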

  2. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    Energy Technology Data Exchange (ETDEWEB)

    Nedic, Vladimir, E-mail: vnedic@kg.ac.rs [Faculty of Philology and Arts, University of Kragujevac, Jovana Cvijića bb, 34000 Kragujevac (Serbia); Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs [Faculty of Economics, University of Kragujevac, Djure Pucara Starog 3, 34000 Kragujevac (Serbia); Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs [Faculty of Economics, University of Niš, Trg kralja Aleksandra Ujedinitelja, 18000 Niš (Serbia); Despotovic, Milan, E-mail: mdespotovic@kg.ac.rs [Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, 34000 Kragujevac (Serbia); Babic, Sasa, E-mail: babicsf@yahoo.com [College of Applied Mechanical Engineering, Trstenik (Serbia)

    2014-11-15

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. It is therefore very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level L_eq in the given time period. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to other statistical methods in traffic noise level prediction. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user-friendly software package. • The results are compared with classical statistical methods. • The results show the much better predictive capabilities of the ANN model.

  3. Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.

    Science.gov (United States)

    Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam

    2015-01-01

    Hepatitis B (HB) is a major cause of mortality worldwide. Accurately predicting the trend of the disease can give policymakers an appropriate view for disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The Weighted Markov Chain (WMC) method, based on Markov chain theory, and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. The results of the applied methods were compared by the percentages of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The percentages of correctly predicted rates for the first and second clusters were (100, 0) for WMC, (84, 67) for HES and (79, 47) for SARIMA. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the three models indicated that, given the seasonality and non-stationarity of the data, HES gave the most accurate prediction of the incidence rates.
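
    The weighted-Markov-chain idea can be sketched on a toy two-state (low/high incidence) series: transition matrices are estimated at several lags and their one-step forecasts are combined with assumed lag weights. In the paper the weights would come from the data (e.g. from autocorrelations); here they are illustrative.

```python
# Weighted Markov chain sketch on a two-state incidence series.

def transition_matrix(seq, lag):
    """Row-normalised transition counts at the given lag (states 0 and 1)."""
    counts = [[0, 0], [0, 0]]
    for a, b in zip(seq[:-lag], seq[lag:]):
        counts[a][b] += 1
    return [[c / max(1, sum(row)) for c in row] for row in counts]

seq = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # 0 = low, 1 = high incidence
weights = {1: 0.6, 2: 0.4}                  # illustrative lag weights

forecast = [0.0, 0.0]
for lag, w in weights.items():
    P = transition_matrix(seq, lag)
    row = P[seq[-lag]]                      # distribution given the state lag steps back
    forecast = [f + w * p for f, p in zip(forecast, row)]

predicted_state = forecast.index(max(forecast))
```

    On this periodic toy series the combined forecast correctly favours a return to the low-incidence state after a high month.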

  4. Deep learning versus traditional machine learning methods for aggregated energy demand prediction

    NARCIS (Netherlands)

    Paterakis, N.G.; Mocanu, E.; Gibescu, M.; Stappers, B.; van Alst, W.

    2018-01-01

    In this paper the more advanced, in comparison with traditional machine learning approaches, deep learning methods are explored with the purpose of accurately predicting the aggregated energy consumption. Despite the fact that a wide range of machine learning methods have been applied to

  5. TEHRAN AIR POLLUTANTS PREDICTION BASED ON RANDOM FOREST FEATURE SELECTION METHOD

    Directory of Open Access Journals (Sweden)

    A. Shamsoddini

    2017-09-01

    Full Text Available Air pollution, one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to obtain an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed with the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
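
    The feature-selection idea can be illustrated with a tiny permutation-importance sketch (not the study's Random Forest pipeline): each candidate predictor is scored by how much shuffling it degrades a single-feature least-squares fit on synthetic "pollutant" data, where x1 is informative and x2 is pure noise.

```python
import random

random.seed(0)
n = 200
x1 = [random.random() for _ in range(n)]              # informative feature
x2 = [random.random() for _ in range(n)]              # noise feature
y = [3 * a + 0.1 * random.gauss(0, 1) for a in x1]    # synthetic concentration

def mse_fit(feat, target):
    """Mean squared error of the least-squares line target ~ feat."""
    mx, my = sum(feat) / n, sum(target) / n
    b = sum((f - mx) * (v - my) for f, v in zip(feat, target)) \
        / sum((f - mx) ** 2 for f in feat)
    a0 = my - b * mx
    return sum((v - (a0 + b * f)) ** 2 for f, v in zip(feat, target)) / n

def importance(feat, target):
    """Error increase when the feature's values are randomly shuffled."""
    shuffled = feat[:]
    random.shuffle(shuffled)
    return mse_fit(shuffled, target) - mse_fit(feat, target)

imp1, imp2 = importance(x1, y), importance(x2, y)  # rank features by this score
```

    Features would then be ranked by this score and only the top-ranked ones fed to the downstream regression or neural network.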

  6. Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method

    Science.gov (United States)

    Shamsoddini, A.; Aboodi, M. R.; Karami, J.

    2017-09-01

    Air pollution, one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to obtain an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed with the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.

  7. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    Science.gov (United States)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists either of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotical error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
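
    The prediction-correction scheme can be sketched on a scalar time-varying objective f(x; t) = 0.5*(x - a(t))^2, whose optimizer trajectory is x*(t) = a(t). The drift estimate, step size and sampling interval below are illustrative simplifications, not the paper's general GTT/NTT algorithms.

```python
import math

h, alpha = 0.05, 0.8           # sampling interval and gradient step size
a = lambda t: math.sin(t)      # optimizer trajectory of f(x; t)

x_prev, x = a(0.0), a(h)       # two warm-started iterates
errors = []
for k in range(2, 400):
    t = k * h
    x_pred = x + (x - x_prev)                  # prediction: extrapolate iterate drift
    x_next = x_pred - alpha * (x_pred - a(t))  # correction: one gradient step at time t
    errors.append(abs(x_next - a(t)))
    x_prev, x = x, x_next
```

    The tracking error settles at O(h^2) here, consistent with the improvement over correction-only O(h) schemes described in the abstract.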

  8. Tracking Maneuvering Group Target with Extension Predicted and Best Model Augmentation Method Adapted

    Directory of Open Access Journals (Sweden)

    Linhai Gan

    2017-01-01

    Full Text Available The random matrix (RM) method is widely applied for group target tracking. The conventional RM method assumes that the group extension remains invariant, but this assumption does not hold while the group is maneuvering, as its orientation varies rapidly; thus, a new approach that predicts the group extension is derived here. To match group maneuvering, a best model augmentation (BMA) method is introduced. The existing BMA method uses a fixed basic model set, which may lead to poor performance when it cannot ensure coverage of the true motion modes. Here, a maneuvering group target tracking algorithm is proposed that exploits both group extension prediction and BMA adaptation. The performance of the proposed algorithm is illustrated by simulation.

  9. Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding

    Science.gov (United States)

    de los Campos, Gustavo; Hickey, John M.; Pong-Wong, Ricardo; Daetwyler, Hans D.; Calus, Mario P. L.

    2013-01-01

    Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p with small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade. PMID:22745228

  10. Reliability residual-life prediction method for thermal aging based on performance degradation

    International Nuclear Information System (INIS)

    Ren Shuhong; Xue Fei; Yu Weiwei; Ti Wenxin; Liu Xiaotian

    2013-01-01

    This paper studies the main pipeline of a nuclear power plant. The residual life of a main pipeline failing due to thermal aging is studied using performance degradation theory and Bayesian updating methods. First, the degradation of the impact properties of the main pipeline's austenitic stainless steel under thermal aging is analyzed using accelerated thermal aging test data. Then, a thermal aging residual-life prediction model based on the impact property degradation data is built with Bayesian updating methods. Finally, these models are applied to practical situations. It is shown that the proposed methods are feasible and that the prediction accuracy meets the needs of the project. The work also provides a foundation for the scientific aging management of the main pipeline. (authors)

  11. Use of simplified methods for predicting natural resource damages

    International Nuclear Information System (INIS)

    Loreti, C.P.; Boehm, P.D.; Gundlach, E.R.; Healy, E.A.; Rosenstein, A.B.; Tsomides, H.J.; Turton, D.J.; Webber, H.M.

    1995-01-01

    To reduce transaction costs and save time, the US Department of the Interior (DOI) and the National Oceanic and Atmospheric Administration (NOAA) have developed simplified methods for assessing natural resource damages from oil and chemical spills. DOI has proposed the use of two computer models, the Natural Resource Damage Assessment Model for Great Lakes Environments (NRDAM/GLE) and a revised Natural Resource Damage Assessment Model for Coastal and Marine Environments (NRDAM/CME) for predicting monetary damages for spills of oils and chemicals into the Great Lakes and coastal and marine environments. NOAA has used versions of these models to create Compensation Formulas, which it has proposed for calculating natural resource damages for oil spills of up to 50,000 gallons anywhere in the US. Based on a review of the documentation supporting the methods, the results of hundreds of sample runs of DOI's models, and the outputs of the thousands of model runs used to create NOAA's Compensation Formulas, this presentation discusses the ability of these simplified assessment procedures to make realistic damage estimates. The limitations of these procedures are described, and the need for validating the assumptions used in predicting natural resource injuries is discussed

  12. COMPARISON OF TREND PROJECTION METHODS AND BACKPROPAGATION PROJECTIONS METHODS TREND IN PREDICTING THE NUMBER OF VICTIMS DIED IN TRAFFIC ACCIDENT IN TIMOR TENGAH REGENCY, NUSA TENGGARA

    Directory of Open Access Journals (Sweden)

    Aleksius Madu

    2016-10-01

    Full Text Available The purpose of this study is to predict the number of traffic accident victims who died in Timor Tengah Regency using the Trend Projection method and the Backpropagation method, to compare the two methods based on their error rates, and to predict the number of traffic accident victims in Timor Tengah Regency for the coming years. This research was conducted in Timor Tengah Regency, with data obtained from the Police Unit in Timor Tengah Regency. The data cover the number of traffic accidents in Timor Tengah Regency from 2000 to 2013 and were analyzed quantitatively with the Trend Projection and Backpropagation methods. Predicting the number of traffic accident victims with the Trend Projection method, the best model obtained is the quadratic trend model with the equation Yk = 39.786 + 3.297X + 0.13X^2. With the Backpropagation method, the optimal network obtained consists of 2 inputs, 3 hidden neurons, and 1 output. Based on the error rates obtained, the Backpropagation method is better than the Trend Projection method, which means that Backpropagation is the more accurate method for predicting the number of traffic accident victims in Timor Tengah Regency. The predicted numbers of traffic accident victims for the next 5 years (2014-2018) are 106, 115, 115, 119 and 120 persons, respectively. Keywords: Trend Projection, Backpropagation, Prediction.
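
    The fitted quadratic trend reported above can be evaluated directly; the coefficients come from the abstract, while indexing the years by k is an assumption for illustration.

```python
# Quadratic trend model from the abstract: Yk = 39.786 + 3.297*k + 0.13*k^2

def quadratic_trend(k, a=39.786, b=3.297, c=0.13):
    """Predicted number of victims at year index k."""
    return a + b * k + c * k ** 2

print(round(quadratic_trend(10), 2))  # → 85.76
```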

  13. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    Science.gov (United States)

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify chromatographic peaks accurately. However, reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that the RR is difficult to reproduce on different columns because of the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative, simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated on two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust across different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
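
    The two-point prediction step reduces to a linear map fixed by the two reference substances: their standard retention times are matched to the values measured on the working column, and any other standard tR is mapped through the same line. The retention times below are illustrative, not from the paper.

```python
# Two-point linear calibration sketch of the LCTRS prediction step.

def lctrs_two_point(standard_ref, measured_ref, t_standard):
    """Map a standard tR onto the working column via two reference substances."""
    (s1, s2), (m1, m2) = standard_ref, measured_ref
    slope = (m2 - m1) / (s2 - s1)
    return m1 + slope * (t_standard - s1)

# Reference substances: standard tR 5.0 and 20.0 min; on this particular
# column they elute at 5.6 and 21.8 min (invented values).
predict = lambda t: lctrs_two_point((5.0, 20.0), (5.6, 21.8), t)
print(round(predict(12.0), 2))  # → 13.16
```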

  14. Alternative prediction methods of protein and energy evaluation of pig feeds.

    Science.gov (United States)

    Święch, Ewa

    2017-01-01

    Precise knowledge of the actual nutritional value of individual feedstuffs and complete diets for pigs is important for efficient livestock production. Methods for assessing the protein and energy values of pig feeds are briefly described. In vivo determination of the protein and energy values of feeds in pigs is time-consuming, expensive and very often requires the use of surgically modified animals. There is a need for simpler, rapid, inexpensive and reproducible methods for routine feed evaluation. The protein and energy values of pig feeds can be estimated using the following alternative methods: 1) prediction equations based on chemical composition; 2) animal models, such as rats, cockerels and growing pigs, as substitutes for adult animals; 3) rapid methods, such as the mobile nylon bag technique and in vitro methods. Alternative methods developed for predicting the total tract and ileal digestibility of nutrients, including amino acids, in feedstuffs and diets for pigs are reviewed. This article focuses on two in vitro methods that can be used for the routine evaluation of the ileal digestibility of amino acids and the energy value of pig feeds, and on factors affecting digestibility determined in vivo in pigs and by alternative methods. Validation of the alternative methods was carried out by comparing their results with those obtained in vivo in pigs. In conclusion, the energy and protein values of pig feeds may be estimated with satisfactory precision in rats and by the two- or three-step in vitro methods, which provide equations for the calculation of the standardized ileal digestibility of amino acids and metabolizable energy content. The use of alternative methods of feed evaluation is an important way to reduce stressful animal experiments.

  15. Evaluation of creep-fatigue life prediction methods for low-carbon/nitrogen-added SUS316

    International Nuclear Information System (INIS)

    Takahashi, Yukio

    1998-01-01

    Low-carbon/medium-nitrogen 316 stainless steel, called 316FR, is a principal candidate for the high-temperature structural materials of a demonstration fast reactor plant. Because creep-fatigue damage is a dominant failure mechanism for high-temperature materials subjected to thermal cycles, it is important to establish a reliable creep-fatigue life prediction method for this steel. Long-term creep tests and strain-controlled creep-fatigue tests were conducted at various conditions on two different heats of the steel. In the constant-load creep tests, both materials showed similar creep rupture strength but different ductility. The material with lower ductility exhibited a shorter life under creep-fatigue loading conditions, and the correlation of creep-fatigue life with rupture ductility, rather than rupture strength, was made clear. Two kinds of creep-fatigue life prediction methods, the time fraction rule and the ductility exhaustion method, were applied to predict the creep-fatigue life. An accurate description of the stress relaxation behavior was achieved by adding a 'viscous' strain to the conventional creep strain, and only the latter was assumed to contribute to creep damage in the application of the ductility exhaustion method. The current version of the ductility exhaustion method was found to have very good accuracy in creep-fatigue life prediction, while the time fraction rule overpredicted creep-fatigue life by as much as a factor of 30. To make a reliable estimation of the creep damage in actual components, use of the ductility exhaustion method is strongly recommended. (author)
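
    The two damage measures named above can be illustrated with synthetic per-cycle numbers: the time-fraction rule accumulates hold time over rupture life, while the ductility-exhaustion rule accumulates creep strain over rupture ductility. All values below are invented for illustration.

```python
# Per-cycle hold data: (hold time [h], rupture life [h],
#                       creep strain per hold, rupture ductility)
holds = [
    (1.0, 1000.0, 2e-4, 0.05),
    (1.0, 800.0, 3e-4, 0.04),
]

d_time = sum(dt / tr for dt, tr, _, _ in holds)  # time-fraction creep damage
d_duct = sum(de / ef for _, _, de, ef in holds)  # ductility-exhaustion creep damage
```

    With low-ductility material data the ductility-exhaustion sum grows faster, which is consistent with the abstract's finding that life correlates with rupture ductility rather than rupture strength.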

  16. Proximal gamma-ray spectroscopy to predict soil properties using windows and full-spectrum analysis methods.

    Science.gov (United States)

    Mahmood, Hafiz Sultan; Hoogmoed, Willem B; van Henten, Eldert J

    2013-11-27

    Fine-scale spatial information on soil properties is needed to successfully implement precision agriculture. Proximal gamma-ray spectroscopy has recently emerged as a promising tool to collect fine-scale soil information. The objective of this study was to evaluate a proximal gamma-ray spectrometer to predict several soil properties using energy-windows and full-spectrum analysis methods in two differently managed sandy loam fields: conventional and organic. In the conventional field, both methods predicted clay, pH and total nitrogen with a good accuracy (R2 ≥ 0.56) in the top 0-15 cm soil depth, whereas in the organic field, only clay content was predicted with such accuracy. The highest prediction accuracy was found for total nitrogen (R2 = 0.75) in the conventional field in the energy-windows method. Predictions were better in the top 0-15 cm soil depths than in the 15-30 cm soil depths for individual and combined fields. This implies that gamma-ray spectroscopy can generally benefit soil characterisation for annual crops where the condition of the seedbed is important. Small differences in soil structure (conventional vs. organic) cannot be determined. As for the methodology, we conclude that the energy-windows method can establish relations between radionuclide data and soil properties as accurately as the full-spectrum analysis method.
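The energy-windows method reduces each spectrum to a count rate in a radionuclide window and regresses the soil property on it. A minimal sketch with invented, perfectly linear calibration data:

```python
# Sketch of the energy-windows idea: regress a soil property on the
# count rate in a single radionuclide window. Counts and nitrogen values
# are made-up illustration data, not the study's measurements.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

window_counts = [120.0, 150.0, 180.0, 210.0]   # counts/s in the window
total_nitrogen = [0.10, 0.13, 0.16, 0.19]      # % N (invented, linear)

a, b = fit_line(window_counts, total_nitrogen)
pred = a + b * 165.0                           # predict N at a new site
```

The full-spectrum method instead fits the whole measured spectrum against standard spectra, but the downstream regression step has the same shape.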

  17. An auxiliary optimization method for complex public transit route network based on link prediction

    Science.gov (United States)

    Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian

    2018-02-01

    Inspired by the missing (new) link prediction and the spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route network (PTRN) based on link prediction. First, link prediction applied to PTRN is described, and based on reviewing the previous studies, the summary indices set and its algorithms set are collected for the link prediction experiment. Second, through analyzing the topological properties of Jinan’s PTRN established by the Space R method, we found that this is a typical small-world network with a relatively large average clustering coefficient. This phenomenon indicates that the structural similarity-based link prediction will show a good performance in this network. Then, based on the link prediction experiment of the summary indices set, three indices with maximum accuracy are selected for auxiliary optimization of Jinan’s PTRN. Furthermore, these link prediction results show that the overall layout of Jinan’s PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction. The above pattern conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on the missing (new) link prediction and the spurious existing link identification, we propose optimization schemes that can be used not only to optimize current PTRN but also to evaluate PTRN planning.
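The structural-similarity indices used in such link-prediction experiments can be illustrated on a toy undirected network. The three indices below (common neighbors, Jaccard, resource allocation) are standard examples of the family surveyed, not necessarily the three top-accuracy indices selected for Jinan's PTRN.

```python
# Structural-similarity link-prediction scores on a toy undirected
# network, with the adjacency stored as a dict of neighbor sets.

def common_neighbors(adj, u, v):
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

def resource_allocation(adj, u, v):
    # weight each shared neighbor by the inverse of its degree
    return sum(1.0 / len(adj[w]) for w in adj[u] & adj[v])

adj = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}
score = resource_allocation(adj, "A", "D")   # candidate missing link A-D
```

High-scoring non-edges are candidate missing (new) links; low-scoring existing edges are candidate spurious links, which is exactly the dichotomy the optimization schemes exploit.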

  18. An influence function method based subsidence prediction program for longwall mining operations in inclined coal seams

    Energy Technology Data Exchange (ETDEWEB)

    Yi Luo; Jian-wei Cheng [West Virginia University, Morgantown, WV (United States). Department of Mining Engineering

    2009-09-15

    The distribution of the final surface subsidence basin induced by longwall operations in inclined coal seam could be significantly different from that in flat coal seam and demands special prediction methods. Though many empirical prediction methods have been developed, these methods are inflexible for varying geological and mining conditions. An influence function method has been developed to take advantage of its fundamentally sound nature and flexibility. In developing this method, significant modifications have been made to the original Knothe function to produce an asymmetrical influence function. The empirical equations for final subsidence parameters derived from US subsidence data and Chinese empirical values have been incorporated into the mathematical models to improve the prediction accuracy. A corresponding computer program is developed. A number of subsidence cases for longwall mining operations in coal seams with varying inclination angles have been used to demonstrate the applicability of the developed subsidence prediction model. 9 refs., 8 figs.
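The influence function approach superposes the effect of each extracted element on each surface point. A rough numerical sketch using the original symmetric Knothe function follows; the paper's asymmetric modification for inclined seams is omitted, and the panel geometry, maximum subsidence and radius of influence are hypothetical.

```python
# Numerical superposition of a Knothe-type influence function to build a
# subsidence profile over a flat extracted panel. Parameters are
# hypothetical; the asymmetry for inclined seams is not modeled here.
import math

def knothe(x, r):
    """Symmetric Knothe influence function with radius of influence r."""
    return math.exp(-math.pi * (x / r) ** 2) / r

def subsidence(x, panel_left, panel_right, s_max, r, n=2000):
    """Integrate the influence of extracted elements across the panel."""
    dx = (panel_right - panel_left) / n
    return s_max * sum(knothe(x - (panel_left + (i + 0.5) * dx), r) * dx
                       for i in range(n))

s_center = subsidence(0.0, -200.0, 200.0, s_max=1.2, r=100.0)
s_edge = subsidence(200.0, -200.0, 200.0, s_max=1.2, r=100.0)
```

Over a wide panel the center approaches the maximum subsidence and the panel edge approaches half of it, the classical check for an influence-function implementation.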

  19. Soil-pipe interaction modeling for pipe behavior prediction with super learning based methods

    Science.gov (United States)

    Shi, Fang; Peng, Xiang; Liu, Huan; Hu, Yafei; Liu, Zheng; Li, Eric

    2018-03-01

    Underground pipelines are subject to severe distress from the surrounding expansive soil. To investigate the structural response of water mains to varying soil movements, field data, including pipe wall strains, in situ soil water content, soil pressure and temperature, were collected. Research on the analysis of such monitoring data has been reported, but the relationship between soil properties and pipe deformation has not been well interpreted. To characterize the relationship between soil properties and pipe deformation, this paper presents a super learning based approach combining feature selection algorithms to predict the structural behavior of water mains in different soil environments. Furthermore, an automatic variable selection method, i.e. the recursive feature elimination algorithm, was used to identify the critical predictors contributing to the pipe deformations. To investigate the adaptability of super learning to different predictive models, this research applied super learning based methods to three different datasets. The predictive performance was evaluated by R-squared, root-mean-square error and mean absolute error. Based on this evaluation, the superiority of super learning was validated and demonstrated by accurately predicting three types of pipe deformations. In addition, a comprehensive understanding of the working environment of water mains becomes possible.
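The core super-learning idea is to combine base learners with weights chosen to minimize error on held-out predictions. A toy sketch with two invented base models (a mean model and a through-origin slope model) standing in for the paper's pipe-strain predictors:

```python
# Toy sketch of the super learning idea: weight two base predictors to
# minimize squared error. The base models and data are hypothetical
# stand-ins for the paper's pipeline, not its actual learners.

def mean_model(train_y):
    m = sum(train_y) / len(train_y)
    return lambda x: m

def slope_model(train_x, train_y):
    b = sum(xi * yi for xi, yi in zip(train_x, train_y)) / \
        sum(xi * xi for xi in train_x)
    return lambda x: b * x

def best_weight(p1, p2, y):
    """Closed-form w minimizing sum (w*p1 + (1-w)*p2 - y)^2."""
    num = sum((p2i - yi) * (p2i - p1i) for p1i, p2i, yi in zip(p1, p2, y))
    den = sum((p2i - p1i) ** 2 for p1i, p2i in zip(p1, p2))
    return num / den if den else 0.5

x = [1.0, 2.0, 3.0, 4.0]
y = [1.1, 1.9, 3.2, 3.9]        # nearly linear, so the slope model should win
m1 = mean_model(y)
m2 = slope_model(x, y)
w = best_weight([m1(v) for v in x], [m2(v) for v in x], y)
super_pred = lambda v: w * m1(v) + (1 - w) * m2(v)
```

A real super learner estimates the weights on cross-validated (out-of-fold) predictions rather than on the training fit shown here.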

  20. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Methods of accurate modeling and power capability prediction for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results under different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
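With a piecewise-linear capacitance C(v), the stored charge is the integral of C(v) dv, which gives a direct state-of-charge definition. A sketch with hypothetical segment parameters (not the paper's fitted values):

```python
# Piecewise-linear variable-capacitance sketch: C(v) = c0 + k*v on each
# voltage segment, so stored charge is the integral of C(v) dv.
# Segment parameters are hypothetical, not fitted values from the paper.

SEGMENTS = [            # (v_low, v_high, c0 [F], k [F/V]) -- assumed fit
    (0.0, 1.35, 900.0, 120.0),
    (1.35, 2.7, 950.0, 80.0),
]

def charge(v):
    """Charge stored up to terminal voltage v (integral of C dv)."""
    q = 0.0
    for lo, hi, c0, k in SEGMENTS:
        top = min(v, hi)
        if top > lo:
            q += c0 * (top - lo) + 0.5 * k * (top ** 2 - lo ** 2)
    return q

def soc(v, v_max=2.7):
    """State of charge as stored charge relative to full charge."""
    return charge(v) / charge(v_max)
```

Note that with voltage-dependent capacitance, SOC at half the rated voltage is below 50%, which a constant-capacitance model would miss.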

  1. A prediction method for long-term behavior of prestressed concrete containment vessels

    International Nuclear Information System (INIS)

    Ozaki, M.; Abe, T.; Watanabe, Y.; Kato, A.; Yamaguchi, T.; Yamamoto, M.

    1995-01-01

    This paper presents results of studies on the long-term behavior of PCCVs at the Tsuruga Unit No. 2 and Ohi Unit No. 3/4 power stations. The objective of this study is to evaluate the measured strains in the concrete and the reduction of force in the tendons, and to establish prediction methods for long-term PCCV behavior. The measured strains were compared with those calculated from creep and shrinkage of the concrete, and the differences were investigated. Furthermore, the reduced tendon forces are calculated considering losses due to elastic shortening, relaxation, creep and shrinkage. The measured reduction in the tendon forces is compared with the calculated values. When changes in temperature and humidity are taken into account, the measured strains and tendon forces are in good agreement with those calculated. From the above results, it was confirmed that the residual prestresses in the PCCVs maintain the values predicted at the design stage, and that the prediction method for long-term behavior is sufficiently reliable. (author). 10 refs., 8 figs., 3 tabs
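The tendon-force bookkeeping described above, jacking force minus the individual long-term losses, can be sketched directly. All values below are hypothetical illustration numbers, not measurements from these plants.

```python
# Bookkeeping sketch of the long-term tendon-force reduction named in the
# abstract: elastic shortening, relaxation, creep and shrinkage losses
# are summed and subtracted from the jacking force. Numbers are invented.

def residual_tendon_force(p_jack, losses):
    """Initial jacking force minus the sum of the individual losses."""
    return p_jack - sum(losses.values())

losses_kn = {
    "elastic_shortening": 40.0,
    "relaxation": 55.0,
    "creep": 90.0,
    "shrinkage": 60.0,
}
p0 = 1500.0                                   # kN, assumed jacking force
p_res = residual_tendon_force(p0, losses_kn)
loss_ratio = (p0 - p_res) / p0
```

The design check the abstract describes amounts to verifying that the measured residual force stays at or above the value predicted by such a loss budget.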

  2. A rapid colorimetric method for predicting the storage stability of middle distillate fuels

    Energy Technology Data Exchange (ETDEWEB)

    Marshman, S.J. [Defense Research Agency, Surrey (United Kingdom)

    1995-05-01

    Present methods used to predict the storage stability of distillate fuels such as ASTM D2274, ASTM D4625, DEF STAN 05-50 Method 40 and in-house methods are very time-consuming, taking a minimum of 16 hours. In addition, some of these methods under- or over-predict the storage stability of the test fuel. A rapid colorimetric test for identifying cracked, straight run or hydrofined fuels was reported at the previous Conference. Further work has shown that while a visual appraisal is acceptable for refinery-fresh fuels, colour development may be masked by other coloured compounds in older fuels. Use of a spectrometric finish to the method has extended the scope of the method to include older fuels. The test can be correlated with total sediment from ASTM D4625 (13 weeks at 43°C) over a sediment range of 0-60 mg/L. A correlation of 0.94 was obtained for 40 fuels.

  3. Efficient operation scheduling for adsorption chillers using predictive optimization-based control methods

    Science.gov (United States)

    Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz

    2017-10-01

    Within this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are shown on a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on optimized scheduling of an ACM considering its internal functioning as well as forecasts for load and driving energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to the results of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving energy occurrence. The benefits of the latter approach are shown and future actions for application of these methods for system control are addressed.
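A toy flavour of the optimization-based scheduling idea is exhaustive search over on/off slot schedules that meet the forecast cooling load at minimum backup driving-heat cost. This stands in for, and is much cruder than, the paper's gradient-based optimization; all loads, capacities and costs below are invented.

```python
# Toy scheduling sketch: choose the chiller on/off pattern over a short
# forecast horizon that meets the cooling load using the least backup
# driving heat. All numbers are invented illustration values.
import itertools

load = [2.0, 5.0, 6.0, 3.0]       # kWh cooling demand per slot (forecast)
solar = [1.0, 6.0, 7.0, 2.0]      # kWh solar driving heat per slot (forecast)
CHILL_PER_SLOT = 6.0              # cooling delivered per ON slot (assumed)
DRIVE_PER_SLOT = 5.0              # driving heat consumed per ON slot (assumed)

def cost(on_off):
    """Backup driving heat needed; infeasible if total load is unmet."""
    cooling = sum(CHILL_PER_SLOT for on in on_off if on)
    if cooling < sum(load):
        return float("inf")
    return sum(max(DRIVE_PER_SLOT - s, 0.0)
               for on, s in zip(on_off, solar) if on)

best = min(itertools.product([0, 1], repeat=len(load)), key=cost)
```

The search correctly concentrates the ON slots where solar driving heat is abundant; a receding-horizon controller would re-solve this each slot with updated forecasts.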

  4. A method of quantitative prediction for sandstone type uranium deposit in Russia and its application

    International Nuclear Information System (INIS)

    Chang Shushuai; Jiang Minzhong; Li Xiaolu

    2008-01-01

    The paper presents the foundational principle of quantitative predication for sandstone type uranium deposits in Russia. Some key methods such as physical-mathematical model construction and deposits prediction are described. The method has been applied to deposits prediction in Dahongshan region of Chaoshui basin. It is concluded that the technique can fortify the method of quantitative predication for sandstone type uranium deposits, and it could be used as a new technique in China. (authors)

  5. Network-based ranking methods for prediction of novel disease associated microRNAs.

    Science.gov (United States)

    Le, Duc-Hau

    2015-10-01

    Many studies have shown roles of microRNAs in human disease, and a number of computational methods have been proposed to predict such associations by ranking candidate microRNAs according to their relevance to a disease. Among them, machine learning-based methods usually have a limitation in specifying non-disease microRNAs as negative training samples. Meanwhile, network-based methods are becoming dominant since they well exploit a "disease module" principle in microRNA functional similarity networks. Among these, the random walk with restart (RWR) algorithm-based method is currently the state of the art. The use of this algorithm was inspired by its success in predicting disease genes, because the "disease module" principle also exists in protein interaction networks. Besides, many algorithms designed for webpage ranking have been successfully applied to ranking disease candidate genes because web networks share topological properties with protein interaction networks. However, these algorithms have not yet been utilized for disease microRNA prediction. We constructed microRNA functional similarity networks based on shared targets of microRNAs, and then integrated them with a microRNA functional synergistic network, which was recently identified. After analyzing the topological properties of these networks, in addition to RWR, we assessed the performance of (i) PRINCE (PRIoritizatioN and Complex Elucidation), which was proposed for disease gene prediction; (ii) PageRank with Priors (PRP) and K-Step Markov (KSM), which were used for studying web networks; and (iii) a neighborhood-based algorithm. Analyses of topological properties showed that all microRNA functional similarity networks are small-world and scale-free. The performance of each algorithm was assessed based on average AUC values on 35 disease phenotypes and average rankings of newly discovered disease microRNAs. As a result, the performance on the integrated network was better than that on individual ones.
In
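The RWR algorithm highlighted above can be sketched on a toy similarity network: column-normalize the adjacency, then iterate p = (1-r) W p + r p0 from a seed distribution over known disease miRNAs. The network and restart probability below are invented.

```python
# Random walk with restart (RWR) on a toy weighted similarity network.
# p0 puts all mass on the seed nodes; iteration mixes network diffusion
# with restart probability r. The toy network is invented.

def rwr(adj, seeds, r=0.7, iters=200):
    nodes = sorted(adj)
    deg = {u: sum(adj[u].values()) for u in nodes}
    p0 = {u: (1.0 / len(seeds) if u in seeds else 0.0) for u in nodes}
    p = dict(p0)
    for _ in range(iters):
        new = {}
        for u in nodes:
            walk = sum(p[v] * adj[v][u] / deg[v] for v in nodes if u in adj[v])
            new[u] = (1 - r) * walk + r * p0[u]
        p = new
    return p

adj = {  # symmetric weighted toy miRNA similarity network
    "m1": {"m2": 1.0, "m3": 1.0},
    "m2": {"m1": 1.0, "m3": 1.0},
    "m3": {"m1": 1.0, "m2": 1.0, "m4": 1.0},
    "m4": {"m3": 1.0},
}
scores = rwr(adj, seeds={"m1"})
ranked = sorted((m for m in adj if m != "m1"), key=scores.get, reverse=True)
```

Non-seed nodes are ranked by their steady-state probability; in this toy network the well-connected neighbor m3 outranks m2, and the peripheral m4 ranks last.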

  6. Comparison of RF spectrum prediction methods for dynamic spectrum access

    Science.gov (United States)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
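The Markov side of the comparison can be illustrated with a bare two-state occupancy chain: estimate the transition probabilities from an observed busy/idle sequence and predict the next state as the more likely transition. The occupancy sequence below is simulated, not measured spectrum data.

```python
# Two-state Markov sketch of RF channel occupancy prediction: fit
# transition probabilities from a 0/1 (idle/busy) sequence, then predict
# the next state. The sequence is simulated illustration data.

def fit_transitions(seq):
    counts = {0: [0, 0], 1: [0, 0]}          # counts[state][next_state]
    for s, s_next in zip(seq, seq[1:]):
        counts[s][s_next] += 1
    return {s: [c / max(sum(cs), 1) for c in cs] for s, cs in counts.items()}

def predict_next(seq, trans):
    probs = trans[seq[-1]]
    return 0 if probs[0] >= probs[1] else 1

occupancy = [0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0]   # simulated busy/idle
trans = fit_transitions(occupancy)
nxt = predict_next(occupancy, trans)
```

An HMM generalizes this by making the occupancy state hidden behind noisy energy-detection observations; the MLP and RNN approaches instead learn the mapping from recent history to the next state directly.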

  7. Prediction of hyperbilirubinemia by noninvasive methods in full-term newborns

    Directory of Open Access Journals (Sweden)

    Danijela Furlan

    2013-02-01

    Full Text Available Introduction: The noninvasive screening methods for bilirubin determination were studied prospectively in a group of full-term healthy newborns with the aim of early prediction of pathological neonatal hyperbilirubinemia. Laboratory determination of bilirubin (Jendrassik-Grof, JG) was compared to noninvasive transcutaneous bilirubin (TcBIL) together with the determination of bilirubin in cord blood. Methods: The study group consisted of 284 full-term healthy consecutively born infants in the period from March to June 2011. The whole group was divided into a group of physiological (n=199) and a group of pathological hyperbilirubinemia (n=85) according to the level of total bilirubin (220 μmol/L). Bilirubin in cord blood (CbBIL) and from capillary blood at the age of three days was determined according to JG; on the 3rd day TcBIL was also measured with a Bilicheck bilirubinometer. The Kolmogorov-Smirnov and Mann-Whitney tests were used for the statistical analysis. Results: Bilirubin concentrations differed statistically significantly between the groups of newborns with physiological (n=199) and pathological (n=85) hyperbilirubinemia (CbBIL p<0.001, 3rd-day control sample p<0.001, TcBIL p<0.001). Using the cut-off value of cord blood bilirubin of 28 μmol/L, we could predict the development of pathological hyperbilirubinemia with 98.8% prognostic specificity, and with 100% sensitivity that newborns would not require phototherapy (all irradiated newborns were taken into account). We confirmed an excellent agreement between bilirubin concentrations determined by the TcBIL and JG methods for both groups of healthy full-term newborns. Conclusion: Based on our results, we recommend that determination of cord blood bilirubin in combination with measurement of TcBIL be implemented into practice for early prediction of pathological hyperbilirubinemia in full-term healthy newborns. The advantages of both methods in the routine
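The cut-off evaluation behind the reported sensitivity and specificity can be sketched directly. The 28 μmol/L threshold is from the abstract; the cases below are invented, so the resulting figures are illustrative only.

```python
# Sketch of a cut-off evaluation: classify newborns by cord-blood
# bilirubin against a 28 umol/L threshold and compute sensitivity and
# specificity against the eventual outcome. Cases are invented.

CUTOFF = 28.0   # umol/L, cord-blood bilirubin (value from the abstract)

# (cord bilirubin, developed pathological hyperbilirubinemia?)
cases = [(20.0, False), (25.0, False), (27.0, False),
         (30.0, True), (35.0, True), (29.0, False)]

tp = sum(1 for b, sick in cases if b > CUTOFF and sick)
fp = sum(1 for b, sick in cases if b > CUTOFF and not sick)
tn = sum(1 for b, sick in cases if b <= CUTOFF and not sick)
fn = sum(1 for b, sick in cases if b <= CUTOFF and sick)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

The abstract's claim of 100% sensitivity corresponds to fn = 0 at the chosen cut-off: no newborn who later needed phototherapy fell below the threshold.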

  8. Method of predicting surface deformation in the form of sinkholes

    Energy Technology Data Exchange (ETDEWEB)

    Chudek, M.; Arkuszewski, J.

    1980-06-01

    Proposes a method for predicting the probability of sinkhole-shaped subsidence, the number of funnel-shaped subsidences and the size of individual funnels. The following factors which influence the sudden subsidence of the surface in the form of funnels are analyzed: geologic structure of the strata between mining workings and the surface, mining depth, time factor, and geologic dislocations. Sudden surface subsidence is observed only in the case of workings situated up to a few dozen meters from the surface. Use of the proposed method is explained with several examples. It is suggested that the method produces correct results which can be used in coal mining and in ore mining. (1 ref.) (In Polish)

  9. MAPPIN: a method for annotating, predicting pathogenicity and mode of inheritance for nonsynonymous variants.

    Science.gov (United States)

    Gosalia, Nehal; Economides, Aris N; Dewey, Frederick E; Balasubramanian, Suganthi

    2017-10-13

    Nonsynonymous single nucleotide variants (nsSNVs) constitute about 50% of known disease-causing mutations and understanding their functional impact is an area of active research. Existing algorithms predict pathogenicity of nsSNVs; however, they are unable to differentiate heterozygous, dominant disease-causing variants from heterozygous carrier variants that lead to disease only in the homozygous state. Here, we present MAPPIN (Method for Annotating, Predicting Pathogenicity, and mode of Inheritance for Nonsynonymous variants), a prediction method which utilizes a random forest algorithm to distinguish between nsSNVs with dominant, recessive, and benign effects. We apply MAPPIN to a set of Mendelian disease-causing mutations and accurately predict pathogenicity for all mutations. Furthermore, MAPPIN predicts mode of inheritance correctly for 70.3% of nsSNVs. MAPPIN also correctly predicts pathogenicity for 87.3% of mutations from the Deciphering Developmental Disorders Study with a 78.5% accuracy for mode of inheritance. When tested on a larger collection of mutations from the Human Gene Mutation Database, MAPPIN is able to significantly discriminate between mutations in known dominant and recessive genes. Finally, we demonstrate that MAPPIN outperforms CADD and Eigen in predicting disease inheritance modes for all validation datasets. To our knowledge, MAPPIN is the first nsSNV pathogenicity prediction algorithm that provides mode of inheritance predictions, adding another layer of information for variant prioritization. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
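As a toy stand-in for the random-forest classifier, a majority vote of hand-written threshold rules over invented variant features illustrates the three-class (dominant/recessive/benign) prediction. A real forest learns many deeper trees from training data; none of these features or thresholds come from MAPPIN.

```python
# Toy illustration of ensemble voting over three classes. The rules and
# features are invented; MAPPIN itself uses a trained random forest.

def rule_conservation(v):
    return "benign" if v["conservation"] < 0.5 else "dominant"

def rule_allele_freq(v):
    if v["allele_freq"] > 0.01:
        return "benign"
    return "recessive" if v["hom_carriers"] else "dominant"

def rule_severity(v):
    return "dominant" if v["severity"] > 0.8 else "recessive"

def predict(variant, rules):
    votes = [r(variant) for r in rules]
    return max(set(votes), key=votes.count)    # majority vote

variant = {"conservation": 0.9, "allele_freq": 0.0001,
           "hom_carriers": False, "severity": 0.9}
label = predict(variant, [rule_conservation, rule_allele_freq, rule_severity])
```

The mode-of-inheritance output is what distinguishes this setup from binary pathogenicity predictors: the class set itself encodes dominant versus recessive effects.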

  10. HuMiTar: A sequence-based method for prediction of human microRNA targets

    Directory of Open Access Journals (Sweden)

    Chen Ke

    2008-12-01

    Full Text Available Abstract Background MicroRNAs (miRs) are small noncoding RNAs that bind to complementary/partially complementary sites in the 3' untranslated regions of target genes to regulate protein production of the target transcript and to induce mRNA degradation or mRNA cleavage. The ability to perform accurate, high-throughput identification of physiologically active miR targets would enable functional characterization of individual miRs. Current target prediction methods include traditional approaches that are based on specific base-pairing rules in the miR's seed region and implementation of cross-species conservation of the target site, and machine learning (ML) methods that explore patterns that contrast true and false miR-mRNA duplexes. However, in the case of the traditional methods, research shows that some seed region matches that are conserved are false positives and that some of the experimentally validated target sites are not conserved. Results We present HuMiTar, a computational method for identifying common targets of miRs, which is based on a scoring function that considers base-pairing for both seed and non-seed positions for human miR-mRNA duplexes. Our design shows that certain non-seed miR nucleotides, such as 14, 18, 13, 11, and 17, are characterized by a strong bias towards formation of Watson-Crick pairing. We contrasted HuMiTar with several representative competing methods on two sets of human miR targets and a set of ten glioblastoma oncogenes. Comparison with the two best performing traditional methods, PicTar and TargetScanS, and a representative ML method that considers the non-seed positions, NBmiRTar, shows that HuMiTar predictions include the majority of the predictions of the other three methods. At the same time, the proposed method is also capable of finding more true positive targets as a trade-off for an increased number of predictions. Genome-wide predictions show that the proposed method is characterized by 1.99 signal
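A scoring function of the general kind described, position-weighted Watson-Crick matching between a miR region and a candidate site, can be sketched as follows. The uniform weights and the site sequence are illustrative; HuMiTar's actual weights and scoring rules are not reproduced here.

```python
# Sketch of position-weighted Watson-Crick scoring between a miR seed
# and an aligned candidate mRNA site. Weights and the site are invented.

WC = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}

def seed_score(mir, site, weights):
    """Score aligned positions; count weight w where the pair is WC."""
    return sum(w for m, s, w in zip(mir, site, weights) if (m, s) in WC)

mir_seed = "GAGGUAG"          # let-7 family seed (positions 2-8)
site = "CUACCUC"[::-1]        # complementary site (5'->3'), reversed to align
w = [1.0] * 7                 # uniform weights for illustration
score = seed_score(mir_seed, site, w)
```

HuMiTar's contribution is precisely that the weights are not uniform and extend beyond the seed, reflecting the pairing bias at non-seed positions such as 11, 13, 14, 17 and 18.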

  11. Methods to improve genomic prediction and GWAS using combined Holstein populations

    DEFF Research Database (Denmark)

    Li, Xiujin

    The thesis focuses on methods to improve GWAS and genomic prediction using combined Holstein populations, and investigates G by E interaction. The conclusions are: 1) Prediction reliabilities for Brazilian Holsteins can be increased by adding Nordic and French genotyped bulls, and a large G by E interaction exists between populations. 2) Combining data from Chinese and Danish Holstein populations increases the power of GWAS and detects new QTL regions for milk fatty acid traits. 3) The novel multi-trait Bayesian model efficiently estimates region-specific genomic variances, covariances...

  12. Prediction method for flow boiling heat transfer in a herringbone microfin tube

    Energy Technology Data Exchange (ETDEWEB)

    Wellsandt, S; Vamling, L [Chalmers University of Technology, Gothenburg (Sweden). Department of Chemical Engineering and Environmental Science, Heat and Power Technology

    2005-09-01

    Based on experimental data for R134a, the present work deals with the development of a prediction method for heat transfer in herringbone microfin tubes. As is shown in earlier works, heat transfer coefficients for the investigated herringbone microfin tube tend to peak at lower vapour qualities than in helical microfin tubes. Correlations developed for other tube types fail to describe this behaviour. A hypothesis that the position of the peak is related to the point where the average film thickness becomes smaller than the fin height is tested and found to be consistent with observed behaviour. The proposed method accounts for this hypothesis and incorporates the well-known Steiner and Taborek correlation for the calculation of flow boiling heat transfer coefficients. The correlation is modified by introducing a surface enhancement factor and adjusting the two-phase multiplier. Experimental data for R134a are predicted with an average residual of 1.5% and a standard deviation of 21%. Tested against experimental data for mixtures R410A and R407C, the proposed method overpredicts experimental data by around 60%. An alternative adjustment of the two-phase multiplier, in order to better predict mixture data, is discussed. (author)

  13. SGC method for predicting the standard enthalpy of formation of pure compounds from their molecular structures

    International Nuclear Information System (INIS)

    Albahri, Tareq A.; Aljasmi, Abdulla F.

    2013-01-01

    Highlights: • ΔH°f is predicted from the molecular structure of the compounds alone. • ANN-SGC model predicts ΔH°f with a correlation coefficient of 0.99. • ANN-MNLR model predicts ΔH°f with a correlation coefficient of 0.90. • Better definition of the atom-type molecular groups is presented. • The method is better than others in terms of combined simplicity, accuracy and generality. - Abstract: A theoretical method for predicting the standard enthalpy of formation of pure compounds from various chemical families is presented. Back propagation artificial neural networks were used to investigate several structural group contribution (SGC) methods available in literature. The networks were used to probe the structural groups that have significant contribution to the overall enthalpy of formation property of pure compounds and arrive at the set of groups that can best represent the enthalpy of formation for about 584 substances. The 51 atom-type structural groups listed provide better definitions of group contributions than others in the literature. The proposed method can predict the standard enthalpy of formation of pure compounds with an AAD of 11.38 kJ/mol and a correlation coefficient of 0.9934 from only their molecular structure. The results are further compared with those of the traditional SGC method based on MNLR as well as other methods in the literature
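The additive sum at the heart of any SGC method is ΔH°f = Σ n_i ΔH_i over the molecule's structural groups; the neural network in the paper learns a nonlinear version of this mapping. A sketch of the plain additive form, with hypothetical group increments (not the paper's fitted values):

```python
# Structural group contribution (SGC) sketch: the standard enthalpy of
# formation is a sum of group increments over the molecule. The
# increments below are HYPOTHETICAL, not the paper's fitted values.

GROUP_DH = {          # kJ/mol per group occurrence (assumed)
    "CH3": -42.0,
    "CH2": -21.0,
    "OH": -170.0,
}

def enthalpy_of_formation(groups):
    """Sum n_i * dH_i over the molecular groups."""
    return sum(n * GROUP_DH[g] for g, n in groups.items())

# e.g. 1-propanol ~ CH3-CH2-CH2-OH -> 1x CH3, 2x CH2, 1x OH
dhf = enthalpy_of_formation({"CH3": 1, "CH2": 2, "OH": 1})
```

The atom-type group definitions in the paper determine how a molecule is decomposed into the keys of such a table.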

  14. Prediction Method for the Complete Characteristic Curves of a Francis Pump-Turbine

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2018-02-01

    Full Text Available Complete characteristic curves of a pump-turbine are essential for simulating the hydraulic transients and designing pumped storage power plants but are often unavailable in the preliminary design stage. To solve this issue, a prediction method for the complete characteristics of a Francis pump-turbine was proposed. First, based on Euler equations and the velocity triangles at the runners, a mathematical model describing the complete characteristics of a Francis pump-turbine was derived. According to multiple sets of measured complete characteristic curves, explicit expressions for the characteristic parameters of characteristic operating point sets (COPs, as functions of a specific speed and guide vane opening, were then developed to determine the undetermined coefficients in the mathematical model. Ultimately, by combining the mathematical model with the regression analysis of COPs, the complete characteristic curves for an arbitrary specific speed were predicted. Moreover, a case study shows that the predicted characteristic curves are in good agreement with the measured data. The results obtained by 1D numerical simulation of the hydraulic transient process using the predicted characteristics deviate little from those obtained with the measured characteristics. This method is effective and sufficient for a priori simulations before obtaining the measured characteristics and provides important support for the preliminary design of pumped storage power plants.
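The Euler turbomachine relation that such a derivation starts from follows directly from the runner velocity triangles: H = (u1 cu1 - u2 cu2) / g, with peripheral speeds u and swirl components cu at inlet and outlet. A sketch with illustrative velocities:

```python
# Euler turbomachine relation from the runner velocity triangles:
# theoretical head H = (u1*cu1 - u2*cu2) / g. Velocities are invented
# illustration values, not data from the paper.

G = 9.81  # m/s^2, gravitational acceleration

def euler_head(u1, cu1, u2, cu2):
    """Theoretical head from inlet/outlet peripheral and swirl velocities."""
    return (u1 * cu1 - u2 * cu2) / G

h = euler_head(u1=35.0, cu1=28.0, u2=18.0, cu2=0.0)   # swirl-free exit
```

Sweeping the guide vane opening changes cu1, which is how a family of characteristic curves emerges from this single relation.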

  15. Alternative Testing Methods for Predicting Health Risk from Environmental Exposures

    Directory of Open Access Journals (Sweden)

    Annamaria Colacci

    2014-08-01

    Full Text Available Alternative methods to animal testing are considered as promising tools to support the prediction of toxicological risks from environmental exposure. Among the alternative testing methods, the cell transformation assay (CTA) appears to be one of the most appropriate approaches to predict the carcinogenic properties of single chemicals, complex mixtures and environmental pollutants. The BALB/c 3T3 CTA shows a good degree of concordance with the in vivo rodent carcinogenesis tests. Whole-genome transcriptomic profiling is performed to identify genes that are transcriptionally regulated by different kinds of exposures. Its use in cell models representative of target organs may help in understanding the mode of action and predicting the risk for human health. Aiming to associate environmental exposure with adverse health outcomes, we used an integrated approach including the 3T3 CTA and transcriptomics on target cells, in order to evaluate the effects of airborne particulate matter (PM) on toxicological complex endpoints. Organic extracts obtained from PM2.5 and PM1 samples were evaluated in the 3T3 CTA in order to identify effects possibly associated with different aerodynamic diameters or airborne chemical components. The effects of the PM2.5 extracts on human health were assessed by using whole-genome 44 K oligo-microarray slides. Statistical analysis by GeneSpring GX identified genes whose expression was modulated in response to the cell treatment. Then, modulated genes were associated with pathways, biological processes and diseases through an extensive biological analysis. Data derived from in vitro methods and omics techniques could be valuable for monitoring the exposure to toxicants, understanding the modes of action via exposure-associated gene expression patterns and highlighting the role of genes in key events related to adversity.

  16. Machine learning and statistical methods for the prediction of maximal oxygen uptake: recent advances

    Directory of Open Access Journals (Sweden)

    Abut F

    2015-08-01

    Full Text Available Fatih Abut, Mehmet Fatih Akay, Department of Computer Engineering, Çukurova University, Adana, Turkey. Abstract: Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume per minute in a state of intense exercise. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating the disease risk of a person. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite a high level of accuracy, practical limitations associated with the direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment or trained staff, have led to the development of various regression models for predicting VO2max. Consequently, many studies have been conducted in recent years to predict the VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers and cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview of the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of various VO2max prediction models reported in the related literature in terms of two well-known metrics, namely, the multiple correlation coefficient (R) and the standard error of estimate. The survey results reveal that, with respect to the regression methods used to develop prediction models, support vector machine in general shows better performance than other methods, whereas multiple linear regression exhibits the worst performance.
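The two survey metrics can be computed for any set of predictions: R from the explained fraction of variance, and the standard error of estimate (SEE) from the residual sum of squares. A sketch with invented VO2max predictions and observations:

```python
# The two comparison metrics used in the survey: multiple correlation
# coefficient (R) and standard error of estimate (SEE), computed for a
# toy set of VO2max predictions (mL/kg/min). Data are invented.
import math

def r_and_see(pred, obs, n_predictors=1):
    n = len(obs)
    mo = sum(obs) / n
    ss_res = sum((p - o) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mo) ** 2 for o in obs)
    r = math.sqrt(max(0.0, 1.0 - ss_res / ss_tot))
    see = math.sqrt(ss_res / (n - n_predictors - 1))
    return r, see

pred = [42.0, 48.0, 51.0, 39.0, 55.0]
obs = [41.0, 50.0, 50.0, 40.0, 54.0]
r, see = r_and_see(pred, obs)
```

Higher R and lower SEE together indicate a better prediction model, which is the basis of the cross-study comparison the survey performs.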

  17. Multiplier method may be unreliable to predict the timing of temporary hemiepiphysiodesis for coronal angular deformity.

    Science.gov (United States)

    Wu, Zhenkai; Ding, Jing; Zhao, Dahang; Zhao, Li; Li, Hai; Liu, Jianlin

    2017-07-10

    The multiplier method was introduced by Paley to calculate the timing for temporary hemiepiphysiodesis. However, this method has not been verified in terms of clinical outcome measures. We aimed to (1) predict the rate of angular correction per year (ACPY) at the various corresponding ages by means of the multiplier method and verify its reliability based on data from published studies, and (2) screen out risk factors for deviation of prediction. A comprehensive search was performed in the following electronic databases: Cochrane, PubMed, and EMBASE™. A total of 22 studies met the inclusion criteria. If the actual value of ACPY from the collected data fell outside the range of the value predicted by the multiplier method, it was considered a deviation of prediction (DOP). The associations of patient characteristics with DOP were assessed with the use of univariate logistic regression. Only one article was evaluated as moderate evidence; the remaining articles were evaluated as poor quality. The rate of DOP was 31.82%. In the detailed individual data of the included studies, the rate of DOP was 55.44%. The multiplier method is not reliable in predicting the timing for temporary hemiepiphysiodesis, even though it tends to be more reliable for younger patients with idiopathic genu coronal deformity.
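    The DOP criterion used above is a simple interval test. A minimal sketch with entirely hypothetical ACPY values and predicted ranges (the function names and numbers are illustrative only):

    ```python
    def deviates(actual_acpy, predicted_range):
        """Deviation of prediction (DOP): the observed rate of angular
        correction per year falls outside the multiplier-based range."""
        low, high = predicted_range
        return not (low <= actual_acpy <= high)

    def dop_rate(cases):
        """Fraction of cases flagged as DOP; each case is (actual, (low, high))."""
        flags = [deviates(a, rng) for a, rng in cases]
        return sum(flags) / len(flags)

    # Hypothetical cases: (observed ACPY in deg/yr, predicted range)
    cases = [(4.2, (3.0, 5.0)), (7.8, (3.5, 5.5)),
             (2.1, (2.0, 4.0)), (1.0, (2.5, 4.5))]
    rate = dop_rate(cases)
    ```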

  18. PREDICTION OF DROUGHT IMPACT ON RICE PADDIES IN WEST JAVA USING ANALOGUE DOWNSCALING METHOD

    Directory of Open Access Journals (Sweden)

    Elza Surmaini

    2015-09-01

    Full Text Available Indonesia consistently experiences dry climatic conditions and droughts during El Niño, with significant consequences for rice production. To mitigate the impacts of such droughts, a robust, simple and timely rainfall forecast is critically important for predicting drought prior to planting time over rice growing areas in Indonesia. The main objective of this study was to predict drought in rice growing areas using ensemble seasonal prediction. The skill of the National Oceanic and Atmospheric Administration's (NOAA's) seasonal prediction model, Climate Forecast System version 2 (CFSv2), for predicting rice drought in West Java was investigated in a series of hindcast experiments for 1989-2010. The Constructed Analogue (CA) method was employed to produce downscaled local rainfall predictions, with the stream function (ψ) and velocity potential (χ) at 850 hPa as predictors and observed rainfall as the predictand. We used forty-two rain gauges in the northern part of West Java in Indramayu, Cirebon, Sumedang and Majalengka Districts. To quantify the uncertainties, a multi-window scheme for the predictors was applied to obtain an ensemble rainfall prediction. Drought events in the dry planting season were predicted by rainfall thresholds. The skill of the downscaled rainfall prediction was assessed using the Relative Operating Characteristics (ROC) method. Results of the study showed that the skills of the probabilistic seasonal prediction for early detection of rice area drought ranged from 62% to 82% with an improved lead time of 2-4 months. A lead time of 2-4 months provides sufficient time for policy makers, extension workers and farmers to cope with drought by preparing suitable farming practices and equipment.
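    The core idea of analogue downscaling can be sketched in a simplified nearest-analogue form: rank historical large-scale predictor fields by their distance to the forecast field and average the local rainfall observed on the best-matching days. This is a didactic simplification of the constructed-analogue method, and all field values and rainfall amounts below are hypothetical.

    ```python
    def analogue_rainfall(forecast_field, history, k=3):
        """Simplified analogue downscaling: rank historical predictor fields
        by Euclidean distance to the forecast field and average the observed
        local rainfall of the k best analogues.  `history` is a list of
        (predictor_field, observed_rainfall_mm) pairs."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        ranked = sorted(history, key=lambda rec: dist(rec[0], forecast_field))
        best = ranked[:k]
        return sum(rain for _, rain in best) / k

    # Toy predictor fields (e.g. gridded stream-function anomalies) and rainfall
    history = [
        ([0.1, 0.2, -0.1], 120.0),
        ([0.9, 1.1,  0.8],  30.0),
        ([0.2, 0.1,  0.0], 110.0),
        ([1.0, 0.9,  1.0],  25.0),
    ]
    pred = analogue_rainfall([0.15, 0.15, -0.05], history, k=2)
    ```

    A drought flag then follows by comparing the downscaled rainfall against a threshold, as in the study's rainfall-threshold definition of drought events.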

  19. Method for simulating predictive control of building systems operation in the early stages of building design

    DEFF Research Database (Denmark)

    Petersen, Steffen; Svendsen, Svend

    2011-01-01

    A method for simulating predictive control of building systems operation in the early stages of building design is presented. The method uses building simulation based on weather forecasts to predict whether there is a future heating or cooling requirement. This information enables the thermal...... control systems of the building to respond proactively to keep the operational temperature within the thermal comfort range with the minimum use of energy. The method is implemented in an existing building simulation tool designed to inform decisions in the early stages of building design through...... parametric analysis. This enables building designers to predict the performance of the method and include it as a part of the solution space. The method furthermore facilitates the task of configuring appropriate building systems control schemes in the tool, and it eliminates time consuming manual...

  20. A Novel Method to Predict Genomic Islands Based on Mean Shift Clustering Algorithm.

    Directory of Open Access Journals (Sweden)

    Daniel M de Brito

    Full Text Available Genomic Islands (GIs) are regions of bacterial genomes that are acquired from other organisms by the phenomenon of horizontal transfer. These regions are often responsible for many important acquired adaptations of the bacteria, with great impact on their evolution and behavior. These adaptations are commonly associated with pathogenicity, antibiotic resistance, degradation and metabolism, so identification of such regions is of medical and industrial interest. For this reason, different approaches for genomic island prediction have been proposed. However, none of them is capable of predicting precisely the complete repertoire of GIs in a genome. The difficulties arise due to the changes in performance of different algorithms in the face of the variety of nucleotide distributions in different species. In this paper, we present a novel method to predict GIs that is built upon the mean shift clustering algorithm. It does not require any information regarding the number of clusters, and the bandwidth parameter is automatically calculated based on a heuristic approach. The method was implemented in a new user-friendly tool named MSGIP (Mean Shift Genomic Island Predictor). Genomes of bacteria with GIs discussed in other papers were used to evaluate the proposed method. The application of this tool revealed the same GIs predicted by other methods and also novel, previously unpredicted islands. A detailed investigation of the different features related to typical GI elements inserted in these new regions confirmed its effectiveness. Stand-alone and user-friendly versions of this new methodology are available at http://msgip.integrativebioinformatics.me.
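    Mean shift itself is simple to illustrate in one dimension: each point is moved to the mean of its neighbours within a bandwidth until convergence, and the surviving modes are the cluster centres. The sketch below uses a flat kernel and toy data; it is not MSGIP's implementation (which also estimates the bandwidth heuristically), just the underlying algorithm.

    ```python
    def mean_shift_1d(points, bandwidth, iters=100):
        """Flat-kernel mean shift: move each point to the mean of its
        neighbours within `bandwidth` until convergence, then merge modes."""
        modes = list(points)
        for _ in range(iters):
            new = []
            for m in modes:
                window = [p for p in points if abs(p - m) <= bandwidth]
                new.append(sum(window) / len(window))
            if new == modes:
                break
            modes = new
        centers = []
        for m in sorted(modes):
            if not centers or abs(m - centers[-1]) > bandwidth / 2:
                centers.append(m)
        return centers

    # Toy nucleotide-composition-like signal along a genome: two clusters
    signal = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
    centers = mean_shift_1d(signal, bandwidth=1.0)
    ```

    No cluster count is supplied anywhere, which is the property the abstract highlights: the number of modes emerges from the data and the bandwidth.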

  1. A postprocessing method in the HMC framework for predicting gene function based on biological instrumental data

    Science.gov (United States)

    Feng, Shou; Fu, Ping; Zheng, Wenbin

    2018-03-01

    Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When using local approach methods to solve this problem, a preliminary results processing method is usually needed. This paper proposed a novel preliminary results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. This method exploits the label dependency and considers the hierarchical interaction between nodes when making decisions based on the Bayesian network in its first phase. In the second phase, this method further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances the HMC performance for solving the gene function prediction problem based on the Gene Ontology (GO), the hierarchy of which is a directed acyclic graph that is more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated by the GO.
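    The hierarchy constraint mentioned above (a GO term's prediction score should not exceed that of its ancestors, by the true-path rule) can be enforced with a simple top-down pass. This sketch is not the paper's Bayesian-network phase, only the consistency adjustment, on a toy DAG with hypothetical scores:

    ```python
    def enforce_hierarchy(scores, parents):
        """Cap each node's score by the minimum of its parents' (already
        capped) scores so predictions respect the hierarchy constraint.
        `parents` maps node -> list of parent nodes in the GO-like DAG."""
        capped = {}
        def cap(node):
            if node in capped:
                return capped[node]
            s = scores[node]
            for p in parents.get(node, []):
                s = min(s, cap(p))
            capped[node] = s
            return s
        for node in scores:
            cap(node)
        return capped

    # Toy DAG: root -> a, root -> b, and c has two parents (a and b)
    scores = {"root": 0.9, "a": 0.8, "b": 0.4, "c": 0.7}
    parents = {"a": ["root"], "b": ["root"], "c": ["a", "b"]}
    consistent = enforce_hierarchy(scores, parents)
    ```

    Handling multiple parents correctly is exactly what makes the GO's directed acyclic graph harder than a tree: node "c" must be capped by its weakest parent.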

  2. Prediction of Chloride Diffusion in Concrete Structure Using Meshless Methods

    Directory of Open Access Journals (Sweden)

    Ling Yao

    2016-01-01

    Full Text Available Degradation of RC structures due to chloride penetration followed by reinforcement corrosion is a serious problem in civil engineering. The numerical simulation methods at present mainly involve finite element methods (FEM), which are based on mesh generation. In this study, element-free Galerkin (EFG) and meshless weighted least squares (MWLS) methods are used to simulate chloride diffusion in concrete. The range of a scaling parameter is presented using numerical examples based on meshless methods. One- and two-dimensional numerical examples validated the effectiveness and accuracy of the two meshless methods by comparing results obtained by MWLS with results computed by EFG and FEM and results calculated by an analytical method. A good agreement is obtained among the MWLS and EFG numerical simulations and the experimental data obtained from an existing marine concrete structure. These results indicate that MWLS and EFG are reliable meshless methods that can be used for the prediction of chloride ingress in concrete structures.
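    The analytical benchmark referred to above is the classical closed-form solution of one-dimensional Fick's second law with constant surface concentration, C(x,t) = Cs·erfc(x / (2√(Dt))), which numerical schemes like EFG and MWLS are typically validated against. A sketch with hypothetical marine-exposure parameter values:

    ```python
    import math

    def chloride_profile(x_m, t_s, Cs, D):
        """Analytical solution of 1-D Fick's second law with constant surface
        concentration Cs and diffusion coefficient D (m^2/s)."""
        return Cs * math.erfc(x_m / (2.0 * math.sqrt(D * t_s)))

    # Hypothetical values: D ~ 1e-12 m^2/s, 10 years exposure, Cs = 0.5 % wt
    D = 1e-12
    t = 10 * 365.25 * 24 * 3600.0
    surface = chloride_profile(0.0, t, Cs=0.5, D=D)    # at the surface
    at_cover = chloride_profile(0.05, t, Cs=0.5, D=D)  # at 50 mm cover depth
    ```

    Comparing the chloride concentration at the rebar cover depth against a critical threshold is what turns such a profile into a corrosion-initiation prediction.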

  3. Combining sequence-based prediction methods and circular dichroism and infrared spectroscopic data to improve protein secondary structure determinations

    Directory of Open Access Journals (Sweden)

    Lees Jonathan G

    2008-01-01

    Full Text Available Abstract Background A number of sequence-based methods exist for protein secondary structure prediction. Protein secondary structures can also be determined experimentally from circular dichroism and infrared spectroscopic data using empirical analysis methods. It has been proposed that comparable accuracy can be obtained from sequence-based predictions as from these biophysical measurements. Here we have compared the secondary structure determination accuracies of sequence prediction methods against the empirically determined values from the spectroscopic data, on datasets of proteins for which both crystal structures and spectroscopic data are available. Results In this study we show that the sequence prediction methods have accuracies nearly comparable to those of spectroscopic methods. However, we also demonstrate that combining the spectroscopic and sequence techniques produces significant overall improvements in secondary structure determinations. In addition, combining the extra information content available from synchrotron radiation circular dichroism data with sequence methods also shows improvements. Conclusion Combining sequence prediction with experimentally determined spectroscopic methods for protein secondary structure content significantly enhances the accuracy of the overall results obtained.

  4. In-depth performance evaluation of PFP and ESG sequence-based function prediction methods in CAFA 2011 experiment

    Directory of Open Access Journals (Sweden)

    Chitale Meghana

    2013-02-01

    Full Text Available Abstract Background Many Automatic Function Prediction (AFP) methods were developed to cope with the rapid growth in the number of gene sequences that are available from high-throughput sequencing experiments. To support the development of AFP methods, it is essential to have community-wide experiments for evaluating the performance of existing AFP methods. Critical Assessment of Function Annotation (CAFA) is one such community experiment. The meeting of CAFA was held as a Special Interest Group (SIG) meeting at the Intelligent Systems in Molecular Biology (ISMB) conference in 2011. Here, we perform a detailed analysis of two sequence-based function prediction methods, PFP and ESG, which were developed in our lab, using the predictions submitted to CAFA. Results We evaluate PFP and ESG using four different measures in comparison with BLAST, Prior, and GOtcha. In addition to the predictions submitted to CAFA, we further investigate the performance of a different scoring function to rank-order predictions by PFP, as well as PFP/ESG predictions enriched with Priors that simply add frequently occurring Gene Ontology terms as a part of predictions. Prediction accuracies of each method were also evaluated separately for different functional categories. Successful and unsuccessful predictions by PFP and ESG are also discussed in comparison with BLAST. Conclusion The in-depth analysis discussed here will complement the overall assessment by the CAFA organizers. Since PFP and ESG are based on sequence database search results, our analyses are not only useful for PFP and ESG users but will also shed light on the relationship of the sequence similarity space and the functions that can be inferred from the sequences.

  5. MU-LOC: A Machine-Learning Method for Predicting Mitochondrially Localized Proteins in Plants

    DEFF Research Database (Denmark)

    Zhang, Ning; Rao, R Shyama Prasad; Salvato, Fernanda

    2018-01-01

    -sequence or a multitude of internal signals. Compared with experimental approaches, computational predictions provide an efficient way to infer subcellular localization of a protein. However, it is still challenging to predict plant mitochondrially localized proteins accurately due to various limitations. Consequently......, the performance of current tools can be improved with new data and new machine-learning methods. We present MU-LOC, a novel computational approach for large-scale prediction of plant mitochondrial proteins. We collected a comprehensive dataset of plant subcellular localization, extracted features including amino...

  6. Advanced Materials Test Methods for Improved Life Prediction of Turbine Engine Components

    National Research Council Canada - National Science Library

    Stubbs, Jack

    2000-01-01

    Phase I final report developed under SBIR contract for Topic # AF00-149, "Durability of Turbine Engine Materials/Advanced Material Test Methods for Improved Life Prediction of Turbine Engine Components...

  7. An Improved Method of Predicting Extinction Coefficients for the Determination of Protein Concentration.

    Science.gov (United States)

    Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W

    2017-01-01

    Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine, in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically the extinction coefficient of a protein is determined by measuring the absorbance of a solution and then experimentally determining its concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins, for a wide range of proteins, with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. We propose the use of a
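    The Beer-Lambert workflow described above can be sketched numerically. The per-chromophore coefficients below (Trp 5500, Tyr 1490, cystine 125 M⁻¹cm⁻¹ at 280 nm) are commonly tabulated literature values, not taken from this study, and the protein composition is hypothetical; the 1.05 factor is the study's native-state correction.

    ```python
    def extinction_280nm(n_trp, n_tyr, n_cystine):
        """Calculated molar extinction coefficient at 280 nm (M^-1 cm^-1)
        from chromophore counts, using commonly tabulated residue values."""
        return 5500 * n_trp + 1490 * n_tyr + 125 * n_cystine

    def concentration(absorbance, eps, path_cm=1.0):
        """Beer-Lambert law: c = A / (eps * l), in mol/L."""
        return absorbance / (eps * path_cm)

    # Hypothetical protein: 4 Trp, 10 Tyr, 2 disulfide bonds
    eps_unfolded = extinction_280nm(4, 10, 2)  # calculated (unfolded)
    eps_native = 1.05 * eps_unfolded           # predicted native coefficient
    c = concentration(0.80, eps_native)        # mol/L for A280 = 0.80, 1 cm cell
    ```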

  8. Prediction of human core body temperature using non-invasive measurement methods.

    Science.gov (United States)

    Niedermann, Reto; Wyss, Eva; Annaheim, Simon; Psikuta, Agnes; Davey, Sarah; Rossi, René Michel

    2014-01-01

    The measurement of core body temperature is an efficient method for monitoring heat stress amongst workers in hot conditions. However, invasive measurement of core body temperature (e.g. rectal, intestinal, oesophageal temperature) is impractical for such applications. Therefore, the aim of this study was to define relevant non-invasive measures to predict core body temperature under various conditions. We conducted two human subject studies with different experimental protocols, different environmental temperatures (10 °C, 30 °C) and different subjects. In both studies the same non-invasive measurement methods (skin temperature, skin heat flux, heart rate) were applied. A principal component analysis was conducted to extract independent factors, which were then used in a linear regression model. We identified six parameters (three skin temperatures, two skin heat fluxes and heart rate), which were included in the calculation of two factors. The predictive value of these factors for core body temperature was evaluated by a multiple regression analysis. The calculated root mean square deviation (rmsd) was in the range from 0.28 °C to 0.34 °C for all environmental conditions. These errors are similar to previous models using non-invasive measures to predict core body temperature. The results from this study illustrate that multiple physiological parameters (e.g. skin temperature and skin heat fluxes) are needed to predict core body temperature. In addition, the physiological measurements chosen in this study and the algorithm defined in this work are potentially applicable for real-time core body temperature monitoring to assess health risk in a broad range of working conditions.
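    The regression-plus-rmsd pipeline at the heart of such a model can be sketched with one factor and hypothetical data (real models would regress on the two PCA factors; the numbers below are illustrative only):

    ```python
    import math

    def fit_line(x, y):
        """Ordinary least squares for y = a + b*x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
        a = my - b * mx
        return a, b

    def rmsd(y_true, y_pred):
        """Root mean square deviation between observed and predicted values."""
        return math.sqrt(sum((t - p) ** 2
                             for t, p in zip(y_true, y_pred)) / len(y_true))

    # Hypothetical factor scores (from PCA of skin temperatures, heat fluxes
    # and heart rate) vs. measured core temperature in deg C
    factor = [-1.2, -0.5, 0.0, 0.6, 1.4]
    core   = [36.6, 36.9, 37.1, 37.3, 37.7]
    a, b = fit_line(factor, core)
    pred = [a + b * f for f in factor]
    err = rmsd(core, pred)
    ```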

  9. Study on model current predictive control method of PV grid- connected inverters systems with voltage sag

    Science.gov (United States)

    Jin, N.; Yang, F.; Shang, S. Y.; Tao, T.; Liu, J. S.

    2016-08-01

    To address the limitations of the low voltage ride through (LVRT) technology of traditional photovoltaic inverters, this paper proposes an LVRT control method based on model current predictive control (MCPC). This method can effectively improve the output characteristics and response speed of the photovoltaic inverter. In the MCPC method designed for the photovoltaic grid-connected inverter, the sum of the absolute values of the errors between the predicted and the given currents is adopted as the cost function of the model predictive control. According to the MCPC, the optimal space voltage vector is selected. The photovoltaic inverter automatically switches between two control modes, prioritizing active or reactive power, according to the different operating states, which effectively improves the inverter's LVRT capability. The simulation and experimental results prove that the proposed method is correct and effective.
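    The vector-selection step of finite-control-set model predictive control can be sketched as follows. The current model is a deliberately simplified L-filter discretization, and all parameter values and the normalized vector set are hypothetical, not the paper's plant model:

    ```python
    def select_vector(i_now, i_ref, e_grid, vectors, Ts_over_L):
        """Finite-control-set MPC step: predict i(k+1) for each candidate
        voltage vector with a simplified L-filter model
        i(k+1) = i(k) + (Ts/L)*(v - e), then minimise the sum of the
        absolute alpha/beta current errors (the cost function)."""
        best, best_cost = None, float("inf")
        for v in vectors:
            i_pred = (i_now[0] + Ts_over_L * (v[0] - e_grid[0]),
                      i_now[1] + Ts_over_L * (v[1] - e_grid[1]))
            cost = abs(i_ref[0] - i_pred[0]) + abs(i_ref[1] - i_pred[1])
            if cost < best_cost:
                best, best_cost = v, cost
        return best, best_cost

    # Candidate alpha-beta voltage vectors of a two-level inverter (normalised)
    vectors = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866), (-0.5, 0.866),
               (-1.0, 0.0), (-0.5, -0.866), (0.5, -0.866)]
    v_opt, cost = select_vector(i_now=(0.0, 0.0), i_ref=(1.0, 0.0),
                                e_grid=(0.0, 0.0), vectors=vectors,
                                Ts_over_L=1.0)
    ```

    The controller applies the winning vector for one sampling period and repeats the search at the next step, which is what gives MCPC its fast transient response.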

  10. Nonparametric method for genomics-based prediction of performance of quantitative traits involving epistasis in plant breeding.

    Directory of Open Access Journals (Sweden)

    Xiaochun Sun

    Full Text Available Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from the reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression.

  11. Nonparametric method for genomics-based prediction of performance of quantitative traits involving epistasis in plant breeding.

    Science.gov (United States)

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression.
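    The RKHS-regression ingredient shared by these methods can be illustrated with plain kernel ridge regression: solve (K + λI)α = y for a Gaussian kernel matrix K, so the fitted function lives in the RKHS and can capture non-additive (epistasis-like) patterns without an explicit interaction model. A stdlib-only sketch on toy "marker" data (not pRKHS itself, which adds SPCA filtering and a smoothing spline ANOVA model):

    ```python
    import math

    def rbf(u, v, gamma=1.0):
        """Gaussian (RBF) kernel between two marker vectors."""
        return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

    def solve(A, b):
        """Gaussian elimination with partial pivoting for a small dense system."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            piv = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[piv] = M[piv], M[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][k] * x[k]
                                  for k in range(r + 1, n))) / M[r][r]
        return x

    def kernel_ridge_fit(X, y, lam=1e-3, gamma=1.0):
        """Solve (K + lam*I) alpha = y."""
        n = len(X)
        K = [[rbf(X[i], X[j], gamma) + (lam if i == j else 0.0)
              for j in range(n)] for i in range(n)]
        return solve(K, y)

    def kernel_ridge_predict(X, alpha, x_new, gamma=1.0):
        return sum(a * rbf(xi, x_new, gamma) for a, xi in zip(alpha, X))

    # Toy marker vectors and phenotypes with a non-additive jump at [1, 1],
    # mimicking an epistatic effect that a linear model cannot represent
    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    y = [1.0, 2.0, 2.0, 5.0]
    alpha = kernel_ridge_fit(X, y, lam=1e-6, gamma=2.0)
    fit_11 = kernel_ridge_predict(X, alpha, [1, 1], gamma=2.0)
    ```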

  12. New methods for fall risk prediction.

    Science.gov (United States)

    Ejupi, Andreas; Lord, Stephen R; Delbaere, Kim

    2014-09-01

    Accidental falls are the leading cause of injury-related death and hospitalization in old age, with over one-third of older adults experiencing at least one fall each year. Because of limited healthcare resources, regular objective fall risk assessments are not possible in the community on a large scale. New methods for fall prediction are necessary to identify and monitor those older people at high risk of falling who would benefit from participating in falls prevention programmes. Technological advances have enabled less expensive ways to quantify physical fall risk in clinical practice and in the homes of older people. Recently, several studies have demonstrated that sensor-based fall risk assessments of postural sway, functional mobility, stepping and walking can discriminate between fallers and nonfallers. Recent research has used low-cost, portable and objective measuring instruments to assess fall risk in older people. Future use of these technologies holds promise for assessing fall risk accurately in an unobtrusive manner in clinical and daily life settings.

  13. Evaluation of two methods of predicting MLC leaf positions using EPID measurements

    International Nuclear Information System (INIS)

    Parent, Laure; Seco, Joao; Evans, Phil M.; Dance, David R.; Fielding, Andrew

    2006-01-01

    In intensity modulated radiation treatments (IMRT), the position of the field edges and the modulation within the beam are often achieved with a multileaf collimator (MLC). During the MLC calibration process, due to the finite accuracy of leaf position measurements, a systematic error may be introduced to leaf positions. Thereafter leaf positions of the MLC depend on the systematic error introduced on each leaf during MLC calibration and on the accuracy of the leaf position control system (random errors). This study presents and evaluates two methods to predict the systematic errors on the leaf positions introduced during the MLC calibration. The two presented methods are based on a series of electronic portal imaging device (EPID) measurements. A comparison with film measurements showed that the EPID could be used to measure leaf positions without introducing any bias. The first method, referred to as the 'central leaf method', is based on the method currently used at this center for MLC leaf calibration. It mimics the manner in which leaf calibration parameters are specified in the MLC control system and consequently is also used by other centers. The second method, a new method proposed by the authors and referred to as the 'individual leaf method', involves the measurement of two positions for each leaf (-5 and +15 cm) and the interpolation and extrapolation from these two points to any other given position. The central leaf method and the individual leaf method predicted leaf positions at prescribed positions of -11, 0, 5, and 10 cm within 2.3 and 1.0 mm, respectively, with a standard deviation (SD) of 0.3 and 0.2 mm, respectively. The individual leaf method provided a better prediction of the leaf positions than the central leaf method. Reproducibility tests for leaf positions of -5 and +15 cm were performed. The reproducibility was within 0.4 mm on the same day and 0.4 mm six weeks later (1 SD).
Measurements at gantry angles of 0 deg., 90 deg., and 270 deg
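    The individual leaf method's two-point interpolation/extrapolation is a straight-line fit per leaf. A sketch with hypothetical measured positions (an assumed offset and gain error for one leaf; positions in cm):

    ```python
    def leaf_calibration(measured_at_minus5, measured_at_plus15):
        """'Individual leaf method' sketch: from the measured positions of
        one leaf at prescribed -5 cm and +15 cm, fit a line mapping
        prescribed to actual position and predict any other setting."""
        x1, y1 = -5.0, measured_at_minus5
        x2, y2 = 15.0, measured_at_plus15
        slope = (y2 - y1) / (x2 - x1)
        def predict(prescribed_cm):
            return y1 + slope * (prescribed_cm - x1)
        return predict

    # Hypothetical leaf with a small offset and gain error
    predict = leaf_calibration(-4.945, 15.155)
    pos_at_0 = predict(0.0)    # interpolated
    pos_at_10 = predict(10.0)  # interpolated; values beyond +15 extrapolate
    ```

    Because each leaf gets its own line, a per-leaf systematic calibration error is captured directly rather than inferred from a single central leaf.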

  14. Application of statistical classification methods for predicting the acceptability of well-water quality

    Science.gov (United States)

    Cameron, Enrico; Pilla, Giorgio; Stella, Fabio A.

    2018-01-01

    The application of statistical classification methods is investigated, in comparison also with spatial interpolation methods, for predicting the acceptability of well-water quality in a situation where an effective quantitative model of the hydrogeological system under consideration cannot be developed. In the example area in northern Italy, in particular, the aquifer is locally affected by saline water, and the concentration of chloride is the main indicator of both saltwater occurrence and groundwater quality. The goal is to predict whether the chloride concentration in a water well will exceed the allowable concentration, so that the water is unfit for the intended use. A statistical classification algorithm achieved the best predictive performance, and the results of the study show that statistical classification methods provide further tools for dealing with groundwater quality problems concerning hydrogeological systems that are too difficult to describe analytically or to simulate effectively.
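    Framed as classification, the task is: given a well's features (here just coordinates), predict the binary label "exceeds the chloride limit". A deliberately minimal 1-nearest-neighbour sketch on hypothetical wells (the study evaluated more sophisticated classifiers):

    ```python
    def nn_classify(train, query):
        """1-nearest-neighbour classifier: label the query well with the
        class of the closest training well in (x, y) coordinates."""
        def d2(a, b):
            return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        best = min(train, key=lambda rec: d2(rec[0], query))
        return best[1]

    # Hypothetical wells: ((x_km, y_km), exceeds_chloride_limit)
    wells = [((0.0, 0.0), True), ((0.5, 0.2), True),
             ((3.0, 3.0), False), ((3.5, 2.8), False)]
    label = nn_classify(wells, (0.3, 0.1))
    ```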

  15. Urban Link Travel Time Prediction Based on a Gradient Boosting Method Considering Spatiotemporal Correlations

    Directory of Open Access Journals (Sweden)

    Faming Zhang

    2016-11-01

    Full Text Available The prediction of travel times is challenging because of the sparseness of real-time traffic data and the intrinsic uncertainty of travel on congested urban road networks. We propose a new gradient-boosted regression tree method to accurately predict travel times. This model accounts for spatiotemporal correlations extracted from historical and real-time traffic data for adjacent and target links. The method delivers high prediction accuracy by combining many simple regression trees, each with weak individual performance, iteratively correcting the errors of the existing ensemble. Our spatiotemporal gradient-boosted regression tree model was verified in experiments. The training data were obtained from big data reflecting historic traffic conditions collected by probe vehicles in Wuhan from January to May 2014. Real-time data were extracted from 11 weeks of GPS records collected in Wuhan from 5 May 2014 to 20 July 2014. Based on these data, we predicted link travel time for the period from 21 July 2014 to 25 July 2014. Experiments showed that our proposed spatiotemporal gradient-boosted regression tree model obtained better results than gradient boosting, random forest, or autoregressive integrated moving average approaches. Furthermore, these results indicate the advantages of our model for urban link travel time prediction.
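    The mechanism of gradient boosting for squared loss (fit a weak tree to the current residuals, add it with a shrinkage factor, repeat) can be shown with one-split regression stumps on toy data; the travel-time numbers are invented for illustration:

    ```python
    def fit_stump(x, r):
        """Best single-split regression stump on 1-D inputs for residuals r."""
        best = None
        for s in sorted(set(x)):
            left  = [ri for xi, ri in zip(x, r) if xi <= s]
            right = [ri for xi, ri in zip(x, r) if xi > s]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((ri - lm) ** 2 for ri in left) +
                   sum((ri - rm) ** 2 for ri in right))
            if best is None or sse < best[0]:
                best = (sse, s, lm, rm)
        return best[1:]

    def boost(x, y, rounds=50, lr=0.3):
        """Gradient boosting for squared loss: repeatedly fit a stump to the
        current residuals and add it with a shrinkage (learning-rate) factor."""
        pred = [0.0] * len(y)
        model = []
        for _ in range(rounds):
            r = [yi - pi for yi, pi in zip(y, pred)]
            s, lm, rm = fit_stump(x, r)
            model.append((s, lm, rm))
            pred = [pi + lr * (lm if xi <= s else rm)
                    for xi, pi in zip(x, pred)]
        return model, pred

    # Toy link travel times (minutes) vs. a time-of-day index
    x = [1, 2, 3, 4, 5, 6]
    y = [5.0, 5.2, 5.1, 9.8, 10.2, 10.0]
    model, fitted = boost(x, y)
    ```

    Each individually poor stump only nudges the ensemble, but the residual-fitting loop is exactly the "error correction" the abstract describes.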

  16. Decision tree methods: applications for classification and prediction.

    Science.gov (United States)

    Song, Yan-Yan; Lu, Ying

    2015-04-25

    Decision tree methodology is a commonly used data mining method for establishing classification systems based on multiple covariates or for developing prediction algorithms for a target variable. This method classifies a population into branch-like segments that construct an inverted tree with a root node, internal nodes, and leaf nodes. The algorithm is non-parametric and can efficiently deal with large, complicated datasets without imposing a complicated parametric structure. When the sample size is large enough, study data can be divided into training and validation datasets: the training dataset is used to build a decision tree model, and the validation dataset is used to decide on the appropriate tree size needed to achieve the optimal final model. This paper introduces the algorithms frequently used to develop decision trees (including CART, C4.5, CHAID, and QUEST) and describes the SPSS and SAS programs that can be used to visualize tree structure.
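    The node-splitting step shared by these algorithms can be sketched with a CART-style search for the covariate threshold minimizing weighted Gini impurity; the data are toy values chosen to be separable:

    ```python
    def gini(labels):
        """Gini impurity of a set of class labels."""
        n = len(labels)
        if n == 0:
            return 0.0
        counts = {}
        for l in labels:
            counts[l] = counts.get(l, 0) + 1
        return 1.0 - sum((c / n) ** 2 for c in counts.values())

    def best_split(x, y):
        """CART-style search: the threshold on a single covariate that gives
        the lowest weighted Gini impurity of the two child nodes."""
        best = None
        for s in sorted(set(x)):
            left  = [yi for xi, yi in zip(x, y) if xi <= s]
            right = [yi for xi, yi in zip(x, y) if xi > s]
            if not left or not right:
                continue
            w = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or w < best[0]:
                best = (w, s)
        return best

    # Toy data: covariate and binary outcome, perfectly separable at x <= 3
    x = [1, 2, 3, 4, 5, 6]
    y = ["low", "low", "low", "high", "high", "high"]
    impurity, threshold = best_split(x, y)
    ```

    Applying this search recursively to each child node grows the inverted tree; the validation dataset then decides how far to prune it back.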

  17. Power Transformer Operating State Prediction Method Based on an LSTM Network

    Directory of Open Access Journals (Sweden)

    Hui Song

    2018-04-01

    Full Text Available The state of transformer equipment is usually manifested through a variety of information. The characteristic information will change with different types of equipment defects/faults, location, severity, and other factors. For transformer operating state prediction and fault warning, the key influencing factors of the transformer panorama information are analyzed. The degree of relative deterioration is used to characterize the deterioration of the transformer state. The membership relationship between the relative deterioration degree of each indicator and the transformer state is obtained through fuzzy processing. Through a long short-term memory (LSTM) network, the evolution of the transformer status is extracted, and a data-driven state prediction model is constructed to provide early warning of potential equipment faults. Through the LSTM network, the quantitative and qualitative indices are organically combined in order to perceive the corresponding relationship between the characteristic parameters and the operating state of the transformer. The results of prediction cases at different time scales show that the proposed method can effectively predict the operating status of power transformers and accurately reflect their status.
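    What an LSTM cell actually computes per time step can be shown explicitly for a single scalar unit; the gate weights below are hypothetical, and the input sequence stands in for the relative-deterioration indicators the abstract describes:

    ```python
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def lstm_step(x, h_prev, c_prev, W):
        """One step of a single-unit LSTM with scalar input: the forget,
        input and output gates decide what the cell state keeps, adds and
        exposes.  W holds (weight_x, weight_h, bias) per gate f, i, o, g."""
        def lin(gate):
            wx, wh, b = W[gate]
            return wx * x + wh * h_prev + b
        f = sigmoid(lin("f"))       # forget gate
        i = sigmoid(lin("i"))       # input gate
        o = sigmoid(lin("o"))       # output gate
        g = math.tanh(lin("g"))     # candidate cell update
        c = f * c_prev + i * g      # new cell state
        h = o * math.tanh(c)        # new hidden state (the output)
        return h, c

    # Hypothetical weights; feed a short sequence of indicator values
    W = {"f": (0.5, 0.1, 0.0), "i": (0.6, 0.0, 0.0),
         "o": (1.0, 0.0, 0.0), "g": (0.9, 0.2, 0.0)}
    h, c = 0.0, 0.0
    for x in [0.2, 0.5, 0.8]:       # e.g. rising relative deterioration
        h, c = lstm_step(x, h, c, W)
    ```

    The cell state `c` carries information across steps, which is what lets the network extract the evolution of the transformer status from a time series rather than from each reading in isolation.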

  18. Method to predict fatigue lifetimes of GRP wind turbine blades and comparison with experiments

    Energy Technology Data Exchange (ETDEWEB)

    Echtermeyer, A.T. [Det Norske Veritas Research AS, Hoevik (Norway); Kensche, C. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Stuttgart (Germany, F.R); Bach, P. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Poppen, M. [Aeronautical Research Inst. of Sweden, Bromma (Sweden); Lilholt, H.; Andersen, S.I.; Broendsted, P. [Risoe National Lab., Roskilde (Denmark)

    1996-12-01

    This paper describes a method to predict fatigue lifetimes of fiber reinforced plastics in wind turbine blades. It is based on extensive testing within the EU-Joule program. The method takes the measured fatigue properties of a material into account so that credit can be given to materials with improved fatigue properties. The large number of test results should also give confidence in the fatigue calculation method for fiber reinforced plastics. The method uses the Palmgren-Miner sum to predict lifetimes and is verified by tests using well defined load sequences. Even though this approach is generally well known in fatigue analysis, many details in the interpretation and extrapolation of the measurements need to be clearly defined, since they can influence the results considerably. The following subjects will be described: Method to measure SN curves and to obtain tolerance bounds, development of a constant lifetime diagram, evaluation of the load sequence, use of Palmgren-Miner sum, requirements for load sequence testing. The fatigue lifetime calculation method has been compared against measured data for simple loading sequences and the more complex WISPERX loading sequence for blade roots. The comparison is based on predicted mean lifetimes, using the same materials to obtain the basic SN curves and to measure laminates under complicated loading sequences. 24 refs, 7 figs, 5 tabs
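    The Palmgren-Miner sum at the core of the lifetime method is simple to state: with an SN curve giving cycles to failure N(S) at stress range S, the accumulated damage from a load spectrum is D = sum of n_i / N(S_i), and failure is predicted when D reaches 1. A minimal sketch with a hypothetical power-law SN curve and made-up load blocks:

```python
# Sketch of the Palmgren-Miner linear damage sum used in the lifetime method.
# An SN curve N(S) = C * S**(-m) gives cycles-to-failure at stress range S;
# damage from a load spectrum is D = sum(n_i / N(S_i)), with failure
# predicted at D >= 1. The constants C and m below are hypothetical.

def cycles_to_failure(stress_range, C=1e12, m=3.0):
    """Cycles to failure from a power-law SN curve (hypothetical constants)."""
    return C * stress_range ** (-m)

def miner_damage(spectrum):
    """spectrum: list of (stress_range, applied_cycles) pairs."""
    return sum(n / cycles_to_failure(S) for S, n in spectrum)

# Hypothetical load spectrum: (stress range [MPa], cycle count) blocks.
spectrum = [(100.0, 2e5), (50.0, 1e6), (20.0, 5e6)]
D = miner_damage(spectrum)
print(round(D, 4))  # damage fraction; D >= 1 would predict failure
```

    The details the paper emphasizes (tolerance bounds on the SN curve, the constant lifetime diagram, mean-stress handling) all enter through how N(S) is constructed, not through the damage sum itself.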

  19. Accurate approximation method for prediction of class I MHC affinities for peptides of length 8, 10 and 11 using prediction tools trained on 9mers

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2008-01-01

    Several accurate prediction systems have been developed for prediction of class I major histocompatibility complex (MHC):peptide binding. Most of these are trained on binding affinity data of primarily 9mer peptides. Here, we show how prediction methods trained on 9mer data can be used for accurate...

  20. Performance of Firth- and logF-type penalized methods in risk prediction for small or sparse binary data.

    Science.gov (United States)

    Rahman, M Shafiqur; Sultana, Mahbuba

    2017-02-23

    When developing risk models for binary data with small or sparse data sets, standard maximum likelihood estimation (MLE) based logistic regression faces several problems, including biased or infinite estimates of the regression coefficients and frequent convergence failure of the likelihood due to separation. The problem of separation occurs commonly even if the sample size is large, provided there is a sufficient number of strong predictors. In the presence of separation, even if one develops the model, it produces an overfitted model with poor predictive performance. Firth- and logF-type penalized regression methods are popular alternatives to MLE, particularly for solving the separation problem. Despite their attractive advantages, their use in risk prediction is very limited. This paper evaluated these methods for risk prediction in comparison with MLE and other commonly used penalized methods such as ridge. The predictive performance of the methods was evaluated by assessing calibration, discrimination and overall predictive performance in an extensive simulation study. Further, an illustration of the methods was provided using a real data example with low prevalence of the outcome. The MLE showed poor performance in risk prediction in small or sparse data sets. All penalized methods offered some improvement in calibration, discrimination and overall predictive performance. Although the Firth- and logF-type methods showed almost equal amounts of improvement, Firth-type penalization produces some bias in the average predicted probability, and the amount of bias is even larger than that produced by MLE. Of the logF(1,1) and logF(2,2) penalizations, logF(2,2) provides slight bias in the estimate of the regression coefficient of a binary predictor, whereas logF(1,1) performed better in all aspects. Similarly, ridge performed well in discrimination and overall predictive performance but it often produces underfitted models and has a high rate of convergence failure (even the rate is higher than that
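    A minimal illustration of why logF-type penalization repairs separation (a toy demonstration, not the authors' implementation): with completely separated data the ordinary log-likelihood increases monotonically in the coefficient, so the MLE is infinite, while adding the logF(1,1) log-prior beta - 2*log(1 + exp(beta)) gives an interior maximum.

```python
import math

# Toy demonstration of separation and logF(1,1) penalization. One binary
# predictor, no intercept: every observation with x = 1 has y = 1, so the
# ordinary log-likelihood increases monotonically in beta (the MLE diverges).
# The logF(1,1) penalty adds beta - 2*log(1 + exp(beta)), producing a finite
# maximizer (analytically log(k + 1) for k separated successes).

data = [(1, 1), (1, 1), (1, 1), (0, 0), (0, 0)]  # (x, y) pairs, separated

def loglik(beta, penalized=False):
    ll = 0.0
    for x, y in data:
        eta = beta * x
        ll += y * eta - math.log(1.0 + math.exp(eta))
    if penalized:
        ll += beta - 2.0 * math.log(1.0 + math.exp(beta))
    return ll

grid = [i / 1000.0 for i in range(-10000, 20001)]
mle = max(grid, key=loglik)
pen = max(grid, key=lambda b: loglik(b, penalized=True))
print(mle)           # hits the grid's upper edge: the MLE diverges
print(round(pen, 3)) # finite; analytically log(3 + 1) here
```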

  1. Analytical methods for predicting contaminant transport

    International Nuclear Information System (INIS)

    Pigford, T.H.

    1989-09-01

    This paper summarizes some of the previous and recent work at the University of California on analytical solutions for predicting contaminant transport in porous and fractured geologic media. Emphasis is given here to the theories for predicting near-field transport, needed to derive the time-dependent source term for predicting far-field transport and overall repository performance. New theories summarized include solubility-limited release rate with flow backfill in rock, near-field transport of radioactive decay chains, interactive transport of colloid and solute, transport of carbon-14 as carbon dioxide in unsaturated rock, and flow of gases out of a waste container through cracks and penetrations. 28 refs., 4 figs

  2. Quantitative prediction process and evaluation method for seafloor polymetallic sulfide resources

    Directory of Open Access Journals (Sweden)

    Mengyi Ren

    2016-03-01

    Full Text Available Seafloor polymetallic sulfide resources exhibit significant development potential. In 2011, China received the exploration rights for 10,000 km2 of a polymetallic sulfide area in the Southwest Indian Ocean; China will be permitted to retain only 25% of the area in 2021. However, the exploration of seafloor hydrothermal sulfide deposits in China remains at an initial stage. Based on quantitative prediction theory and the exploration status of seafloor sulfides, this paper systematically proposes a quantitative prediction and evaluation process for oceanic polymetallic sulfide resources, divided into three stages: prediction over a large area, prediction in the prospecting region, and the verification and evaluation of targets. The first two stages of the prediction process have been applied to seafloor sulfide prospecting in the Chinese contract area. The results of stage one suggest that the Chinese contract area is located in a high posterior probability area, indicating good prospecting potential in the Indian Ocean. In stage two, the 48°–52°E part of the Chinese contract area has the highest posterior probability value and can be selected as the reserved region for additional exploration. In stage three, numerical simulation is employed to reproduce the ore-forming process of the sulfides in order to verify the accuracy of the reserved targets obtained from the three-stage prediction. By narrowing the exploration area and gradually improving the exploration accuracy, the prediction will provide a basis for the exploration and exploitation of seafloor polymetallic sulfide resources.

  3. A finite element modeling method for predicting long term corrosion rates

    International Nuclear Information System (INIS)

    Fu, J.W.; Chan, S.

    1984-01-01

    For the analyses of galvanic corrosion, pitting and crevice corrosion, which have been identified as possible corrosion processes for nuclear waste isolation, a finite element method has been developed for the prediction of corrosion rates. The method uses a finite element mesh to model the corrosive environment and the polarization curves of metals are assigned as the boundary conditions to calculate the corrosion cell current distribution. A subroutine is used to calculate the chemical change with time in the crevice or the pit environments. In this paper, the finite element method is described along with experimental confirmation

  4. Comparative Study of Different Methods for the Prediction of Drug-Polymer Solubility

    DEFF Research Database (Denmark)

    Knopp, Matthias Manne; Tajber, Lidia; Tian, Yiwei

    2015-01-01

    monomer weight ratios. The drug–polymer solubility at 25 °C was predicted using the Flory–Huggins model, from data obtained at elevated temperature using thermal analysis methods based on the recrystallization of a supersaturated amorphous solid dispersion and two variations of the melting point......, which suggests that this method can be used as an initial screening tool if a liquid analogue is available. The learnings of this important comparative study provided general guidance for the selection of the most suitable method(s) for the screening of drug–polymer solubility....

  5. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP......) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration...... is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
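    The alternating direction method of multipliers behind the algorithm can be illustrated on a generic box-constrained quadratic program (a didactic sketch, not the paper's Riccati-based LQCP implementation; the matrices below are invented):

```python
import numpy as np

# Generic ADMM sketch for an input-constrained QP: minimize
# 0.5*x'Px + q'x subject to lo <= x <= hi, split as an x-step
# (unconstrained quadratic -> linear solve), a z-step (projection onto
# the box), and a dual ascent step. This is a demonstration of the
# splitting only, not the structure-exploiting solver of the paper.

def admm_box_qp(P, q, lo, hi, rho=1.0, iters=200):
    n = len(q)
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                      # scaled dual variable
    K = P + rho * np.eye(n)              # x-step system matrix (fixed)
    for _ in range(iters):
        x = np.linalg.solve(K, rho * (z - u) - q)
        z = np.clip(x + u, lo, hi)       # projection onto the box
        u = u + x - z                    # dual update
    return z

P = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-2.0, -10.0])             # unconstrained optimum at (1, 5)
sol = admm_box_qp(P, q, lo=-1.0, hi=2.0)
print(np.round(sol, 3))                 # second coordinate clipped to 2.0
```

    In the paper's setting the x-step is the extended LQCP solved by a Riccati recursion, which is what makes the cost per iteration linear in the prediction horizon.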

  6. Geostatistical methods for rock mass quality prediction using borehole and geophysical survey data

    Science.gov (United States)

    Chen, J.; Rubin, Y.; Sege, J. E.; Li, X.; Hehua, Z.

    2015-12-01

    For long, deep tunnels, the number of geotechnical borehole investigations during the preconstruction stage is generally limited. Yet tunnels are often constructed in geological structures with complex geometries, and in which the rock mass is fragmented from past structural deformations. Tunnel Geology Prediction (TGP) is a geophysical technique widely used during tunnel construction in China to ensure safety during construction and to prevent geological disasters. In this paper, geostatistical techniques were applied in order to integrate seismic velocity from TGP and borehole information into spatial predictions of RMR (Rock Mass Rating) in unexcavated areas. This approach is intended to apply conditional probability methods to transform seismic velocities to directly observed RMR values. The initial spatial distribution of RMR, inferred from the boreholes, was updated by including geophysical survey data in a co-kriging approach. The method applied to a real tunnel project shows significant improvements in rock mass quality predictions after including geophysical survey data, leading to better decision-making for construction safety design.

  7. Development of nondestructive method for prediction of crack instability

    International Nuclear Information System (INIS)

    Schroeder, J.L.; Eylon, D.; Shell, E.B.; Matikas, T.E.

    2000-01-01

    A method to characterize the deformation zone at a crack tip and predict upcoming fracture under load using white light interference microscopy was developed and studied. Cracks were initiated in notched Ti-6Al-4V specimens through fatigue loading. Following crack initiation, specimens were subjected to static loading during in-situ observation of the deformation area ahead of the crack. Nondestructive in-situ observations were performed using white light interference microscopy. Profilometer measurements quantified the area, volume, and shape of the deformation ahead of the crack front. Results showed an exponential relationship between the area and volume of deformation and the stress intensity factor of the cracked alloy. These findings also indicate that it is possible to determine a critical rate of change in deformation versus the stress intensity factor that can predict oncoming catastrophic failure. In addition, crack front deformation zones were measured as a function of time under sustained load, and crack tip deformation zone enlargement over time was observed

  8. SVM-PB-Pred: SVM based protein block prediction method using sequence profiles and secondary structures.

    Science.gov (United States)

    Suresh, V; Parthasarathy, S

    2014-01-01

    We developed a support vector machine based web server called SVM-PB-Pred to predict the protein block for any given amino acid sequence. The input features of SVM-PB-Pred include i) sequence profiles (PSSM) and ii) actual secondary structures (SS) from the DSSP method or predicted secondary structures from the NPS@ and GOR4 methods. Three combined input features, PSSM+SS(DSSP), PSSM+SS(NPS@) and PSSM+SS(GOR4), were used to train and test the SVM models, and four datasets, RS90, DB433, LI1264 and SP1577, were used to develop them. The four SVM models were evaluated using three different benchmarking tests, namely: (i) self-consistency, (ii) seven-fold cross-validation and (iii) independent case tests. The maximum prediction accuracy of ~70% was observed in the self-consistency test for the SVM models of both the LI1264 and SP1577 datasets, where the PSSM+SS(DSSP) input features were used. The prediction accuracies were reduced to ~53% for PSSM+SS(NPS@) and ~43% for PSSM+SS(GOR4) in the independent case test for the SVM models of the same two datasets. Using our method, it is possible to predict the protein block letters for any query protein sequence with ~53% accuracy when the SP1577 dataset and predicted secondary structure from the NPS@ server are used. The SVM-PB-Pred server can be freely accessed through http://bioinfo.bdu.ac.in/~svmpbpred.

  9. An improved method for predicting the effects of flight on jet mixing noise

    Science.gov (United States)

    Stone, J. R.

    1979-01-01

    A method for predicting the effects of flight on jet mixing noise has been developed on the basis of the jet noise theory of Ffowcs-Williams (1963) and data derived from model-jet/free-jet simulated flight tests. Predicted and experimental values are compared for the J85 turbojet engine on the Bertin Aerotrain, the low-bypass refanned JT8D engine on a DC-9, and the high-bypass JT9D engine on a DC-10. Over the jet velocity range from 280 to 680 m/sec, the predictions show a standard deviation of 1.5 dB.

  10. Traffic Flow Prediction with Rainfall Impact Using a Deep Learning Method

    Directory of Open Access Journals (Sweden)

    Yuhan Jia

    2017-01-01

    Full Text Available Accurate traffic flow prediction is increasingly essential for successful traffic modeling, operation, and management. Traditional data-driven traffic flow prediction approaches have largely assumed restrictive (shallow) model architectures and do not leverage the large amount of environmental data available. Inspired by deep learning methods with more complex model architectures and effective data mining capabilities, this paper introduces the deep belief network (DBN) and long short-term memory (LSTM) to predict urban traffic flow considering the impact of rainfall. The rainfall-integrated DBN and LSTM can learn the features of traffic flow under various rainfall scenarios. Experimental results indicate that, with the consideration of the additional rainfall factor, the deep learning predictors have better accuracy than existing predictors and also yield improvements over the original deep learning models without rainfall input. Furthermore, the LSTM can outperform the DBN in capturing the time series characteristics of traffic flow data.

  11. DDR: Efficient computational method to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-11-23

    Motivation: Computationally finding drug-target interactions (DTIs) is a convenient strategy to identify new DTIs at low cost with reasonable accuracy. However, current DTI prediction methods suffer from a high false-positive prediction rate. Results: We developed DDR, a novel method that improves the DTI prediction accuracy. DDR is based on the use of a heterogeneous graph that contains known DTIs with multiple similarities between drugs and multiple similarities between target proteins. DDR applies a non-linear similarity fusion method to combine different similarities. Before fusion, DDR performs a pre-processing step in which a subset of similarities is selected in a heuristic process to obtain an optimized combination of similarities. Then, DDR applies a random forest model using different graph-based features extracted from the DTI heterogeneous graph. Using five repeats of 10-fold cross-validation, three testing setups, and the weighted average of area under the precision-recall curve (AUPR) scores, we show that DDR significantly reduces the AUPR score error relative to the next-best state-of-the-art method for predicting DTIs: by 34% when the drugs are new, by 23% when the targets are new, and by 34% when the drugs and the targets are known but not all DTIs between them are known. Using independent sources of evidence, we verify as correct 22 out of the top 25 DDR novel predictions. This suggests that DDR can be used as an efficient method to identify correct DTIs.

  12. Fast computational methods for predicting protein structure from primary amino acid sequence

    Science.gov (United States)

    Agarwal, Pratul Kumar [Knoxville, TN

    2011-07-19

    The present invention provides a method utilizing primary amino acid sequence of a protein, energy minimization, molecular dynamics and protein vibrational modes to predict three-dimensional structure of a protein. The present invention also determines possible intermediates in the protein folding pathway. The present invention has important applications to the design of novel drugs as well as protein engineering. The present invention predicts the three-dimensional structure of a protein independent of size of the protein, overcoming a significant limitation in the prior art.

  13. Disorder Prediction Methods, Their Applicability to Different Protein Targets and Their Usefulness for Guiding Experimental Studies

    Directory of Open Access Journals (Sweden)

    Jennifer D. Atkins

    2015-08-01

    Full Text Available The role and function of a given protein is dependent on its structure. In recent years, however, numerous studies have highlighted the importance of unstructured, or disordered regions in governing a protein’s function. Disordered proteins have been found to play important roles in pivotal cellular functions, such as DNA binding and signalling cascades. Studying proteins with extended disordered regions is often problematic as they can be challenging to express, purify and crystallise. This means that interpretable experimental data on protein disorder is hard to generate. As a result, predictive computational tools have been developed with the aim of predicting the level and location of disorder within a protein. Currently, over 60 prediction servers exist, utilizing different methods for classifying disorder and different training sets. Here we review several good performing, publicly available prediction methods, comparing their application and discussing how disorder prediction servers can be used to aid the experimental solution of protein structure. The use of disorder prediction methods allows us to adopt a more targeted approach to experimental studies by accurately identifying the boundaries of ordered protein domains so that they may be investigated separately, thereby increasing the likelihood of their successful experimental solution.

  14. An introduction to the application of relaxation method in numerical weather prediction

    International Nuclear Information System (INIS)

    Aquino, E.M.

    1984-11-01

    This paper is intended to help workers in the field of numerical weather prediction acquire experience with, and gain insight into, the use of the relaxation method. Two approaches are taken: first, the method is explained through hand calculations applied to a given problem; second, the way the calculations could be carried out on a digital computer is discussed. (author)
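    As a concrete illustration of the second approach, the sketch below applies Jacobi relaxation (the simplest relaxation scheme) to Laplace's equation on a small grid, the kind of elliptic problem relaxation is used for in numerical weather prediction; grid size and boundary values are arbitrary.

```python
# Minimal sketch of the relaxation method: Jacobi iteration for Laplace's
# equation u_xx + u_yy = 0 on a square grid. Each interior point is
# repeatedly replaced by the average of its four neighbours until the
# updates become negligible. Grid size and boundary values are arbitrary.

N = 6                       # grid points per side
u = [[0.0] * N for _ in range(N)]
for j in range(N):
    u[0][j] = 1.0           # top boundary held at 1, other boundaries at 0

for sweep in range(2000):
    new = [row[:] for row in u]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    delta = max(abs(new[i][j] - u[i][j]) for i in range(N) for j in range(N))
    u = new
    if delta < 1e-10:
        break

# Residual check: every interior point equals its neighbour average.
res = max(abs(u[i][j] - 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]))
          for i in range(1, N - 1) for j in range(1, N - 1))
print(sweep, res < 1e-8)
```

    Practical codes accelerate this with Gauss-Seidel sweeps or over-relaxation, but the hand-calculation version the paper describes is exactly this neighbour-averaging loop.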

  15. A Lifetime Prediction Method for LEDs Considering Real Mission Profiles

    DEFF Research Database (Denmark)

    Qu, Xiaohui; Wang, Huai; Zhan, Xiaoqing

    2017-01-01

    operations due to the varying operational and environmental conditions during the entire service time (i.e., mission profiles). To overcome the challenge, this paper proposes an advanced lifetime prediction method, which takes into account the field operation mission profiles and also the statistical......The Light-Emitting Diode (LED) has become a very promising alternative lighting source with the advantages of longer lifetime and higher efficiency than traditional ones. The lifetime prediction of LEDs is important to guide the LED system designers to fulfill the design specifications...... properties of the life data available from accelerated degradation testing. The electrical and thermal characteristics of LEDs are measured by a T3Ster system, used for the electro-thermal modeling. It also identifies key variables (e.g., heat sink parameters) that can be designed to achieve a specified...

  16. Prediction of interactions between viral and host proteins using supervised machine learning methods.

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Barman

    Full Text Available BACKGROUND: Viral-host protein-protein interaction plays a vital role in pathogenesis, since it defines viral infection of the host and regulation of the host proteins. Identification of key viral-host protein-protein interactions (PPIs) has great implications for therapeutics. METHODS: In this study, a systematic attempt has been made to predict viral-host PPIs by integrating different features, including domain-domain association, network topology and sequence information, using viral-host PPIs from VirusMINT. Three well-known supervised machine learning methods commonly used in the prediction of PPIs, namely SVM, Naïve Bayes and Random Forest, were employed, and performance was evaluated based on five-fold cross-validation. RESULTS: Out of 44 descriptors, the best features were found to be domain-domain association and the methionine, serine and valine amino acid composition of viral proteins. In this study, the SVM-based method achieved a better sensitivity of 67% over Naïve Bayes (37.49%) and Random Forest (55.66%). However, the specificity of Naïve Bayes was the highest (99.52%) compared with SVM (74%) and Random Forest (89.08%). Overall, the SVM and Random Forest achieved accuracies of 71% and 72.41%, respectively. The proposed SVM-based method was evaluated on a blind dataset and attained a sensitivity of 64%, specificity of 83%, and accuracy of 74%. In addition, unknown potential targets of hepatitis B virus-human and hepatitis E virus-human PPIs have been predicted through the proposed SVM model and validated by gene ontology enrichment analysis. Our proposed model shows that the hepatitis B virus "C protein" binds to a membrane docking protein, while the "X protein" and "P protein" interact with cell-killing and metabolic process proteins, respectively. CONCLUSION: The proposed method can predict large-scale interspecies viral-human PPIs. The nature and function of unknown viral proteins (HBV and HEV), interacting partners of host

  17. An alternative method to predict the S-shaped curve for logistic characteristics of phonon transport in silicon thin film

    International Nuclear Information System (INIS)

    Awad, M.M.

    2014-01-01

    The S-shaped curve was observed by Yilbas and Bin Mansoor (2013). In this study, an alternative method to predict the S-shaped curve for logistic characteristics of phonon transport in silicon thin film is presented by using an analytical prediction method. This analytical prediction method was introduced by Bejan and Lorente in 2011 and 2012. The Bejan and Lorente method is based on two-mechanism flow of fast “invasion” by convection and slow “consolidation” by diffusion.
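    Illustrative only, and not the Bejan-Lorente two-mechanism derivation itself: the S-shaped history the abstract refers to can be reproduced numerically by Euler-integrating the logistic equation, whose solution rises slowly, steepens, then saturates. All constants below are arbitrary.

```python
import math

# Generic numeric sketch of an S-shaped (logistic) history: Euler
# integration of dV/dt = r*V*(1 - V/Vmax). The solution rises slowly,
# steepens at the inflection, then saturates -- the qualitative S-curve
# shape discussed here. Constants r, Vmax, dt and the seed are arbitrary.

r, Vmax, dt = 1.0, 1.0, 0.001
V = 0.01                      # small initial "invasion" seed
history = [V]
for step in range(10000):     # integrate to t = 10
    V += dt * r * V * (1.0 - V / Vmax)
    history.append(V)

def exact(t):
    """Closed-form logistic solution for the same initial condition."""
    return Vmax / (1.0 + ((Vmax - 0.01) / 0.01) * math.exp(-r * t))

print(round(history[-1], 4), round(exact(10.0), 4))
```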

  18. A lifetime prediction method for LEDs considering mission profiles

    DEFF Research Database (Denmark)

    Qu, Xiaohui; Wang, Huai; Zhan, Xiaoqing

    2016-01-01

    and to benchmark the cost-competitiveness of different lighting technologies. The existing lifetime data released by LED manufacturers or standard organizations are usually applicable only for specific temperature and current levels. Significant lifetime discrepancies may be observed in field operations due...... to the varying operational and environmental conditions during the entire service time (i.e., mission profiles). To overcome the challenge, this paper proposes an advanced lifetime prediction method, which takes into account the field operation mission profiles and the statistical properties of the life data...

  19. Decision Support System in Predicting the Best Teacher with the Multi-Attribute Decision Making Weighted Product (MADMWP) Method

    Directory of Open Access Journals (Sweden)

    Solikhun Solikhun

    2017-06-01

    Full Text Available Predicting the best teacher in Indonesia aims to spur growth and improve the quality of education. In this paper, prediction of the best teacher is implemented based on predefined criteria. To support the prediction process, a decision support system is needed. This paper employs the Multi-Attribute Decision Making Weighted Product (MADMWP) method. The method was tested on teachers of the Al-Barokah Islamic boarding junior high school, Simalungun, North Sumatera, Indonesia. This system can be used to help solve the problem of best-teacher prediction.
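    The weighted product scoring underlying the MADMWP method can be sketched directly from its definition: each alternative's score is the product of its criterion values raised to normalized weights (applied with a negative sign for cost criteria), and alternatives are ranked by normalized preference values. The teacher criteria and numbers below are invented.

```python
# Minimal sketch of Weighted Product (WP) scoring for ranking alternatives:
# normalize the weights to sum to 1, score each alternative as the product
# of criterion values raised to the signed normalized weights, then rank by
# the normalized preference value. Criteria and scores are made up.

def weighted_product(alternatives, weights, benefit):
    """alternatives: dict name -> list of criterion values.
    weights: raw importance weights; benefit: True for benefit criteria,
    False for cost criteria (weight applied with a negative sign)."""
    total = sum(weights)
    w = [wt / total for wt in weights]
    s = {}
    for name, vals in alternatives.items():
        score = 1.0
        for v, wj, b in zip(vals, w, benefit):
            score *= v ** (wj if b else -wj)
        s[name] = score
    denom = sum(s.values())
    return {name: val / denom for name, val in s.items()}  # preference V_i

# Hypothetical teacher scores on three criteria (two benefit, one cost).
alts = {"A": [80.0, 70.0, 4.0], "B": [90.0, 60.0, 6.0], "C": [70.0, 85.0, 3.0]}
prefs = weighted_product(alts, weights=[5, 3, 2], benefit=[True, True, False])
best = max(prefs, key=prefs.get)
print(best, {k: round(v, 3) for k, v in prefs.items()})
```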

  20. Machine learning and statistical methods for the prediction of maximal oxygen uptake: recent advances.

    Science.gov (United States)

    Abut, Fatih; Akay, Mehmet Fatih

    2015-01-01

    Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume per minute in a state of intense exercise. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating the disease risk of a person. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite its high level of accuracy, practical limitations associated with the direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment or trained staff, have led to the development of various regression models for predicting VO2max. Consequently, many studies have been conducted in recent years to predict the VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers and cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview of the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of the various VO2max prediction models reported in the related literature in terms of two well-known metrics, namely, the multiple correlation coefficient (R) and the standard error of estimate. The survey results reveal that, with respect to the regression methods used to develop prediction models, support vector machine in general shows better performance than other methods, whereas multiple linear regression exhibits the worst performance.
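    The two comparison metrics used throughout the survey are easy to reproduce; the sketch below computes R and the standard error of estimate for a one-predictor least-squares fit on invented (speed, VO2max) pairs standing in for a published model.

```python
import math

# Sketch of the two comparison metrics used in the survey: the multiple
# correlation coefficient R (between measured and predicted VO2max) and
# the standard error of estimate SEE. A one-predictor least-squares fit
# on made-up (running speed, VO2max) pairs stands in for a real model.

xs = [8.0, 9.0, 10.0, 11.0, 12.0, 13.0]     # hypothetical predictor
ys = [35.0, 38.5, 41.0, 45.5, 48.0, 52.5]   # hypothetical VO2max (mL/kg/min)

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx
pred = [a + b * x for x in xs]

ss_res = sum((y - p) ** 2 for y, p in zip(ys, pred))
ss_tot = sum((y - my) ** 2 for y in ys)
R = math.sqrt(1.0 - ss_res / ss_tot)        # multiple R (one predictor here)
SEE = math.sqrt(ss_res / (n - 2))           # standard error of estimate
print(round(R, 3), round(SEE, 2))
```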

  1. A Review of Computational Methods to Predict the Risk of Rupture of Abdominal Aortic Aneurysms

    Directory of Open Access Journals (Sweden)

    Tejas Canchi

    2015-01-01

    Full Text Available Computational methods have played an important role in health care in recent years, as determining parameters that affect a certain medical condition is not possible in experimental conditions in many cases. Computational fluid dynamics (CFD methods have been used to accurately determine the nature of blood flow in the cardiovascular and nervous systems and air flow in the respiratory system, thereby giving the surgeon a diagnostic tool to plan treatment accordingly. Machine learning or data mining (MLD methods are currently used to develop models that learn from retrospective data to make a prediction regarding factors affecting the progression of a disease. These models have also been successful in incorporating factors such as patient history and occupation. MLD models can be used as a predictive tool to determine rupture potential in patients with abdominal aortic aneurysms (AAA along with CFD-based prediction of parameters like wall shear stress and pressure distributions. A combination of these computer methods can be pivotal in bridging the gap between translational and outcomes research in medicine. This paper reviews the use of computational methods in the diagnosis and treatment of AAA.

  2. Improved methods for predicting peptide binding affinity to MHC class II molecules.

    Science.gov (United States)

    Jensen, Kamilla Kjaergaard; Andreatta, Massimo; Marcatili, Paolo; Buus, Søren; Greenbaum, Jason A; Yan, Zhen; Sette, Alessandro; Peters, Bjoern; Nielsen, Morten

    2018-01-06

    Major histocompatibility complex class II (MHC-II) molecules are expressed on the surface of professional antigen-presenting cells where they display peptides to T helper cells, which orchestrate the onset and outcome of many host immune responses. Understanding which peptides will be presented by the MHC-II molecule is therefore important for understanding the activation of T helper cells and can be used to identify T-cell epitopes. We here present updated versions of two MHC-II-peptide binding affinity prediction methods, NetMHCII and NetMHCIIpan. These were constructed using an extended data set of quantitative MHC-peptide binding affinity data obtained from the Immune Epitope Database covering HLA-DR, HLA-DQ, HLA-DP and H-2 mouse molecules. We show that training with this extended data set improved the performance for peptide binding predictions for both methods. Both methods are publicly available at www.cbs.dtu.dk/services/NetMHCII-2.3 and www.cbs.dtu.dk/services/NetMHCIIpan-3.2. © 2018 John Wiley & Sons Ltd.

  3. Prediction system of hydroponic plant growth and development using the fuzzy Mamdani algorithm method

    Science.gov (United States)

    Sudana, I. Made; Purnawirawan, Okta; Arief, Ulfa Mediaty

    2017-03-01

    Hydroponics is a method of farming without soil. One hydroponic plant is watercress (Nasturtium officinale). The development and growth of hydroponic watercress are influenced by nutrient levels, acidity and temperature. These independent variables can be used as the input variables of a system to predict the level of plant growth and development. The prediction system uses the fuzzy Mamdani algorithm method. The system was built to implement a Fuzzy Inference System (FIS) as a part of the Fuzzy Logic Toolbox (FLT) using MATLAB R2007b. FIS is a computing system that works on the principle of fuzzy reasoning, which is similar to human reasoning. Basically, FIS consists of four units: a fuzzification unit, a fuzzy logic reasoning unit, a knowledge base unit and a defuzzification unit. In addition, the effect of the independent variables on plant growth and development can be visualized with the three-dimensional FIS output surface diagram, and statistical tests on data from the prediction system were carried out with the multiple linear regression method, including multiple linear regression analysis, the T test, the F test, the coefficient of determination and predictor contributions, calculated using SPSS (Statistical Product and Service Solutions) software.
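    The four FIS units the abstract lists can be sketched end to end; the membership functions below, for a single "nutrient level" input (0-10) and a "growth" output (0-100), are invented for illustration, with min used for implication, max for aggregation, and the centroid for defuzzification.

```python
# Minimal Mamdani FIS sketch covering the four units from the abstract:
# fuzzification, a two-rule knowledge base, min/max inference, and centroid
# defuzzification. The membership shapes for "nutrient level" (0-10) and
# "growth" (0-100) are invented.

def low(x):            # input set: nutrient is low
    return max(0.0, (5.0 - x) / 5.0)

def high(x):           # input set: nutrient is high
    return max(0.0, (x - 5.0) / 5.0)

def poor(g):           # output set: growth is poor
    return max(0.0, (50.0 - g) / 50.0)

def good(g):           # output set: growth is good
    return max(0.0, (g - 50.0) / 50.0)

def predict_growth(nutrient):
    w_poor = low(nutrient)     # rule 1: IF nutrient low THEN growth poor
    w_good = high(nutrient)    # rule 2: IF nutrient high THEN growth good
    # Clip each output set at its rule strength (min), aggregate (max),
    # then defuzzify by the centroid of the discretized universe.
    gs = [g / 10.0 for g in range(0, 1001)]
    mu = [max(min(w_poor, poor(g)), min(w_good, good(g))) for g in gs]
    area = sum(mu)
    return sum(g * m for g, m in zip(gs, mu)) / area if area else 50.0

print(round(predict_growth(2.0), 1))   # mostly "poor" -> low growth value
print(round(predict_growth(9.0), 1))   # mostly "good" -> high growth value
```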

  4. Method of critical power prediction based on film flow model coupled with subchannel analysis

    International Nuclear Information System (INIS)

    Tomiyama, Akio; Yokomizo, Osamu; Yoshimoto, Yuichiro; Sugawara, Satoshi.

    1988-01-01

    A new method was developed to predict critical powers for a wide variety of BWR fuel bundle designs. This method couples subchannel analysis with a liquid film flow model, instead of taking the conventional way which couples subchannel analysis with critical heat flux correlations. Flow and quality distributions in a bundle are estimated by the subchannel analysis. Using these distributions, film flow rates along fuel rods are then calculated with the film flow model. Dryout is assumed to occur where one of the film flows disappears. This method is expected to give much better adaptability to variations in geometry, heat flux, flow rate and quality distributions than the conventional methods. In order to verify the method, critical power data under BWR conditions were analyzed. Measured and calculated critical powers agreed to within ±7%. Furthermore critical power data for a tight-latticed bundle obtained by LeTourneau et al. were compared with critical powers calculated by the present method and two conventional methods, CISE correlation and subchannel analysis coupled with the CISE correlation. It was confirmed that the present method can predict critical powers more accurately than the conventional methods. (author)

  5. Fluvial facies reservoir productivity prediction method based on principal component analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    Pengyu Gao

    2016-03-01

    Full Text Available It is difficult to forecast well productivity because of the complexity of vertical and horizontal developments in fluvial facies reservoirs. This paper proposes a method based on principal component analysis and an artificial neural network to predict the well productivity of fluvial facies reservoirs. The method summarizes the statistical reservoir factors and engineering factors that affect well productivity, extracts information by applying principal component analysis, and exploits the neural network's ability to approximate arbitrary functions to realize an accurate and efficient prediction of fluvial facies reservoir well productivity. This method provides an effective way to forecast the productivity of fluvial facies reservoirs, which is affected by multiple factors and complex mechanisms. The study results show that this method is a practical, effective, accurate and indirect productivity forecast method that is suitable for field application.
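
The PCA-then-ANN workflow can be sketched with scikit-learn on synthetic data; the paper's reservoir and engineering factors are not public, so the eight correlated features below are randomly generated stand-ins.

```python
# PCA compresses correlated reservoir/engineering factors before an MLP
# approximates the productivity response (synthetic stand-in data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                           # 8 candidate factors
X[:, 4:] = X[:, :4] + 0.1 * rng.normal(size=(200, 4))   # induce redundancy
y = 2.0 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=200)

model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                   random_state=0))
model.fit(X[:150], y[:150])
r2 = model.score(X[150:], y[150:])      # out-of-sample fit
print(f"test R^2 = {r2:.2f}")
```

Because the last four features nearly duplicate the first four, four principal components retain essentially all of the signal while halving the network's input dimension.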

  6. Basic study on dynamic reactive-power control method with PV output prediction for solar inverter

    Directory of Open Access Journals (Sweden)

    Ryunosuke Miyoshi

    2016-01-01

    Full Text Available To effectively utilize a photovoltaic (PV) system, reactive-power control methods for solar inverters have been considered. Among the various methods, constant-voltage control outputs less reactive power than the other methods. We have developed a constant-voltage control to reduce the reactive-power output. However, the developed constant-voltage control still outputs unnecessary reactive power because the control parameter is constant for every waveform of the PV output. To reduce the reactive-power output, we propose a dynamic reactive-power control method with PV output prediction. In the proposed method, the control parameter is varied according to the properties of the predicted PV waveform. In this study, we performed numerical simulations using a distribution system model, and we confirmed that the proposed method reduces the reactive-power output within the voltage constraint.

  7. Methods and approaches to prediction in the meat industry

    Directory of Open Access Journals (Sweden)

    A. B. Lisitsyn

    2016-01-01

    Full Text Available The modern stage of the agro-industrial complex is characterized by increasing complexity and intensification of the technological processes for the complex processing of materials of animal origin, as well as by the need for a systematic analysis of the variety of determining factors and the relationships between them, the complexity of the objective function of product quality, and severe restrictions on technological regimes. One of the main tasks facing employees of agro-industrial enterprises engaged in processing biotechnological raw materials is, besides an increase in production volume, the further organizational improvement of work at all stages of the food chain. As a part of the agro-industrial complex, the meat industry has to use biological raw materials with maximum efficiency, reducing and even eliminating losses at all stages of processing; rationally use raw material when selecting a type of processing product; steadily increase the quality, biological and food value of products; and broaden the assortment of manufactured products in order to satisfy increasing consumer requirements and extend the market for their sale under conditions of external uncertainty caused by the uneven receipt of raw materials, variations in their properties and parameters, limited sales time and fluctuations in demand. The challenges facing the meat industry cannot be solved without changes to the strategy for the scientific and technological development of the industry. To achieve these tasks, it is necessary to use prediction as a method of constant improvement of all technological processes and of their performance under rational and optimal regimes, while constantly controlling the quality of raw material, semi-prepared products and finished products at all stages of technological processing by physico-chemical, physico-mechanical (rheological), microbiological and organoleptic methods.

  8. The steady performance prediction of propeller-rudder-bulb system based on potential iterative method

    International Nuclear Information System (INIS)

    Liu, Y B; Su, Y M; Ju, L; Huang, S L

    2012-01-01

    A new numerical method was developed for predicting the steady hydrodynamic performance of a propeller-rudder-bulb system. In the calculation, the rudder and bulb were treated as a whole, and the potential-based surface panel method was applied to both the propeller and the rudder-bulb system. The interaction between the propeller and the rudder-bulb was taken into account by velocity-potential iteration, in which the influence of propeller rotation was considered through the average influence coefficient. In the influence coefficient computation, the singular value must be found and deleted. Numerical results showed that the presented method is effective for predicting the steady hydrodynamic performance of propeller-rudder and propeller-rudder-bulb systems. Compared with the induced-velocity iterative method, the presented method saves programming and calculation time. By varying the dimensions, the principal parameter affecting the energy-saving effect, the bulb size, was studied; the results show that the bulb on the rudder has an optimal size at the design advance coefficient.

  9. BacHbpred: Support Vector Machine Methods for the Prediction of Bacterial Hemoglobin-Like Proteins

    Directory of Open Access Journals (Sweden)

    MuthuKrishnan Selvaraj

    2016-01-01

    Full Text Available The recent upsurge in microbial genome data has revealed that hemoglobin-like (HbL) proteins may be widely distributed among bacteria and that some organisms may carry more than one HbL encoding gene. However, the discovery of HbL proteins has been limited to a small number of bacteria only. This study describes the prediction of HbL proteins and their domain classification using a machine learning approach. Support vector machine (SVM) models were developed for predicting HbL proteins based upon amino acid composition (AC), dipeptide composition (DC), a hybrid method (AC + DC), and position specific scoring matrix (PSSM). In addition, we introduce for the first time a new prediction method based on max to min amino acid residue (MM) profiles. The average accuracy, standard deviation (SD), false positive rate (FPR), confusion matrix, and receiver operating characteristic (ROC) were analyzed. We also compared the performance of our proposed models in homology detection databases. The performance of the different approaches was estimated using fivefold cross-validation techniques. Prediction accuracy was further investigated through confusion matrix and ROC curve analysis. All experimental results indicate that the proposed BacHbpred can be a prospective predictor for determination of HbL related proteins. BacHbpred, a web tool, has been developed for HbL prediction.
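
The amino-acid-composition (AC) feature named in the abstract is just a 20-dimensional vector of residue fractions fed to an SVM. The sketch below reproduces that idea on toy sequences (randomly biased stand-ins, not the paper's HbL data set).

```python
# AC features (residue fractions) feeding an RBF-kernel SVM on toy sequences.
import numpy as np
from sklearn.svm import SVC

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def ac_features(seq):
    """20-dim composition vector: fraction of each residue in the sequence."""
    return np.array([seq.count(a) / len(seq) for a in AMINO])

rng = np.random.default_rng(1)

def sample_seq(bias):
    # Class 1 sequences are biased toward a few hydrophobic residues.
    p = np.ones(20)
    p[[AMINO.index(a) for a in "AVLIF"]] += bias
    return "".join(rng.choice(list(AMINO), size=120, p=p / p.sum()))

seqs = [sample_seq(0.0) for _ in range(60)] + [sample_seq(3.0) for _ in range(60)]
X = np.array([ac_features(s) for s in seqs])
y = np.array([0] * 60 + [1] * 60)

clf = SVC(kernel="rbf", C=10).fit(X[::2], y[::2])   # even rows = training
acc = clf.score(X[1::2], y[1::2])                   # odd rows = held out
print(f"held-out accuracy = {acc:.2f}")
```

The DC (dipeptide) variant works the same way with a 400-dimensional vector of adjacent-pair frequencies.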

  10. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
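
Bootstrap prediction as described here, applying Breiman's bagging to a plug-in predictor, can be sketched in a few lines: refit the plug-in estimator on bootstrap resamples and average the resulting predictions. The slope-through-the-origin model below is an illustrative stand-in.

```python
# Bagged (bootstrap-averaged) prediction over a simple plug-in estimator.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=40)
y = 1.5 * x + rng.normal(scale=0.5, size=40)

def plug_in_slope(x, y):
    """Plug-in predictor: least-squares slope through the origin."""
    return (x @ y) / (x @ x)

B, x_new = 500, 2.0
preds = []
for _ in range(B):
    idx = rng.integers(0, len(x), len(x))        # bootstrap resample
    preds.append(plug_in_slope(x[idx], y[idx]) * x_new)
bagged = np.mean(preds)
print(f"plug-in: {plug_in_slope(x, y) * x_new:.2f}, bagged: {bagged:.2f}")
```

For a linear estimator the two predictions nearly coincide; bagging matters most when the plug-in predictor is unstable, which is the regime the abstract contrasts with Bayesian prediction.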

  11. ESLpred2: improved method for predicting subcellular localization of eukaryotic proteins

    Directory of Open Access Journals (Sweden)

    Raghava Gajendra PS

    2008-11-01

    Full Text Available Abstract Background The expansion of raw protein sequence databases in the post genomic era and availability of fresh annotated sequences for major localizations particularly motivated us to introduce a new improved version of our previously forged eukaryotic subcellular localizations prediction method namely "ESLpred". Since, subcellular localization of a protein offers essential clues about its functioning, hence, availability of localization predictor would definitely aid and expedite the protein deciphering studies. However, robustness of a predictor is highly dependent on the superiority of dataset and extracted protein attributes; hence, it becomes imperative to improve the performance of presently available method using latest dataset and crucial input features. Results Here, we describe augmentation in the prediction performance obtained for our most popular ESLpred method using new crucial features as an input to Support Vector Machine (SVM. In addition, recently available, highly non-redundant dataset encompassing three kingdoms specific protein sequence sets; 1198 fungi sequences, 2597 from animal and 491 plant sequences were also included in the present study. First, using the evolutionary information in the form of profile composition along with whole and N-terminal sequence composition as an input feature vector of 440 dimensions, overall accuracies of 72.7, 75.8 and 74.5% were achieved respectively after five-fold cross-validation. Further, enhancement in performance was observed when similarity search based results were coupled with whole and N-terminal sequence composition along with profile composition by yielding overall accuracies of 75.9, 80.8, 76.6% respectively; best accuracies reported till date on the same datasets. Conclusion These results provide confidence about the reliability and accurate prediction of SVM modules generated in the present study using sequence and profile compositions along with similarity search

  12. A link prediction method for heterogeneous networks based on BP neural network

    Science.gov (United States)

    Li, Ji-chao; Zhao, Dan-ling; Ge, Bing-Feng; Yang, Ke-Wei; Chen, Ying-Wu

    2018-04-01

    Most real-world systems, composed of different types of objects connected via many interconnections, can be abstracted as various complex heterogeneous networks. Link prediction for heterogeneous networks is of great significance for mining missing links and reconfiguring networks according to observed information, with considerable applications in, for example, friend and location recommendations and disease-gene candidate detection. In this paper, we put forward a novel integrated framework, called MPBP (Meta-Path feature-based BP neural network model), to predict multiple types of links for heterogeneous networks. More specifically, the concept of meta-path is introduced, followed by the extraction of meta-path features for heterogeneous networks. Next, based on the extracted meta-path features, a supervised link prediction model is built with a three-layer BP neural network. Then, the solution algorithm of the proposed link prediction model is put forward to obtain predicted results by iteratively training the network. Last, numerical experiments on the dataset of examples of a gene-disease network and a combat network are conducted to verify the effectiveness and feasibility of the proposed MPBP. The results show that MPBP performs very well and is superior to the baseline methods.
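
The supervised step of this framework reduces to: meta-path counts in, link/no-link label out, small back-propagation network in between. A minimal sketch with scikit-learn's MLP follows; the three "meta-path count" features are synthetic stand-ins, not counts extracted from a real heterogeneous network.

```python
# Meta-path-count features -> small BP (MLP) network for link prediction.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 400
# Hypothetical counts of three meta-paths per node pair, e.g.
# gene-disease, gene-gene-disease, gene-pathway-disease.
X = rng.poisson(lam=2.0, size=(n, 3)).astype(float)

# A link is more likely when many connecting meta-paths exist.
logit = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2] - 5.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
print(f"test accuracy = {acc:.2f}")
```

In the paper's setting the same network is trained once per link type, with one feature per meta-path schema.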

  13. Predicting residue contacts using pragmatic correlated mutations method: reducing the false positives

    Directory of Open Access Journals (Sweden)

    Alexov Emil G

    2006-11-01

    Full Text Available Abstract Background Predicting residue contacts using the primary amino acid sequence alone is an important task that can guide 3D structure modeling and can verify the quality of predicted 3D structures. The correlated mutations (CM) method serves as the most promising approach, and it has been used to predict amino acid pairs that are distant in the primary sequence but form contacts in the native 3D structure of homologous proteins. Results Here we report a new implementation of the CM method with an added set of selection rules (filters). The parameters of the algorithm were optimized against fifteen high resolution crystal structures with an optimization criterion that maximized the confidence of the predictions. The optimization resulted in a true positive ratio (TPR) of 0.08 for the CM without filters and a TPR of 0.14 for the CM with filters. The protocol was further benchmarked against 65 high resolution structures that were not included in the optimization set. The benchmarking resulted in a TPR of 0.07 for the CM without filters and a TPR of 0.09 for the CM with filters. Conclusion Thus, the inclusion of selection rules resulted in an overall improvement of 30%. In addition, the pair-wise comparison of TPR for each protein without and with filters showed an average improvement of 1.7. The methodology was implemented into a web server http://www.ces.clemson.edu/compbio/recon that is freely available to the public. The purpose of this implementation is to provide 3D structure predictors with a tool that can help with ranking alternative models by satisfying the largest number of predicted contacts, and that can provide a confidence score for contacts in cases where the structure is known.
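
The core of any correlated-mutations scheme is a co-variation score between alignment columns. The toy example below uses mutual information, a common CM variant (not necessarily the exact score or filters used in this paper), on a four-sequence alignment in which columns 0 and 3 co-vary perfectly.

```python
# Correlated-mutations toy: score column pairs of an MSA by mutual information.
import math
from collections import Counter

msa = [            # four toy "homologous" sequences
    "ALKDE",
    "GLKEE",
    "AMKDE",
    "GMKEE",
]

def mutual_info(col_i, col_j):
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    return sum((c / n) * math.log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

cols = list(zip(*msa))                       # transpose: sequences -> columns
scores = {(i, j): mutual_info(cols[i], cols[j])
          for i in range(5) for j in range(i + 1, 5)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))          # -> (0, 3) 1.0
```

Conserved columns (2 and 4 here) score zero, which is why practical CM implementations add filters on conservation, gap content and sequence separation before ranking pairs.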

  14. Automatic Power Control for Daily Load-following Operation using Model Predictive Control Method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Keuk Jong; Kim, Han Gon [KH, Daejeon (Korea, Republic of)

    2009-10-15

    Under circumstances in which nuclear power occupies more than 50% of generation, nuclear power plants are required to perform load-following operation in order to allow effective management of the electric grid system and enhanced responsiveness to rapid changes in power demand. Conventional reactors such as the OPR1000 and APR1400 have a regulating system that controls the average temperature of the reactor core in relation to the reference temperature. This conventional method has the advantages of proven technology and ease of implementation. However, it is unsuitable for controlling the axial power shape, particularly during load-following operation. Accordingly, this paper reports on the development of a model predictive control method that is able to control both the reactor power and the axial shape index. The purpose of this study is to analyze the behavior of nuclear reactor power and the axial power shape by using a model predictive control method when the power is increased and decreased for a daily load-following operation. The study confirms that deviations in the axial shape index (ASI) are within the operating limit.
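
Model predictive control always follows the same loop: at each step, optimize a control sequence over a finite horizon against a predicted model, apply only the first move, and re-optimize. The sketch below shows that receding-horizon loop on a scalar linear model, a stand-in for the reactor power dynamics, not the model in the paper.

```python
# Receding-horizon MPC on x[k+1] = a*x[k] + b*u[k], tracking a setpoint.
import itertools

a, b = 0.9, 0.5
ref = 4.0                       # power setpoint (reachable: x_ss <= b/(1-a) = 5)
u_levels = [-1.0, 0.0, 1.0]     # admissible control moves
H = 4                           # prediction horizon

def mpc_step(x):
    """Enumerate all control sequences over the horizon; apply the first move."""
    def cost(seq):
        c, xk = 0.0, x
        for u in seq:
            xk = a * xk + b * u
            c += (xk - ref) ** 2 + 0.01 * u ** 2   # tracking + control effort
        return c
    best = min(itertools.product(u_levels, repeat=H), key=cost)
    return best[0]

x = 0.0
for _ in range(60):             # closed loop: re-optimize at every step
    x = a * x + b * mpc_step(x)
print(f"final state = {x:.2f}")
```

The paper's controller does the same thing with a reactor model and two tracked outputs (power and ASI); the enumeration here would be replaced by a proper quadratic program.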

  15. A Predictive-Control-Based Over-Modulation Method for Conventional Matrix Converters

    DEFF Research Database (Denmark)

    Zhang, Guanguan; Yang, Jian; Sun, Yao

    2018-01-01

    To increase the voltage transfer ratio of the matrix converter and simultaneously improve the input/output current performance, an over-modulation method based on predictive control is proposed in this paper, where the weighting factor is selected by an automatic adjusting mechanism, which is able... to further enhance the system performance promptly. This method has several advantages: the maximum voltage transfer ratio can reach 0.987 in the experiments; the total harmonic distortion of the input and output current is reduced; and the losses in the matrix converter are decreased. Moreover, the specific...

  16. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  17. Regression trees for predicting mortality in patients with cardiovascular disease: What improvement is achieved by using ensemble-based methods?

    Science.gov (United States)

    Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V

    2012-01-01

    In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
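
The comparison this abstract reports, single regression tree versus a bagged ensemble versus logistic regression for a binary outcome, is easy to reproduce on synthetic data (the cardiac cohorts themselves are not public):

```python
# Single tree vs. ensemble vs. logistic regression, compared by test AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic": LogisticRegression(max_iter=1000),
}
auc = {name: roc_auc_score(yte, m.fit(Xtr, ytr).predict_proba(Xte)[:, 1])
       for name, m in models.items()}
print(auc)   # the ensemble typically beats the single tree
```

Which of the ensemble and the (spline-augmented) logistic model wins depends on how linear the true risk surface is, which is exactly the paper's point.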

  18. Prediction of the solubility of selected pharmaceuticals in water and alcohols with a group contribution method

    International Nuclear Information System (INIS)

    Pelczarska, Aleksandra; Ramjugernath, Deresh; Rarey, Jurgen; Domańska, Urszula

    2013-01-01

    Highlights: ► The prediction of solubility of pharmaceuticals in water and alcohols was presented. ► Improved group contribution method UNIFAC was proposed for 42 binary mixtures. ► Infinite activity coefficients were used in a model. ► A semi-predictive model with one experimental point was proposed. ► This model qualitatively describes the temperature dependency of Pharms. -- Abstract: An improved group contribution approach using activity coefficients at infinite dilution, which has been proposed by our group, was used for the prediction of the solubility of selected pharmaceuticals in water and alcohols [B. Moller, Activity of complex multifunctional organic compounds in common solvents, PhD Thesis, Chemical Engineering, University of KwaZulu-Natal, 2009]. The solubility of 16 different pharmaceuticals in water, ethanol and octan-1-ol was predicted over a fairly wide range of temperature with this group contribution model. The predicted values, along with values computed with the Schroeder-van Laar equation, are compared to experimental results published by us previously for 42 binary mixtures. The predicted solubility values were lower than those from the experiments for most of the mixtures. In order to improve the prediction method, a semi-predictive calculation using one experimental solubility value was implemented. This one point prediction has given acceptable results when comparison is made to experimental values
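
The Schroeder-van Laar relation used as the comparison baseline above reduces, when heat-capacity terms are neglected and the activity coefficient is taken as 1, to the ideal-solubility equation ln x = -(ΔH_fus/R)(1/T - 1/T_m). The numbers below are illustrative, not data for any of the 16 pharmaceuticals studied.

```python
# Ideal (Schroeder-van Laar) mole-fraction solubility of a solid solute.
import math

R = 8.314          # J/(mol K)

def ideal_solubility(dH_fus, T_m, T):
    """dH_fus in J/mol, melting point T_m and temperature T in K."""
    return math.exp(-(dH_fus / R) * (1.0 / T - 1.0 / T_m))

x25 = ideal_solubility(dH_fus=26_000, T_m=431.0, T=298.15)
x37 = ideal_solubility(dH_fus=26_000, T_m=431.0, T=310.15)
print(f"x(25 C) = {x25:.4f}, x(37 C) = {x37:.4f}")   # solubility rises with T
```

The group-contribution step in the paper then corrects this ideal value with a predicted activity coefficient at infinite dilution.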

  19. Fingerprint image reconstruction for swipe sensor using Predictive Overlap Method

    Directory of Open Access Journals (Sweden)

    Mardiansyah Ahmad Zafrullah

    2018-01-01

    Full Text Available The swipe sensor is one of many biometric authentication sensor types widely applied to embedded devices. The sensor produces an overlap in every pixel block of the image, so the picture requires a reconstruction process before the feature extraction process. Conventional reconstruction methods require extensive computation, making them difficult to apply to embedded devices with limited computing power. In this paper, image reconstruction is proposed using the predictive overlap method, which determines the image block shift from the previous set of shift data. The experiments were performed using 36 images generated by a swipe sensor with an area of 128 x 8 pixels, where each image has an overlap in each block. The results reveal that computation speed can increase by up to 86.44% compared with conventional methods, with accuracy decreasing by only 0.008% on average.
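
The core operation can be sketched as follows: estimate the row shift between consecutive 128 x 8 blocks by minimizing the squared difference over the overlap region, and cut the search space down by starting near the previous block's shift (the "predictive" part). The block shapes match the abstract; everything else is an illustrative stand-in.

```python
# Overlap (shift) estimation between consecutive swipe-sensor blocks,
# searching only a small window around the previously observed shift.
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 128                               # block size as in the paper
frame = rng.integers(0, 256, size=(40, W)).astype(float)

def best_shift(prev_block, next_block, guess, window=2):
    """Search shifts in [guess-window, guess+window] for minimal SSD."""
    best, best_err = guess, float("inf")
    for s in range(max(1, guess - window), min(H, guess + window + 1)):
        overlap_prev = prev_block[s:]        # bottom rows of previous block
        overlap_next = next_block[:H - s]    # top rows of next block
        err = np.mean((overlap_prev - overlap_next) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

true_shift = 3                               # finger moved 3 rows between reads
b1, b2 = frame[0:H], frame[true_shift:true_shift + H]
print(best_shift(b1, b2, guess=2))           # recovers the true shift
```

A conventional exhaustive method would test every shift from 1 to H-1 for every block pair; the windowed search is what buys the reported speed-up.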

  20. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
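
The LASSO step the authors recommend is a one-liner with scikit-learn: L1 shrinkage zeroes out uninformative candidate predictors, unlike stepwise selection's greedy add/drop. The data below are synthetic stand-ins for the dose/clinical predictors, not the xerostomia cohort.

```python
# LASSO variable selection: 20 candidate predictors, only 3 informative.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))
beta = np.zeros(20)
beta[:3] = [1.5, -1.0, 0.8]                  # only predictors 0..2 matter
y = X @ beta + 0.3 * rng.normal(size=120)

lasso = LassoCV(cv=5, random_state=0).fit(X, y)   # alpha chosen by CV
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print("selected predictors:", selected)
```

The surviving coefficients form a sparse linear model that is as readable as a stepwise fit, which is the interpretability advantage the abstract credits to LASSO over BMA.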

  1. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  2. Prediction of pKa Values for Druglike Molecules Using Semiempirical Quantum Chemical Methods.

    Science.gov (United States)

    Jensen, Jan H; Swain, Christopher J; Olsen, Lars

    2017-01-26

    Rapid yet accurate pKa prediction for druglike molecules is a key challenge in computational chemistry. This study uses PM6-DH+/COSMO, PM6/COSMO, PM7/COSMO, PM3/COSMO, AM1/COSMO, PM3/SMD, AM1/SMD, and DFTB3/SMD to predict the pKa values of 53 amine groups in 48 druglike compounds. The approach uses an isodesmic reaction where the pKa value is computed relative to a chemically related reference compound for which the pKa value has been measured experimentally or estimated using a standard empirical approach. The AM1- and PM3-based methods perform best with RMSE values of 1.4-1.6 pH units that have uncertainties of ±0.2-0.3 pH units, which make them statistically equivalent. However, for all but PM3/SMD and AM1/SMD the RMSEs are dominated by a single outlier, cefadroxil, caused by proton transfer in the zwitterionic protonation state. If this outlier is removed, the RMSE values for PM3/COSMO and AM1/COSMO drop to 1.0 ± 0.2 and 1.1 ± 0.3, whereas PM3/SMD and AM1/SMD remain at 1.5 ± 0.3 and 1.6 ± 0.3/0.4 pH units, making the COSMO-based predictions statistically better than the SMD-based predictions. For pKa calculations where a zwitterionic state is not involved or proton transfer in a zwitterionic state is not observed, PM3/COSMO or AM1/COSMO is the best pKa prediction method; otherwise PM3/SMD or AM1/SMD should be used. Thus, fast and relatively accurate pKa prediction for 100-1000s of druglike amines is feasible with the current setup and relatively modest computational resources.
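
The isodesmic bookkeeping behind such predictions is simple: for the proton exchange BH+ + B_ref -> B + B_refH+, pKa(target) = pKa(ref) + ΔG_exchange / (RT ln 10), with ΔG taken from the computed free energies. The energies below are made-up stand-ins, not semiempirical/COSMO results from the paper.

```python
# Isodesmic pKa estimate relative to a reference compound of known pKa.
import math

R, T = 1.987e-3, 298.15          # kcal/(mol K), K

def pka_isodesmic(pka_ref, G_BH, G_B, G_refH, G_ref):
    """All free energies in kcal/mol for the respective species."""
    dG = (G_B + G_refH) - (G_BH + G_ref)     # proton-exchange reaction
    return pka_ref + dG / (R * T * math.log(10))

# Hypothetical energies: exchange uphill by 1.36 kcal/mol -> ~1 pKa unit higher
pka = pka_isodesmic(pka_ref=9.25, G_BH=-120.0, G_B=-110.0,
                    G_refH=-108.64, G_ref=-100.0)
print(f"predicted pKa = {pka:.2f}")
```

Because systematic errors in the electronic-structure method largely cancel between the two chemically similar compounds, this relative scheme is far more accurate than computing an absolute deprotonation free energy.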

  3. Extremely Randomized Machine Learning Methods for Compound Activity Prediction

    Directory of Open Access Journals (Sweden)

    Wojciech M. Czarnecki

    2015-11-01

    Full Text Available Speed, a relatively low requirement for computational resources and high effectiveness in evaluating the bioactivity of compounds have caused a rapid growth of interest in the application of machine learning methods to virtual screening tasks. However, due to the growth in the amount of data in cheminformatics and related fields, the aim of research has shifted not only towards the development of algorithms of high predictive power but also towards the simplification of previously existing methods to obtain results more quickly. In this study, we tested two approaches belonging to the group of so-called 'extremely randomized methods', the Extreme Entropy Machine and Extremely Randomized Trees, for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their 'non-extreme' competitors, i.e., the Support Vector Machine and Random Forest. The extreme approaches were not only found to improve the efficiency of the classification of bioactive compounds, but also proved to be less computationally complex, requiring fewer steps to perform an optimization procedure.
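
Extremely Randomized Trees differ from Random Forests by drawing split thresholds at random instead of optimizing them, which is where the speed advantage comes from. Both are available in scikit-learn, so the comparison can be sketched directly on a synthetic activity-classification task (not the paper's screening data):

```python
# Random Forest vs. Extremely Randomized Trees: cross-validated accuracy + time.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1500, n_features=30, n_informative=10,
                           random_state=0)

results = {}
for Model in (RandomForestClassifier, ExtraTreesClassifier):
    t0 = time.perf_counter()
    acc = cross_val_score(Model(n_estimators=100, random_state=0),
                          X, y, cv=3).mean()
    results[Model.__name__] = acc
    print(f"{Model.__name__}: accuracy={acc:.3f}, "
          f"time={time.perf_counter() - t0:.2f}s")
```

On most tabular problems the two reach similar accuracy, with the extra-trees variant training faster because no threshold search is performed at each node.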

  4. Comparison and validation of statistical methods for predicting power outage durations in the event of hurricanes.

    Science.gov (United States)

    Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M

    2011-12-01

    This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning, and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard models (Cox PH)) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate additive regression splines). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.

  5. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. Study (Prediction) of Main Pipes Break Rates in Water Distribution Systems Using Intelligent and Regression Methods

    Directory of Open Access Journals (Sweden)

    Massoud Tabesh

    2011-07-01

    Full Text Available Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the issues of increasing efficiency and decreasing water losses. Key subjects in the optimal operational management of water distribution systems include preparing rehabilitation and replacement schemes, predicting pipe break rates, and evaluating network reliability. Several approaches to predicting pipe failure rates have been presented in recent years, each of which requires particular data sets. Deterministic age-based models, deterministic multivariate models, and stochastic group modeling are examples of solutions that relate pipe break rates to parameters such as age, material, and diameter. In this paper, in addition to the above parameters, further factors such as pipe depth and hydraulic pressure are considered. Pipe break rates are then predicted using a multivariate regression method, intelligent approaches (artificial neural network (ANN) and neuro-fuzzy models), and the Evolutionary Polynomial Regression (EPR) method. To evaluate the results of the different approaches, a case study is carried out in a part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods in predicting pipe break rates, in comparison with the neuro-fuzzy and multivariate regression methods.

  7. Predicting Taxi-Out Time at Congested Airports with Optimization-Based Support Vector Regression Methods

    Directory of Open Access Journals (Sweden)

    Guan Lian

    2018-01-01

    Full Text Available Accurate prediction of taxi-out time is a significant precondition for improving the operational efficiency of the departure process at an airport, as well as for reducing long taxi-out times, congestion, and excessive emission of greenhouse gases. Unfortunately, several of the traditional methods of predicting taxi-out time perform unsatisfactorily at congested airports. This paper describes and tests three of those conventional methods, the Generalized Linear Model, the Softmax Regression Model, and the Artificial Neural Network method, against two improved Support Vector Regression (SVR) approaches based on swarm-intelligence optimization: Particle Swarm Optimization (PSO) and the Firefly Algorithm. To improve the global searching ability of the Firefly Algorithm, an adaptive step factor and Lévy flight are implemented simultaneously when updating the location function. Six factors are analysed, of which delay is identified as one significant factor at congested airports. Through a series of specific dynamic analyses, a case study of Beijing International Airport (PEK) is tested with historical data. The performance measures show that the two proposed SVR approaches, especially the Improved Firefly Algorithm (IFA) optimization-based SVR method, not only achieve the best modelling accuracy compared with the representative forecast models, but also achieve better predictive performance when dealing with abnormal taxi-out time states.
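    The swarm-intelligence tuning idea can be shown with a bare-bones PSO. The objective below is a stand-in quadratic error surface with a known optimum, not an actual SVR cross-validation loss; in the paper's setting `f` would be the validation error of an SVR with parameters (C, γ), and all bounds and constants here are illustrative assumptions:

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iter=60,
                 w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimal particle swarm optimizer over box-constrained parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull to pbest + social pull to gbest
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in error surface with a known minimum at C=10, gamma=0.1 (hypothetical)
err = lambda x: (x[0] - 10.0) ** 2 + 100.0 * (x[1] - 0.1) ** 2
best, val = pso_minimize(err, bounds=[(0.1, 100.0), (0.001, 1.0)])
print(best, val)
```

    The Firefly Algorithm variant in the paper differs in its attraction rule and in the adaptive step/Lévy-flight modifications, but plugs into the same tune-then-train loop.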

  8. Handling imbalance data in churn prediction using combined SMOTE and RUS with bagging method

    Science.gov (United States)

    Pura Hartati, Eka; Adiwijaya; Arif Bijaksana, Moch

    2018-03-01

    Customer churn has become a significant problem, and also a challenge, for telecommunication companies such as PT. Telkom Indonesia. It is necessary to evaluate the severity of the churn problem so that the company's management can devise appropriate strategies to minimize churn and retain customers. The customer churn data categorized as churn Atas Permintaan Sendiri (APS) in this company are imbalanced, and class imbalance is one of the challenging tasks in machine learning. This study investigates how handling class imbalance in churn prediction with a combination of the Synthetic Minority Over-sampling Technique (SMOTE) and Random Under-Sampling (RUS), together with the Bagging method, improves churn prediction performance. The dataset used is broadband Internet data collected from Telkom Regional 6 Kalimantan. The research first applies data preprocessing to balance the imbalanced dataset and to select features via the SMOTE and RUS sampling techniques, and then builds the churn prediction model using the Bagging method and C4.5.
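    The SMOTE-plus-RUS preprocessing can be sketched in a few lines: SMOTE synthesizes new minority points by interpolating between a minority sample and one of its nearest minority neighbours, while RUS discards random majority points. All data below are invented stand-ins for the broadband-customer feature vectors:

```python
import random

def smote(minority, n_new, k=3, seed=1):
    """Minimal SMOTE: synthesize points on the line segment between a
    minority sample and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist(x, m))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation fraction in [0, 1)
        synthetic.append(tuple(xi + gap * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic

def random_undersample(majority, n_keep, seed=1):
    """Random under-sampling (RUS) of the majority class."""
    return random.Random(seed).sample(majority, n_keep)

churn = [(1.0 + 0.1 * i, 2.0) for i in range(10)]     # minority (churn) class
stay = [(5.0 + 0.05 * i, 1.0) for i in range(100)]    # majority class
balanced_minority = churn + smote(churn, n_new=40)
balanced_majority = random_undersample(stay, n_keep=50)
print(len(balanced_minority), len(balanced_majority))  # 50 50
```

    The balanced set would then be fed to the bagged C4.5 ensemble described in the abstract.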

  9. Evaluation of the constant pressure panel method (CPM) for unsteady air loads prediction

    Science.gov (United States)

    Appa, Kari; Smith, Michael J. C.

    1988-01-01

    This paper evaluates the capability of the constant pressure panel method (CPM) code to predict unsteady aerodynamic pressures, lift and moment distributions, and generalized forces for general wing-body configurations in supersonic flow. Stability derivatives are computed and correlated for the X-29 and an Oblique Wing Research Aircraft, and a flutter analysis is carried out for a wing wind tunnel test example. Most results are shown to correlate well with test or published data. Although the emphasis of this paper is on evaluation, an improvement in the CPM code's handling of intersecting lifting surfaces is briefly discussed. An attractive feature of the CPM code is that it shares the basic data requirements and computational arrangements of the doublet lattice method. A unified code to predict unsteady subsonic or supersonic airloads is therefore possible.

  10. Yeast prions and human prion-like proteins: sequence features and prediction methods.

    Science.gov (United States)

    Cascarina, Sean M; Ross, Eric D

    2014-06-01

    Prions are self-propagating infectious protein isoforms. A growing number of prions have been identified in yeast, each resulting from the conversion of soluble proteins into an insoluble amyloid form. These yeast prions have served as a powerful model system for studying the causes and consequences of prion aggregation. Remarkably, a number of human proteins containing prion-like domains, defined as domains with compositional similarity to yeast prion domains, have recently been linked to various human degenerative diseases, including amyotrophic lateral sclerosis. This suggests that the lessons learned from yeast prions may help in understanding these human diseases. In this review, we examine what has been learned about the amino acid sequence basis for prion aggregation in yeast, and how this information has been used to develop methods to predict aggregation propensity. We then discuss how this information is being applied to understand human disease, and the challenges involved in applying yeast prediction methods to higher organisms.

  11. Prediction method for cavitation erosion based on measurement of bubble collapse impact loads

    International Nuclear Information System (INIS)

    Hattori, S; Hirose, T; Sugiyama, K

    2009-01-01

    The prediction of cavitation erosion rates is important in order to evaluate the exact life of components. The measurement of impact loads in bubble collapses helps to predict the life under cavitation erosion. In this study, we carried out erosion tests and measurements of impact loads in bubble collapses with a vibratory apparatus. We evaluated the incubation period based on a cumulative damage rule by measuring the impact loads of cavitation acting on the specimen surface and by using the 'constant impact load - number of impact loads curve', similar to the modified Miner's rule which is employed for fatigue life prediction. We found that the parameter Σ(Fᵢ^α × nᵢ) (Fᵢ: impact load, nᵢ: number of impacts, α: constant) is suitable for the evaluation of the erosion life. Moreover, we propose a new method that can predict the incubation period under various cavitation conditions.
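    The damage parameter Σ(Fᵢ^α × nᵢ) is straightforward to compute from a measured impact-load histogram. A sketch with a hypothetical histogram and an assumed exponent α:

```python
def damage_parameter(impacts, alpha):
    """Cumulative-damage parameter sum(F_i**alpha * n_i), where F_i is an
    impact-load level and n_i the number of impacts at that level."""
    return sum(F ** alpha * n for F, n in impacts)

# Hypothetical measured impact-load histogram: (load in N, count)
histogram = [(2.0, 1000), (5.0, 200), (10.0, 20)]
alpha = 2.0  # assumed material constant
D = damage_parameter(histogram, alpha)
print(D)  # 2**2*1000 + 5**2*200 + 10**2*20 = 11000.0
```

    In the paper's scheme, the incubation period follows from comparing this accumulated parameter against a material-specific threshold, in analogy with Miner's rule for fatigue.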

  12. A Bayesian least-squares support vector machine method for predicting the remaining useful life of a microwave component

    Directory of Open Access Journals (Sweden)

    Fuqiang Sun

    2017-01-01

    Full Text Available Rapid and accurate lifetime prediction of critical components in a system is important for maintaining the system's reliable operation. To this end, many lifetime prediction methods have been developed to handle various failure-related data collected in different situations. Among these methods, machine learning and Bayesian updating are the most popular ones. In this article, a Bayesian least-squares support vector machine method that combines the least-squares support vector machine with Bayesian inference is developed for predicting the remaining useful life of a microwave component. A degradation model describing the change in the component's power gain over time is developed, and point and interval remaining useful life estimates are obtained with respect to a predefined failure threshold. In our case study, the radial basis function neural network approach is also implemented for comparison purposes. The results indicate that the Bayesian least-squares support vector machine method is more precise and stable in predicting the remaining useful life of this type of component.
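    The threshold-crossing idea behind the remaining-useful-life (RUL) estimate can be sketched with a plain least-squares line standing in for the paper's Bayesian LS-SVM degradation model; all measurements and the failure threshold below are invented:

```python
def fit_line(ts, ys):
    """Ordinary least-squares fit y = a + b*t, a simple stand-in for the
    paper's Bayesian least-squares SVM degradation model."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    b = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
        sum((t - tbar) ** 2 for t in ts)
    return ybar - b * tbar, b

# Hypothetical power-gain measurements (dB) drifting downward over time (hours)
ts = [0, 100, 200, 300, 400]
ys = [30.0, 29.5, 29.1, 28.4, 28.0]
a, b = fit_line(ts, ys)

threshold = 25.0                  # assumed failure threshold (dB)
t_fail = (threshold - a) / b      # time at which the fit crosses the threshold
rul = t_fail - ts[-1]             # remaining useful life from the last sample
print(round(t_fail), round(rul))
```

    The Bayesian treatment in the paper additionally yields an interval around this point estimate, which a deterministic fit cannot provide.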

  13. A standardized method for the calibration of thermodynamic data for the prediction of gas chromatographic retention times.

    Science.gov (United States)

    McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J

    2014-02-21

    A new method for calibrating thermodynamic data to be used in the prediction of analyte retention times is presented. The method allows thermodynamic data collected on one column to be used in making predictions across columns of the same stationary phase but with varying geometries. This calibration is essential as slight variances in the column inner diameter and stationary phase film thickness between columns or as a column ages will adversely affect the accuracy of predictions. The calibration technique uses a Grob standard mixture along with a Nelder-Mead simplex algorithm and a previously developed model of GC retention times based on a three-parameter thermodynamic model to estimate both inner diameter and stationary phase film thickness. The calibration method is highly successful with the predicted retention times for a set of alkanes, ketones and alcohols having an average error of 1.6s across three columns. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Incremental-hinge piping analysis methods for inelastic seismic response prediction

    International Nuclear Information System (INIS)

    Jaquay, K.R.; Castle, W.R.; Larson, J.E.

    1989-01-01

    This paper proposes nonlinear seismic response prediction methods for nuclear piping systems based on simplified plastic hinge analyses. The simplified plastic hinge analyses utilize an incremental series of flat response spectrum loadings and replace yielded components with hinge elements when a predefined hinge moment is reached. These hinge moment values, developed by Rodabaugh, result in inelastic energy dissipation of the same magnitude as observed in seismic tests of piping components. Two definitions of design level equivalent loads are employed: one conservatively based on the peaks of the design acceleration response spectra, the other based on inelastic frequencies determined by the method of Krylov and Bogolyuboff recently extended by Lazzeri to piping. Both definitions account for piping system inelastic energy dissipation using Newmark-Hall inelastic response spectrum reduction factors and the displacement ductility results of the incremental-hinge analysis. Two ratchet-fatigue damage models are used: one developed by Rodabaugh that conservatively correlates Markl static fatigue expressions to seismic tests to failure of piping components; the other developed by Severud that uses the ratchet expression of Bree for elbows and Edmunds and Beer for straights, and defines ratchet-fatigue interaction using Coffin's ductility based fatigue equation. Comparisons of predicted behavior versus experimental results are provided for a high-level seismic test of a segment of a representative nuclear plant piping system. (orig.)

  15. Insights from triangulation of two purchase choice elicitation methods to predict social decision making in healthcare.

    Science.gov (United States)

    Whitty, Jennifer A; Rundle-Thiele, Sharyn R; Scuffham, Paul A

    2012-03-01

    Discrete choice experiments (DCEs) and the Juster scale are accepted methods for the prediction of individual purchase probabilities. Nevertheless, these methods have seldom been applied to a social decision-making context. To gain an overview of social decisions for a decision-making population through data triangulation, these two methods were used to understand purchase probability in a social decision-making context. We report an exploratory social decision-making study of pharmaceutical subsidy in Australia. A DCE and selected Juster scale profiles were presented to current and past members of the Australian Pharmaceutical Benefits Advisory Committee and its Economic Subcommittee. Across 66 observations derived from 11 respondents for 6 different pharmaceutical profiles, there was a small overall median difference of 0.024 in the predicted probability of public subsidy (p = 0.003), with the Juster scale predicting the higher likelihood. While consistency was observed at the extremes of the probability scale, the funding probability differed over the mid-range of profiles. There was larger variability in the DCE than Juster predictions within each individual respondent, suggesting the DCE is better able to discriminate between profiles. However, large variation was observed between individuals in the Juster scale but not DCE predictions. It is important to use multiple methods to obtain a complete picture of the probability of purchase or public subsidy in a social decision-making context until further research can elaborate on our findings. This exploratory analysis supports the suggestion that the mixed logit model, which was used for the DCE analysis, may fail to adequately account for preference heterogeneity in some contexts.

  16. An improved method for predicting the evolution of the characteristic parameters of an information system

    Science.gov (United States)

    Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.

    2018-03-01

    The article proposes a forecasting method that allows, given values of entropy and of the error levels of the first and second kind, determination of the allowable horizon for forecasting the development of the characteristic parameters of a complex information system. The main feature of the method is that changes in the characteristic parameters of the information system's development are expressed as increments in its entropy ratios. When a predetermined value of the prediction error ratio, that is, of the system entropy, is reached, the characteristic parameters of the system and the prediction depth in time are estimated. The resulting values of the characteristics will be optimal, since at that moment the system possesses the best entropy ratio as a measure of the degree of organization and orderliness of its structure. To construct a method for estimating the prediction depth, it is expedient to use the principle of maximum entropy.

  17. FUN-LDA: A Latent Dirichlet Allocation Model for Predicting Tissue-Specific Functional Effects of Noncoding Variation: Methods and Applications.

    Science.gov (United States)

    Backenroth, Daniel; He, Zihuai; Kiryluk, Krzysztof; Boeva, Valentina; Pethukova, Lynn; Khurana, Ekta; Christiano, Angela; Buxbaum, Joseph D; Ionita-Laza, Iuliana

    2018-05-03

    We describe a method based on a latent Dirichlet allocation model for predicting functional effects of noncoding genetic variants in a cell-type- and/or tissue-specific way (FUN-LDA). Using this unsupervised approach, we predict tissue-specific functional effects for every position in the human genome in 127 different tissues and cell types. We demonstrate the usefulness of our predictions by using several validation experiments. Using eQTL data from several sources, including the GTEx project, Geuvadis project, and TwinsUK cohort, we show that eQTLs in specific tissues tend to be most enriched among the predicted functional variants in relevant tissues in Roadmap. We further show how these integrated functional scores can be used for (1) deriving the most likely cell or tissue type causally implicated for a complex trait by using summary statistics from genome-wide association studies and (2) estimating a tissue-based correlation matrix of various complex traits. We found large enrichment of heritability in functional components of relevant tissues for various complex traits, and FUN-LDA yielded higher enrichment estimates than existing methods. Finally, using experimentally validated functional variants from the literature and variants possibly implicated in disease by previous studies, we rigorously compare FUN-LDA with state-of-the-art functional annotation methods and show that FUN-LDA has better prediction accuracy and higher resolution than these methods. In particular, our results suggest that tissue- and cell-type-specific functional prediction methods tend to have substantially better prediction accuracy than organism-level prediction methods. Scores for each position in the human genome and for each ENCODE and Roadmap tissue are available online (see Web Resources). Copyright © 2018 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  18. Usefulness of Two-Compartment Model-Assisted and Static Overall Inhibitory-Activity Method for Prediction of Drug-Drug Interaction.

    Science.gov (United States)

    Iga, Katsumi; Kiriyama, Akiko

    2017-12-01

    Our study of drug-drug interaction (DDI) started with the clarification of the unusually large DDI observed between ramelteon (RAM) and fluvoxamine (FLV). The main cause of this DDI was shown to be the extremely small hepatic availability of RAM (vF_h). Traditional DDI prediction assuming well-stirred hepatic extraction kinetics ignores the relative increase of vF_h caused by DDI; we solved this problem by using the tube model. Ultimately, we completed a simple and useful method for the prediction of DDI. Currently, DDI prediction becomes more complex and difficult when examining issues such as dynamic changes in perpetrator level, inhibitory metabolites, etc. The regulatory agencies recommend DDI prediction by use of sophisticated methods. However, these seem problematic in requiring plural in vitro data that reduce the flexibility and accuracy of the simulation. In contrast, our method is based on static and two-compartment models. The two-compartment model has the advantage of using common pharmacokinetic (PK) parameters determined from actual clinical data, guaranteeing the simulation of the reference standard in DDI. Our studies confirmed that dynamic changes in perpetrator level do not make a difference between static and dynamic methods. DDIs perpetrated by FLV and itraconazole were successfully predicted by use of the present method, in which two DDI predictors [perpetrator-specific inhibitory activities toward CYP isoforms (pA_i,CYPs) and victim-specific fractional CYP-isoform contributions to the clearance (vf_m,CYPs)] are determined successively as shown in the graphical abstract. Accordingly, this approach will accelerate DDI prediction over the traditional methods.
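    The interplay of victim-specific fractional CYP contributions and perpetrator-specific inhibitory activities can be illustrated with the textbook static DDI equation. This is a generic illustration, not the paper's exact two-compartment formulation, and all fm and activity values are hypothetical:

```python
def auc_ratio(fm_cyp, activity):
    """Textbook static DDI equation: AUC ratio of the victim drug given the
    victim-specific fractional CYP contributions fm_cyp and the fractional
    CYP activity remaining in the presence of the perpetrator."""
    remaining = sum(fm * act for fm, act in zip(fm_cyp, activity))
    other = 1.0 - sum(fm_cyp)  # clearance not mediated by the inhibited CYPs
    return 1.0 / (remaining + other)

# Hypothetical victim: 90% cleared by CYP1A2, 5% by CYP2C9.
fm = [0.90, 0.05]
# Hypothetical FLV-like perpetrator: CYP1A2 activity reduced to 10%,
# CYP2C9 activity reduced to 50%.
activity = [0.10, 0.50]
print(round(auc_ratio(fm, activity), 2))  # predicted fold-increase in exposure
```

    The large predicted AUC ratio when fm for the inhibited isoform approaches 1 mirrors the RAM/FLV case discussed above, where a very small hepatic availability amplifies the interaction.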

  19. Prediction of methylmercury accumulation in rice grains by chemical extraction methods

    International Nuclear Information System (INIS)

    Zhu, Dai-Wen; Zhong, Huan; Zeng, Qi-Long; Yin, Ying

    2015-01-01

    To explore the possibility of using chemical extraction methods to predict the phytoavailability/bioaccumulation of soil-bound MeHg, MeHg extractions by three widely-used extractants (CaCl₂, DTPA, and (NH₄)₂S₂O₃) were compared with MeHg accumulation in rice grains. Despite variations in the characteristics of the different soils, MeHg extracted by (NH₄)₂S₂O₃ (highly affinitive to MeHg) correlated well with grain MeHg levels. Thus (NH₄)₂S₂O₃ extraction, solubilizing not only weakly-bound but also strongly-bound MeHg, may provide a measure of the 'phytoavailable MeHg pool' for rice plants. A better prediction of grain MeHg levels was obtained when the growing condition of the rice plants was also considered. However, MeHg extracted by CaCl₂ or DTPA, possibly quantifying the 'exchangeable MeHg pool' or 'weakly-complexed MeHg pool' in soils, may not indicate phytoavailable MeHg or predict grain MeHg levels. Our results demonstrate the possibility of predicting MeHg phytoavailability/bioaccumulation by (NH₄)₂S₂O₃ extraction, which could be useful in screening soils for rice cultivation in contaminated areas. - Highlights: • MeHg extraction by (NH₄)₂S₂O₃ correlates well with its accumulation in rice grains. • MeHg extraction by (NH₄)₂S₂O₃ provides a measure of phytoavailable MeHg in soils. • Some strongly-bound MeHg can be desorbed from soils and become available to rice plants. • MeHg extraction by CaCl₂ or DTPA could not predict grain MeHg levels. - Methylmercury extraction from soils by (NH₄)₂S₂O₃ could possibly be used for predicting methylmercury phytoavailability and its bioaccumulation in rice grains
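    Operationally, the extractant comparison comes down to correlating extracted MeHg with grain MeHg across soils. A sketch with invented concentrations for two extractants (the real study used measured soil and grain data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical soils: MeHg extracted by each method (ng/g) vs. grain MeHg
grain       = [1.2, 2.5, 3.1, 4.8, 6.0]
thiosulfate = [0.9, 2.1, 2.8, 4.2, 5.5]   # tracks grain levels closely
cacl2       = [0.5, 0.4, 1.9, 0.8, 1.1]   # weakly-bound pool only
print(round(pearson_r(grain, thiosulfate), 2),
      round(pearson_r(grain, cacl2), 2))
```

    A high correlation for the thiosulfate-like extractant and a low one for the CaCl₂-like extractant is the pattern the abstract reports.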

  20. Data-driven modeling and predictive control for boiler-turbine unit using fuzzy clustering and subspace methods.

    Science.gov (United States)

    Wu, Xiao; Shen, Jiong; Li, Yiguo; Lee, Kwang Y

    2014-05-01

    This paper develops a novel data-driven fuzzy modeling strategy and predictive controller for boiler-turbine unit using fuzzy clustering and subspace identification (SID) methods. To deal with the nonlinear behavior of boiler-turbine unit, fuzzy clustering is used to provide an appropriate division of the operation region and develop the structure of the fuzzy model. Then by combining the input data with the corresponding fuzzy membership functions, the SID method is extended to extract the local state-space model parameters. Owing to the advantages of the both methods, the resulting fuzzy model can represent the boiler-turbine unit very closely, and a fuzzy model predictive controller is designed based on this model. As an alternative approach, a direct data-driven fuzzy predictive control is also developed following the same clustering and subspace methods, where intermediate subspace matrices developed during the identification procedure are utilized directly as the predictor. Simulation results show the advantages and effectiveness of the proposed approach. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  1. A hybrid method of prediction of the void fraction during depressurization of diabatic systems

    International Nuclear Information System (INIS)

    Inayatullah, G.; Nicoll, W.B.; Hancox, W.T.

    1977-01-01

    The variation in vapour volumetric fraction during transient pressure, flow and power is of considerable importance in water-cooled nuclear power-reactor safety analysis. The commonly adopted procedure to predict the transient void is to solve the conservation equations using finite differences. This present method is intermediate between numerical and analytic, hence 'hybrid'. Space and time are divided into discrete intervals. Their size, however, is dictated by the imposed heat flux and pressure variations, and not by truncation error, stability or convergence, because within an interval, the solutions applied are analytic. The relatively simple hybrid method presented here can predict the void distribution in a variety of transient, diabatic, two-phase flows with simplicity, accuracy and speed. (Auth.)

  2. Method of predicting Splice Sites based on signal interactions

    Directory of Open Access Journals (Sweden)

    Deogun Jitender S

    2006-04-01

    Full Text Available Abstract Background Predicting and properly ranking canonical splice sites (SSs) is a challenging problem in the bioinformatics and machine learning communities. Any progress in SS recognition will lead to a better understanding of the splicing mechanism. We introduce several new approaches to combining a priori knowledge for improved SS detection. First, we design a new Bayesian SS sensor based on oligonucleotide counting. To further enhance prediction quality, we applied our new de novo motif detection tool MHMMotif to intronic ends and exons. We combine the elements found with sensor information using a Naive Bayesian Network, as implemented in our new tool SpliceScan. Results According to our tests, the Bayesian sensor outperforms the contemporary Maximum Entropy sensor for 5' SS detection. We report a number of putative Exonic (ESE) and Intronic (ISE) Splicing Enhancers found by the MHMMotif tool. T-test statistics on mouse/rat intronic alignments indicate that the detected elements are on average more conserved than other oligos, which supports our assumption of their functional importance. The tool has been shown to outperform the SpliceView, GeneSplicer, NNSplice, Genio and NetUTR tools for the test set of human genes. SpliceScan outperforms all contemporary ab initio gene structure prediction tools on the set of 5' UTR gene fragments. Conclusion The designed methods have many attractive properties compared to existing approaches. The Bayesian sensor, MHMMotif program and SpliceScan tools are freely available on our web site. Reviewers: This article was reviewed by Manyuan Long, Arcady Mushegian and Mikhail Gelfand.
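    A sensor "based on oligonucleotide counting" can be approximated by position-specific nucleotide counts turned into a log-odds score against a uniform background. This is a toy sketch with an invented four-sequence training set; the paper's actual Bayesian sensor and SpliceScan pipeline are considerably more elaborate:

```python
import math

def train_pwm(seqs, pseudocount=0.5):
    """Position-specific nucleotide frequencies from aligned site sequences
    (the counting step behind a simple Bayesian splice-site sensor)."""
    length = len(seqs[0])
    pwm = []
    for i in range(length):
        counts = {b: pseudocount for b in "ACGT"}
        for s in seqs:
            counts[s[i]] += 1
        total = sum(counts.values())
        pwm.append({b: c / total for b, c in counts.items()})
    return pwm

def log_odds(seq, pwm, background=0.25):
    """Log-likelihood ratio of 'true splice site' vs. uniform background."""
    return sum(math.log(pwm[i][b] / background) for i, b in enumerate(seq))

# Toy 5' donor-site-like training set (GT consensus at positions 3-4)
donors = ["AAGGTAAG", "CAGGTGAG", "AAGGTAAG", "CAGGTAAG"]
pwm = train_pwm(donors)
site_score = log_odds("AAGGTAAG", pwm)  # consensus-like candidate
bg_score = log_odds("TTTTCCCC", pwm)    # non-site candidate
print(site_score > bg_score)  # True
```

    Candidate sites can then be ranked by this score, which is the role the Bayesian sensor plays before SpliceScan combines it with enhancer-element evidence.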

  3. A prediction model of drug-induced ototoxicity developed by an optimal support vector machine (SVM) method.

    Science.gov (United States)

    Zhou, Shu; Li, Guo-Bo; Huang, Lu-Yi; Xie, Huan-Zhang; Zhao, Ying-Lan; Chen, Yu-Zong; Li, Lin-Li; Yang, Sheng-Yong

    2014-08-01

    Drug-induced ototoxicity, as a toxic side effect, is an important issue that needs to be considered in drug discovery. Nevertheless, current experimental methods used to evaluate drug-induced ototoxicity are often time-consuming and expensive, indicating that they are not suitable for a large-scale evaluation of drug-induced ototoxicity in the early stage of drug discovery. We thus, in this investigation, established an effective computational prediction model of drug-induced ototoxicity using an optimal support vector machine (SVM) method, GA-CG-SVM. Three GA-CG-SVM models were developed based on three training sets containing agents bearing different risk levels of drug-induced ototoxicity. For comparison, models based on naïve Bayesian (NB) and recursive partitioning (RP) methods were also built on the same training sets. Among all the prediction models, the GA-CG-SVM model II showed the best performance, offering prediction accuracies of 85.33% and 83.05% for two independent test sets, respectively. Overall, the good performance of the GA-CG-SVM model II indicates that it could be used for the prediction of drug-induced ototoxicity in the early stage of drug discovery. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Relative proportions of polycyclic aromatic hydrocarbons differ between accumulation bioassays and chemical methods to predict bioavailability

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Eyles, Jose L., E-mail: j.l.gomezeyles@reading.ac.u [University of Reading, School of Human and Environmental Sciences, Department of Soil Science, Reading RG6 6DW, Berkshire (United Kingdom); Collins, Chris D.; Hodson, Mark E. [University of Reading, School of Human and Environmental Sciences, Department of Soil Science, Reading RG6 6DW, Berkshire (United Kingdom)

    2010-01-15

    Chemical methods to predict the bioavailable fraction of organic contaminants are usually validated in the literature by comparison with established bioassays. A soil spiked with polycyclic aromatic hydrocarbons (PAHs) was aged over six months and subjected to butanol, cyclodextrin and Tenax extractions, as well as an exhaustive extraction to determine total PAH concentrations, at several time points. Earthworm (Eisenia fetida) and rye grass root (Lolium multiflorum) accumulation bioassays were conducted in parallel. Butanol extractions gave the best relationship with earthworm accumulation (r² ≤ 0.54, p ≤ 0.01); cyclodextrin, butanol and acetone-hexane extractions all gave good predictions of accumulation in rye grass roots (r² ≤ 0.86, p ≤ 0.01). However, the profile of the PAHs extracted by the different chemical methods was significantly different (p < 0.01) from that accumulated in the organisms. Biota accumulated a higher proportion of the heavier 4-ringed PAHs. It is concluded that bioaccumulation is a complex process that cannot be predicted by measuring the bioavailable fraction alone. - The ability of chemical methods to predict PAH accumulation in Eisenia fetida and Lolium multiflorum was hindered by the varied metabolic fate of the different PAHs within the organisms.

  5. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
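    The Ogata-Banks solution adopted above as the physically-based data model has a closed form that is easy to evaluate. A sketch with hypothetical transport parameters (velocity, dispersion coefficient, and monitoring distance are invented for illustration):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks analytic solution of the 1-D advection-dispersion equation
    for steady flow with a constant-concentration inlet:
    C/c0 = 1/2 [erfc((x-vt)/(2*sqrt(D*t)))
               + exp(v*x/D) * erfc((x+vt)/(2*sqrt(D*t)))]."""
    s = 2.0 * math.sqrt(D * t)
    return 0.5 * c0 * (math.erfc((x - v * t) / s)
                       + math.exp(v * x / D) * math.erfc((x + v * t) / s))

# Hypothetical monitoring point 5 m downstream; v = 0.1 m/d, D = 0.5 m^2/d
early = ogata_banks(x=5.0, t=1.0, v=0.1, D=0.5)     # plume not yet arrived
late = ogata_banks(x=5.0, t=1000.0, v=0.1, D=0.5)   # plume fully broken through
print(early < 0.01, 0.99 < late <= 1.0)
```

    Fitting this curve to the CO2 monitoring series (and averaging an ensemble of such fits) is the essence of the prediction scheme the abstract describes.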

  6. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. A new wind power prediction method based on chaotic theory and Bernstein Neural Network

    International Nuclear Information System (INIS)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Fan, Xiaochao

    2016-01-01

    The accuracy of wind power prediction is important for assessing the security and economy of system operation when wind power is connected to the grid. However, multiple factors cause long delays and large errors in wind power prediction, so efficient forecasting approaches are still required for practical applications. In this paper, a new wind power forecasting method based on chaos theory and a Bernstein Neural Network (BNN) is proposed. Firstly, the largest Lyapunov exponent is computed to judge the chaotic behavior of the wind power system. Secondly, Phase Space Reconstruction (PSR) is used to reconstruct the phase space of the wind power series. Thirdly, the prediction model is constructed using a Bernstein polynomial and a neural network. Finally, the weights and thresholds of the model are optimized by the Primal Dual State Transition Algorithm (PDSTA). Practical hourly wind power generation data from Xinjiang are used to test this forecaster, and it is compared with several prominent published approaches. Analytical results indicate that the forecasting error of PDSTA + BNN is 3.893% for 24 look-ahead hours, lower than that of the other forecast methods discussed in this paper. The results of all case studies confirm the validity of the new forecast method. - Highlights: • The largest Lyapunov exponent is used to verify chaotic behavior of the wind power series. • Phase Space Reconstruction is used to reconstruct the chaotic wind power series. • A new Bernstein Neural Network to predict wind power series is proposed. • The primal dual state transition algorithm is chosen as the training strategy of BNN.
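The chaos test in the first step relies on a positive largest Lyapunov exponent. As a hedged illustration, on the logistic map (where the exponent is known analytically) rather than on wind data, the exponent can be estimated by averaging the log of the local stretching rate along an orbit:

```python
import math

def largest_lyapunov_logistic(r, x0=0.3, n_transient=100, n_iter=20000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) as the orbit average of ln|f'(x)| = ln|r*(1-2x)|.
    A positive result indicates chaotic behavior."""
    x = x0
    for _ in range(n_transient):           # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        d = abs(r * (1.0 - 2.0 * x))
        total += math.log(max(d, 1e-300))  # guard against log(0)
        x = r * x * (1.0 - x)
    return total / n_iter
```

For r = 4 the exact value is ln 2 ≈ 0.693, so a positive estimate near that value confirms chaotic behavior for this map.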

  8. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

    2012-01-01

    PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator

  9. Large-scale binding ligand prediction by improved patch-based method Patch-Surfer2.0.

    Science.gov (United States)

    Zhu, Xiaolei; Xiong, Yi; Kihara, Daisuke

    2015-03-01

    Ligand binding is a key aspect of the function of many proteins. Thus, binding ligand prediction provides important insight into the biological function of proteins and is also useful for drug design and for examining potential drug side effects. We present a computational method named Patch-Surfer2.0, which predicts binding ligands for a protein pocket. By representing and comparing pockets at the level of small local surface patches that characterize physicochemical properties of the local regions, the method can identify binding pockets of the same ligand even if they do not share globally similar shapes. Properties of local patches are represented by an efficient mathematical representation, the 3D Zernike Descriptor. Patch-Surfer2.0 has significant technical improvements over our previous prototype, including a new feature that captures approximate patch position with a geodesic distance histogram. Moreover, we constructed a large comprehensive database of ligand binding pockets that is searched with a query pocket. Benchmarks show better performance of Patch-Surfer2.0 over existing methods. Availability: http://kiharalab.org/patchsurfer2.0/ Contact: dkihara@purdue.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Method and timing of tumor volume measurement for outcome prediction in cervical cancer using magnetic resonance imaging

    International Nuclear Information System (INIS)

    Mayr, Nina A.; Taoka, Toshiaki; Yuh, William T.C.; Denning, Leah M.; Zhen, Weining K.; Paulino, Arnold C.; Gaston, Robert C.; Sorosky, Joel I.; Meeks, Sanford L.; Walker, Joan L.; Mannel, Robert S.; Buatti, John M.

    2002-01-01

    Purpose: Recently, imaging-based tumor volume before, during, and after radiation therapy (RT) has been shown to predict tumor response in cervical cancer. However, the effectiveness of different methods and timing of imaging-based tumor size assessment have not been investigated. The purpose of this study was to compare the predictive value for treatment outcome derived from simple diameter-based ellipsoid tumor volume measurement using orthogonal diameters (with ellipsoid computation) with that derived from more complex contour tracing/region-of-interest (ROI) 3D tumor volumetry. Methods and Materials: Serial magnetic resonance imaging (MRI) examinations were prospectively performed in 60 patients with advanced cervical cancer (Stages IB2-IVB/recurrent) at the start of RT, during early RT (20-25 Gy), mid-RT (45-50 Gy), and at follow-up (1-2 months after RT completion). ROI-based volumetry was derived by tracing the entire tumor region in each MR slice on a computer workstation. For the diameter-based surrogate "ellipsoid volume," the three orthogonal diameters (d1, d2, d3) were measured on film hard copies to calculate volume as an ellipsoid (d1 × d2 × d3 × π/6). Serial tumor volumes and regression rates determined by each method were correlated with local control and disease-free and overall survival, and the results were compared between the two measuring methods. Median post-therapy follow-up was 4.9 years (range, 2.0-8.2 years). Results: The best method and time point of tumor size measurement for the prediction of outcome was the tumor regression rate in the mid-therapy MRI examination (at 45-50 Gy) using 3D ROI volumetry. For the pre-RT measurement both the diameter-based method and ROI volumetry provided similar predictive accuracy, particularly for patients with small (<40 cm³) and large (≥100 cm³) pre-RT tumor size. However, the pre-RT tumor size measured by either method had much less predictive value for the intermediate-size (40
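The diameter-based surrogate volume described in Methods is a direct transcription of the ellipsoid formula in the abstract (only the function name is ours):

```python
import math

def ellipsoid_volume(d1, d2, d3):
    """Surrogate tumor volume from three orthogonal diameters,
    treating the tumor as an ellipsoid: V = d1 * d2 * d3 * pi / 6."""
    return d1 * d2 * d3 * math.pi / 6.0
```

For equal diameters this reduces to the sphere volume; for example, d = 2 cm gives 4π/3 ≈ 4.19 cm³.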

  11. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

    The prediction and risk classification of collapse is an important issue in highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the influence indexes of collapse activity and extracted the nine main indexes affecting collapse activity as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employ post-earthquake collapse data from construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for analysis. The results were analyzed using the back substitution estimation method, showing high accuracy and no errors, and were consistent with the prediction results of the uncertainty measure method. Results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
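Distance discriminant analysis assigns a sample to the group whose Mahalanobis distance is smallest. A minimal two-dimensional sketch (the paper uses nine indexes; the two-feature toy groups here are illustrative only):

```python
def mean2(points):
    n = len(points)
    return [sum(p[0] for p in points) / n, sum(p[1] for p in points) / n]

def cov2(points, m):
    """2x2 sample covariance of 2-D points around mean m."""
    n = len(points)
    sxx = sum((p[0] - m[0]) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - m[1]) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - m[0]) * (p[1] - m[1]) for p in points) / (n - 1)
    return [[sxx, sxy], [sxy, syy]]

def inv2(S):
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]

def mahalanobis_sq(x, m, Sinv):
    """Squared Mahalanobis distance (x - m)' * Sinv * (x - m)."""
    d = [x[0] - m[0], x[1] - m[1]]
    return (d[0] * (Sinv[0][0] * d[0] + Sinv[0][1] * d[1])
            + d[1] * (Sinv[1][0] * d[0] + Sinv[1][1] * d[1]))

def classify(x, groups):
    """Assign x to the group with the smallest Mahalanobis distance."""
    best, best_d = None, float("inf")
    for name, points in groups.items():
        m = mean2(points)
        d = mahalanobis_sq(x, m, inv2(cov2(points, m)))
        if d < best_d:
            best, best_d = name, d
    return best
```

Unlike Euclidean distance, the Mahalanobis distance accounts for the spread and correlation of each group's training samples.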

  12. SU-D-BRB-01: A Comparison of Learning Methods for Knowledge Based Dose Prediction for Coplanar and Non-Coplanar Liver Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Tran, A; Ruan, D; Woods, K; Yu, V; Nguyen, D; Sheng, K [UCLA School of Medicine, Los Angeles, CA (United States)

    2016-06-15

    Purpose: The predictive power of knowledge based planning (KBP) has considerable potential in the development of automated treatment planning. Here, we examine the predictive capabilities and accuracy of previously reported KBP methods, as well as an artificial neural network (ANN) method. Furthermore, we compare the predictive accuracy of these methods on coplanar volumetric-modulated arc therapy (VMAT) and non-coplanar 4π radiotherapy. Methods: 30 liver SBRT patients previously treated using coplanar VMAT were selected for this study. The patients were re-planned using 4π radiotherapy, which involves 20 optimally selected non-coplanar IMRT fields. ANNs were used to incorporate enhanced geometric information including liver and PTV size, prescription dose, patient girth, and proximity to beams. The performance of the ANN was compared to three methods from statistical voxel dose learning (SVDL), wherein the doses of voxels sharing the same distance to the PTV are approximated by either taking the median of the distribution, non-parametric fitting, or skew-normal fitting. These three methods were shown to be capable of predicting the DVH, but only median approximation can predict 3D dose. Prediction methods were tested using leave-one-out cross-validation and evaluated using the residual sum of squares (RSS) for DVH and 3D dose predictions. Results: DVH prediction using non-parametric fitting had the lowest average RSS with 0.1176 (4π) and 0.1633 (VMAT), compared to 0.4879 (4π) and 1.8744 (VMAT) for ANN. 3D dose prediction with median approximation had lower RSS with 12.02 (4π) and 29.22 (VMAT), compared to 27.95 (4π) and 130.9 (VMAT) for ANN. Conclusion: Paradoxically, although the ANNs included geometric features in addition to the distances to the PTV, they did not perform better in predicting DVH or 3D dose than simpler, faster methods based on the distances alone. The study further confirms that predictions for 4π non-coplanar plans were more accurate than
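The SVDL median approximation described in Methods can be sketched as follows: voxel doses from training plans are grouped by distance to the PTV, and each bin is summarized by its median (the bin width and the nearest-bin fallback are our assumptions for illustration):

```python
from collections import defaultdict
from statistics import median

def train_svdl_median(voxels, bin_width=1.0):
    """Group training-voxel doses by distance-to-PTV bin and
    summarize each bin by its median dose."""
    bins = defaultdict(list)
    for dist, dose in voxels:
        bins[int(dist // bin_width)].append(dose)
    return {b: median(doses) for b, doses in bins.items()}

def predict_dose(model, dist, bin_width=1.0):
    """Predicted dose for a voxel: the median of its distance bin,
    falling back to the nearest populated bin."""
    b = int(dist // bin_width)
    if b in model:
        return model[b]
    nearest = min(model, key=lambda k: abs(k - b))
    return model[nearest]
```

Because every voxel gets a dose value, this median variant supports full 3D dose prediction, not just DVH prediction.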

  13. Bearing Procurement Analysis Method by Total Cost of Ownership Analysis and Reliability Prediction

    Science.gov (United States)

    Trusaji, Wildan; Akbar, Muhammad; Sukoyo; Irianto, Dradjad

    2018-03-01

    In bearing procurement analysis, price and reliability must both be considered as decision criteria, since price determines the direct (acquisition) cost while bearing reliability determines indirect costs such as maintenance. Although the indirect cost is hard to identify and measure, it contributes substantially to the overall cost incurred, so the reliability-related indirect cost must be considered in bearing procurement analysis. This paper presents a bearing evaluation method based on total cost of ownership analysis, which treats price and maintenance cost as decision criteria. Furthermore, since failure data are scarce at the bearing evaluation phase, a reliability prediction method is used to predict bearing reliability from the dynamic load rating parameter. With this method, a bearing with a higher price but higher reliability is preferable for long-term planning, whereas for short-term planning the cheaper but less reliable one is preferable. This context dependence can give rise to conflict between stakeholders; thus, the planning horizon needs to be agreed by all stakeholders before making a procurement decision.
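A hedged sketch of the idea: bearing life can be predicted from the dynamic load rating via the standard basic rating life formula L10 = (C/P)^p (p = 3 for ball bearings), and the total cost of ownership then combines the acquisition price with expected replacement costs over the planning horizon. The replacement-cost accounting below is an illustrative assumption, not the paper's exact model:

```python
def l10_life_hours(C, P, rpm, exponent=3.0):
    """Basic rating life from dynamic load rating C and equivalent
    dynamic load P: L10 in millions of revolutions, converted to
    operating hours at the given shaft speed."""
    l10_mrev = (C / P) ** exponent
    return l10_mrev * 1e6 / (rpm * 60.0)

def total_cost_of_ownership(price, C, P, rpm, horizon_hours, replacement_cost):
    """Acquisition cost plus expected replacement (maintenance) cost
    over the planning horizon, assuming replacement at end of life."""
    life = l10_life_hours(C, P, rpm)
    expected_replacements = max(horizon_hours / life - 1.0, 0.0)
    return price + expected_replacements * replacement_cost
```

With the toy numbers in the test below, a short planning horizon incurs only the purchase price, while a long horizon adds two expected replacements, which is the context dependence the abstract describes.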

  14. K-Line Patterns’ Predictive Power Analysis Using the Methods of Similarity Match and Clustering

    Directory of Open Access Journals (Sweden)

    Lv Tao

    2017-01-01

    Stock price prediction based on K-line patterns is the essence of candlestick technical analysis. However, there is some dispute in academia over whether K-line patterns have predictive power. To help resolve the debate, this paper uses the data mining methods of pattern recognition, pattern clustering, and pattern knowledge mining to research the predictive power of K-line patterns. A similarity match model and a nearest neighbor-clustering algorithm are proposed for solving the problems of similarity match and clustering of K-line series, respectively. The experiment tests the predictive power of the Three Inside Up pattern and the Three Inside Down pattern on a testing dataset of K-line series data of Shanghai 180 index component stocks over the latest 10 years. Experimental results show that (1) the predictive power of a pattern varies a great deal for different shapes and (2) each of the existing K-line patterns requires further classification based on shape features to improve prediction performance.

  15. Prediction method for thermal ratcheting of a cylinder subjected to axially moving temperature distribution

    International Nuclear Information System (INIS)

    Wada, Hiroshi; Igari, Toshihide; Kitade, Shoji.

    1989-01-01

    A prediction method was proposed for plastic ratcheting of a cylinder, which was subjected to axially moving temperature distribution without primary stress. First, a mechanism of this ratcheting was proposed, which considered the movement of temperature distribution as a driving force of this phenomenon. Predictive equations of the ratcheting strain for two representative temperature distributions were proposed based on this mechanism by assuming the elastic-perfectly-plastic material behavior. Secondly, an elastic-plastic analysis was made on a cylinder subjected to the representative two temperature distributions. Analytical results coincided well with the predicted results, and the applicability of the proposed equations was confirmed. (author)

  16. In Silico Prediction of Chemicals Binding to Aromatase with Machine Learning Methods.

    Science.gov (United States)

    Du, Hanwen; Cai, Yingchun; Yang, Hongbin; Zhang, Hongxiao; Xue, Yuhan; Liu, Guixia; Tang, Yun; Li, Weihua

    2017-05-15

    Environmental chemicals may affect endocrine systems through multiple mechanisms, one of which is via effects on aromatase (also known as CYP19A1), an enzyme critical for maintaining the normal balance of estrogens and androgens in the body. Therefore, rapid and efficient identification of aromatase-related endocrine disrupting chemicals (EDCs) is important for toxicology and environment risk assessment. In this study, on the basis of the Tox21 10K compound library, in silico classification models for predicting aromatase binders/nonbinders were constructed by machine learning methods. To improve the prediction ability of the models, a combined classifier (CC) strategy that combines different independent machine learning methods was adopted. Performances of the models were measured by test and external validation sets containing 1336 and 216 chemicals, respectively. The best model was obtained with the MACCS (Molecular Access System) fingerprint and CC method, which exhibited an accuracy of 0.84 for the test set and 0.91 for the external validation set. Additionally, several representative substructures for characterizing aromatase binders, such as ketone, lactone, and nitrogen-containing derivatives, were identified using information gain and substructure frequency analysis. Our study provided a systematic assessment of chemicals binding to aromatase. The built models can be helpful to rapidly identify potential EDCs targeting aromatase.

  17. Predicting Solar Activity Using Machine-Learning Methods

    Science.gov (United States)

    Bobra, M.

    2017-12-01

    Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections. However, we do not, as of yet, fully understand the physical mechanism that triggers solar eruptions. A machine-learning algorithm, which is favorable in cases where the amount of data is large, is one way to [1] empirically determine the signatures of this mechanism in solar image data and [2] use them to predict solar activity. In this talk, we discuss the application of various machine learning algorithms - specifically, a Support Vector Machine, a sparse linear regression (Lasso), and Convolutional Neural Network - to image data from the photosphere, chromosphere, transition region, and corona taken by instruments aboard the Solar Dynamics Observatory in order to predict solar activity on a variety of time scales. Such an approach may be useful since, at the present time, there are no physical models of flares available for real-time prediction. We discuss our results (Bobra and Couvidat, 2015; Bobra and Ilonidis, 2016; Jonas et al., 2017) as well as other attempts to predict flares using machine-learning (e.g. Ahmed et al., 2013; Nishizuka et al. 2017) and compare these results with the more traditional techniques used by the NOAA Space Weather Prediction Center (Crown, 2012). We also discuss some of the challenges in using machine-learning algorithms for space science applications.

  18. Data Based Prediction of Blood Glucose Concentrations Using Evolutionary Methods.

    Science.gov (United States)

    Hidalgo, J Ignacio; Colmenar, J Manuel; Kronberger, Gabriel; Winkler, Stephan M; Garnica, Oscar; Lanchares, Juan

    2017-08-08

    Predicting glucose values on the basis of insulin and food intakes is a difficult task that people with diabetes need to do daily. This is necessary as it is important to maintain glucose levels at appropriate values to avoid not only short-term, but also long-term complications of the illness. Artificial intelligence in general and machine learning techniques in particular have already led to promising results in modeling and predicting glucose concentrations. In this work, several machine learning techniques are used for the modeling and prediction of glucose concentrations using as inputs the values measured by a continuous glucose monitoring system as well as previous and estimated future carbohydrate intakes and insulin injections. In particular, we use the following four techniques: genetic programming, random forests, k-nearest neighbors, and grammatical evolution. We propose two new enhanced modeling algorithms for glucose prediction, namely (i) a variant of grammatical evolution which uses an optimized grammar, and (ii) a variant of tree-based genetic programming which uses a three-compartment model for carbohydrate and insulin dynamics. The predictors were trained and tested using data of ten patients from a public hospital in Spain. We analyze our experimental results using the Clarke error grid metric and see that 90% of the forecasts are correct (i.e., Clarke error categories A and B), but even the best methods still produce 5 to 10% serious errors (category D) and approximately 0.5% very serious errors (category E). We also propose an enhanced genetic programming algorithm that incorporates a three-compartment model into symbolic regression models to create smoothed time series of the original carbohydrate and insulin time series.
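Of the four techniques, k-nearest neighbors is the simplest to sketch. A generic regression version follows; the feature layout (current glucose, carbohydrates, insulin) and all data are invented for illustration, and the paper's tuned models are far more elaborate:

```python
def knn_predict(history, query, k=3):
    """Predict a target value as the mean over the k nearest training
    samples; history is a list of (features, target) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbours = sorted(history, key=lambda ft: dist(ft[0], query))[:k]
    return sum(t for _, t in neighbours) / len(neighbours)
```

Each prediction averages the outcomes of the k most similar past situations, so the method needs no explicit model of insulin or carbohydrate dynamics.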

  19. Forecasting method for global radiation time series without training phase: Comparison with other well-known prediction methodologies

    International Nuclear Information System (INIS)

    Voyant, Cyril; Motte, Fabrice; Fouilloy, Alexis; Notton, Gilles; Paoli, Christophe; Nivet, Marie-Laure

    2017-01-01

    Integration of unpredictable renewable energy sources into electrical networks intensifies the complexity of grid management due to their intermittent and unforeseeable nature. Because of the strong increase in solar power generation, the prediction of solar yields becomes more and more important, and electrical operators need an estimation of future production. For nowcasting and short-term forecasting, the usual techniques based on machine learning need large historical data sets of good quality for the training phase of the predictors. However, such data are not always available, and collecting them requires advanced maintenance of meteorological stations, making these methods inapplicable for poorly instrumented or isolated sites. In this work, we propose intuitive methodologies based on the Kalman filter (also known as linear quadratic estimation) that are able to predict a global radiation time series without the need for historical data. The accuracy of these methods is compared to other classical data-driven methods for different horizons of prediction and time steps. The proposed approach shows interesting capabilities, improving the prediction almost systematically. For one to 10 h horizons, Kalman model performance is competitive with more sophisticated models such as ANNs, which require both consistent historical data sets and computational resources. - Highlights: • Solar radiation forecasting with time series formalism. • Trainless approach compared to machine learning methods. • Very simple method dedicated to solar irradiation forecasting with high accuracy.
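The trainless character of the approach can be illustrated with a scalar Kalman filter under a random-walk state model; the noise variances q and r below are assumed defaults, not values from the paper:

```python
def kalman_forecast(series, q=0.01, r=0.1):
    """One-step-ahead forecasts from a scalar Kalman filter with a
    random-walk state model (x_t = x_{t-1} + w, y_t = x_t + v);
    q and r are the process and measurement noise variances."""
    x, p = series[0], 1.0
    preds = []
    for y in series[1:]:
        x_pred, p_pred = x, p + q       # predict step (random walk)
        preds.append(x_pred)            # one-step-ahead forecast
        k = p_pred / (p_pred + r)       # Kalman gain
        x = x_pred + k * (y - x_pred)   # correct with new observation
        p = (1.0 - k) * p_pred
    return preds
```

Each forecast is simply the current state estimate, which the filter then corrects with the newly observed value, so no historical training set is required.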

  20. On some descriptive and predictive methods for the dynamics of cancer growth

    Directory of Open Access Journals (Sweden)

    Iulian T. Vlad

    2015-09-01

    Cancer is a widespread disease that affects a large proportion of the human population, and many research teams are developing algorithms to help medics understand it. In particular, tumor growth has been studied from different viewpoints and several mathematical models have been proposed. In this paper, we review a set of comprehensive and modern tools that are useful for the prediction of cancer growth in space and time, and comment on three alternative approaches. We first consider spatio-temporal stochastic processes within a Bayesian framework to model spatial heterogeneity, temporal dependence and spatio-temporal interactions amongst the pixels, providing a general modeling framework for such dynamics. We then consider predictions based on geometric properties of plane curves and vectors, and propose two methods of geometric prediction. Finally we focus on functional data analysis to statistically compare tumor contour evolutions. We also analyze real data on brain tumors.

  1. Improved methods of online monitoring and prediction in condensate and feed water system of nuclear power plant

    International Nuclear Information System (INIS)

    Wang, Hang; Peng, Min-jun; Wu, Peng; Cheng, Shou-yu

    2016-01-01

    Highlights: • Different methods for online monitoring and diagnosis are summarized. • Numerical simulation modeling of the condensate and feed water system in a nuclear power plant is done in FORTRAN. • Integrated online monitoring and prediction methods have been developed and tested. • The online monitoring, fault diagnosis and trend prediction modules can be verified against each other. - Abstract: Faults or accidents may occur in a nuclear power plant (NPP), but it is hard for operators to recognize the situation and take effective measures quickly. Online monitoring, diagnosis and prediction (OMDP) is therefore used to provide enough information to operators and improve the safety of NPPs. In this paper, a distributed conservation equation (DCE) and an artificial immunity system (AIS) are proposed for online monitoring and diagnosis. On this basis, quantitative simulation models and an interactive database are combined to predict the trends and severity of faults. The effectiveness of OMDP in improving the monitoring and prediction of the condensate and feed water system (CFWS) was verified through simulation tests.

  2. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method.

    Science.gov (United States)

    Fogel, Allison R; Rosenberg, Jason C; Lehman, Frank M; Kuperberg, Gina R; Patel, Aniruddh D

    2015-01-01

    Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such 'authentic cadence' melody was matched to a 'non-cadential' (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. 
Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in
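The cloze probability measure underlying both the linguistic and the melodic task reduces to response proportions; a minimal sketch (the note names are illustrative):

```python
from collections import Counter

def cloze_probability(responses):
    """Cloze probability of each continuation: the proportion of
    participants who produced it for a given fragment."""
    counts = Counter(responses)
    n = len(responses)
    return {item: c / n for item, c in counts.items()}
```

The probability of the modal response then serves as a consistency score for a melody (or sentence) fragment, which is how AC and NC melodies can be compared.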

  3. Analysis backpropagation methods with neural network for prediction of children's ability in psychomotoric

    Science.gov (United States)

    Izhari, F.; Dhany, H. W.; Zarlis, M.; Sutarman

    2018-03-01

    A good age for optimizing aspects of development is 4-6 years, particularly for psychomotor development. The psychomotor domain is broad and difficult to monitor, but it is meaningful for a child's life because it directly affects behavior and deeds. The problem addressed here is therefore to predict a child's ability level in the psychomotor domain. This analysis uses the backpropagation method with an artificial neural network to predict children's psychomotor ability, generating predictions with a mean squared error (MSE) of 0.001 at the end of training. About 30% of children aged 4-6 years have a good level of psychomotor ability; the remainder are rated excellent, good enough, or less good.
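As a hedged illustration of the backpropagation method itself (a single sigmoid neuron on toy data, far smaller than the network in the study), training drives the mean squared error down toward the kind of small final MSE reported above:

```python
import math
import random

def train_perceptron(data, epochs=3000, lr=0.5):
    """Minimal backpropagation on one sigmoid neuron; data is a list
    of (inputs, target) pairs with targets in [0, 1]."""
    random.seed(1)
    n = len(data[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    mse = 0.0
    for _ in range(epochs):
        sq = 0.0
        for x, t in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = y - t
            sq += err * err
            grad = err * y * (1.0 - y)       # error gradient w.r.t. z
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
        mse = sq / len(data)
    return w, b, mse
```

On the linearly separable OR function below, the final MSE falls well under 0.05; real assessments like the one in this abstract use larger networks and many more features.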

  4. Genome-wide prediction methods in highly diverse and heterozygous species: proof-of-concept through simulation in grapevine.

    Directory of Open Access Journals (Sweden)

    Agota Fodor

    Nowadays, genome-wide association studies (GWAS) and genomic selection (GS) methods, which use genome-wide marker data for phenotype prediction, are of much potential interest in plant breeding. However, to our knowledge, no studies have yet been performed on the predictive ability of these methods for structured traits when using training populations with high levels of genetic diversity. Grapevine is one such highly heterozygous, perennial species. The present study compares the accuracy of models based on GWAS or GS alone, or in combination, for predicting simple or complex traits, linked or not with population structure. In order to explore the relevance of these methods in this context, we performed simulations using approximately 90,000 SNPs on a population of 3,000 individuals structured into three groups and corresponding to published grapevine diversity data. To estimate the parameters of the prediction models, we defined four training populations of 1,000 individuals, corresponding to these three groups and a core collection. Finally, to estimate the accuracy of the models, we also simulated four breeding populations of 200 individuals. Although prediction accuracy was low when breeding populations were too distant from the training populations, high accuracy levels were obtained using the core collection alone as the training population. The highest prediction accuracy (up to 0.9) was obtained using the combined GWAS-GS model. We thus recommend using the combined prediction model and a core collection as the training population for grapevine breeding, or for other economically important crops with the same characteristics.

  5. Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data.

    Science.gov (United States)

    Ching, Travers; Zhu, Xun; Garmire, Lana X

    2018-04-01

    Artificial neural networks (ANNs) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high-throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox proportional hazards regression (with LASSO, ridge, and minimax concave penalties), Random Forests Survival and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer nodes provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high-throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet.
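Predictive accuracy for survival models such as Cox-nnet is commonly scored with Harrell's concordance index; a self-contained sketch of that metric (not code from the Cox-nnet package):

```python
def concordance_index(times, events, scores):
    """Harrell's C-index: the fraction of comparable pairs in which
    the higher predicted risk score has the earlier observed event.
    times: observed times; events: 1 if event observed, 0 if censored;
    scores: predicted risk (higher = worse prognosis)."""
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i's event is observed
            # and occurs strictly before j's time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    ties += 1
    if comparable == 0:
        raise ValueError("no comparable pairs")
    return (concordant + 0.5 * ties) / comparable
```

A value of 1.0 means the predicted risks order every comparable pair correctly; 0.5 is chance level.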

  6. Combining Pathway Identification and Breast Cancer Survival Prediction via Screening-Network Methods

    Directory of Open Access Journals (Sweden)

    Antonella Iuliano

    2018-06-01

    Breast cancer is one of the most common invasive tumors causing high mortality among women. It is characterized by high heterogeneity regarding its biological and clinical characteristics. Several high-throughput assays have been used to collect genome-wide information for many patients in large collaborative studies. This knowledge has improved our understanding of its biology and led to new methods of diagnosing and treating the disease. In particular, systems biology has become a valid approach to obtain better insights into breast cancer biological mechanisms. A crucial component of current research lies in identifying novel biomarkers that can be predictive of breast cancer patient prognosis on the basis of the molecular signature of the tumor sample. However, the high dimension and low sample size of the data greatly increase the difficulty of cancer survival analysis, demanding the development of ad hoc statistical methods. In this work, we propose novel screening-network methods that predict patient survival outcome by screening key survival-related genes, and we assess the capability of the proposed approaches using the METABRIC dataset. In particular, we first identify a subset of genes by using variable screening techniques on gene expression data. Then, we perform Cox regression analysis by incorporating network information associated with the selected subset of genes. The novelty of this work consists in the improved prediction of survival responses due to the different types of screening (i.e., biomedical-driven, data-driven, and a combination of the two) before building the network-penalized model. Indeed, the combination of the two screening approaches allows us to use the available biological knowledge on breast cancer and complement it with additional information emerging from the data used for the analysis.
Moreover, we also illustrate how to extend the proposed approaches to integrate an additional omic layer, such as copy number
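The data-driven screening step can be illustrated with a minimal sketch: rank genes by the absolute marginal correlation of expression with survival time and keep the top k. The toy data, gene names, and use of plain Pearson correlation are illustrative assumptions; the paper's actual screenings and the network-penalized Cox model are considerably richer.

```python
import math

def screen_genes(expression, survival_times, top_k):
    """Rank genes by absolute Pearson correlation with survival time
    and keep the top_k (a simple data-driven screening step)."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy) if sx and sy else 0.0
    scores = {gene: abs(pearson(values, survival_times))
              for gene, values in expression.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy data: gene_a tracks survival closely, gene_b is noise.
expression = {
    "gene_a": [1.0, 2.0, 3.0, 4.0, 5.0],
    "gene_b": [2.1, 0.3, 1.9, 0.8, 1.2],
}
survival_times = [10.0, 21.0, 33.0, 39.0, 52.0]
selected = screen_genes(expression, survival_times, top_k=1)
```

The genes surviving this screen would then be passed, together with their network information, to the penalized Cox model.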

  7. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    DEFF Research Database (Denmark)

    Petersen, Bent; Petersen, Thomas Nordahl; Andersen, Pernille

    2009-01-01

    : The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best public available method, Real-SPINE. Both methods associate a reliability...... comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0.79 and 0.74 are obtained using our and the compared method, respectively. This tendency is true for any selected subset....
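The reliability-based evaluation described above reduces to: sort predictions by their reliability score, keep the top 20%, and compute Pearson's r on that subset. A minimal sketch, with toy data standing in for real solvent-accessibility predictions:

```python
def pearson(x, y):
    """Pearson's correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return cov / (varx * vary) ** 0.5

def top_fraction_correlation(pred, obs, reliability, fraction=0.2):
    """Sort predictions by reliability (descending), keep the top
    fraction, and compute Pearson's r on that subset."""
    order = sorted(range(len(pred)), key=lambda i: -reliability[i])
    keep = order[:max(1, int(len(pred) * fraction))]
    return pearson([pred[i] for i in keep], [obs[i] for i in keep])

# Toy data: the two most reliable predictions are also the best ones.
pred = [1.0, 2.0, 5.0, 1.0, 4.0, 2.0, 3.0, 0.0, 2.0, 4.0]
obs = [1.1, 2.2, 0.0, 3.0, 1.0, 4.0, 0.5, 2.0, 3.5, 1.0]
reliability = [0.95, 0.90, 0.10, 0.20, 0.15, 0.30, 0.25, 0.05, 0.35, 0.40]
r_top = top_fraction_correlation(pred, obs, reliability)
```

If the reliability score is informative, r on the top subset exceeds r on the full set, which is exactly the tendency the abstract reports.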

  8. Highway Travel Time Prediction Using Sparse Tensor Completion Tactics and K-Nearest Neighbor Pattern Matching Method

    Directory of Open Access Journals (Sweden)

    Jiandong Zhao

    2018-01-01

Full Text Available Remote transportation microwave sensor (RTMS) technology is being promoted for China’s highways. The distance is about 2 to 5 km between RTMSs, which leads to missing data and data sparseness problems. These two problems seriously restrict the accuracy of travel time prediction. Aiming at the data-missing problem, based on traffic multimode characteristics, a tensor completion method is proposed to recover the lost RTMS speed and volume data. Aiming at the data sparseness problem, virtual sensor nodes are set up between real RTMS nodes, and the two-dimensional linear interpolation and piecewise method are applied to estimate the average travel time between two nodes. Next, compared with the traditional K-nearest neighbor method, an optimal KNN method is proposed for travel time prediction. Optimization is made in three aspects. Firstly, the three original state vectors, that is, speed, volume, and time of the day, are subdivided into seven periods. Secondly, the traffic congestion level is added as a new state vector. Thirdly, the cross-validation method is used to calibrate the K value to improve the adaptability of the KNN algorithm. Based on the data collected from Jinggangao highway, all the algorithms are validated. The results show that the proposed method can improve data quality and prediction precision of travel time.
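The pattern-matching core of the KNN predictor can be sketched as follows; the three-component state vector (speed, volume, congestion level) and the toy records are illustrative assumptions, not the paper's calibrated setup, and the K value would in practice be chosen by cross-validation as the abstract describes:

```python
def knn_travel_time(history, query, k):
    """Predict travel time as the average over the k historical
    records whose state vectors are closest to the query in
    Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda rec: dist(rec[0], query))[:k]
    return sum(t for _, t in nearest) / k

# Each record: ((speed km/h, volume veh/h, congestion level), travel time min)
history = [
    ((60.0, 300.0, 1.0), 10.0),
    ((58.0, 320.0, 1.0), 11.0),
    ((20.0, 900.0, 3.0), 30.0),
    ((22.0, 880.0, 3.0), 28.0),
]
predicted_time = knn_travel_time(history, query=(59.0, 310.0, 1.0), k=2)
```

In a real deployment the components of the state vector would be normalized before computing distances, since speed and volume live on very different scales.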

  9. A review on fatigue life prediction methods for anti-vibration rubber materials

    Directory of Open Access Journals (Sweden)

    Xiaoli WANG

    2016-08-01

Full Text Available Anti-vibration rubber, because of its superior elasticity, plasticity, waterproofing, and damping characteristics, is widely used in the automotive industry, national defense, construction, and other fields. The theory and technology of fatigue life prediction are of great significance to improving the durability design and manufacturing of anti-vibration rubber products. Based on the service characteristics of anti-vibration rubber products, the technical difficulties in analyzing the fatigue properties of anti-vibration rubber materials are pointed out. Research progress on the fatigue properties of rubber materials is reviewed from three perspectives: fatigue crack initiation, fatigue crack propagation, and fatigue damage accumulation. It is argued that the nonlinear characteristics of rubber under fatigue loading, including the Mullins effect, permanent deformation, and cyclic stress softening, should be considered in further studies of rubber materials. It is also indicated that the fatigue damage accumulation method based on continuum damage mechanics may be more appropriate for solving fatigue damage and life prediction problems for complex rubber materials and structures under fatigue loading.
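Of the three perspectives reviewed, fatigue damage accumulation has the simplest textbook baseline: the linear (Palmgren-Miner) damage sum. A minimal sketch with illustrative load blocks; note that the review itself argues continuum-damage-mechanics formulations may be more appropriate for rubber than this linear rule:

```python
def miner_damage(load_blocks):
    """Linear (Palmgren-Miner) damage sum: D = sum(n_i / N_i), where
    n_i cycles are applied at a load level whose fatigue life is N_i
    cycles. Failure is conventionally predicted when D reaches 1."""
    return sum(n / N for n, N in load_blocks)

# (applied cycles, cycles-to-failure) at each load level -- toy values
blocks = [(1e4, 1e5), (5e3, 2e4), (2e3, 1e4)]
damage = miner_damage(blocks)   # 0.1 + 0.25 + 0.2
predicts_failure = damage >= 1.0
```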

  10. Simplified Method for Predicting a Functional Class of Proteins in Transcription Factor Complexes

    KAUST Repository

    Piatek, Marek J.; Schramm, Michael C.; Burra, Dharani Dhar; BinShbreen, Abdulaziz; Jankovic, Boris R.; Chowdhary, Rajesh; Archer, John A.C.; Bajic, Vladimir B.

    2013-01-01

    initiation. Such information is not fully available, since not all proteins that act as TFs or TcoFs are yet annotated as such, due to generally partial functional annotation of proteins. In this study we have developed a method to predict, using only

  11. Predicting high risk births with contraceptive prevalence and contraceptive method-mix in an ecologic analysis

    Directory of Open Access Journals (Sweden)

    Jamie Perin

    2017-11-01

    Full Text Available Abstract Background Increased contraceptive use has been associated with a decrease in high parity births, births that occur close together in time, and births to very young or to older women. These types of births are also associated with high risk of under-five mortality. Previous studies have looked at the change in the level of contraception use and the average change in these types of high-risk births. We aim to predict the distribution of births in a specific country when there is a change in the level and method of modern contraception. Methods We used data from full birth histories and modern contraceptive use from 207 nationally representative Demographic and Health Surveys covering 71 countries to describe the distribution of births in each survey based on birth order, preceding birth space, and mother’s age at birth. We estimated the ecologic associations between the prevalence and method-mix of modern contraceptives and the proportion of births in each category. Hierarchical modelling was applied to these aggregated cross sectional proportions, so that random effects were estimated for countries with multiple surveys. We use these results to predict the change in type of births associated with scaling up modern contraception in three different scenarios. Results We observed marked differences between regions, in the absolute rates of contraception, the types of contraceptives in use, and in the distribution of type of birth. Contraceptive method-mix was a significant determinant of proportion of high-risk births, especially for birth spacing, but also for mother’s age and parity. Increased use of modern contraceptives is especially predictive of reduced parity and more births with longer preceding space. However, increased contraception alone is not associated with fewer births to women younger than 18 years or a decrease in short-spaced births. Conclusions Both the level and the type of contraception are important factors in

  12. Effectiveness of the cervical vertebral maturation method to predict postpeak circumpubertal growth of craniofacial structures.

    NARCIS (Netherlands)

    Fudalej, P.S.; Bollen, A.M.

    2010-01-01

INTRODUCTION: Our aim was to assess the effectiveness of the cervical vertebral maturation (CVM) method to predict circumpubertal craniofacial growth in the postpeak period. METHODS: The CVM stage was determined in 176 subjects (51 adolescent boys and 125 adolescent girls) on cephalograms taken at the

  13. Development of a semi-automated method for subspecialty case distribution and prediction of intraoperative consultations in surgical pathology

    Directory of Open Access Journals (Sweden)

    Raul S Gonzalez

    2015-01-01

Full Text Available Background: In many surgical pathology laboratories, operating room schedules are prospectively reviewed to determine specimen distribution to different subspecialty services and to predict the number and nature of potential intraoperative consultations for which prior medical records and slides require review. At our institution, such schedules were manually converted into easily interpretable, surgical pathology-friendly reports to facilitate these activities. This conversion, however, was time-consuming and arguably a non-value-added activity. Objective: Our goal was to develop a semi-automated method of generating these reports that improved their readability while taking less time to perform than the manual method. Materials and Methods: A dynamic Microsoft Excel workbook was developed to automatically convert published operating room schedules into different tabular formats. Based on the surgical procedure descriptions in the schedule, a list of linked keywords and phrases was utilized to sort cases by subspecialty and to predict potential intraoperative consultations. After two trial-and-optimization cycles, the method was incorporated into standard practice. Results: The workbook distributed cases to appropriate subspecialties and accurately predicted intraoperative requests. Users indicated that they spent 1-2 fewer hours per day on this activity than before, and team members preferred the formatting of the newer reports. Comparison of the manual and semi-automated predictions showed that the mean daily difference in predicted versus actual intraoperative consultations did not change significantly before and after implementation for most subspecialties.
Conclusions: A well-designed, lean, and simple information technology solution to determine subspecialty case distribution and prediction of intraoperative consultations in surgical pathology is approximately as accurate as the gold standard manual method and requires less
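The keyword-driven sorting step can be sketched as a simple lookup; the keywords and subspecialty names below are hypothetical examples, not the institution's actual linked phrases:

```python
# Hypothetical keyword-to-subspecialty map; a real workbook would link
# many more phrases per service.
KEYWORD_MAP = {
    "lobectomy": "thoracic",
    "mastectomy": "breast",
    "colectomy": "gastrointestinal",
    "craniotomy": "neuropathology",
}

def assign_subspecialty(procedure_description, keyword_map=KEYWORD_MAP):
    """Assign a case to a subspecialty by matching linked keywords
    against the operating-room procedure description."""
    text = procedure_description.lower()
    for keyword, service in keyword_map.items():
        if keyword in text:
            return service
    return "unassigned"

service = assign_subspecialty("Right upper lobectomy with frozen section")
```

Cases falling through to "unassigned" would be the ones a human still has to triage, which is consistent with the semi-automated (rather than fully automated) design.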

  14. Systems-based biological concordance and predictive reproducibility of gene set discovery methods in cardiovascular disease.

    Science.gov (United States)

    Azuaje, Francisco; Zheng, Huiru; Camargo, Anyela; Wang, Haiying

    2011-08-01

    The discovery of novel disease biomarkers is a crucial challenge for translational bioinformatics. Demonstration of both their classification power and reproducibility across independent datasets are essential requirements to assess their potential clinical relevance. Small datasets and multiplicity of putative biomarker sets may explain lack of predictive reproducibility. Studies based on pathway-driven discovery approaches have suggested that, despite such discrepancies, the resulting putative biomarkers tend to be implicated in common biological processes. Investigations of this problem have been mainly focused on datasets derived from cancer research. We investigated the predictive and functional concordance of five methods for discovering putative biomarkers in four independently-generated datasets from the cardiovascular disease domain. A diversity of biosignatures was identified by the different methods. However, we found strong biological process concordance between them, especially in the case of methods based on gene set analysis. With a few exceptions, we observed lack of classification reproducibility using independent datasets. Partial overlaps between our putative sets of biomarkers and the primary studies exist. Despite the observed limitations, pathway-driven or gene set analysis can predict potentially novel biomarkers and can jointly point to biomedically-relevant underlying molecular mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.

15. High-temperature strength database of SUS304 steel and a study on a life prediction method under creep-fatigue interaction

    International Nuclear Information System (INIS)

    Matsubara, Masaaki; Nitta, Akito; Ogata, Takashi; Kuwabara, Kazuo

    1985-01-01

As a part of the ''Study for practical use of Tank Type FBR'', ''Practical use of inelastic analysis method to FBR structural design'' has been carried out as a three-year cooperative study starting in 1984. Establishing a life prediction method under creep-fatigue interaction is one of the most important themes of this cooperative study, and many different types of tests were planned and conducted to that end. To use this large body of data rapidly and effectively, a database is necessary; in this work we therefore developed a simple high-temperature strength database and entered into it the SUS304 data obtained to date. We then investigated five life prediction methods under creep-fatigue interaction: the Frequency Modified Method, the Ostergren Method, the Strain Range Partitioning Method, the Damage Rate Approach, and the Strain Energy Parameter Method. As a result, the Strain Range Partitioning Method can predict the lives within a factor of 2; for the other four methods, the material constants in the prediction formulas were found to depend on temperature. (author)
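The factor-of-2 criterion used to judge the life prediction methods can be expressed as a scatter-band check; the life values below are illustrative, not SUS304 data:

```python
def within_factor(predicted, observed, factor=2.0):
    """True when every predicted life lies inside the factor-of-N
    scatter band around the observed life (N = 2 is the usual band
    quoted for creep-fatigue life prediction)."""
    return all(obs / factor <= pred <= obs * factor
               for pred, obs in zip(predicted, observed))

predicted_lives = [1200.0, 4500.0, 800.0]   # cycles, toy values
observed_lives = [1000.0, 5000.0, 1500.0]
all_within_band = within_factor(predicted_lives, observed_lives)
```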

  16. A hybrid measure-correlate-predict method for long-term wind condition assessment

    International Nuclear Information System (INIS)

    Zhang, Jie; Chowdhury, Souma; Messac, Achille; Hodge, Bri-Mathias

    2014-01-01

    Highlights: • A hybrid measure-correlate-predict (MCP) methodology with greater accuracy is developed. • Three sets of performance metrics are proposed to evaluate the hybrid MCP method. • Both wind speed and direction are considered in the hybrid MCP method. • The best combination of MCP algorithms is determined. • The developed hybrid MCP method is uniquely helpful for long-term wind resource assessment. - Abstract: This paper develops a hybrid measure-correlate-predict (MCP) strategy to assess long-term wind resource variations at a farm site. The hybrid MCP method uses recorded data from multiple reference stations to estimate long-term wind conditions at a target wind plant site with greater accuracy than is possible with data from a single reference station. The weight of each reference station in the hybrid strategy is determined by the (i) distance and (ii) elevation differences between the target farm site and each reference station. In this case, the wind data is divided into sectors according to the wind direction, and the MCP strategy is implemented for each wind direction sector separately. The applicability of the proposed hybrid strategy is investigated using five MCP methods: (i) the linear regression; (ii) the variance ratio; (iii) the Weibull scale; (iv) the artificial neural networks; and (v) the support vector regression. To implement the hybrid MCP methodology, we use hourly averaged wind data recorded at five stations in the state of Minnesota between 07-01-1996 and 06-30-2004. Three sets of performance metrics are used to evaluate the hybrid MCP method. The first set of metrics analyze the statistical performance, including the mean wind speed, wind speed variance, root mean square error, and mean absolute error. The second set of metrics evaluate the distribution of long-term wind speed; to this end, the Weibull distribution and the Multivariate and Multimodal Wind Distribution models are adopted. The third set of metrics analyze
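The station-weighting idea can be sketched as an inverse distance-and-elevation weighting of per-station MCP estimates. The exact weighting formula below is an assumption for illustration, since the abstract only states that distance and elevation differences determine the weights:

```python
def hybrid_mcp_estimate(station_predictions, distances, elevation_diffs):
    """Combine per-station MCP predictions of target-site wind speed,
    weighting each reference station by the inverse of its distance
    and elevation difference from the target site (one plausible
    weighting; the paper's exact formula may differ)."""
    weights = [1.0 / (d * (1.0 + abs(dz)))
               for d, dz in zip(distances, elevation_diffs)]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, station_predictions)) / total

# Two reference stations: the nearer one dominates the estimate.
estimate = hybrid_mcp_estimate(
    station_predictions=[8.0, 6.0],   # m/s, per-station MCP outputs
    distances=[10.0, 40.0],           # km to target site
    elevation_diffs=[0.0, 0.0],       # m, elevation differences
)
```

With equal elevations and a 4:1 distance ratio, the nearer station carries 80% of the weight, so the estimate lands at 7.6 m/s rather than the unweighted mean of 7.0.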

  17. Critical assessment of methods of protein structure prediction (CASP)-round IX

    KAUST Repository

    Moult, John; Fidelis, Krzysztof; Kryshtafovych, Andriy; Tramontano, Anna

    2011-01-01

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the ninth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. Methods for modeling protein structure continue to advance, although at a more modest pace than in the early CASP experiments. CASP developments of note are indications of improvement in model accuracy for some classes of target, an improved ability to choose the most accurate of a set of generated models, and evidence of improvement in accuracy for short "new fold" models. In addition, a new analysis of regions of models not derivable from the most obvious template structure has revealed better performance than expected.

  18. Theoretical study on new bias factor methods to effectively use critical experiments for improvement of prediction accuracy of neutronic characteristics

    International Nuclear Information System (INIS)

    Kugo, Teruhiko; Mori, Takamasa; Takeda, Toshikazu

    2007-01-01

Extended bias factor methods are proposed with two new concepts, the LC method and the PE method, in order to effectively use critical experiments and to enhance the applicability of the bias factor method for the improvement of the prediction accuracy of neutronic characteristics of a target core. Both methods utilize a number of critical experimental results and produce a semifictitious experimental value with them. The LC and PE methods define the semifictitious experimental values by a linear combination of experimental values and the product of exponentiated experimental values, respectively, and the corresponding semifictitious calculation values by those of calculation values. A bias factor is defined by the ratio of the semifictitious experimental value to the semifictitious calculation value in both methods. We formulate how to determine weights for the LC method and exponents for the PE method in order to minimize the variance of the design prediction value obtained by multiplying the design calculation value by the bias factor. From a theoretical comparison of these new methods with the conventional method, which utilizes a single experimental result, and the generalized bias factor method, which was previously proposed to utilize a number of experimental results, it is concluded that the PE method is the most useful method for improving the prediction accuracy. The main advantages of the PE method are summarized as follows. The prediction accuracy is necessarily improved compared with the design calculation value even when experimental results include large experimental errors. This is a special feature that the other methods do not have. The prediction accuracy is most effectively improved by utilizing all the experimental results. From these facts, it can be said that the PE method effectively utilizes all the experimental results and may make a full-scale mockup experiment unnecessary with the use of existing and future benchmark
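The two semifictitious constructions can be written down directly. In the sketch below the equal weights, exponents, and toy values are purely illustrative; the paper derives the actual weights and exponents by minimizing the variance of the design prediction:

```python
import math

def bias_factor_lc(E, C, w):
    """LC method: semifictitious values are linear combinations,
    f = (sum w_i E_i) / (sum w_i C_i)."""
    num = sum(wi * ei for wi, ei in zip(w, E))
    den = sum(wi * ci for wi, ci in zip(w, C))
    return num / den

def bias_factor_pe(E, C, a):
    """PE method: semifictitious values are products of exponentiated
    results, f = prod(E_i^a_i) / prod(C_i^a_i)."""
    num = math.prod(e ** ai for e, ai in zip(E, a))
    den = math.prod(c ** ai for c, ai in zip(C, a))
    return num / den

E = [1.02, 0.98, 1.01]   # experimental values from critical experiments
C = [1.00, 1.00, 1.00]   # corresponding calculated values
design_calc = 1.005      # design calculation value for the target core
prediction_pe = design_calc * bias_factor_pe(E, C, a=[1/3, 1/3, 1/3])
```

With equal exponents the PE bias factor reduces to the ratio of geometric means, which hints at why it tolerates large experimental errors more gracefully than a single-experiment E/C ratio.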

  19. A comparison of radiosity with current methods of sound level prediction in commercial spaces

    Science.gov (United States)

    Beamer, C. Walter, IV; Muehleisen, Ralph T.

    2002-11-01

The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]
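The energy-balance system underlying the radiosity method, B_i = E_i + rho_i * sum_j F_ij B_j (emitted energy plus reflected incoming energy), can be solved by simple fixed-point iteration. A two-patch toy case, not an actual room model:

```python
def solve_radiosity(emission, reflectance, form_factors, iters=200):
    """Solve the energy-balance system
    B_i = E_i + rho_i * sum_j F_ij * B_j
    by fixed-point (Jacobi) iteration over the patch energies B."""
    n = len(emission)
    B = list(emission)
    for _ in range(iters):
        B = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# Two facing patches that exchange all their energy (F12 = F21 = 1);
# patch 1 is the source, both absorb half of what they receive.
E = [1.0, 0.0]
rho = [0.5, 0.5]
F = [[0.0, 1.0],
     [1.0, 0.0]]
B = solve_radiosity(E, rho, F)
```

For this toy case the fixed point can be checked by hand: B1 = 1 + 0.5*B2 and B2 = 0.5*B1 give B1 = 4/3 and B2 = 2/3.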

  20. MU-LOC: A Machine-Learning Method for Predicting Mitochondrially Localized Proteins in Plants

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2018-05-01

    Full Text Available Targeting and translocation of proteins to the appropriate subcellular compartments are crucial for cell organization and function. Newly synthesized proteins are transported to mitochondria with the assistance of complex targeting sequences containing either an N-terminal pre-sequence or a multitude of internal signals. Compared with experimental approaches, computational predictions provide an efficient way to infer subcellular localization of a protein. However, it is still challenging to predict plant mitochondrially localized proteins accurately due to various limitations. Consequently, the performance of current tools can be improved with new data and new machine-learning methods. We present MU-LOC, a novel computational approach for large-scale prediction of plant mitochondrial proteins. We collected a comprehensive dataset of plant subcellular localization, extracted features including amino acid composition, protein position weight matrix, and gene co-expression information, and trained predictors using deep neural network and support vector machine. Benchmarked on two independent datasets, MU-LOC achieved substantial improvements over six state-of-the-art tools for plant mitochondrial targeting prediction. In addition, MU-LOC has the advantage of predicting plant mitochondrial proteins either possessing or lacking N-terminal pre-sequences. We applied MU-LOC to predict candidate mitochondrial proteins for the whole proteome of Arabidopsis and potato. MU-LOC is publicly available at http://mu-loc.org.
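The simplest of the feature sets mentioned, amino acid composition, can be computed directly from a sequence; a minimal sketch (the toy sequence is illustrative, and MU-LOC combines this with position weight matrices and co-expression features):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(sequence):
    """Fraction of each of the 20 standard amino acids in a protein
    sequence -- a 20-dimensional feature vector for a classifier."""
    seq = sequence.upper()
    length = len(seq)
    return {aa: seq.count(aa) / length for aa in AMINO_ACIDS}

features = aa_composition("MKKLLAA")   # toy sequence
```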

  1. A new method for class prediction based on signed-rank algorithms applied to Affymetrix® microarray experiments

    Directory of Open Access Journals (Sweden)

    Vassal Aurélien

    2008-01-01

Full Text Available Abstract Background The huge amount of data generated by DNA chips is a powerful basis to classify various pathologies. However, constant evolution of microarray technology makes it difficult to mix data from different chip types for class prediction of limited sample populations. Affymetrix® technology provides both a quantitative fluorescence signal and a decision (detection call: absent or present) based on signed-rank algorithms applied to several hybridization repeats of each gene, with a per-chip normalization. We developed a new prediction method for class belonging based on the detection call only from recent Affymetrix chip types. Biological data were obtained by hybridization on U133A, U133B and U133Plus 2.0 microarrays of purified normal B cells and cells from three independent groups of multiple myeloma (MM) patients. Results After a call-based data reduction step to filter out non class-discriminative probe sets, the gene list obtained was reduced to a predictor with correction for multiple testing by iterative deletion of probe sets that sequentially improve inter-class comparisons and their significance. The error rate of the method was determined using leave-one-out and 5-fold cross-validation. It was successfully applied to (i) determine a sex predictor with the normal donor group classifying gender with no error in all patient groups except for male MM samples with a Y chromosome deletion, (ii) predict the immunoglobulin light and heavy chains expressed by the malignant myeloma clones of the validation group and (iii) predict sex, light and heavy chain nature for every new patient. Finally, this method was shown to be powerful when compared to the popular classification method Prediction Analysis of Microarray (PAM). Conclusion This normalization-free method is routinely used for quality control and correction of collection errors in patient reports to clinicians. It can be easily extended to multiple class prediction suitable with
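The call-based data reduction step can be sketched as a filter that keeps probe sets whose detection call is uniform within each class but differs between the two classes. The probe names and calls below are toy values, and the real method adds iterative deletion with multiple-testing correction on top of this filter:

```python
def discriminative_probe_sets(class_a, class_b):
    """Keep probe sets whose Affymetrix detection call ('P' present /
    'A' absent) is uniform within each class but differs between the
    two classes -- a minimal call-based data reduction step."""
    keep = []
    for probe in class_a:
        a_calls, b_calls = set(class_a[probe]), set(class_b[probe])
        if len(a_calls) == 1 and len(b_calls) == 1 and a_calls != b_calls:
            keep.append(probe)
    return keep

# Toy detection calls for two probe sets across three samples per class.
class_a = {"probe_1": ["P", "P", "P"], "probe_2": ["A", "P", "A"]}
class_b = {"probe_1": ["A", "A", "A"], "probe_2": ["A", "A", "A"]}
kept = discriminative_probe_sets(class_a, class_b)
```

Only probe_1 survives: it is uniformly present in one class and uniformly absent in the other, whereas probe_2 has mixed calls within class A.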

  2. SAAS: Short Amino Acid Sequence - A Promising Protein Secondary Structure Prediction Method of Single Sequence

    Directory of Open Access Journals (Sweden)

    Zhou Yuan Wu

    2013-07-01

Full Text Available In statistical methods of predicting protein secondary structure, many researchers focus on the frequencies of single amino acids in α-helices, β-sheets, and so on, or on the influence of neighboring amino acids on an amino acid forming a secondary structure. This paper instead treats a short sequence of amino acids (3, 4, 5, or 6 residues) as a single unit, and gathers statistics on the probability of each short sequence forming a secondary structure. Furthermore, whereas many researchers select sets of low-homology sequences as their statistical database, this paper uses the whole PDB database. We propose a strategy to predict protein secondary structure using this simple statistical method. Numerical computation shows that treating short amino acid sequences as units makes the tendency of a short sequence to form a secondary structure easy to see, that a large statistical database (the whole PDB, without filtering for homology) works well, and that the Q3 accuracy of the proposed simple statistical method is ca. 74%, whereas the accuracy of other statistical methods is less than 70%.
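The short-sequence statistic can be sketched as a k-mer frequency table: count how often each k-residue window is seen with each secondary-structure state at its center, then predict by majority. The toy sequences and H/E/C labels below are illustrative; the real method counts over the whole PDB:

```python
from collections import Counter, defaultdict

def train_counts(examples, k=3):
    """Count how often each k-residue window occurs with helix (H),
    sheet (E) or coil (C) at its central position, sliding over
    (sequence, structure) training pairs."""
    counts = defaultdict(Counter)
    for seq, struct in examples:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]][struct[i + k // 2]] += 1
    return counts

def predict_center(counts, fragment):
    """Predict the state of a fragment's central residue as the most
    frequent state observed for that fragment; fall back to coil."""
    if fragment in counts:
        return counts[fragment].most_common(1)[0][0]
    return "C"

# Toy training data: Ala/Leu stretches as helix, Val/Ile as sheet.
examples = [("AAALLL", "HHHHHH"), ("VVVIII", "EEEEEE")]
counts = train_counts(examples)
state = predict_center(counts, "AAL")
```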

  3. Development of formulation Q1As method for quadrupole noise prediction around a submerged cylinder

    Directory of Open Access Journals (Sweden)

    Yo-Seb Choi

    2017-09-01

    Full Text Available Recent research has shown that quadrupole noise has a significant influence on the overall characteristics of flow-induced noise and on the performance of underwater appendages such as sonar domes. However, advanced research generally uses the Ffowcs Williams–Hawkings analogy without considering the quadrupole source to reduce computational cost. In this study, flow-induced noise is predicted by using an LES turbulence model and a developed formulation, called the formulation Q1As method to properly take into account the quadrupole source. The noise around a circular cylinder in an underwater environment is examined for two cases with different velocities. The results from the method are compared to those obtained from the experiments and the permeable FW–H method. The results are in good agreement with the experimental data, with a difference of less than 1 dB, which indicates that the formulation Q1As method is suitable for use in predicting quadrupole noise around underwater appendages.

  4. A control method for agricultural greenhouses heating based on computational fluid dynamics and energy prediction model

    International Nuclear Information System (INIS)

    Chen, Jiaoliao; Xu, Fang; Tan, Dapeng; Shen, Zheng; Zhang, Libin; Ai, Qinglin

    2015-01-01

Highlights: • A novel control method for the heating greenhouse with SWSHPS is proposed. • CFD is employed to predict the priorities of FCU loops for thermal performance. • EPM acts as an on-line tool to predict the total energy demand of the greenhouse. • The CFD–EPM-based method can save energy and improve control accuracy. • The energy savings potential is between 8.7% and 15.1%. - Abstract: As heating is one of the main production costs, many efforts have been made to reduce the energy consumption of agricultural greenhouses. Herein, a novel control method of greenhouse heating using computational fluid dynamics (CFD) and an energy prediction model (EPM) is proposed for energy savings and system performance. Based on the low-Reynolds number k–ε turbulence principle, a CFD model of the heating greenhouse is developed, applying the discrete ordinates model for the radiative heat transfers and a porous medium approach for plants, accounting for their sensible and latent heat exchanges. The CFD simulations have been validated, and used to analyze the greenhouse thermal performance and the priority of fan coil unit (FCU) loops under the various heating conditions. According to the heating efficiency and temperature uniformity, the priorities of each FCU loop can be predicted to generate a database of priorities for the control system. The EPM is built up based on the thermal balance, and used to predict and optimize the energy demand of the greenhouse online. Combined with the priorities of FCU loops from CFD simulations offline, we have developed the CFD–EPM-based heating control system of the greenhouse with the surface water source heat pumps system (SWSHPS). Compared with the conventional multi-zone independent control (CMIC) method, the energy savings potential is between 8.7% and 15.1%, and the control temperature deviation is decreased to between 0.1 °C and 0.6 °C in the investigated greenhouse. These results show the CFD–EPM-based method can improve system
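The thermal-balance idea behind an energy prediction model can be sketched generically: heating demand is envelope loss minus free gains, floored at zero. The U·A·ΔT form and the numbers below are illustrative assumptions, not the paper's EPM:

```python
def heating_demand(u_value, area, t_inside, t_outside, solar_gain=0.0):
    """Generic greenhouse thermal balance: conductive envelope loss
    U * A * (T_in - T_out) minus free solar gain, floored at zero
    (no heating is needed when the gains cover the losses)."""
    loss = u_value * area * (t_inside - t_outside)
    return max(0.0, loss - solar_gain)

# Toy case: 6 W/(m2 K) cover, 1000 m2 envelope, 18 degC setpoint,
# 3 degC outside, 20 kW of solar gain.
demand_w = heating_demand(6.0, 1000.0, 18.0, 3.0, solar_gain=20000.0)
```

An on-line EPM would re-evaluate such a balance each control step and hand the resulting demand to the FCU loops in priority order.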

  5. A Method to Predict Compressor Stall in the TF34-100 Turbofan Engine Utilizing Real-Time Performance Data

    Science.gov (United States)

    2015-06-01

A METHOD TO PREDICT COMPRESSOR STALL IN THE TF34-100 TURBOFAN ENGINE UTILIZING REAL-TIME PERFORMANCE DATA. Thesis presented to the Faculty, Department of Systems Engineering. Shuxiang ‘Albert’ Li, BS

  6. Prediction method of seismic residual deformation of caisson quay wall in liquefied foundation

    Science.gov (United States)

    Wang, Li-Yan; Liu, Han-Long; Jiang, Peng-Ming; Chen, Xiang-Xiang

    2011-03-01

    The multi-spring shear mechanism plastic model in this paper is defined in strain space to simulate pore pressure generation and development in sands under cyclic loading and undrained conditions, and the rotation of principal stresses can also be simulated by the model with cyclic behavior of anisotropic consolidated sands. Seismic residual deformations of typical caisson quay walls under different engineering situations are analyzed in detail by the plastic model, and then an index of liquefaction extent is applied to describe the regularity of seismic residual deformation of caisson quay wall top under different engineering situations. Some correlated prediction formulas are derived from the results of regression analysis between seismic residual deformation of quay wall top and extent of liquefaction in the relative safety backfill sand site. Finally, the rationality and the reliability of the prediction methods are validated by test results of a 120 g-centrifuge shaking table, and the comparisons show that some reliable seismic residual deformation of caisson quay can be predicted by appropriate prediction formulas and appropriate index of liquefaction extent.

  7. A review of machine learning methods to predict the solubility of overexpressed recombinant proteins in Escherichia coli.

    Science.gov (United States)

    Habibi, Narjeskhatoon; Mohd Hashim, Siti Z; Norouzi, Alireza; Samian, Mohammed Razip

    2014-05-08

Over the last 20 years in biotechnology, the production of recombinant proteins has been a crucial bioprocess in both the biopharmaceutical and research arenas in terms of human health, scientific impact and economic volume. Although logical strategies of genetic engineering have been established, protein overexpression is still an art. In particular, heterologous expression is often hindered by low levels of production and frequent failures for unclear reasons. The problem is accentuated because there is no generic solution available to enhance heterologous overexpression. For a given protein, the extent of its solubility can indicate the quality of its function. Over 30% of synthesized proteins are not soluble. In certain experimental circumstances, including temperature, expression host, etc., protein solubility is a feature eventually defined by its sequence. Until now, numerous machine-learning-based methods have been proposed to predict the solubility of a protein merely from its amino acid sequence. Despite 20 years of research on the matter, no comprehensive review of the published methods is available. This paper presents an extensive review of the existing models to predict protein solubility in the Escherichia coli recombinant protein overexpression system. The models are investigated and compared regarding the datasets used, features, feature selection methods, machine learning techniques and accuracy of prediction. A discussion of the models is provided at the end. This study aims to investigate extensively the machine-learning-based methods to predict recombinant protein solubility, so as to offer a general as well as a detailed understanding for researchers in the field. Some of the models present acceptable prediction performances and convenient user interfaces. These models can be considered valuable tools to predict recombinant protein overexpression results before performing real laboratory experiments, thus saving labour, time and cost.

  8. PROXIMAL: a method for Prediction of Xenobiotic Metabolism.

    Science.gov (United States)

    Yousofshahi, Mona; Manteiga, Sara; Wu, Charmian; Lee, Kyongbum; Hassoun, Soha

    2015-12-22

    Contamination of the environment with bioactive chemicals has emerged as a potential public health risk. These substances that may cause distress or disease in humans can be found in air, water and food supplies. An open question is whether these chemicals transform into potentially more active or toxic derivatives via xenobiotic metabolizing enzymes expressed in the body. We present a new prediction tool, which we call PROXIMAL (Prediction of Xenobiotic Metabolism) for identifying possible transformation products of xenobiotic chemicals in the liver. Using reaction data from DrugBank and KEGG, PROXIMAL builds look-up tables that catalog the sites and types of structural modifications performed by Phase I and Phase II enzymes. Given a compound of interest, PROXIMAL searches for substructures that match the sites cataloged in the look-up tables, applies the corresponding modifications to generate a panel of possible transformation products, and ranks the products based on the activity and abundance of the enzymes involved. PROXIMAL generates transformations that are specific for the chemical of interest by analyzing the chemical's substructures. We evaluate the accuracy of PROXIMAL's predictions through case studies on two environmental chemicals with suspected endocrine disrupting activity, bisphenol A (BPA) and 4-chlorobiphenyl (PCB3). Comparisons with published reports confirm 5 out of 7 and 17 out of 26 of the predicted derivatives for BPA and PCB3, respectively. We also compare biotransformation predictions generated by PROXIMAL with those generated by METEOR and Metaprint2D-react, two other prediction tools. PROXIMAL can predict transformations of chemicals that contain substructures recognizable by human liver enzymes. It also has the ability to rank the predicted metabolites based on the activity and abundance of enzymes involved in xenobiotic transformation.
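The look-up-table mechanism described above can be sketched in a few lines. The substructure names, enzymes and weights below are invented placeholders, not entries from the actual DrugBank/KEGG-derived tables PROXIMAL builds:

```python
# Toy version of PROXIMAL's idea: catalog which modification each enzyme
# performs at a given site, match a query compound's substructures against
# the catalog, and rank candidate products by an enzyme activity/abundance
# weight. All table entries here are illustrative placeholders.
LOOKUP = {
    "phenolic-OH": [("glucuronidation", "UGT", 0.9),
                    ("sulfation", "SULT", 0.6)],
    "aromatic-CH": [("hydroxylation", "CYP2C9", 0.8)],
}

def predict_transformations(substructures):
    products = []
    for sub in substructures:
        for modification, enzyme, weight in LOOKUP.get(sub, []):
            products.append((weight, modification, enzyme, sub))
    # Rank by the (hypothetical) enzyme weight, highest first.
    return sorted(products, reverse=True)

# A BPA-like query: phenolic hydroxyls plus aromatic carbons.
ranked = predict_transformations(["phenolic-OH", "aromatic-CH"])
```

In the real tool the "substructures" are chemical patterns detected in the molecule's structure, and the weights come from enzyme expression and activity data.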

  9. A method for predicting the impact velocity of a projectile fired from a compressed air gun facility

    International Nuclear Information System (INIS)

    Attwood, G.J.

    1988-03-01

    This report describes the development and use of a method for calculating the velocity at impact of a projectile fired from a compressed air gun. The method is based on a simple but effective approach which has been incorporated into a computer program. The method was developed principally for use with the Horizontal Impact Facility at AEE Winfrith but has been adapted so that it can be applied to any compressed air gun of a similar design. The method has been verified by comparison of predicted velocities with test data and the program is currently being used in a predictive manner to specify test conditions for the Horizontal Impact Facility at Winfrith. (author)

  10. A prediction method of temperature distribution and thermal stress for the throttle turbine rotor and its application

    Directory of Open Access Journals (Sweden)

    Yang Yu

    2017-01-01

    Full Text Available In this paper, a method for predicting the temperature distribution and thermal stress of the throttle-regulated steam turbine rotor is proposed. The rotor thermal stress curve can be calculated from the preset power requirement, the operation mode and the predicted critical parameters. The results for a 660 MW throttle-regulated turbine rotor show that, with the help of the inertial element method, operators are able to predict operation results and adjust operation parameters in advance. The method can also raise the operation level, thus providing a technical guarantee for thermal stress optimization control and for the safety of the steam turbine rotor under variable load operation.

  11. The Relevance Voxel Machine (RVoxM): A Bayesian Method for Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2011-01-01

    This paper presents the Relevance Voxel Machine (RVoxM), a Bayesian multivariate pattern analysis (MVPA) algorithm that is specifically designed for making predictions based on image data. In contrast to generic MVPA algorithms that have often been used for this purpose, the method is designed to ...

  12. Five year prediction of Sea Surface Temperature in the Tropical Atlantic: a comparison of simple statistical methods

    OpenAIRE

    Laepple, Thomas; Jewson, Stephen; Meagher, Jonathan; O'Shay, Adam; Penzer, Jeremy

    2007-01-01

    We are developing schemes that predict future hurricane numbers by first predicting future sea surface temperatures (SSTs) and then applying the observed statistical relationship between SST and hurricane numbers. As part of this overall goal, in this study we compare the historical performance of three simple statistical methods for making five-year SST forecasts. We also present SST forecasts for 2006-2010 using these methods and compare them to forecasts made from two structural time series ...

  13. Recurrence predictive models for patients with hepatocellular carcinoma after radiofrequency ablation using support vector machines with feature selection methods.

    Science.gov (United States)

    Liang, Ja-Der; Ping, Xiao-Ou; Tseng, Yi-Ju; Huang, Guan-Tarn; Lai, Feipei; Yang, Pei-Ming

    2014-12-01

    Recurrence of hepatocellular carcinoma (HCC) is an important issue despite effective treatments with tumor eradication. Identification of patients who are at high risk for recurrence may provide more efficacious screening and detection of tumor recurrence. The aim of this study was to develop recurrence predictive models for HCC patients who received radiofrequency ablation (RFA) treatment. From January 2007 to December 2009, 83 newly diagnosed HCC patients receiving RFA as their first treatment were enrolled. Five feature selection methods, including genetic algorithm (GA), simulated annealing (SA) algorithm, random forests (RF) and hybrid methods (GA+RF and SA+RF), were utilized to select an important subset from a total of 16 clinical features. These feature selection methods were combined with support vector machines (SVM) to develop predictive models with better performance. Five-fold cross-validation was used to train and test the SVM models. The SVM-based predictive models developed with hybrid feature selection methods and 5-fold cross-validation achieved average sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the ROC curve of 67%, 86%, 82%, 69%, 90%, and 0.69, respectively. The derived SVM models can suggest high-risk patients who should be closely followed up after complete RFA treatment. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
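The evaluation scaffold (5-fold cross-validation around a classifier, with sensitivity and specificity computed per fold) can be sketched as follows. The data are synthetic, and a trivial midpoint-threshold rule stands in for the SVM with feature selection:

```python
# 5-fold cross-validation scaffold with per-fold sensitivity/specificity.
# Synthetic one-feature data; a midpoint threshold replaces the real SVM.
import random

random.seed(0)
# Label 1 (recurrence) cases tend to have larger feature values.
data = [(random.gauss(2.0 if y else 0.0, 1.0), y)
        for y in [1] * 30 + [0] * 70]
random.shuffle(data)

def threshold_classifier(train):
    xs1 = [x for x, y in train if y == 1]
    xs0 = [x for x, y in train if y == 0]
    cut = (sum(xs1) / len(xs1) + sum(xs0) / len(xs0)) / 2  # midpoint rule
    return lambda x: 1 if x > cut else 0

def five_fold_metrics(data, k=5):
    folds = [data[i::k] for i in range(k)]
    sens, spec = [], []
    for i in range(k):
        test = folds[i]
        train = [d for j, f in enumerate(folds) if j != i for d in f]
        clf = threshold_classifier(train)
        tp = sum(1 for x, y in test if y == 1 and clf(x) == 1)
        tn = sum(1 for x, y in test if y == 0 and clf(x) == 0)
        p = sum(1 for _, y in test if y == 1)
        n = sum(1 for _, y in test if y == 0)
        sens.append(tp / p if p else 1.0)
        spec.append(tn / n if n else 1.0)
    return sum(sens) / k, sum(spec) / k

avg_sens, avg_spec = five_fold_metrics(data)
```

The same scaffold wraps any classifier; only `threshold_classifier` would be replaced by the SVM-plus-feature-selection model.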

  14. A Comparison of Prediction Methods for Design of Pump as Turbine for Small Hydro Plant: Implemented Plant

    Science.gov (United States)

    Naeimi, Hossein; Nayebi Shahabi, Mina; Mohammadi, Sohrab

    2017-08-01

    In developing countries, small and micro hydropower plants are very effective sources of electricity generation, with an energy pay-back time (EPBT) lower than that of other conventional electricity generation systems. Using a pump as a turbine (PAT) is an attractive, significant and cost-effective alternative. Pump manufacturers do not normally provide the characteristic curves of their pumps working as turbines; therefore, choosing an appropriate pump to work as a turbine is essential in implementing small-hydro plants. In this paper, in order to find the best method for choosing a PAT, the results of a small-hydro plant implemented on the by-pass of a Pressure Reducing Valve (PRV) in Urmia city in Iran are presented. Several methods for predicting the Best Efficiency Point of a PAT are derived, their predictions are compared to the results of the implemented project, and the deviations from the measured data are considered and discussed to determine which method predicts the specifications of a PAT most accurately. Finally, the energy pay-back time for the plant is calculated.
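Two of the classical BEP-conversion correlations such comparisons typically include are Stepanoff's and Sharma's; the exponent forms below are the versions commonly quoted in the PAT literature (which specific variants the paper evaluated is an assumption here):

```python
# Classical pump-mode-BEP -> turbine-mode-BEP conversion correlations.
# q: flow rate (m3/s), h: head (m), eta: pump efficiency at BEP.
import math

def stepanoff(q_bep, h_bep, eta_bep):
    """Stepanoff: Qt = Qp/sqrt(eta), Ht = Hp/eta."""
    return q_bep / math.sqrt(eta_bep), h_bep / eta_bep

def sharma(q_bep, h_bep, eta_bep):
    """Sharma: Qt = Qp/eta^0.8, Ht = Hp/eta^1.2."""
    return q_bep / eta_bep ** 0.8, h_bep / eta_bep ** 1.2

# Example pump: 0.05 m3/s, 20 m head, 75% efficiency at BEP.
q_t, h_t = stepanoff(0.05, 20.0, 0.75)
```

Both correlations predict that the turbine-mode BEP sits at higher flow and head than the pump-mode BEP, which is the qualitative behaviour measured on installed PATs.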

  15. Validation of engineering methods for predicting hypersonic vehicle controls forces and moments

    Science.gov (United States)

    Maughmer, M.; Straussfogel, D.; Long, L.; Ozoroski, L.

    1991-01-01

    This work examines the ability of the aerodynamic analysis methods contained in an industry standard conceptual design code, the Aerodynamic Preliminary Analysis System (APAS II), to estimate the forces and moments generated through control surface deflections from low subsonic to high hypersonic speeds. Predicted control forces and moments generated by various control effectors are compared with previously published wind-tunnel and flight-test data for three vehicles: the North American X-15, a hypersonic research airplane concept, and the Space Shuttle Orbiter. Qualitative summaries of the results are given for each force and moment coefficient and each control derivative in the various speed ranges. Results show that all predictions of longitudinal stability and control derivatives are acceptable for use at the conceptual design stage.

  16. An Adaptive Model Predictive Load Frequency Control Method for Multi-Area Interconnected Power Systems with Photovoltaic Generations

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2017-11-01

    Full Text Available As the penetration level of renewable distributed generations such as wind turbine generators and photovoltaic stations increases, the load frequency control issue of a multi-area interconnected power system becomes more challenging. This paper presents an adaptive model predictive load frequency control method for a multi-area interconnected power system with photovoltaic generation, considering nonlinear features such as a dead band for the governor and a generation rate constraint for the steam turbine. The dynamic characteristic of the system is first formulated as a discrete-time state space model. Then, the predictive dynamic model is obtained by introducing an expanded state vector, and rolling optimization of the control signal is implemented based on a cost function that minimizes the weighted sum of squared predicted errors and squared future control values. Simulation results on a typical two-area power system consisting of photovoltaic and thermal generators demonstrate the superiority of the proposed model predictive control method to state-of-the-art control techniques such as firefly algorithm, genetic algorithm, and population extremal optimization-based proportional-integral control methods in cases of normal conditions, load disturbance and parameter uncertainty.
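The rolling-optimization step can be sketched on a scalar toy plant: minimize the weighted sum of squared predicted errors and squared control values over a short horizon, apply only the first move, and repeat. The plant, weights and grid search below are invented stand-ins (a real implementation solves a quadratic program over the multi-area state space model):

```python
# Receding-horizon control sketch on a toy scalar plant.
A, B = 0.9, 0.5          # plant: x[k+1] = A*x[k] + B*u[k]
Q, R = 1.0, 0.1          # weights on squared error and squared control
HORIZON = 5

def cost(x0, u):
    """Predicted cost of holding control u constant over the horizon."""
    x, j = x0, 0.0
    for _ in range(HORIZON):
        x = A * x + B * u
        j += Q * x * x + R * u * u
    return j

def mpc_step(x0):
    """1-D grid search over candidate controls; real MPC solves a QP."""
    candidates = [i / 100.0 for i in range(-200, 201)]
    return min(candidates, key=lambda u: cost(x0, u))

x, trajectory = 1.0, []   # start from a frequency deviation of 1.0
for _ in range(20):
    u = mpc_step(x)
    x = A * x + B * u     # apply only the first control move, then re-plan
    trajectory.append(x)
```

The deviation is regulated toward zero; in the paper's setting the same loop runs over the expanded state vector of the interconnected areas.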

  17. SVMTriP: a method to predict antigenic epitopes using support vector machine to integrate tri-peptide similarity and propensity.

    Directory of Open Access Journals (Sweden)

    Bo Yao

    Full Text Available Identifying protein surface regions preferentially recognizable by antibodies (antigenic epitopes is at the heart of new immuno-diagnostic reagent discovery and vaccine design, and computational methods for antigenic epitope prediction provide crucial means to serve this purpose. Many linear B-cell epitope prediction methods were developed, such as BepiPred, ABCPred, AAP, BCPred, BayesB, BEOracle/BROracle, and BEST, towards this goal. However, effective immunological research demands more robust performance of the prediction method than what the current algorithms could provide. In this work, a new method to predict linear antigenic epitopes is developed; Support Vector Machine has been utilized by combining the Tri-peptide similarity and Propensity scores (SVMTriP. Applied to non-redundant B-cell linear epitopes extracted from IEDB, SVMTriP achieves a sensitivity of 80.1% and a precision of 55.2% with a five-fold cross-validation. The AUC value is 0.702. The combination of similarity and propensity of tri-peptide subsequences can improve the prediction performance for linear B-cell epitopes. Moreover, SVMTriP is capable of recognizing viral peptides from a human protein sequence background. A web server based on our method is constructed for public use. The server and all datasets used in the current study are available at http://sysbio.unl.edu/SVMTriP.

  18. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts

    Science.gov (United States)

    Yang, Hongbin; Sun, Lixia; Li, Weihua; Liu, Guixia; Tang, Yun

    2018-02-01

    For a drug, safety is always the most important issue, including a variety of toxicities and adverse drug effects, which should be evaluated in preclinical and clinical trial phases. This review article first briefly introduces the computational methods used in the prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of the article is on the recent progress of predictive models built for various toxicities. Available databases and web servers are also provided. Though the methods and models are very helpful for drug design, there are still some challenges and limitations to be addressed for drug safety assessment in the future.

  19. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts

    Directory of Open Access Journals (Sweden)

    Hongbin Yang

    2018-02-01

    Full Text Available During drug development, safety is always the most important issue, including a variety of toxicities and adverse drug effects, which should be evaluated in preclinical and clinical trial phases. This review article first briefly introduces the computational methods used in the prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of the article is on the recent progress of predictive models built for various toxicities. Available databases and web servers are also provided. Though the methods and models are very helpful for drug design, there are still some challenges and limitations to be addressed for drug safety assessment in the future.

  20. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts.

    Science.gov (United States)

    Yang, Hongbin; Sun, Lixia; Li, Weihua; Liu, Guixia; Tang, Yun

    2018-01-01

    During drug development, safety is always the most important issue, including a variety of toxicities and adverse drug effects, which should be evaluated in preclinical and clinical trial phases. This review article first briefly introduces the computational methods used in the prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of the article is on the recent progress of predictive models built for various toxicities. Available databases and web servers are also provided. Though the methods and models are very helpful for drug design, there are still some challenges and limitations to be addressed for drug safety assessment in the future.

  1. A measurement-based method for predicting margins and uncertainties for unprotected accidents in the Integral Fast Reactor concept

    International Nuclear Information System (INIS)

    Vilim, R.B.

    1990-01-01

    A measurement-based method for predicting the response of an LMR core to unprotected accidents has been developed. The method processes plant measurements taken at normal operation to generate a stochastic model for the core dynamics. This model can be used to predict three sigma confidence intervals for the core temperature and power response. Preliminary numerical simulations performed for EBR-2 appear promising. 6 refs., 2 figs

  2. Computational methods using weighed-extreme learning machine to predict protein self-interactions with protein evolutionary information.

    Science.gov (United States)

    An, Ji-Yong; Zhang, Lei; Zhou, Yong; Zhao, Yu-Jun; Wang, Da-Fu

    2017-08-18

    Self-interacting proteins (SIPs) are important for biological activity owing to the inherent interactions among their secondary structures or domains. However, given the limitations of experimental detection of self-interactions, a major challenge in studying SIPs is how to exploit computational approaches for SIP detection based on the evolutionary information contained in the protein sequence. In this work, we present a novel computational approach named WELM-LAG, which combines the Weighted Extreme Learning Machine (WELM) classifier with the Local Average Group (LAG) descriptor to predict SIPs from protein sequences. The major improvement of our method lies in an effective feature extraction method used to represent candidate self-interacting proteins by exploring the evolutionary information embedded in the PSI-BLAST-constructed position-specific scoring matrix (PSSM), combined with a reliable and robust WELM classifier to carry out classification. In addition, the Principal Component Analysis (PCA) approach is used to reduce the impact of noise. The WELM-LAG method gave very high average accuracies of 92.94 and 96.74% on yeast and human datasets, respectively. Meanwhile, we compared it with the state-of-the-art support vector machine (SVM) classifier and other existing methods on the human and yeast datasets. Comparative results indicated that our approach is very promising and may provide a cost-effective alternative for predicting SIPs. In addition, we developed a freely available web server called WELM-LAG-SIPs to predict SIPs. The web server is available at http://219.219.62.123:8888/WELMLAG/ .
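The extreme-learning-machine core of WELM can be sketched with numpy: the hidden-layer weights are random and fixed, and only the output weights are solved in closed form by least squares. The PSSM/LAG feature extraction and the class-imbalance weighting of the full WELM are omitted, and the data are synthetic:

```python
# Plain extreme learning machine: random fixed hidden layer, closed-form
# (least-squares) output weights. Synthetic two-class feature vectors stand
# in for the PSSM-derived descriptors.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(2, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)

def elm_train(X, y, hidden=40):
    W = rng.normal(size=(X.shape[1], hidden))   # random input weights (fixed)
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # hidden activations
    beta = np.linalg.pinv(H) @ y                # least-squares output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(int)

model = elm_train(X, y)
accuracy = (elm_predict(model, X) == y).mean()
```

The weighted variant multiplies each training sample's residual by a class-dependent weight before solving for `beta`, which compensates for imbalanced positive/negative sets.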

  3. Recursive prediction error methods for online estimation in nonlinear state-space models

    Directory of Open Access Journals (Sweden)

    Dag Ljungquist

    1994-04-01

    Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.

  4. Method for predicting future developments of traffic noise in urban areas in Europe

    NARCIS (Netherlands)

    Salomons, E.; Hout, D. van den; Janssen, S.; Kugler, U.; MacA, V.

    2010-01-01

    Traffic noise in urban areas in Europe is a major environmental stressor. In this study we present a method for predicting how environmental noise can be expected to develop in the future. In the project HEIMTSA scenarios were developed for all relevant environmental stressors to health, for all

  5. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    Science.gov (United States)

    Anggraeni, Novia Antika

    2015-04-01

    Testing eruption time prediction is part of preparing volcanic disaster mitigation, especially for volcanoes with inhabited slopes such as Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as the degree of seismicity, deformation and SO2 gas emission. One method that can be used to predict the time of an eruption is the Materials Failure Forecast Method (FFM), a predictive method introduced by Voight (1988) that determines the time of a volcanic eruption from an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy of Merapi Volcano from 1990 to 2012. The data were plotted as the inverse of the seismic energy rate versus time, and the FFM graphical technique was applied using simple linear regression. To increase the precision of the predicted time, data quality was controlled using the correlation coefficient of the inverse seismic energy rate versus time. The graph analysis shows that the predicted times deviate from the actual eruption times by between -2.86 and 5.49 days.
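The FFM graphical technique reduces to fitting a straight line to the inverse rate versus time and reading off the time-axis intercept as the forecast eruption time. The sketch below uses synthetic data constructed so that the true intercept is t = 10:

```python
# FFM graphical technique: ordinary least squares on 1/rate vs time; the
# forecast failure (eruption) time is where the fitted line crosses zero.
def linear_fit(ts, ys):
    """OLS fit of y = slope*t + intercept."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
            sum((t - mt) ** 2 for t in ts)
    return slope, my - slope * mt

# Synthetic rates accelerating toward failure at t = 10:
# rate(t) = 1/(10 - t), so 1/rate = 10 - t is exactly linear.
ts = [0, 2, 4, 6, 8]
inverse_rate = [10 - t for t in ts]

slope, intercept = linear_fit(ts, inverse_rate)
predicted_eruption_time = -intercept / slope   # where 1/rate reaches zero
```

With real seismic energy data the points scatter about the line, which is why the study screens fitting windows by their correlation coefficient before trusting the extrapolated intercept.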

  6. Long short-term memory neural network for air pollutant concentration predictions: Method development and evaluation

    International Nuclear Information System (INIS)

    Li, Xiang; Peng, Ling; Yao, Xiaojing; Cui, Shaolong; Hu, Yuan; You, Chengzeng; Chi, Tianhe

    2017-01-01

    Air pollutant concentration forecasting is an effective method of protecting public health by providing an early warning against harmful air pollutants. However, existing methods of air pollutant concentration prediction fail to effectively model long-term dependencies, and most neglect spatial correlations. In this paper, a novel long short-term memory neural network extended (LSTME) model that inherently considers spatiotemporal correlations is proposed for air pollutant concentration prediction. Long short-term memory (LSTM) layers were used to automatically extract inherent useful features from historical air pollutant data, and auxiliary data, including meteorological data and time stamp data, were merged into the proposed model to enhance the performance. Hourly PM 2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm) concentration data collected at 12 air quality monitoring stations in Beijing City from Jan/01/2014 to May/28/2016 were used to validate the effectiveness of the proposed LSTME model. Experiments were performed using the spatiotemporal deep learning (STDL) model, the time delay neural network (TDNN) model, the autoregressive moving average (ARMA) model, the support vector regression (SVR) model, and the traditional LSTM NN model, and a comparison of the results demonstrated that the LSTME model is superior to the other statistics-based models. Additionally, the use of auxiliary data improved model performance. For the one-hour prediction tasks, the proposed model performed well and exhibited a mean absolute percentage error (MAPE) of 11.93%. In addition, we conducted multiscale predictions over different time spans and achieved satisfactory performance, even for 13–24 h prediction tasks (MAPE = 31.47%). - Highlights: • Regional air pollutant concentration shows an obvious spatiotemporal correlation. • Our prediction model presents superior performance. • Climate data and metadata can significantly
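Before any LSTM sees such data, the hourly series and auxiliary inputs must be framed as fixed-length windows with the next hour as the target. A numpy sketch of that framing step, with synthetic series standing in for the Beijing PM2.5 and meteorological data, is:

```python
# Turn an hourly series plus one auxiliary input into supervised
# (lookback-window, next-hour-target) pairs of the shape an LSTM consumes.
import numpy as np

hours = np.arange(200)
pm25 = 35 + 10 * np.sin(hours / 24 * 2 * np.pi)   # fake daily PM2.5 cycle
wind = np.cos(hours / 24 * 2 * np.pi)             # fake meteorological input

def make_windows(series, aux, lookback=24):
    X, y = [], []
    for i in range(len(series) - lookback):
        window = np.column_stack([series[i:i + lookback],
                                  aux[i:i + lookback]])
        X.append(window)                  # shape (lookback, n_features)
        y.append(series[i + lookback])    # next-hour target
    return np.array(X), np.array(y)

X, y = make_windows(pm25, wind)
```

The paper's model stacks LSTM layers on windows of this form; multiscale forecasts (up to 13-24 h ahead) simply shift the target further past the window.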

  7. Predictive Manufacturing: A Classification Strategy to Predict Product Failures

    DEFF Research Database (Denmark)

    Khan, Abdul Rauf; Schiøler, Henrik; Kulahci, Murat

    2018-01-01

    manufacturing analytics model that employs a big data approach to predicting product failures; third, we illustrate the issue of high dimensionality, along with statistically redundant information; and, finally, our proposed method will be compared against the well-known classification methods (SVM, K-nearest neighbor, artificial neural networks). The results from real data show that our predictive manufacturing analytics approach, using genetic algorithms and Voronoi tessellations, is capable of predicting product failure with reasonable accuracy. The potential application of this method contributes to accurately predicting product failures, which would enable manufacturers to reduce production costs without compromising product quality.

  8. WALS Prediction

    NARCIS (Netherlands)

    Magnus, J.R.; Wang, W.; Zhang, Xinyu

    2012-01-01

    Abstract: Prediction under model uncertainty is an important and difficult issue. Traditional prediction methods (such as pretesting) are based on model selection followed by prediction in the selected model, but the reported prediction and the reported prediction variance ignore the uncertainty

  9. Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods

    Science.gov (United States)

    Davis, A. D.

    2015-12-01

    The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI)---here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost.Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity
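The inference-then-propagation workflow described above can be sketched with a basic Metropolis-Hastings sampler; the one-line "forward model" and "predictive model" below are invented stand-ins for the actual ice stream/shelf models:

```python
# Bayesian parameter inference (Metropolis-Hastings) followed by pushing
# posterior samples through a toy predictive model for the QoI.
import random, math

random.seed(1)
true_friction = 2.0
obs = [true_friction + random.gauss(0, 0.5) for _ in range(20)]

def log_posterior(theta):
    """Flat prior; Gaussian likelihood with sigma = 0.5."""
    return -sum((o - theta) ** 2 for o in obs) / (2 * 0.5 ** 2)

def metropolis(n_steps=5000, step=0.2):
    theta, samples = 0.0, []
    lp = log_posterior(theta)
    for _ in range(n_steps):
        prop = theta + random.gauss(0, step)
        lp_prop = log_posterior(prop)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop   # accept the proposal
        samples.append(theta)
    return samples[1000:]               # discard burn-in

def forward_qoi(theta):
    """Toy predictive model: future grounded-ice volume vs. basal friction."""
    return 100.0 - 10.0 * theta

qoi_samples = [forward_qoi(t) for t in metropolis()]
qoi_mean = sum(qoi_samples) / len(qoi_samples)
```

The spread of `qoi_samples` is the predictive uncertainty; the paper's contribution is making this loop affordable when each forward-model evaluation is expensive, which the toy one-liner here is not.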

  10. Predictive Methods for Dense Polymer Networks: Combating Bias with Bio-Based Structures

    Science.gov (United States)

    2016-03-16

    [Report documentation page residue removed. Recoverable details: author Andrew J. Guenthner; approved for public release, distribution unlimited (PA Clearance 16152). Briefing topics: architectural bias; comparison of petroleum-based and bio-based chemical architectures; continuing research on structure-property relationships.]

  11. Predicting high risk births with contraceptive prevalence and contraceptive method-mix in an ecologic analysis.

    Science.gov (United States)

    Perin, Jamie; Amouzou, Agbessi; Walker, Neff

    2017-11-07

    Increased contraceptive use has been associated with a decrease in high parity births, births that occur close together in time, and births to very young or to older women. These types of births are also associated with high risk of under-five mortality. Previous studies have looked at the change in the level of contraception use and the average change in these types of high-risk births. We aim to predict the distribution of births in a specific country when there is a change in the level and method of modern contraception. We used data from full birth histories and modern contraceptive use from 207 nationally representative Demographic and Health Surveys covering 71 countries to describe the distribution of births in each survey based on birth order, preceding birth space, and mother's age at birth. We estimated the ecologic associations between the prevalence and method-mix of modern contraceptives and the proportion of births in each category. Hierarchical modelling was applied to these aggregated cross sectional proportions, so that random effects were estimated for countries with multiple surveys. We use these results to predict the change in type of births associated with scaling up modern contraception in three different scenarios. We observed marked differences between regions, in the absolute rates of contraception, the types of contraceptives in use, and in the distribution of type of birth. Contraceptive method-mix was a significant determinant of proportion of high-risk births, especially for birth spacing, but also for mother's age and parity. Increased use of modern contraceptives is especially predictive of reduced parity and more births with longer preceding space. However, increased contraception alone is not associated with fewer births to women younger than 18 years or a decrease in short-spaced births. Both the level and the type of contraception are important factors in determining the effects of family planning on changes in distribution of

  12. Improved algorithms and methods for room sound-field prediction by acoustical radiosity in arbitrary polyhedral rooms

    Science.gov (United States)

    Nosal, Eva-Marie; Hodgson, Murray; Ashdown, Ian

    2004-08-01

    This paper explores acoustical (or time-dependent) radiosity, a geometrical-acoustics sound-field prediction method that assumes diffuse surface reflection. The literature of acoustical radiosity is briefly reviewed and the advantages and disadvantages of the method are discussed. A discrete form of the integral equation that results from meshing the enclosure boundaries into patches is presented and used in a discrete-time algorithm. Furthermore, an averaging technique is used to reduce computational requirements. To generalize to nonrectangular rooms, a spherical-triangle method is proposed as a means of evaluating the integrals over solid angles that appear in the discrete form of the integral equation. The evaluation of form factors, which also appear in the numerical solution, is discussed for rectangular and nonrectangular rooms. This algorithm and associated methods are validated by comparison of the steady-state predictions for a spherical enclosure to analytical solutions.

  13. Machine Learning Methods to Predict Diabetes Complications.

    Science.gov (United States)

    Dagliati, Arianna; Marini, Simone; Sacchi, Lucia; Cogni, Giulia; Teliti, Marsida; Tibollo, Valentina; De Cata, Pasquale; Chiovato, Luca; Bellazzi, Riccardo

    2018-03-01

    One of the areas where Artificial Intelligence is having more impact is machine learning, which develops algorithms able to learn patterns and decision rules from data. Machine learning algorithms have been embedded into data mining pipelines, which can combine them with classical statistical strategies, to extract knowledge from data. Within the EU-funded MOSAIC project, a data mining pipeline has been used to derive a set of predictive models of type 2 diabetes mellitus (T2DM) complications based on electronic health record data of nearly one thousand patients. The pipeline comprises clinical center profiling, predictive model targeting, predictive model construction and model validation. After having dealt with missing data by means of random forest (RF) and having applied suitable strategies to handle class imbalance, we used logistic regression with stepwise feature selection to predict the onset of retinopathy, neuropathy, or nephropathy at different time scenarios: 3, 5, and 7 years from the first visit to the Hospital Center for Diabetes (not from diagnosis). The variables considered are gender, age, time from diagnosis, body mass index (BMI), glycated hemoglobin (HbA1c), hypertension, and smoking habit. The final models, tailored to each complication, provided accuracies of up to 0.838. Different variables were selected for each complication and time scenario, leading to specialized models that are easy to translate into clinical practice.

  14. Skill forecasting from different wind power ensemble prediction methods

    International Nuclear Information System (INIS)

    Pinson, Pierre; Nielsen, Henrik A; Madsen, Henrik; Kariniotakis, George

    2007-01-01

    This paper investigates alternative approaches to providing uncertainty estimates associated with point predictions of wind generation. Focus is given to skill forecasts in the form of prediction risk indices, aiming at giving a comprehensive signal on the expected level of forecast uncertainty. Ensemble predictions of wind generation are used as input. A proposal for the definition of prediction risk indices is given. Such skill forecasts are based on the dispersion of ensemble members for a single prediction horizon, or over a set of successive look-ahead times. It is shown on the test case of a Danish offshore wind farm how prediction risk indices may be related to several levels of forecast uncertainty (and energy imbalances). Wind power ensemble predictions are derived from the transformation of ECMWF and NCEP ensembles of meteorological variables to power, as well as by an alternative lagged-average approach. The ability of risk indices calculated from the various types of ensemble forecasts to resolve among situations with different levels of uncertainty is discussed.
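    As a rough sketch of the idea rather than the authors' exact definition, a risk index can be computed from the spread of ensemble members at a single horizon, or averaged over a set of successive look-ahead times:

    ```python
    import numpy as np

    def prediction_risk_index(ensemble, window=None):
        """ensemble: array (members, horizons) of power forecasts, e.g. as a
        fraction of installed capacity. The index is the spread (std. dev.
        across members) per horizon; with `window`, a moving average over
        that many successive look-ahead times."""
        spread = ensemble.std(axis=0)              # one value per horizon
        if window is None:
            return spread
        kernel = np.ones(window) / window          # moving-average kernel
        return np.convolve(spread, kernel, mode="valid")
    ```

    A large index flags situations where the ensemble members disagree, i.e. where larger forecast errors (and energy imbalances) are to be expected.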

  15. A new hybrid method for the prediction of the remaining useful life of a lithium-ion battery

    International Nuclear Information System (INIS)

    Chang, Yang; Fang, Huajing; Zhang, Yong

    2017-01-01

    Highlights: •The proposed prognostic method can make full use of historical information. •The method of obtaining historical error data is discussed in detail. •Comparative experiments based on data-driven and model-based methods are performed. •Battery working with different discharging currents is considered. -- Abstract: The lithium-ion battery has become the main power source of many electronic devices, so it is necessary to know its state of health and remaining useful life to ensure the reliability of those devices. In this paper, a novel hybrid method based on the idea of error correction is proposed to predict the remaining useful life of a lithium-ion battery; it fuses the algorithms of the unscented Kalman filter, complete ensemble empirical mode decomposition (CEEMD) and the relevance vector machine. Firstly, the unscented Kalman filter algorithm is adopted to obtain a prognostic result based on an estimated model and produce a raw error series. Secondly, a new error series is constructed by analyzing the decomposition results of the raw error series obtained by the CEEMD method. Finally, the new error series is used by a relevance vector machine regression model to predict the prognostic error, which is then used to correct the prognostic result obtained by the unscented Kalman filter. Remaining-useful-life prediction experiments for batteries with different rated capacities and discharging currents are performed to show the high reliability of the proposed hybrid method.
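    The error-correction idea can be illustrated in miniature. In this sketch a simple moving average of the historical error series stands in for the paper's CEEMD + relevance vector machine error model, and the baseline predictor (the unscented Kalman filter in the paper) is assumed to have already produced the raw predictions:

    ```python
    import numpy as np

    def error_corrected_forecast(y_true_hist, y_raw_hist, y_raw_next, k=3):
        """Generic error-correction step: predict the error of the baseline
        from its historical error series, then add it to the next raw
        prediction. A k-point moving average replaces the CEEMD + RVM model."""
        errors = np.asarray(y_true_hist) - np.asarray(y_raw_hist)
        predicted_error = errors[-k:].mean()       # crude error forecast
        return y_raw_next + predicted_error
    ```

    If the baseline predictor is systematically biased, even this crude correction removes most of the bias, which is the intuition the hybrid method builds on.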

  16. A novel method for predicting activity of cis-regulatory modules, based on a diverse training set.

    Science.gov (United States)

    Yang, Wei; Sinha, Saurabh

    2017-01-01

    With the rapid emergence of technologies for locating cis-regulatory modules (CRMs) genome-wide, the next pressing challenge is to assign precise functions to each CRM, i.e. to determine the spatiotemporal domains or cell types where it drives expression. A popular approach to this task is to model the typical k-mer composition of a set of CRMs known to drive a common expression pattern, and assign that pattern to other CRMs exhibiting a similar k-mer composition. This approach does not rely on prior knowledge of transcription factors relevant to the CRM or their binding motifs, and is thus more widely applicable than motif-based methods for predicting CRM activity, but is also prone to false-positive predictions. We present a novel strategy to improve the above-mentioned approach: to predict if a CRM drives a specific gene expression pattern, assess not only how similar the CRM is to other CRMs with that activity but also how similar it is to CRMs with distinct activities. We use a state-of-the-art statistical method to quantify a CRM's sequence similarity to many different training sets of CRMs, and employ a classification algorithm to integrate these similarity scores into a single prediction of the CRM's activity. This strategy is shown to significantly improve CRM activity prediction over current approaches. Our implementation of the new method, called IMMBoost, is freely available as source code at https://github.com/weiyangedward/IMMBoost. Contact: sinhas@illinois.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
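    A toy version of the underlying comparison, using simple k-mer count vectors and cosine similarity in place of the paper's statistical sequence models (all sequences and parameters below are illustrative):

    ```python
    from collections import Counter
    import math

    def kmer_counts(seq, k=3):
        """Counts of all overlapping k-mers in a DNA sequence."""
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    def cosine(c1, c2):
        keys = set(c1) | set(c2)
        dot = sum(c1[key] * c2[key] for key in keys)   # Counter returns 0 for misses
        n1 = math.sqrt(sum(v * v for v in c1.values()))
        n2 = math.sqrt(sum(v * v for v in c2.values()))
        return dot / (n1 * n2) if n1 and n2 else 0.0

    def score_crm(crm, positives, negatives, k=3):
        """IMMBoost's idea in miniature: compare a CRM's k-mer composition
        with CRMs of the target activity AND with CRMs of distinct activities;
        a positive score means it resembles the target class more."""
        target = kmer_counts(crm, k)
        pos = max(cosine(target, kmer_counts(s, k)) for s in positives)
        neg = max(cosine(target, kmer_counts(s, k)) for s in negatives)
        return pos - neg
    ```

    The actual method replaces cosine similarity with interpolated Markov model scores against many training sets, and integrates them with a trained classifier.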

  17. Protein docking prediction using predicted protein-protein interface

    Directory of Open Access Journals (Sweden)

    Li Bin

    2012-01-01

    Full Text Available Abstract Background Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations. Results We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we have developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, which is followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves the docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering. Conclusion We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy than alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.

  18. Protein docking prediction using predicted protein-protein interface.

    Science.gov (United States)

    Li, Bin; Kihara, Daisuke

    2012-01-10

    Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we have developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, which is followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves the docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering. We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy than alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.

  19. Validation of a simple method for predicting the disinfection performance in a flow-through contactor.

    Science.gov (United States)

    Pfeiffer, Valentin; Barbeau, Benoit

    2014-02-01

    Despite its shortcomings, the T10 method introduced by the United States Environmental Protection Agency (USEPA) in 1989 is currently the method most frequently used in North America to calculate disinfection performance. Other methods (e.g., the Integrated Disinfection Design Framework, IDDF) have been advanced as replacements, and more recently, the USEPA suggested the Extended T10 and Extended CSTR (Continuous Stirred-Tank Reactor) methods to improve the inactivation calculations within ozone contactors. To develop a method that fully considers the hydraulic behavior of the contactor, two models (Plug Flow with Dispersion and N-CSTR) were successfully fitted with five tracer test results derived from four Water Treatment Plants and a pilot-scale contactor. A new method based on the N-CSTR model was defined as the Partially Segregated (Pseg) method. The predictions from all the methods mentioned were compared under conditions of poor and good hydraulic performance, low and high disinfectant decay, and different levels of inactivation. These methods were also compared with experimental results from a chlorine pilot-scale contactor used for Escherichia coli inactivation. The T10 and Extended T10 methods led to large over- and under-estimations. The Segregated Flow Analysis (used in the IDDF) also considerably overestimated the inactivation under high disinfectant decay. Only the Extended CSTR and Pseg methods produced realistic and conservative predictions in all cases. Finally, a simple implementation procedure of the Pseg method was suggested for calculating disinfection performance. Copyright © 2013 Elsevier Ltd. All rights reserved.
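    The N-CSTR idea can be sketched under simplifying assumptions (Chick-Watson first-order kinetics, a constant disinfectant residual, complete mixing in each tank); the paper's Pseg method additionally accounts for disinfectant decay and partial segregation:

    ```python
    import math

    def log_inactivation_ncstr(k, c, tau, n):
        """Log10 inactivation through n equal CSTRs in series, assuming
        first-order (Chick-Watson) kinetics with rate constant k, a constant
        disinfectant residual c, and total hydraulic residence time tau."""
        survival = (1.0 + k * c * tau / n) ** (-n)   # surviving fraction
        return -math.log10(survival)
    ```

    As n grows, the prediction approaches the plug-flow value k·c·τ/ln(10) from below, which is why modelling the contactor as a single CSTR is conservative and plug flow is optimistic.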

  20. Feature selection for splice site prediction: A new method using EDA-based feature ranking

    Directory of Open Access Journals (Sweden)

    Rouzé Pierre

    2004-05-01

    Full Text Available Abstract Background The identification of relevant biological features in large and complex datasets is an important step towards gaining insight into the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and a faster classification. Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data. Results In this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework of genetic algorithms. From the estimated distribution of the algorithm, a feature ranking is derived. Afterwards, this ranking is used to iteratively discard features. We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do), this method provides a dynamical view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features.
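    A minimal sketch of EDA-based feature ranking, using a univariate EDA (UMDA) with a toy fitness function; the actual method evaluates candidate subsets with a splice-site classifier, which is replaced here by an arbitrary user-supplied `fitness` callable:

    ```python
    import numpy as np

    def umda_feature_ranking(fitness, n_features, pop=60, gens=15, rng=None):
        """Univariate EDA (UMDA) over feature-subset bit strings: evolve the
        marginal inclusion probabilities and rank features by them."""
        rng = np.random.default_rng() if rng is None else rng
        p = np.full(n_features, 0.5)               # initial inclusion probabilities
        for _ in range(gens):
            population = (rng.random((pop, n_features)) < p).astype(int)
            scores = np.array([fitness(ind) for ind in population])
            elite = population[np.argsort(scores)[-pop // 2:]]
            p = 0.9 * elite.mean(axis=0) + 0.05    # smoothed, kept in [0.05, 0.95]
        return np.argsort(-p)                      # most to least relevant
    ```

    The final marginal probabilities give exactly the ranking described in the abstract, which can then be used to iteratively discard the lowest-ranked features.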

  1. Assessment of the possibility of using data mining methods to predict sorption isotherms of selected organic compounds on activated carbon

    Directory of Open Access Journals (Sweden)

    Dąbek Lidia

    2017-01-01

    Full Text Available The paper analyses the use of four data mining methods (Support Vector Machines, Cascade Neural Networks, Random Forests and Boosted Trees) to predict sorption on activated carbons. The input data for the statistical models included the activated carbon parameters, the organic substances and the equilibrium concentrations in the solution. The assessment of the predictive abilities of the developed models was made with the use of the mean absolute error (MAE), the mean absolute percentage error (MAPE) and the root mean squared error (RMSE). The computations proved that the data mining methods considered in the study can be applied to predict the sorption of selected organic compounds on activated carbon. The lowest sorption prediction errors were obtained with the Cascade Neural Networks method (MAE = 1.23 g/g, MAPE = 7.90% and RMSE = 1.81 g/g), while the highest error values were produced by the Boosted Trees method (MAE = 14.31 g/g, MAPE = 39.43% and RMSE = 27.76 g/g).
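    The three error measures used in the assessment are standard and can be computed as follows (the array names are illustrative; MAPE assumes no zero observations):

    ```python
    import numpy as np

    def mae(y, yhat):
        """Mean absolute error."""
        return np.mean(np.abs(y - yhat))

    def mape(y, yhat):
        """Mean absolute percentage error, in percent."""
        return 100.0 * np.mean(np.abs((y - yhat) / y))

    def rmse(y, yhat):
        """Root mean squared error."""
        return np.sqrt(np.mean((y - yhat) ** 2))
    ```

    RMSE penalises large individual deviations more heavily than MAE, which is why the two measures can rank competing models differently.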

  2. Validation of the multiplier method for leg-length predictions on a large European cohort and an assessment of the effect of physiological age on predictions.

    Science.gov (United States)

    Aird, J J; Cheesman, C L; Schade, A T; Monsell, F P

    2017-01-01

    The Avon Longitudinal Study of Parents and Children (ALSPAC) prospective cohort was used to determine the accuracy of the Paley multiplier method for predicting leg length. Using menarche as a proxy, physiological age was then used to increase the accuracy of the multiplier. Chronological age was corrected in female patients over the age of eight years with a documented date of first menses. Final sub-ischial leg length and predicted final leg length were calculated for all data points. Good correlation was demonstrated between the Paley and ALSPAC data. The average error in prediction depended on the time of assessment, tending to improve as the child got older: it varied from 2.2 cm at the age of seven years to 1.8 cm at the age of 14 years. When chronological age was corrected, the accuracy of the multiplier increased; an age correction of 50% improved multiplier predictions by up to 28%. There appears to have been no significant change in the growth trajectories of the two populations, which were chronologically separated by 40 years. While the Paley data were based on extracting trends from averaged data, the ALSPAC dataset provides descriptive statistics from which it is possible to compare populations and assess the accuracy of the multiplier method. The data suggest that the accuracy improves as the patient gets close to average skeletal maturity, but results need to be interpreted in conjunction with a radiological assessment of the growth plates. The magnitude of the errors in prediction suggests that, when using the multiplier, the clinician must remain vigilant and prepared to perform a contralateral epiphysiodesis if the prediction proves to be wrong. The data suggest a relationship between the multiplier and menarche: when accounting for physiological age, one needs to correct by 50% of the difference between chronological and physiological age.
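    The arithmetic behind the method is simple. The sketch below assumes an externally supplied multiplier (Paley's multipliers are tabulated by age and sex and are not reproduced here) and applies the 50% age correction assessed in the study:

    ```python
    def predict_leg_length(current_length_cm, multiplier):
        """Paley multiplier method: predicted final length equals the current
        sub-ischial leg length times an age- and sex-specific multiplier."""
        return current_length_cm * multiplier

    def corrected_age(chronological, physiological, fraction=0.5):
        """Move chronological age 50% of the way toward physiological age
        (menarche used as the proxy for physiological age in the study);
        the corrected age is then used to look up the multiplier."""
        return chronological + fraction * (physiological - chronological)
    ```

    For example, a girl aged 12 whose physiological age is 14 would have her multiplier looked up at a corrected age of 13.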

  3. Prediction of Backbreak in Open-Pit Blasting Operations Using the Machine Learning Method

    Science.gov (United States)

    Khandelwal, Manoj; Monjezi, M.

    2013-03-01

    Backbreak is an undesirable phenomenon in blasting operations. It can cause instability of mine walls, falling down of machinery, improper fragmentation, reduced efficiency of drilling, etc. The existence of various effective parameters and their unknown relationships are the main reasons for inaccuracy of the empirical models. Presently, the application of new approaches such as artificial intelligence is highly recommended. In this paper, an attempt has been made to predict backbreak in blasting operations of Soungun iron mine, Iran, incorporating rock properties and blast design parameters using the support vector machine (SVM) method. To investigate the suitability of this approach, the predictions by SVM have been compared with multivariate regression analysis (MVRA). The coefficient of determination (CoD) and the mean absolute error (MAE) were taken as performance measures. It was found that the CoD between measured and predicted backbreak was 0.987 and 0.89 by SVM and MVRA, respectively, whereas the MAE was 0.29 and 1.07 by SVM and MVRA, respectively.

  4. A Method of Calculating Functional Independence Measure at Discharge from Functional Independence Measure Effectiveness Predicted by Multiple Regression Analysis Has a High Degree of Predictive Accuracy.

    Science.gov (United States)

    Tokunaga, Makoto; Watanabe, Susumu; Sonoda, Shigeru

    2017-09-01

    Multiple linear regression analysis is often used to predict the outcome of stroke rehabilitation. However, the predictive accuracy may not be satisfactory. The objective of this study was to elucidate the predictive accuracy of a method of calculating motor Functional Independence Measure (mFIM) at discharge from mFIM effectiveness predicted by multiple regression analysis. The subjects were 505 patients with stroke who were hospitalized in a convalescent rehabilitation hospital. The formula "mFIM at discharge = mFIM effectiveness × (91 points - mFIM at admission) + mFIM at admission" was used. By including the predicted mFIM effectiveness obtained through multiple regression analysis in this formula, we obtained the predicted mFIM at discharge (A). We also used multiple regression analysis to directly predict mFIM at discharge (B). The correlation between the predicted and the measured values of mFIM at discharge was compared between A and B. The correlation coefficients were .916 for A and .878 for B. Calculating mFIM at discharge from mFIM effectiveness predicted by multiple regression analysis had a higher degree of predictive accuracy of mFIM at discharge than that directly predicted. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
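    The paper's formula translates directly to code (method A, with the predicted mFIM effectiveness assumed to come from the multiple regression model):

    ```python
    def mfim_at_discharge(mfim_admission, predicted_effectiveness):
        """mFIM at discharge = mFIM effectiveness x (91 points - mFIM at
        admission) + mFIM at admission, where 91 is the maximum motor FIM."""
        return predicted_effectiveness * (91 - mfim_admission) + mfim_admission
    ```

    An effectiveness of 0 returns the admission score unchanged and an effectiveness of 1 returns the 91-point maximum, so the effectiveness expresses the fraction of the achievable improvement that is realised.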

  5. Prediction strategies in a TV recommender system - Method and experiments

    NARCIS (Netherlands)

    van Setten, M.J.; Veenstra, M.; van Dijk, Elisabeth M.A.G.; Nijholt, Antinus; Isaísas, P.; Karmakar, N.

    2003-01-01

    Predicting the interests of a user in information is an important process in personalized information systems. In this paper, we present a way to create prediction engines that allow prediction techniques to be easily combined into prediction strategies. Prediction strategies choose one or a combination of prediction techniques.

  6. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    International Nuclear Information System (INIS)

    Anggraeni, Novia Antika

    2015-01-01

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially in a volcano's inhabited slope area, such as at Merapi Volcano. The test can be conducted by observing the increase of volcanic activity, such as the degree of seismicity, deformation and SO2 gas emission. One of the methods that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method introduced by Voight (1988) to determine the time of a volcanic eruption. This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990 to 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, and the FFM graphical technique was applied using simple linear regression. For data quality control, the correlation coefficient of the inverse seismic energy rate versus time was used to increase the precision of the predicted time. From the results of the graph analysis, the deviation of the predicted eruption time from the actual time of eruption varies between −2.86 and 5.49 days.

  7. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    Energy Technology Data Exchange (ETDEWEB)

    Anggraeni, Novia Antika, E-mail: novia.antika.a@gmail.com [Geophysics Sub-department, Physics Department, Faculty of Mathematic and Natural Science, Universitas Gadjah Mada. BLS 21 Yogyakarta 55281 (Indonesia)

    2015-04-24

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially in a volcano's inhabited slope area, such as at Merapi Volcano. The test can be conducted by observing the increase of volcanic activity, such as the degree of seismicity, deformation and SO2 gas emission. One of the methods that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method introduced by Voight (1988) to determine the time of a volcanic eruption. This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990 to 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, and the FFM graphical technique was applied using simple linear regression. For data quality control, the correlation coefficient of the inverse seismic energy rate versus time was used to increase the precision of the predicted time. From the results of the graph analysis, the deviation of the predicted eruption time from the actual time of eruption varies between −2.86 and 5.49 days.
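    The FFM graphical technique described in this record amounts to fitting a straight line to the inverse rate against time and extrapolating it to its zero crossing; a minimal sketch:

    ```python
    import numpy as np

    def ffm_eruption_time(t, rate):
        """FFM graphical technique: fit a line to the inverse rate (1/rate)
        versus time and extrapolate to the time axis; the zero crossing is
        the predicted failure (eruption) time."""
        inv = 1.0 / np.asarray(rate, dtype=float)
        slope, intercept = np.polyfit(t, inv, 1)
        return -intercept / slope          # t at which 1/rate = 0

    def fit_quality(t, rate):
        """Correlation coefficient of 1/rate versus t, used for data
        quality control as described in the study."""
        return np.corrcoef(t, 1.0 / np.asarray(rate, dtype=float))[0, 1]
    ```

    A correlation coefficient close to −1 indicates the accelerating (hyperbolic) behaviour the method requires; weakly correlated windows of data would be rejected.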

  8. A prediction method for the wax deposition rate based on a radial basis function neural network

    Directory of Open Access Journals (Sweden)

    Ying Xie

    2017-06-01

    Full Text Available The radial basis function neural network is a popular supervised learning tool based on machine learning technology. Its high precision has been proven, and the radial basis function neural network has been applied in many areas. The accumulation of deposited materials in a pipeline may lead to the need for increased pumping power, a decreased flow rate or even the total blockage of the line, with losses of production and capital investment, so research on predicting the wax deposition rate is significant for the safe and economical operation of an oil pipeline. This paper adopts the radial basis function neural network to predict the wax deposition rate by considering four main influencing factors identified by the gray correlation analysis method: the pipe wall temperature gradient, the wax crystal solubility coefficient at the pipe wall, the pipe wall shear stress and the crude oil viscosity. MATLAB software is employed to establish the RBF neural network. Compared with the previous literature, favorable consistency exists between the predicted outcomes and the experimental results, with a relative error of 1.5%. It can be concluded that the prediction method for the wax deposition rate based on the RBF neural network is feasible.
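    A minimal RBF-network sketch (Gaussian hidden units, output weights by linear least squares); this illustrates the model class only, not the study's MATLAB implementation or its four input factors:

    ```python
    import numpy as np

    def rbf_design(X, centers, gamma):
        """Gaussian RBF activation for every sample/centre pair."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-gamma * d2)

    def rbf_fit(X, y, centers, gamma=1.0):
        """Output weights by linear least squares on the hidden activations."""
        H = rbf_design(X, centers, gamma)
        w, *_ = np.linalg.lstsq(H, y, rcond=None)
        return w

    def rbf_predict(X, centers, w, gamma=1.0):
        return rbf_design(X, centers, gamma) @ w
    ```

    With the centres placed at the training points the network interpolates the data; in practice fewer centres (e.g. from clustering) are used to avoid overfitting.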

  9. KFC2: a knowledge-based hot spot prediction method based on interface solvation, atomic density, and plasticity features.

    Science.gov (United States)

    Zhu, Xiaolei; Mitchell, Julie C

    2011-09-01

    Hot spots constitute a small fraction of protein-protein interface residues, yet they account for a large fraction of the binding affinity. Based on our previous method (KFC), we present two new methods (KFC2a and KFC2b) that outperform other methods at hot spot prediction. A number of improvements were made in developing these new methods. First, we created a training data set that contained a similar number of hot spot and non-hot spot residues. In addition, we generated 47 different features, and different numbers of features were used to train the models to avoid over-fitting. Finally, two feature combinations were selected: One (used in KFC2a) is composed of eight features that are mainly related to solvent accessible surface area and local plasticity; the other (KFC2b) is composed of seven features, only two of which are identical to those used in KFC2a. The two models were built using support vector machines (SVM). The two KFC2 models were then tested on a mixed independent test set, and compared with other methods such as Robetta, FOLDEF, HotPoint, MINERVA, and KFC. KFC2a showed the highest predictive accuracy for hot spot residues (True Positive Rate: TPR = 0.85); however, the false positive rate was somewhat higher than for other models. KFC2b showed the best predictive accuracy for hot spot residues (True Positive Rate: TPR = 0.62) among all methods other than KFC2a, and the False Positive Rate (FPR = 0.15) was comparable with other highly predictive methods. Copyright © 2011 Wiley-Liss, Inc.

  10. A Multifeatures Fusion and Discrete Firefly Optimization Method for Prediction of Protein Tyrosine Sulfation Residues.

    Science.gov (United States)

    Guo, Song; Liu, Chunhua; Zhou, Peng; Li, Yanling

    2016-01-01

    Tyrosine sulfation is one of the ubiquitous protein posttranslational modifications, where some sulfate groups are added to the tyrosine residues. It plays significant roles in various physiological processes in eukaryotic cells. To explore the molecular mechanism of tyrosine sulfation, one of the prerequisites is to correctly identify possible protein tyrosine sulfation residues. In this paper, a novel method was presented to predict protein tyrosine sulfation residues from primary sequences. By means of informative feature construction and elaborate feature selection and parameter optimization scheme, the proposed predictor achieved promising results and outperformed many other state-of-the-art predictors. Using the optimal features subset, the proposed method achieved mean MCC of 94.41% on the benchmark dataset, and a MCC of 90.09% on the independent dataset. The experimental performance indicated that our new proposed method could be effective in identifying the important protein posttranslational modifications and the feature selection scheme would be powerful in protein functional residues prediction research fields.

  11. Prediction method of unburnt carbon for coal fired utility boiler using image processing technique of combustion flame

    International Nuclear Information System (INIS)

    Shimoda, M.; Sugano, A.; Kimura, T.; Watanabe, Y.; Ishiyama, K.

    1990-01-01

    This paper reports on a method for predicting unburnt carbon in a coal-fired utility boiler, developed using an image processing technique. The method consists of an image processing unit and a furnace model unit. The temperature distribution of the combustion flames can be obtained through the former unit. The latter calculates the dynamics of the carbon reduction from the burner stages to the furnace outlet using the coal feed rate, air flow rate, and chemical and ash content of the coal. An experimental study shows that the prediction error of the unburnt carbon can be reduced to 10%.

  12. Simplified method to predict mutual interactions of human transcription factors based on their primary structure

    KAUST Repository

    Schmeier, Sebastian

    2011-07-05

    Background: Physical interactions between transcription factors (TFs) are necessary for forming regulatory protein complexes and thus play a crucial role in gene regulation. Currently, knowledge about the mechanisms of these TF interactions is incomplete and the number of known TF interactions is limited. Computational prediction of such interactions can help identify potential new TF interactions as well as contribute to better understanding the complex machinery involved in gene regulation. Methodology: We propose here such a method for the prediction of TF interactions. The method uses only the primary sequence information of the interacting TFs, resulting in a much greater simplicity of the prediction algorithm. Through an advanced feature selection process, we determined a subset of 97 model features that constitute the optimized model in the subset we considered. The model, based on quadratic discriminant analysis, achieves a prediction accuracy of 85.39% on a blind set of interactions. This result is achieved despite selecting for the negative data set only those TFs from the same type of proteins, i.e. TFs that function in the same cellular compartment (nucleus) and in the same type of molecular process (transcription initiation). Such a selection poses significant challenges for developing models with high specificity, but at the same time better reflects real-world problems. Conclusions: The performance of our predictor compares well to those of much more complex approaches for predicting TF and general protein-protein interactions, particularly when taking the reduced complexity of model utilisation into account. © 2011 Schmeier et al.
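    Quadratic discriminant analysis itself is standard; a compact sketch on generic feature vectors follows (the paper's 97 sequence-derived features are not reproduced, and the two-blob data in the usage below are purely illustrative):

    ```python
    import numpy as np

    def qda_fit(X, y):
        """QDA: fit one Gaussian per class, each with its own covariance."""
        model = {}
        for c in np.unique(y):
            Xc = X[y == c]
            model[c] = (Xc.mean(axis=0),
                        np.cov(Xc, rowvar=False),
                        np.log(len(Xc) / len(X)))     # log prior
        return model

    def qda_predict(X, model):
        """Assign each row of X to the class with the highest log posterior."""
        classes = sorted(model)
        scores = []
        for c in classes:
            mu, cov, logprior = model[c]
            inv = np.linalg.inv(cov)
            _, logdet = np.linalg.slogdet(cov)
            diff = X - mu
            maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
            scores.append(-0.5 * (maha + logdet) + logprior)
        return np.asarray(classes)[np.stack(scores).argmax(axis=0)]
    ```

    Because each class keeps its own covariance matrix, the decision boundary is quadratic, which is what distinguishes QDA from linear discriminant analysis.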

  13. Simplified method to predict mutual interactions of human transcription factors based on their primary structure.

    Directory of Open Access Journals (Sweden)

    Sebastian Schmeier

    Full Text Available BACKGROUND: Physical interactions between transcription factors (TFs) are necessary for forming regulatory protein complexes and thus play a crucial role in gene regulation. Currently, knowledge about the mechanisms of these TF interactions is incomplete and the number of known TF interactions is limited. Computational prediction of such interactions can help identify potential new TF interactions as well as contribute to better understanding the complex machinery involved in gene regulation. METHODOLOGY: We propose here such a method for the prediction of TF interactions. The method uses only the primary sequence information of the interacting TFs, resulting in a much greater simplicity of the prediction algorithm. Through an advanced feature selection process, we determined a subset of 97 model features that constitute the optimized model in the subset we considered. The model, based on quadratic discriminant analysis, achieves a prediction accuracy of 85.39% on a blind set of interactions. This result is achieved despite selecting for the negative data set only those TFs from the same type of proteins, i.e. TFs that function in the same cellular compartment (nucleus) and in the same type of molecular process (transcription initiation). Such a selection poses significant challenges for developing models with high specificity, but at the same time better reflects real-world problems. CONCLUSIONS: The performance of our predictor compares well to those of much more complex approaches for predicting TF and general protein-protein interactions, particularly when taking the reduced complexity of model utilisation into account.

  14. Prediction of protein binding sites using physical and chemical descriptors and the support vector machine regression method

    International Nuclear Information System (INIS)

    Sun Zhong-Hua; Jiang Fan

    2010-01-01

    In this paper a new continuous variable called the core-ratio is defined to describe the probability for a residue to be in a binding site, thereby replacing the previous binary description of interface residues using 0 and 1. This allows the support vector machine regression method to be used to fit the core-ratio value and predict protein binding sites. We also design a new group of physical and chemical descriptors to characterize the binding sites. The new descriptors, which use an averaging procedure, are more effective. Our test shows that much better prediction results can be obtained by the support vector regression (SVR) method than by the support vector classification method. (rapid communication)
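    The key idea, regressing a continuous core-ratio target and then thresholding the prediction, can be sketched as follows; kernel ridge regression is used here purely as a simple stand-in for the paper's support vector regression, and all inputs are illustrative:

    ```python
    import numpy as np

    def kernel_ridge_fit(X, y, gamma=1.0, lam=1e-3):
        """Fit the continuous core-ratio target with a Gaussian kernel."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        K = np.exp(-gamma * d2)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)

    def kernel_ridge_predict(Xtrain, alpha, Xnew, gamma=1.0):
        d2 = ((Xnew[:, None, :] - Xtrain[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-gamma * d2) @ alpha

    def call_interface(core_ratio_pred, cutoff=0.5):
        """Residues with predicted core-ratio above the cutoff are called
        interface residues (the cutoff here is an illustrative choice)."""
        return core_ratio_pred >= cutoff
    ```

    Regressing the graded target and thresholding afterwards preserves information that a 0/1 classification target discards, which is the advantage the abstract reports.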

  15. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    International Nuclear Information System (INIS)

    Koch, J.; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions arising from parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model, which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs
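
    The recommended workflow (Latin Hypercube Sampling, propagation through the model, a 95% interval on the output, then rank-correlation sensitivity measures) can be sketched in a few lines. The three-parameter "transfer model" and its ranges below are purely illustrative, not CHERPAC's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

def latin_hypercube(n, dims, rng):
    """One stratified sample per equal-probability bin in each dimension."""
    u = np.empty((n, dims))
    for j in range(dims):
        # one point in each of n strata, visited in random order
        u[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return u

# Hypothetical transfer-model parameters, sampled uniformly on [low, high).
u = latin_hypercube(n, 3, rng)
k_transfer = 0.5 + 1.0 * u[:, 0]   # transfer coefficient
intake     = 10  + 20  * u[:, 1]   # daily intake
half_life  = 5   + 10  * u[:, 2]   # effective half-life

# Toy model output: predicted concentration (stand-in for the real model).
conc = k_transfer * intake * half_life

# 95% subjective confidence interval from the output distribution.
lo, hi = np.percentile(conc, [2.5, 97.5])

def rank_corr(a, b):
    """Spearman rank correlation via Pearson correlation of the ranks."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

sens = {name: rank_corr(x, conc)
        for name, x in [("k_transfer", k_transfer),
                        ("intake", intake), ("half_life", half_life)]}
print(f"95% interval: [{lo:.1f}, {hi:.1f}]")
print(sens)
```

    The report uses partial rank correlation coefficients; plain rank correlations are shown here for brevity, since the inputs are sampled independently.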

  17. Reducing NIR prediction errors with nonlinear methods and large populations of intact compound feedstuffs

    International Nuclear Information System (INIS)

    Fernández-Ahumada, E; Gómez, A; Vallesquino, P; Guerrero, J E; Pérez-Marín, D; Garrido-Varo, A; Fearn, T

    2008-01-01

    According to the current demands of the authorities, the manufacturers and the consumers, controls and assessments of the feed compound manufacturing process have become a key concern. Among others, it must be assured that a given compound feed is well manufactured and labelled in terms of the ingredient composition. When near-infrared spectroscopy (NIRS) together with linear models was used for the prediction of the ingredient composition, the results were not always acceptable. Therefore, the performance of nonlinear methods has been investigated. Artificial neural networks (ANN) and least squares support vector machines (LS-SVM) have been applied to a large (N = 20 320) and heterogeneous population of non-milled feed compounds for the NIR prediction of the inclusion percentage of wheat and sunflower meal, as representative of two different classes of ingredients. Compared to partial least squares regression, results showed considerable reductions of standard error of prediction values for both methods and ingredients: reductions of 45% with ANN and 49% with LS-SVM for wheat and reductions of 44% with ANN and 46% with LS-SVM for sunflower meal. These improvements, together with the ease of implementing NIRS technology in the process, make it ideal for meeting the requirements of the animal feed industry
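
    The headline numbers are reductions in the standard error of prediction (SEP). A small sketch of how SEP and the percentage reduction are computed, using synthetic residuals chosen only to mimic the scale of the reported wheat result (the data are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical reference inclusion percentages and two sets of predictions:
# a linear (PLS-like) model with larger errors and a nonlinear model.
y_true = rng.uniform(0, 40, 500)              # % wheat inclusion (illustrative)
y_lin = y_true + rng.normal(0, 4.0, 500)      # linear-model predictions
y_nonlin = y_true + rng.normal(0, 2.2, 500)   # nonlinear-model predictions

def sep(y, yhat):
    """Standard error of prediction: SD of the residuals with bias removed."""
    r = yhat - y
    return np.std(r - r.mean(), ddof=1)

reduction = 100 * (1 - sep(y_true, y_nonlin) / sep(y_true, y_lin))
print(f"SEP reduction: {reduction:.0f}%")
```

    With the chosen residual scales (4.0 vs 2.2), the expected reduction is around 45%, the same order as the paper reports for wheat with ANN.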

  18. Esophageal cancer prediction based on qualitative features using adaptive fuzzy reasoning method

    Directory of Open Access Journals (Sweden)

    Raed I. Hamed

    2015-04-01

    Full Text Available Esophageal cancer is one of the most common cancers worldwide and also the most common cause of cancer death. In this paper, we present an adaptive fuzzy reasoning algorithm for rule-based systems using fuzzy Petri nets (FPNs), where the fuzzy production rules are represented by FPN. We developed an adaptive fuzzy Petri net (AFPN) reasoning algorithm as a prognostic system to predict the outcome for esophageal cancer based on the serum concentrations of C-reactive protein and albumin as a set of input variables. The system can perform fuzzy reasoning automatically to evaluate the degree of truth of the proposition representing the risk degree value, with a weight value to be optimally tuned based on the observed data. In addition, the implementation process for esophageal cancer prediction is fuzzily deduced by the AFPN algorithm. Performance of the composite model is evaluated through a set of experiments. Simulations and experimental results demonstrate the effectiveness and performance of the proposed algorithms. A comparison of the predictive performance of the AFPN models with other methods, together with analysis of the curve, showed consistent results and an intuitive behavior of the AFPN models.

  19. Method of fission product beta spectra measurements for predicting reactor anti-neutrino emission

    Energy Technology Data Exchange (ETDEWEB)

    Asner, D.M.; Burns, K.; Campbell, L.W.; Greenfield, B.; Kos, M.S., E-mail: markskos@gmail.com; Orrell, J.L.; Schram, M.; VanDevender, B.; Wood, L.S.; Wootan, D.W.

    2015-03-11

    The nuclear fission process that occurs in the core of nuclear reactors results in unstable, neutron-rich fission products that subsequently beta decay and emit electron antineutrinos. These reactor neutrinos have served neutrino physics research from the initial discovery of the neutrino to today's precision measurements of neutrino mixing angles. The prediction of the absolute flux and energy spectrum of the emitted reactor neutrinos hinges upon a series of seminal papers based on measurements performed in the 1970s and 1980s. The steadily improving reactor neutrino measurement techniques and recent reconsiderations of the agreement between the predicted and observed reactor neutrino flux motivates revisiting the underlying beta spectra measurements. A method is proposed to use an accelerator proton beam delivered to an engineered target to yield a neutron field tailored to reproduce the neutron energy spectrum present in the core of an operating nuclear reactor. Foils of the primary reactor fissionable isotopes placed in this tailored neutron flux will ultimately emit beta particles from the resultant fission products. Measurement of these beta particles in a time projection chamber with a perpendicular magnetic field provides a distinctive set of systematic considerations for comparison to the original seminal beta spectra measurements. Ancillary measurements such as gamma-ray emission and post-irradiation radiochemical analysis will further constrain the absolute normalization of beta emissions per fission. The requirements for unfolding the beta spectra measured with this method into a predicted reactor neutrino spectrum are explored.

  20. Statistical prediction of AVB wear growth and initiation in model F steam generator tubes using Monte Carlo method

    International Nuclear Information System (INIS)

    Lee, Jae Bong; Park, Jae Hak; Kim, Hong Deok; Chung, Han Sub; Kim, Tae Ryong

    2005-01-01

    The growth of AVB wear in Model F steam generator tubes is predicted using the Monte Carlo method and statistical approaches. The statistical parameters that represent the characteristics of wear growth and wear initiation are derived from In-Service Inspection (ISI) Non-Destructive Evaluation (NDE) data. Based on the statistical approaches, a wear growth model is proposed and applied to predict the wear distribution at the End Of Cycle (EOC). Probabilistic distributions of the number of wear flaws and maximum wear depth at EOC are obtained from the analysis. Comparing the predicted EOC wear flaw data with the known EOC data, the usefulness of the proposed method is examined and satisfactory results are obtained
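
    The Monte Carlo scheme described here draws flaw counts and growth increments from distributions fitted to ISI/NDE data and accumulates them into an end-of-cycle wear distribution. A toy sketch with assumed, purely illustrative distribution parameters (not values from the study):

```python
import numpy as np

rng = np.random.default_rng(4)
n_sim = 5000   # Monte Carlo realizations of one operating cycle

# Hypothetical statistical parameters, standing in for values that would be
# derived from ISI/NDE data: flaw initiation counts and per-cycle depth growth.
n_flaws = rng.poisson(lam=12, size=n_sim)                      # new wear flaws
growth = rng.lognormal(mean=np.log(3), sigma=0.4, size=n_sim)  # growth, % through-wall
boc_depth = rng.uniform(0, 20, size=n_sim)                     # begin-of-cycle max depth

eoc_depth = boc_depth + growth                                 # end-of-cycle max depth

# Probabilistic EOC summaries of the kind compared against inspection data.
mean_flaws = n_flaws.mean()
p95_depth = np.percentile(eoc_depth, 95)
print(f"mean flaw count: {mean_flaws:.1f}")
print(f"95th-percentile EOC max depth: {p95_depth:.1f}% through-wall")
```

    In the actual method the comparison of these predicted distributions against the known EOC inspection data is what validates the fitted growth and initiation models.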

  2. An assessment of prediction methods of CHF in tubes with a large experimental data bank

    International Nuclear Information System (INIS)

    Leung, L.K.H.; Groeneveld, D.C.

    1993-01-01

    An assessment of prediction methods of CHF in tubes has been carried out using an expanded CHF data bank at Chalk River Laboratories (CRL). It includes eight different CHF look-up tables (two AECL versions and six USSR (or Russian) versions) and three empirical correlations. These prediction methods were developed from relatively large data bases and therefore have a wide range of application. Some limitations, however, were imposed in this study to avoid any invalid predictions due to extrapolation of these methods. Therefore, the comparisons are limited to the specific data base that is tailored to suit the range of an individual method, which has resulted in a different number of data points used in each case. The comparison of predictions against the experimental data is based on the constant inlet-condition approach (i.e., the pressure, mass flux, inlet fluid temperature and tube geometry are the primary parameters). Overall, the AECL tables have the widest range of application. They are assessed with 21 771 data points and the root-mean-square error is only 8.3%. About 60% of these data were used in the development of the AECL tables. The best version of the USSR/Russian CHF table is valid for 13 300 data points with a root-mean-square error of 8.8%. The USSR/Russian table that has the widest range of application covers a total of 18 800 data points, but the error increases to 9.3%. The range of application of the empirical correlations, however, is generally much narrower than those covered by the CHF tables. The number of data used to assess these correlations is therefore further limited. Among the tested correlations, the Becker and Persson correlation covers the fewest data (only 7 499 data points), but has the best accuracy (with a root-mean-square error of 9.71%). 33 refs., 2 figs., 3 tabs

  3. Data mining for dengue hemorrhagic fever (DHF) prediction with naive Bayes method

    Science.gov (United States)

    Arafiyah, Ria; Hermin, Fariani

    2018-01-01

    Handling of infectious diseases depends on the accuracy and speed of diagnosis. Through Regulation of the Minister of Health of the Republic of Indonesia No. 82 of 2014 on the Control of Communicable Diseases, the government has made Dengue Hemorrhagic Fever (DHF) prevention a national priority. Various attempts have been made to overcome misdiagnosis. Treatment and diagnosis of DHF using ANFIS has resulted in an application program that can decide whether a patient has dengue fever or not [1]. An expert system for dengue prevention using ANFIS has predicted the weather and the number of sufferers [2]. The large volume of DHF data by itself often cannot help a person make decisions; data mining methods can build decision support for diagnosing DHF [3]. This study predicts DHF with the naive Bayes method. The input variables are the patient's medical data (temperature, spotting, bleeding, and tourniquet test), and the system output is a diagnosis of whether the patient suffers from DHF or not. Testing the model with the Orange 3.4.5 toolkit gave a model precision of 77.3%.
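
    A Gaussian naive Bayes classifier of the kind used here is small enough to write out directly. The sketch below trains on synthetic stand-ins for two of the input variables (temperature and a bleeding score); all values are illustrative, not the study's patient data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-ins for two of the paper's inputs: body temperature (deg C)
# and a bleeding severity score (all distributions are illustrative).
n = 400
y = rng.integers(0, 2, n)                     # 1 = DHF, 0 = not DHF
temp = np.where(y == 1, rng.normal(39.0, 0.6, n), rng.normal(37.2, 0.5, n))
bleed = np.where(y == 1, rng.normal(2.0, 0.7, n), rng.normal(0.5, 0.5, n))
X = np.column_stack([temp, bleed])

def gnb_fit(X, y):
    """Per-class feature means, variances and priors."""
    return {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-9, (y == c).mean())
            for c in (0, 1)}

def gnb_predict(model, X):
    """Class with the largest sum of log Gaussian likelihoods plus log prior."""
    logp = []
    for c in (0, 1):
        mu, var, prior = model[c]
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
        logp.append(ll.sum(1) + np.log(prior))
    return np.argmax(np.array(logp), axis=0)

model = gnb_fit(X, y)
acc = (gnb_predict(model, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

    The "naive" assumption is the per-feature independence inside each class, which is what lets the per-feature log likelihoods simply be summed.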

  4. Prediction of high pressure vapor-liquid equilibria with mixing rule using ASOG group contribution method

    Energy Technology Data Exchange (ETDEWEB)

    Tochigi, K.; Kojima, K.; Kurihara, K.

    1985-02-01

    To develop a widely applicable method for predicting high-pressure vapor-liquid equilibria by the equation of state, a mixing rule is proposed in which the mixture energy parameter α of the Soave-Redlich-Kwong, Peng-Robinson, and Martin cubic equations of state is expressed using the ASOG group contribution method. The group pair parameters are then determined for 14 group pairs constituted by six groups, i.e. the CH₄, CH₃, CH₂, N₂, H₂, and CO₂ groups. Using the group pair parameters determined, high-pressure vapor-liquid equilibria are predicted with good accuracy for binary and ternary systems constituted by n-paraffins, nitrogen, hydrogen, and carbon dioxide in the temperature range of 100-450 K.
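
    As a hedged illustration of the ingredients involved, the sketch below evaluates the pure-component Soave-Redlich-Kwong energy parameter a(T) and combines it for a methane/nitrogen mixture. The ASOG-derived mixing rule of the paper is replaced here by the classical one-fluid rule with an assumed binary interaction parameter k12; critical constants are standard handbook values:

```python
import numpy as np

R = 8.314  # J/(mol K)

def srk_a(Tc, Pc, omega, T):
    """Pure-component SRK energy parameter a(T) in Pa m^6 mol^-2."""
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1 + m * (1 - np.sqrt(T / Tc))) ** 2
    return 0.42748 * R ** 2 * Tc ** 2 / Pc * alpha

# Methane + nitrogen at 200 K (critical data from standard tables).
T = 200.0
a = np.array([srk_a(190.6, 4.599e6, 0.011, T),   # CH4
              srk_a(126.2, 3.400e6, 0.038, T)])  # N2
x = np.array([0.7, 0.3])   # mole fractions
k12 = 0.03                 # assumed interaction parameter, illustrative only

# One-fluid quadratic mixing rule: a_mix = sum_i sum_j x_i x_j a_ij,
# with a_ij = sqrt(a_i a_j) (1 - k_ij) off the diagonal.
a_ij = np.sqrt(np.outer(a, a)) * (1 - k12 * (1 - np.eye(2)))
a_mix = x @ a_ij @ x
print(f"a_mix = {a_mix:.3f} Pa m^6 mol^-2")
```

    The paper's contribution is precisely to replace the constant k12 by a composition- and temperature-dependent term computed from ASOG group interactions.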

  5. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    Science.gov (United States)

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is
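
    The idea, train a cheap predictive model on a corpus of expensive Monte Carlo runs and then query the model instead of re-simulating, can be sketched generically. Here the "expensive simulation" is a toy closed-form curve plus noise, and the learner is a polynomial least-squares fit; both are stand-ins for the paper's Monte Carlo corpus and machine-learning stages:

```python
import numpy as np

rng = np.random.default_rng(6)

def expensive_mc_simulation(t, rate):
    """Stand-in for a Monte Carlo synapse run: fraction of open receptors
    at time t after release (a noisy saturating curve, purely illustrative)."""
    return (1 - np.exp(-rate * t)) * np.exp(-0.2 * t) + rng.normal(0, 0.01)

# Stages 1-2: sample a training corpus across conditions (here, binding rate).
rates = rng.uniform(0.5, 3.0, 300)
times = rng.uniform(0.0, 10.0, 300)
y = np.array([expensive_mc_simulation(t, r) for t, r in zip(times, rates)])

# Stage 3: learn a cheap surrogate (polynomial features + least squares).
def features(t, r):
    return np.column_stack([np.ones_like(t), t, r, t * r,
                            t ** 2, r ** 2, t ** 2 * r, t ** 3])

w, *_ = np.linalg.lstsq(features(times, rates), y, rcond=None)

# Stage 4: validate on fresh conditions never simulated before.
t_new, r_new = rng.uniform(0, 10, 100), rng.uniform(0.5, 3.0, 100)
y_new = np.array([expensive_mc_simulation(t, r) for t, r in zip(t_new, r_new)])
rmse = np.sqrt(np.mean((features(t_new, r_new) @ w - y_new) ** 2))
print(f"surrogate RMSE on unseen conditions: {rmse:.3f}")
```

    Once fitted, querying the surrogate is a single matrix multiply, which is what makes this approach attractive for simulating very large numbers of synapses.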

  7. A novel method for prediction of dynamic smiling expressions after orthodontic treatment: a case report.

    Science.gov (United States)

    Dai, Fanfan; Li, Yangjing; Chen, Gui; Chen, Si; Xu, Tianmin

    2016-02-01

    Smile esthetics has become increasingly important for orthodontic patients, so prediction of the post-treatment smile is necessary for a perfect treatment plan. In this study, by combining three-dimensional craniofacial data from cone beam computed tomography and a color-encoded structured light system, a novel method for smile prediction was proposed based on facial expression transfer, in which the dynamic facial expression was interpreted as a matrix of facial depth changes. Data extracted from the pre-treatment smile expression record were applied to the post-treatment static model to realize expression transfer, so that the smile esthetics of the patient after treatment could be evaluated during the pre-treatment planning procedure. The positive and negative mean values of prediction error were 0.9 mm and -1.1 mm respectively, with a standard deviation of ±1.5 mm, which is clinically acceptable. Further studies will be conducted to reduce the prediction error on both the static and dynamic sides as well as to explore automatically combined prediction from the two sides.

  8. A Bayesian method and its variational approximation for prediction of genomic breeding values in multiple traits

    Directory of Open Access Journals (Sweden)

    Hayashi Takeshi

    2013-01-01

    Full Text Available Abstract Background Genomic selection is an effective tool for animal and plant breeding, allowing effective individual selection without phenotypic records through the prediction of genomic breeding value (GBV. To date, genomic selection has focused on a single trait. However, actual breeding often targets multiple correlated traits, and, therefore, joint analysis taking into consideration the correlation between traits, which might result in more accurate GBV prediction than analyzing each trait separately, is suitable for multi-trait genomic selection. This would require an extension of the prediction model for single-trait GBV to multi-trait case. As the computational burden of multi-trait analysis is even higher than that of single-trait analysis, an effective computational method for constructing a multi-trait prediction model is also needed. Results We described a Bayesian regression model incorporating variable selection for jointly predicting GBVs of multiple traits and devised both an MCMC iteration and variational approximation for Bayesian estimation of parameters in this multi-trait model. The proposed Bayesian procedures with MCMC iteration and variational approximation were referred to as MCBayes and varBayes, respectively. Using simulated datasets of SNP genotypes and phenotypes for three traits with high and low heritabilities, we compared the accuracy in predicting GBVs between multi-trait and single-trait analyses as well as between MCBayes and varBayes. The results showed that, compared to single-trait analysis, multi-trait analysis enabled much more accurate GBV prediction for low-heritability traits correlated with high-heritability traits, by utilizing the correlation structure between traits, while the prediction accuracy for uncorrelated low-heritability traits was comparable or less with multi-trait analysis in comparison with single-trait analysis depending on the setting for prior probability that a SNP has zero

  9. Detailed disc assembly temperature prediction: comparison between CFD and simplified engineering methods

    CSIR Research Space (South Africa)

    Snedden, Glen C

    2003-09-01

    Full Text Available. Detailed disc assembly temperature prediction: comparison between CFD and simplified engineering methods (ISABE-2005-1130). Glen Snedden, Thomas Roos and Kavendra Naidoo, CSIR, Defencetek... Nomenclature (fragment): transfer and conduction code (Gaugler, 1978); Taw, adiabatic wall temperature; y+, near-wall Reynolds number. Introduction: In order to calculate life degradation of gas turbine disc assemblies, it is necessary to model the transient thermal and mechanical...

  10. Ground-State Gas-Phase Structures of Inorganic Molecules Predicted by Density Functional Theory Methods

    KAUST Repository

    Minenkov, Yury; Cavallo, Luigi

    2017-01-01

    -GGA approximations with B3PW91, APF, TPSSh, mPW1PW91, PBE0, mPW1PBE, B972, and B98 functionals, resulting in lowest errors. We recommend using these methods to predict accurate three-dimensional structures of inorganic molecules when intramolecular dispersion

  11. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
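
    The two-step least-squares scheme (fit the weight from the wind-off force equations, then feed that estimate into the fit of the center-of-gravity coordinates from the moment equation) can be sketched with synthetic wind-off points. The load equations and all numbers below are simplified, single-plane stand-ins, not the paper's balance-load model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical "wind-off" points: balance loads at several pitch angles,
# generated from an assumed true weight and CG, plus measurement noise.
W_true, xcg_true, zcg_true = 120.0, 0.35, 0.02        # N, m, m
theta = np.deg2rad(np.linspace(-15, 15, 9))
NF = W_true * np.cos(theta) + rng.normal(0, 0.05, 9)   # normal force
AF = -W_true * np.sin(theta) + rng.normal(0, 0.05, 9)  # axial force
PM = W_true * (xcg_true * np.cos(theta)                # pitching moment
               + zcg_true * np.sin(theta)) + rng.normal(0, 0.02, 9)

# Fit 1: weight from the stacked force equations.
A = np.concatenate([np.cos(theta), -np.sin(theta)])[:, None]
b = np.concatenate([NF, AF])
W_hat = np.linalg.lstsq(A, b, rcond=None)[0][0]

# Fit 2: CG coordinates from the moment equation, using W_hat as an input.
M = np.column_stack([W_hat * np.cos(theta), W_hat * np.sin(theta)])
xcg_hat, zcg_hat = np.linalg.lstsq(M, PM, rcond=None)[0]

print(f"W = {W_hat:.2f} N, xcg = {xcg_hat:.3f} m, zcg = {zcg_hat:.3f} m")
```

    Separating the two fits mirrors the paper's design choice: the weight estimator is computed once and then treated as a known input when solving for the CG coordinates.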

  12. The Expanded FindCore Method for Identification of a Core Atom Set for Assessment of Protein Structure Prediction

    Science.gov (United States)

    Snyder, David A.; Grullon, Jennifer; Huang, Yuanpeng J.; Tejero, Roberto; Montelione, Gaetano T.

    2014-01-01

    Maximizing the scientific impact of NMR-based structure determination requires robust and statistically sound methods for assessing the precision of NMR-derived structures. In particular, a method to define a core atom set for calculating superimpositions and validating structure predictions is critical to the use of NMR-derived structures as targets in the CASP competition. FindCore (D.A. Snyder and G.T. Montelione PROTEINS 2005;59:673–686) is a superimposition independent method for identifying a core atom set, and partitioning that set into domains. However, as FindCore optimizes superimposition by sensitively excluding not-well-defined atoms, the FindCore core may not comprise all atoms suitable for use in certain applications of NMR structures, including the CASP assessment process. Adapting the FindCore approach to assess predicted models against experimental NMR structures in CASP10 required modification of the FindCore method. This paper describes conventions and a standard protocol to calculate an “Expanded FindCore” atom set suitable for validation and application in biological and biophysical contexts. A key application of the Expanded FindCore method is to identify a core set of atoms in the experimental NMR structure for which it makes sense to validate predicted protein structure models. We demonstrate the application of this Expanded FindCore method in characterizing well-defined regions of 18 NMR-derived CASP10 target structures. The Expanded FindCore protocol defines “expanded core atom sets” that match an expert’s intuition of which parts of the structure are sufficiently well-defined to use in assessing CASP model predictions. We also illustrate the impact of this analysis on the CASP GDT assessment scores. PMID:24327305

  13. A Bipartite Network-based Method for Prediction of Long Non-coding RNA–protein Interactions

    Directory of Open Access Journals (Sweden)

    Mengqu Ge

    2016-02-01

    Full Text Available As one large class of non-coding RNAs (ncRNAs), long ncRNAs (lncRNAs) have gained considerable attention in recent years. Mutations and dysfunction of lncRNAs have been implicated in human disorders. Many lncRNAs exert their effects through interactions with the corresponding RNA-binding proteins. Several computational approaches have been developed, but only a few are able to perform the prediction of these interactions from a network-based point of view. Here, we introduce a computational method named lncRNA–protein bipartite network inference (LPBNI). LPBNI aims to identify potential lncRNA–interacting proteins by making full use of the known lncRNA–protein interactions. Leave-one-out cross validation (LOOCV) tests show that LPBNI significantly outperforms other network-based methods, including random walk with restart (RWR) and protein-based collaborative filtering (ProCF). Furthermore, a case study was performed to demonstrate the performance of LPBNI using real data in predicting potential lncRNA–interacting proteins.
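
    A bipartite network inference of this flavor can be sketched as two-phase resource allocation on the known interaction matrix: resource seeded on a lncRNA's proteins flows back through neighboring lncRNAs and onto candidate proteins, divided by node degree at each step. The toy matrix below is illustrative, not the paper's data, and the scheme is a generic sketch rather than LPBNI's exact formulation:

```python
import numpy as np

# Toy lncRNA-protein interaction matrix (rows: lncRNAs, columns: proteins).
# A 1 marks a known interaction; the method scores the remaining 0 entries.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)

k_lnc = A.sum(axis=1, keepdims=True)   # lncRNA degrees (column vector)
k_pro = A.sum(axis=0, keepdims=True)   # protein degrees (row vector)

# Two-phase spreading: each lncRNA seeds its own proteins, then resource
# flows protein -> lncRNA -> protein, split evenly among neighbors.
S = A @ (A / k_pro).T @ (A / k_lnc)

# Mask known interactions and take the top-scoring new candidate per lncRNA.
candidates = np.where(A == 0, S, -np.inf)
top = candidates.argmax(axis=1)
print(top)
```

    Each row of S is a ranking of candidate proteins for one lncRNA; leave-one-out validation would hide one known interaction and check where it lands in this ranking.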

  14. A comparative study on prediction methods for China's medium- and long-term coal demand

    International Nuclear Information System (INIS)

    Li, Bing-Bing; Liang, Qiao-Mei; Wang, Jin-Cheng

    2015-01-01

    Given the dominant position of coal in China's energy structure and in order to ensure a safe and stable energy supply, it is essential to perform a scientific and effective prediction of China's medium- and long-term coal demand. Based on the historical data of coal consumption and related factors such as GDP (gross domestic product), coal price, industrial structure, total population, energy structure, energy efficiency, coal production and urbanization rate from 1987 to 2012, this study compared the prediction performance of five types of models: the VAR (vector autoregressive) model, the RBF (radial basis function) neural network model, the GA-DEM (genetic algorithm demand estimation) model, the PSO-DEM (particle swarm optimization demand estimation) model and the IO (input–output) model. By comparing the results of the different models with the corresponding actual coal consumption, it is concluded that, over a testing period from 2006 to 2012, the PSO-DEM model gives the best prediction of China's total coal demand, with a MAPE (mean absolute percentage error) close to or below 2%. - Highlights: • The prediction effects of five methods for China's coal demand were compared. • Each model has acceptable prediction results, with MAPE below 5%. • The particle swarm optimization demand estimation model has better forecast efficacy.
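
    The comparison metric, MAPE, is simple to compute. A sketch with purely illustrative consumption figures and two hypothetical forecasts (none of these numbers come from the study):

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 100 * np.mean(np.abs((predicted - actual) / actual))

# Illustrative numbers only: "actual" coal consumption over a test period
# and two hypothetical model forecasts.
actual = np.array([2390, 2580, 2740, 2960, 3120, 3530, 3520])
model_a = np.array([2440, 2520, 2800, 2900, 3190, 3440, 3610])
model_b = np.array([2400, 2570, 2760, 2980, 3140, 3560, 3490])

a_err = mape(actual, model_a)
b_err = mape(actual, model_b)
print(f"model A MAPE: {a_err:.1f}%")
print(f"model B MAPE: {b_err:.1f}%")
```

    A MAPE "close to or below 2%", as reported for PSO-DEM, means the forecasts miss the actual values by about 2% on average over the test years.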

  15. Prediction of residual stress using explicit finite element method

    Directory of Open Access Journals (Sweden)

    W.A. Siswanto

    2015-12-01

    Full Text Available This paper presents the residual stress behaviour under various values of friction coefficients and scratching displacement amplitudes. The investigation is based on a numerical solution using the explicit finite element method under quasi-static conditions. Two different aeroengine materials, i.e. Super CMV (Cr-Mo-V) and a titanium alloy (Ti-6Al-4V), are examined. The FEM analysis of a plate under normal contact is validated against the Hertzian theoretical solution in terms of contact pressure distributions. The residual stress distributions, along with normal and shear stresses in the elastic and plastic regimes of the materials, are studied for a simple cylinder-on-flat contact configuration model subjected to normal loading, scratching and subsequent unloading. The investigated friction coefficients are 0.3, 0.6 and 0.9, while the scratching displacement amplitudes are 0.05 mm, 0.10 mm and 0.20 mm respectively. It is found that a friction coefficient of 0.6 results in higher residual stress for both materials. Meanwhile, the predicted residual stress is proportional to the scratching displacement amplitude, with higher displacement amplitudes resulting in higher residual stress. Less residual stress is predicted for Super CMV material than for Ti-6Al-4V because of its high yield stress and ultimate strength. Super CMV material with a friction coefficient of 0.3 and a scratching displacement amplitude of 0.10 mm is recommended for contact engineering applications due to its minimum possibility of fatigue.

  16. Comparison of Four Weighting Methods in Fuzzy-based Land Suitability to Predict Wheat Yield

    Directory of Open Access Journals (Sweden)

    Fatemeh Rahmati

    2017-06-01

    Full Text Available Introduction: Land suitability evaluation is a process to examine the degree of land fitness for a specific use, and it also makes it possible to estimate land productivity potential. In 1976, the FAO provided a general framework for land suitability classification, but the framework itself does not prescribe a specific method for performing the classification. In later years, a collection of methods was presented based on the FAO framework. In the parametric method, different land suitability aspects are defined as completely discrete groups and are separated from each other by distinct and consistent ranges. Therefore, land units of moderate suitability can only take on the characteristics of one of the predefined land suitability classes. Fuzzy logic is an extension of Boolean logic by Lotfi Zadeh in 1965 based on the mathematical theory of fuzzy sets, which is a generalization of classical set theory. By introducing the notion of degree in the verification of a condition, the fuzzy method enables a condition to be in a state other than true or false, and provides a very valuable flexibility for reasoning, which makes it possible to take into account inaccuracies and uncertainties. One advantage of fuzzy logic for formalizing human reasoning is that the rules are set in natural language. In the evaluation method based on fuzzy logic, weights are applied to the land characteristics. The objective of this study was to compare four methods of weight calculation in fuzzy logic to predict the yield of wheat in a study area covering 1500 ha in Kian town in Shahrekord (Chaharmahal and Bakhtiari province, Iran). Materials and Methods: In such investigations, climatic factors and soil physical and chemical characteristics are studied. This investigation involves several studies including a lab study, and qualitative and quantitative land suitability evaluation with fuzzy logic for wheat. Factors affecting the wheat production consist of

  17. Analysis Of The Method Of Predictive Control Applicable To Active Magnetic Suspension Systems Of Aircraft Engines

    Directory of Open Access Journals (Sweden)

    Kurnyta-Mazurek Paulina

    2015-12-01

    Full Text Available Conventional controllers are usually synthesized on the basis of already known parameters of the model developed for the object to be controlled. However, sometimes it proves extremely difficult or even infeasible to find these parameters, in particular when they are subject to change during the exploitation lifetime. If so, much more sophisticated control methods have to be applied, e.g. the method of predictive control. Thus, the paper deals with application of the predictive control approach to follow-up tracking of an active magnetic suspension; the mathematical and simulation models for such a control system are presented together with preliminary results from simulation investigations of the control system in question.

  18. An assessment of the validity of inelastic design analysis methods by comparisons of predictions with test results

    International Nuclear Information System (INIS)

    Corum, J.M.; Clinard, J.A.; Sartory, W.K.

    1976-01-01

    The use of computer programs that employ relatively complex constitutive theories and analysis procedures to perform inelastic design calculations on fast reactor system components introduces questions of validation and acceptance of the analysis results. We may ask ourselves, ''How valid are the answers?'' These questions, in turn, involve the concepts of verification of computer programs as well as qualification of the computer programs and of the underlying constitutive theories and analysis procedures. This paper addresses the latter - the qualification of the analysis methods for inelastic design calculations. Some of the work underway in the United States to provide the information needed to evaluate inelastic analysis methods and computer programs is described, and typical comparisons of analysis predictions with inelastic structural test results are presented. It is emphasized throughout that rather than asking how valid, or correct, the analytical predictions are, we might more properly question whether or not the combination of the predictions and the associated high-temperature design criteria leads to an acceptable level of structural integrity. It is believed that in this context the analysis predictions are generally valid, even though exact correlations between predictions and actual behavior are not obtained and cannot be expected. Final judgment, however, must be reserved for the design analyst in each specific case. (author)

  19. Long short-term memory neural network for air pollutant concentration predictions: Method development and evaluation.

    Science.gov (United States)

    Li, Xiang; Peng, Ling; Yao, Xiaojing; Cui, Shaolong; Hu, Yuan; You, Chengzeng; Chi, Tianhe

    2017-12-01

    Air pollutant concentration forecasting is an effective method of protecting public health by providing an early warning against harmful air pollutants. However, existing methods of air pollutant concentration prediction fail to effectively model long-term dependencies, and most neglect spatial correlations. In this paper, a novel long short-term memory neural network extended (LSTME) model that inherently considers spatiotemporal correlations is proposed for air pollutant concentration prediction. Long short-term memory (LSTM) layers were used to automatically extract inherent useful features from historical air pollutant data, and auxiliary data, including meteorological data and time stamp data, were merged into the proposed model to enhance the performance. Hourly PM2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm) concentration data collected at 12 air quality monitoring stations in Beijing from Jan/01/2014 to May/28/2016 were used to validate the effectiveness of the proposed LSTME model. Experiments were performed using the spatiotemporal deep learning (STDL) model, the time delay neural network (TDNN) model, the autoregressive moving average (ARMA) model, the support vector regression (SVR) model, and the traditional LSTM NN model, and a comparison of the results demonstrated that the LSTME model is superior to the other statistics-based models. Additionally, the use of auxiliary data improved model performance. For the one-hour prediction tasks, the proposed model performed well and exhibited a mean absolute percentage error (MAPE) of 11.93%. In addition, we conducted multiscale predictions over different time spans and achieved satisfactory performance, even for 13-24 h prediction tasks (MAPE = 31.47%).
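Recurrent models of this kind consume fixed-length windows of past observations and learn to emit the next value. A minimal sketch of that supervised framing, in pure Python with hypothetical readings rather than the Beijing data, is:

```python
def make_windows(series, width):
    """Split a time series into (input window, next value) training pairs."""
    pairs = []
    for i in range(len(series) - width):
        pairs.append((series[i:i + width], series[i + width]))
    return pairs

# Hypothetical hourly PM2.5 readings in µg/m³ (illustration only).
pm25 = [35, 40, 38, 50, 65, 60]
pairs = make_windows(pm25, width=3)
print(pairs[0])  # -> ([35, 40, 38], 50)
```

In the LSTME model the window entries would additionally carry concentrations from neighbouring stations plus meteorological and time stamp features, which is what lets it exploit spatiotemporal correlations.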

  20. A novel time series link prediction method: Learning automata approach

    Science.gov (United States)

    Moradabadi, Behnaz; Meybodi, Mohammad Reza

    2017-09-01

    Link prediction is a key social network challenge that uses the network structure to predict future links. Common link prediction approaches use a static graph representation in which a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a common traditional approach that calculates a similarity metric for each non-connected pair, sorts the pairs by their similarity metrics, and labels the pairs with higher similarity scores as future links. Because user activity in social networks is dynamic and uncertain, and the structure of the networks changes over time, using deterministic graphs for modeling and analysis of social networks may not be appropriate. In the time-series link prediction problem, the time series of link occurrences is used to predict future links. In this paper, we propose a new time series link prediction method based on learning automata. In the proposed algorithm, for each link that must be predicted there is one learning automaton, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series link occurrences are considered.
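The per-link automaton idea can be sketched with a simple two-action linear reward scheme: walk through the occurrence series (stages 1 through T - 1) and reinforce whichever action (link present / link absent) matches each observation. This is a simplified illustration of the general principle, not the paper's exact update rule; the learning rate and series are hypothetical.

```python
def predict_link(occurrences, lr=0.2):
    """Two-action learning automaton over a binary link-occurrence series.
    p_present is the probability of choosing the 'link exists' action;
    each observed stage rewards the matching action."""
    p_present = 0.5
    for seen in occurrences:
        if seen:  # reinforce the "present" action
            p_present += lr * (1.0 - p_present)
        else:     # reinforce the "absent" action
            p_present -= lr * p_present
    return p_present > 0.5

# Hypothetical co-authorship link, active in 4 of the last 5 snapshots.
print(predict_link([1, 1, 0, 1, 1]))  # -> True
print(predict_link([0, 0, 0, 0, 1]))  # -> False
```

Because updates are probabilistic shifts rather than hard counts, recent stages weigh more heavily than old ones, which suits networks whose structure drifts over time.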

  1. Application of bias factor method using random sampling technique for prediction accuracy improvement of critical eigenvalue of BWR

    International Nuclear Information System (INIS)

    Ito, Motohiro; Endo, Tomohiro; Yamamoto, Akio; Kuroda, Yusuke; Yoshii, Takashi

    2017-01-01

    The bias factor method based on the random sampling technique is applied to the benchmark problem of Peach Bottom Unit 2. Validity and availability of the present method, i.e. correction of calculation results and reduction of uncertainty, are confirmed in addition to features and performance of the present method. In the present study, core characteristics in cycle 3 are corrected with the proposed method using predicted and 'measured' critical eigenvalues in cycles 1 and 2. As the source of uncertainty, variance-covariance of cross sections is considered. The calculation results indicate that bias between predicted and measured results, and uncertainty owing to cross section can be reduced. Extension to other uncertainties such as thermal hydraulics properties will be a future task. (author)

  2. A simple fracture energy prediction method for fiber network based on its morphological features extracted by X-ray tomography

    International Nuclear Information System (INIS)

    Huang, Xiang; Wang, Qinghui; Zhou, Wei; Li, Jingrong

    2013-01-01

    The fracture behavior of a novel porous metal fiber sintered sheet (PMFSS) was predicted using a semi-empirical method combining knowledge of its morphological characteristics and micro-mechanical responses. The morphological characteristics were systematically summarized based on analysis of the topologically identical skeleton representation extracted from the X-ray tomography images. The analytical model first proposed by Tan et al. [1] was further modified according to experimental observations from tensile tests of both single fibers and sintered fiber sheets, coupling the fracture energy of single fiber segments to that of the fiber network in a simple prediction method. The efficacy of the prediction model was verified by comparing the predicted results to the experimental measurements. The prediction error that arose at high porosity was analyzed through the fiber orientation distribution. Moreover, the tensile fracture process, evolving from single fiber segments at the micro-scale to the global mechanical performance, was investigated.

  3. A prediction method of the effect of radial heat flux distribution on critical heat flux in CANDU fuel bundles

    International Nuclear Information System (INIS)

    Yuan, Lan Qin; Yang, Jun; Harrison, Noel

    2014-01-01

    Fuel irradiation experiments to study fuel behaviors have been performed in the experimental loops of the National Research Universal (NRU) Reactor at Atomic Energy of Canada Limited (AECL) Chalk River Laboratories (CRL) in support of the development of new fuel technologies. Before initiating a fuel irradiation experiment, the experimental proposal must be approved to ensure that the test fuel strings put into the NRU loops meet safety margin requirements in critical heat flux (CHF). The fuel strings in irradiation experiments can have varying degrees of fuel enrichment and burnup, resulting in large variations in radial heat flux distribution (RFD). CHF experiments performed in Freon flow at CRL for full-scale bundle strings with a number of RFDs showed a strong effect of RFD on CHF. A prediction method was derived based on experimental CHF data to account for the RFD effect on CHF. It provides good CHF predictions for various RFDs as compared to the data. However, the range of the tested RFDs in the CHF experiments is not as wide as that required in the fuel irradiation experiments. The applicability of the prediction method needs to be examined for the RFDs beyond the range tested by the CHF experiments. The Canadian subchannel code ASSERT-PV was employed to simulate the CHF behavior for RFDs that would be encountered in fuel irradiation experiments. The CHF predictions using the derived method were compared with the ASSERT simulations. It was observed that the CHF predictions agree well with the ASSERT simulations in terms of CHF, confirming the applicability of the prediction method in fuel irradiation experiments. (author)

  4. QSPR studies for predicting polarity parameter of organic compounds in methanol using support vector machine and enhanced replacement method.

    Science.gov (United States)

    Golmohammadi, H; Dashtbozorgi, Z

    2016-12-01

    In the present work, the enhanced replacement method (ERM) and support vector machine (SVM) were used for quantitative structure-property relationship (QSPR) studies of the polarity parameter (p) of various organic compounds in methanol in reversed-phase liquid chromatography, based on molecular descriptors calculated from the optimized structures. Diverse kinds of molecular descriptors were calculated to encode the molecular structures of the compounds, such as geometric, thermodynamic, electrostatic and quantum mechanical descriptors. The variable selection method ERM was employed to select an optimum subset of descriptors. The five descriptors selected using ERM were used as inputs to the SVM to predict the polarity parameter of organic compounds in methanol. The coefficients of determination, r², between experimental and predicted polarity parameters for the prediction set by ERM and SVM were 0.952 and 0.982, respectively. These results indicate that the ERM approach is a very effective method for variable selection, and that the predictive ability of the SVM model is superior to that obtained by ERM. The obtained results demonstrate that SVM can be used as a powerful alternative modeling tool for QSPR studies.
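Replacement-style variable selection starts from an initial descriptor subset and repeatedly swaps one member for any outside descriptor that lowers the fit error. A toy greedy sketch of this idea, with ordinary least squares standing in for the regression step and synthetic data in place of the paper's molecular descriptors, is:

```python
import numpy as np

def subset_error(X, y, cols):
    """RMS residual of a least-squares fit on the chosen descriptor columns."""
    beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    r = y - X[:, cols] @ beta
    return float(np.sqrt(np.mean(r ** 2)))

def replacement_search(X, y, k):
    """Greedy replacement: start from the first k descriptors, then swap
    one member at a time for any outside descriptor that lowers the error."""
    current = list(range(k))
    improved = True
    while improved:
        improved = False
        for i in range(k):
            best_j, best_err = None, subset_error(X, y, current)
            for j in range(X.shape[1]):
                if j in current:
                    continue
                trial = current.copy()
                trial[i] = j
                err = subset_error(X, y, trial)
                if err < best_err - 1e-12:
                    best_j, best_err = j, err
            if best_j is not None:
                current[i] = best_j
                improved = True
    return sorted(current)

# Synthetic example: the response depends only on descriptors 2 and 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 6))
y = 3.0 * X[:, 2] - 2.0 * X[:, 4]
print(replacement_search(X, y, k=2))  # -> [2, 4]
```

In the study, the subset found by ERM is then handed to the nonlinear SVM regressor, which is why the SVM's r² exceeds that of the linear ERM fit.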

  5. System and methods for predicting transmembrane domains in membrane proteins and mining the genome for recognizing G-protein coupled receptors

    Science.gov (United States)

    Trabanino, Rene J; Vaidehi, Nagarajan; Hall, Spencer E; Goddard, William A; Floriano, Wely

    2013-02-05

    The invention provides computer-implemented methods and apparatus implementing a hierarchical protocol using multiscale molecular dynamics and molecular modeling methods to predict the presence of transmembrane regions in proteins, such as G-Protein Coupled Receptors (GPCR), and protein structural models generated according to the protocol. The protocol features a coarse grain sampling method, such as hydrophobicity analysis, to provide a fast and accurate procedure for predicting transmembrane regions. Methods and apparatus of the invention are useful to screen protein or polynucleotide databases for encoded proteins with transmembrane regions, such as GPCRs.

  6. Multi-model predictive control method for nuclear steam generator water level

    International Nuclear Information System (INIS)

    Hu Ke; Yuan Jingqi

    2008-01-01

    The dynamics of a nuclear steam generator (SG) differ greatly across power levels and change as time goes on. Therefore, it is an intractable as well as challenging task to improve the water level control system of the SG. In this paper, a robust model predictive control (RMPC) method is developed for the level control problem. Based on a multi-model framework, a combination of a local nominal model with a polytopic uncertain linear parameter varying (LPV) model is built to approximate the system's non-linear behavior. The optimization problem solved here is based on a receding horizon scheme involving the linear matrix inequality (LMI) technique. Closed-loop stability and constraint satisfaction over the entire operating range are guaranteed by the feasibility of the optimization problem. Finally, simulation results show the effectiveness and good performance of the proposed method.

  7. Predicting the diversity of internal temperatures from the English residential sector using panel methods

    International Nuclear Information System (INIS)

    Kelly, Scott; Shipworth, Michelle; Shipworth, David; Gentry, Michael; Wright, Andrew; Pollitt, Michael; Crawford-Brown, Doug; Lomas, Kevin

    2013-01-01

    Highlights: ► A new method is proposed incorporating behavioural, environmental and building efficiency variables to explain internal dwelling temperatures. ► It is the first time panel methods have been used to predict internal dwelling temperatures over time. ► The proposed method is able to explain 45% of the variance of internal temperature between heterogeneous dwellings. ► Results support qualitative research on the importance of social, cultural and psychological behaviour in determining internal dwelling temperatures. ► This method presents new opportunities to quantify the size of the direct rebound effect between heterogeneous dwellings. -- Abstract: In this paper, panel methods are applied in new and innovative ways to predict daily mean internal temperature demand across a heterogeneous domestic building stock over time. This research not only exploits a rich new dataset but presents new methodological insights and offers important linkages for connecting bottom-up building stock models to human behaviour. It represents the first time a panel model has been used to estimate the dynamics of internal temperature demand from the natural daily fluctuations of external temperature combined with important behavioural, socio-demographic and building efficiency variables. The model is able to predict internal temperatures across a heterogeneous building stock to within ∼0.71 °C at 95% confidence and explains 45% of the variance of internal temperature between dwellings. The model confirms hypotheses from sociology and psychology that habitual behaviours are important drivers of home energy consumption. In addition, the model offers the possibility to quantify take-back (the direct rebound effect) owing to increased internal temperatures from the installation of energy efficiency measures. The presence of thermostats or thermostatic radiator valves (TRVs) is shown to reduce average internal temperatures, however, the use of an automatic timer
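Panel estimators of this kind typically absorb dwelling-specific fixed effects by demeaning each dwelling's series before pooling, so that the estimated slope reflects the common response to external temperature. A toy within-transformation sketch with synthetic data (the dwellings, values, and names are illustrative, not the study's dataset):

```python
import numpy as np

def within_estimator(groups, x, y):
    """Fixed-effects (within) slope: demean x and y inside each group,
    then regress the pooled demeaned y on the demeaned x."""
    xd, yd = x.astype(float), y.astype(float)
    for g in np.unique(groups):
        m = groups == g
        xd[m] -= xd[m].mean()
        yd[m] -= yd[m].mean()
    return float(xd @ yd / (xd @ xd))

# Two hypothetical dwellings with different baseline (fixed-effect)
# temperatures but the same response (slope 0.5) to external temperature.
groups = np.array([0, 0, 0, 1, 1, 1])
external = np.array([0.0, 5.0, 10.0, 0.0, 5.0, 10.0])
internal = 0.5 * external + np.where(groups == 0, 18.0, 21.0)
print(round(within_estimator(groups, external, internal), 3))  # -> 0.5
```

The demeaning removes each dwelling's constant offset (its habitual heating level), which is exactly what lets a panel model separate behavioural fixed effects from the shared dynamic response.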

  8. Development of classification and prediction methods of critical heat flux using fuzzy theory and artificial neural networks

    International Nuclear Information System (INIS)

    Moon, Sang Ki

    1995-02-01

    This thesis applies new information techniques, artificial neural networks (ANNs) and fuzzy theory, to the investigation of the critical heat flux (CHF) phenomenon for water flow in vertical round tubes. The work performed comprises (a) classification and prediction of CHF based on fuzzy clustering and ANNs, (b) prediction and parametric trend analysis of CHF using ANNs with the introduction of dimensionless parameters, and (c) detection of CHF occurrence using fuzzy rules and a spatiotemporal neural network (STN). Fuzzy clustering and ANNs are used for classification and prediction of the CHF using primary system parameters. The fuzzy clustering classifies the experimental CHF data into a few data clusters (data groups) according to the data characteristics. After classification of the experimental data, the characteristics of the resulting clusters are discussed with emphasis on the distribution of the experimental conditions and physical mechanisms. The CHF data in each group are trained in an artificial neural network to predict the CHF. The artificial neural network adjusts the weights so as to minimize the prediction error within the corresponding cluster. Application of the proposed method to the KAIST CHF data bank shows good prediction capability, better than other existing methods. Parametric trends of the CHF are analyzed by applying artificial neural networks to a CHF data base for water flow in uniformly heated vertical round tubes. The analyses are performed from three viewpoints, i.e., for fixed inlet conditions, for fixed exit conditions, and based on the local conditions hypothesis. In order to remove the necessity of data classification, Katto and Groeneveld et al.'s dimensionless parameters are introduced in training the ANNs with the experimental CHF data. The trained ANNs predict the CHF better than any other conventional correlations, showing RMS errors of 8.9%, 13.1%, and 19.3% for fixed inlet conditions, for fixed exit conditions, and for local

  9. EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.

    Science.gov (United States)

    Lian, Yao; Ge, Meng; Pan, Xian-Ming

    2014-12-19

    B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, precision of 64.1% and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server, EPMLR, has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/.

  10. Further evaluation of creep-fatigue life prediction methods for low-carbon nitrogen-added 316 stainless steel

    International Nuclear Information System (INIS)

    Takahashi, Y.

    1999-01-01

    Low-carbon, medium-nitrogen 316 stainless steel is a principal candidate for the main structural material of a demonstration fast breeder reactor plant in Japan. A number of long-term creep tests and creep-fatigue tests have been conducted on four products of this steel. Two representative creep-fatigue life prediction methods, i.e., the time fraction rule and the ductility exhaustion method, were applied. Total stress relaxation behavior was simulated well by the addition of a viscous strain term to the conventional (primary plus secondary) creep strain, but only the latter was assumed to contribute to creep damage in the ductility exhaustion method. The present ductility exhaustion approach was found to have very good accuracy in creep-fatigue life prediction for all materials tested, while the time fraction rule tended to overpredict failure life by as much as a factor of 30. The reason for this notable difference is discussed.

  11. Predictability of the recent slowdown and subsequent recovery of large-scale surface warming using statistical methods

    Science.gov (United States)

    Mann, Michael E.; Steinman, Byron A.; Miller, Sonya K.; Frankcombe, Leela M.; England, Matthew H.; Cheung, Anson H.

    2016-04-01

    The temporary slowdown in large-scale surface warming during the early 2000s has been attributed to both external and internal sources of climate variability. Using semiempirical estimates of the internal low-frequency variability component in Northern Hemisphere, Atlantic, and Pacific surface temperatures in concert with statistical hindcast experiments, we investigate whether the slowdown and its recent recovery were predictable. We conclude that the internal variability of the North Pacific, which played a critical role in the slowdown, does not appear to have been predictable using statistical forecast methods. An additional minor contribution from the North Atlantic, by contrast, appears to exhibit some predictability. While our analyses focus on combining semiempirical estimates of internal climatic variability with statistical hindcast experiments, possible implications for initialized model predictions are also discussed.

  12. Accurate diffraction data integration by the EVAL15 profile prediction method : Application in chemical and biological crystallography

    NARCIS (Netherlands)

    Xian, X.

    2009-01-01

    Accurate integration of reflection intensities plays an essential role in structure determination of the crystallized compound. A new diffraction data integration method, EVAL15, is presented in this thesis. This method uses the principle of general impacts to predict ab initio three-dimensional

  13. Mixed price and load forecasting of electricity markets by a new iterative prediction method

    International Nuclear Information System (INIS)

    Amjady, Nima; Daraeepour, Ali

    2009-01-01

    Load and price forecasting are the two key issues for the participants of current electricity markets. However, load and price of electricity markets have complex characteristics such as nonlinearity, non-stationarity and multiple seasonality, to name a few (usually, more volatility is seen in the behavior of electricity price signal). For these reasons, much research has been devoted to load and price forecast, especially in the recent years. However, previous research works in the area separately predict load and price signals. In this paper, a mixed model for load and price forecasting is presented, which can consider interactions of these two forecast processes. The mixed model is based on an iterative neural network based prediction technique. It is shown that the proposed model can present lower forecast errors for both load and price compared with the previous separate frameworks. Another advantage of the mixed model is that all required forecast features (from load or price) are predicted within the model without assuming known values for these features. So, the proposed model can better be adapted to real conditions of an electricity market. The forecast accuracy of the proposed mixed method is evaluated by means of real data from the New York and Spanish electricity markets. The method is also compared with some of the most recent load and price forecast techniques. (author)

  14. Evaluation of a Lagrangian Soot Tracking Method for the prediction of primary soot particle size under engine-like conditions

    DEFF Research Database (Denmark)

    Cai Ong, Jiun; Pang, Kar Mun; Walther, Jens Honore

    2018-01-01

    This paper reports the implementation and evaluation of a Lagrangian soot tracking (LST) method for the modeling of soot in diesel engines. The LST model employed here has the tracking capability of a Lagrangian method and the ability to predict primary soot particle sizing. The Moss-Brookes soot...... in predicting temporal soot cloud development, mean soot diameter and primary soot size distribution is evaluated using measurements of n-heptane and n-dodecane spray combustion obtained under diesel engine-like conditions. In addition, sensitivity studies are carried out to investigate the influence of soot....... A higher rate of soot oxidation due to OH causes the soot particles to be fully oxidized downstream of the flame. In general, the LST model performs better than the Eulerian method in terms of predicting soot sizing and accessing information of individual soot particles, both of which are shortcomings...

  15. HemeBIND: a novel method for heme binding residue prediction by combining structural and sequence information

    Directory of Open Access Journals (Sweden)

    Hu Jianjun

    2011-05-01

    Full Text Available Abstract Background Accurate prediction of binding residues involved in the interactions between proteins and small ligands is one of the major challenges in structural bioinformatics. Heme is an essential and commonly used ligand that plays critical roles in electron transfer, catalysis, signal transduction and gene expression. Although much effort has been devoted to the development of various generic algorithms for ligand binding site prediction over the last decade, no algorithm has been specifically designed to complement experimental techniques for identification of heme binding residues. Consequently, there is an urgent need to develop a computational method for recognizing these important residues. Results Here we introduce an efficient algorithm, HemeBIND, for predicting heme binding residues by integrating structural and sequence information. We systematically investigated the characteristics of binding interfaces based on a non-redundant dataset of heme-protein complexes. It was found that several sequence and structural attributes, such as evolutionary conservation, solvent accessibility, depth and protrusion, clearly illustrate the differences between heme binding and non-binding residues. These features can then be used separately or combined to build structure-based classifiers using a support vector machine (SVM). The results showed that the information contained in these features is largely complementary and their combination achieved the best performance. To further improve the performance, an attempt was made to develop a post-processing procedure to reduce the number of false positives. In addition, we built a sequence-based classifier based on SVM and sequence profile as an alternative for when only sequence information is available. Finally, we employed a voting method to combine the outputs of the structure-based and sequence-based classifiers, which demonstrated remarkably better performance than either individual classifier alone.
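The final step combines a structure-based and a sequence-based classifier by voting. One simple consensus scheme of that kind can be sketched as follows; the threshold, scores, and tie-breaking rule here are illustrative assumptions, not the actual HemeBIND rule:

```python
def vote(structure_score, sequence_score, threshold=0.5):
    """Label a residue as heme-binding when both classifiers agree;
    fall back to the mean score when they disagree."""
    s_bind = structure_score >= threshold
    q_bind = sequence_score >= threshold
    if s_bind == q_bind:
        return s_bind
    return (structure_score + sequence_score) / 2.0 >= threshold

# Hypothetical classifier scores for three residues.
print(vote(0.9, 0.8))    # -> True   (both agree: binding)
print(vote(0.6, 0.2))    # -> False  (disagree; mean 0.4 below threshold)
print(vote(0.95, 0.45))  # -> True   (disagree; mean 0.7 above threshold)
```

Combining classifiers this way helps precisely when their errors are uncorrelated, which matches the paper's observation that the structural and sequence features are largely complementary.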

  16. A unified and comprehensible view of parametric and kernel methods for genomic prediction with application to rice

    Directory of Open Access Journals (Sweden)

    Laval Jacquin

    2016-08-01

    Full Text Available One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression (i.e. genomic best linear unbiased predictor (GBLUP)) and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel trick concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods; least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.

  17. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    Science.gov (United States)

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Several parametric and kernel methods, namely least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
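The kernel trick at the heart of RKHS regression fits in a few lines. The sketch below is a generic Gaussian-kernel ridge regression on simulated genotype data, not the KRMM package itself; the bandwidth, penalty, and toy trait are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(X1, X2, h=1.0):
    """Gaussian (RBF) kernel matrix between two marker matrices."""
    sq = (X1**2).sum(1)[:, None] + (X2**2).sum(1)[None, :] - 2.0 * X1 @ X2.T
    return np.exp(-sq / (2.0 * h**2))

def krr_fit_predict(X_train, y_train, X_test, lam=1e-2, h=1.0):
    """Kernel ridge regression: alpha = (K + lam*I)^-1 y, yhat = K_test @ alpha."""
    K = gaussian_kernel(X_train, X_train, h)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)
    return gaussian_kernel(X_test, X_train, h) @ alpha

# toy example: 50 individuals, 100 markers, additive-plus-epistatic signal
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(50, 100)).astype(float)   # genotype codes 0/1/2
y = X[:, 0] - X[:, 1] + X[:, 2] * X[:, 3] + 0.1 * rng.standard_normal(50)
yhat = krr_fit_predict(X[:40], y[:40], X[40:], lam=0.1, h=10.0)
```

Note that the kernel matrix is built from inner products only, so epistatic (non-additive) structure is captured without ever expanding interaction terms explicitly.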

  18. Early Diagnosis of Breast Cancer Dissemination by Tumor Markers Follow-Up and Method of Prediction

    Czech Academy of Sciences Publication Activity Database

    Nekulová, M.; Šimíčková, M.; Pecen, Ladislav; Eben, Kryštof; Vermousek, I.; Stratil, P.; Černoch, M.; Lang, B.

    1994-01-01

    Roč. 41, č. 2 (1994), s. 113-118 ISSN 0028-2685 R&D Projects: GA AV ČR IAA230106 Keywords : breast cancer * progression * CEA * CA 15-3 * MCA * TPA * mathematical method of prediction Impact factor: 0.354, year: 1994

  19. Predictive equation of state method for heavy materials based on the Dirac equation and density functional theory

    Science.gov (United States)

    Wills, John M.; Mattsson, Ann E.

    2012-02-01

    Density functional theory (DFT) provides a formally predictive base for equation of state properties. Available approximations to the exchange/correlation functional provide accurate predictions for many materials in the periodic table. For heavy materials, however, DFT calculations using available functionals fail to provide quantitative predictions, and often fail even to be qualitatively correct. This deficiency is due both to the lack of the appropriate confinement physics in the exchange/correlation functional and to approximations used to evaluate the underlying equations. In order to assess and develop accurate functionals, it is essential to eliminate all other sources of error. In this talk we describe an efficient first-principles electronic structure method based on the Dirac equation and compare the results obtained with this method with other methods generally used. Implications for the high-pressure equation of state of relativistic materials are demonstrated in application to Ce and the light actinides. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  20. Robust Navier-Stokes method for predicting unsteady flowfield and aerodynamic characteristics of helicopter rotor

    Directory of Open Access Journals (Sweden)

    Qijun ZHAO

    2018-02-01

    Full Text Available A robust unsteady rotor flowfield solver, the CLORNS code, is established to predict the complex unsteady aerodynamic characteristics of rotor flowfields. To handle the difficult problem of grid generation around rotors with complex aerodynamic shapes in this CFD code, a parameterized grid generation method is established, and the moving-embedded grids are constructed by several proposed universal methods. In this work, the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations with the Spalart-Allmaras turbulence model are selected as the governing equations to predict the unsteady flowfield of a helicopter rotor. The discretization of convective fluxes is accomplished by employing the second-order central difference scheme, third-order MUSCL-Roe scheme, and fifth-order WENO-Roe scheme. Aimed at simulating the unsteady aerodynamic characteristics of a helicopter rotor, the dual-time scheme with an implicit LU-SGS scheme is employed to accomplish the temporal discretization. In order to improve the computational efficiency of hole-cell and donor-element searching in the moving-embedded grid technology, the "disturbance diffraction method" and "minimum distance scheme of donor elements method" are established in this work. To improve the computational efficiency, the Message Passing Interface (MPI) parallel method based on subdivision of the grid, a local preconditioning method and the Full Approximation Storage (FAS) multi-grid method are combined in this code. By comparison of the numerical results simulated by the CLORNS code with test data, it is illustrated that the present code can accurately simulate the aerodynamic loads and aerodynamic noise characteristics of helicopter rotors. Keywords: Aerodynamic characteristics, Helicopter rotor, Moving-embedded grid, Navier-Stokes equations, Upwind schemes

  1. SU-G-BRC-01: A Data-Driven Pre-Optimization Method for Prediction of Achievability of Clinical Objectives in IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Ranganathan, V; Kumar, P [Philips India Limited, Bangalore, Karnataka (India)]; Bzdusek, K [Philips, Fitchburg, WI (United States)]; Das, J Maria [Sanjay Gandhi PG Inst Med Sciences, Lucknow (India)]

    2016-06-15

    Purpose: We propose a novel data-driven method to predict the achievability of clinical objectives upfront before invoking the IMRT optimization. Methods: A new metric called “Geometric Complexity (GC)” is used to estimate the achievability of clinical objectives. Here, GC is the measure of the number of “unmodulated” beamlets or rays that intersect the Region-of-interest (ROI) and the target volume. We first compute the geometric complexity ratio (GCratio) between the GC of a ROI (say, parotid) in a reference plan and the GC of the same ROI in a given plan. The GCratio of a ROI indicates the relative geometric complexity of the ROI as compared to the same ROI in the reference plan. Hence GCratio can be used to predict if a defined clinical objective associated with the ROI can be met by the optimizer for a given case. Basically a higher GCratio indicates a lesser likelihood for the optimizer to achieve the clinical objective defined for a given ROI. Similarly, a lower GCratio indicates a higher likelihood for the optimizer to achieve the clinical objective defined for the given ROI. We have evaluated the proposed method on four Head and Neck cases using Pinnacle3 (version 9.10.0) Treatment Planning System (TPS). Results: Out of the total of 28 clinical objectives from four head and neck cases included in the study, 25 were in agreement with the prediction, which implies an agreement of about 85% between predicted and obtained results. The Pearson correlation test shows a positive correlation between predicted and obtained results (Correlation = 0.82, r2 = 0.64, p < 0.005). Conclusion: The study demonstrates the feasibility of the proposed method in head and neck cases for predicting the achievability of clinical objectives with reasonable accuracy.
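The geometric-complexity idea can be pictured as ray counting on a grid: a beamlet contributes to GC when its ray crosses both the ROI and the target. Everything below (2D masks, beam geometry, ray sampling) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def geometric_complexity(roi_mask, target_mask, beam_angles, n_rays=48):
    """Count unmodulated rays that intersect both the ROI and the target.

    Masks are boolean arrays on a common 2D grid; each beam is a parallel
    bundle of rays at one gantry angle. Purely illustrative geometry.
    """
    ny, nx = roi_mask.shape
    span = max(nx, ny)
    center = np.array([nx / 2.0, ny / 2.0])
    count = 0
    for ang in np.deg2rad(beam_angles):
        d = np.array([np.cos(ang), np.sin(ang)])    # ray direction
        perp = np.array([-d[1], d[0]])              # offset across the bundle
        for off in np.linspace(-span / 2.0, span / 2.0, n_rays):
            t = np.linspace(-span, span, 4 * span)  # sample points along one ray
            pts = center + off * perp + t[:, None] * d
            ix = np.round(pts[:, 0]).astype(int)
            iy = np.round(pts[:, 1]).astype(int)
            ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
            if roi_mask[iy[ok], ix[ok]].any() and target_mask[iy[ok], ix[ok]].any():
                count += 1
    return count

def gc_ratio(gc_given_plan, gc_reference_plan):
    """GCratio > 1 suggests the ROI is geometrically harder than in the reference plan."""
    return gc_given_plan / gc_reference_plan
```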

  2. Advanced validation of CFD-FDTD combined method using highly applicable solver for reentry blackout prediction

    International Nuclear Information System (INIS)

    Takahashi, Yusuke

    2016-01-01

    An analysis model of plasma flow and electromagnetic waves around a reentry vehicle for radio frequency blackout prediction during aerodynamic heating was developed in this study. The model was validated based on experimental results from the radio attenuation measurement program. The plasma flow properties, such as electron number density, in the shock layer and wake region were obtained using a newly developed unstructured grid solver that incorporated real gas effect models and could treat thermochemically non-equilibrium flow. To predict the electromagnetic waves in plasma, a frequency-dependent finite-difference time-domain method was used. Moreover, the complicated behaviour of electromagnetic waves in the plasma layer during atmospheric reentry was clarified at several altitudes. The prediction performance of the combined model was evaluated with profiles and peak values of the electron number density in the plasma layer. In addition, to validate the models, the signal losses measured during communication with the reentry vehicle were directly compared with the predicted results. Based on the study, it was suggested that the present analysis model accurately predicts radio frequency blackout and the attenuation of electromagnetic waves in the plasma layer during communication. (paper)
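A back-of-the-envelope check behind blackout prediction is the cold-plasma cutoff: a wave whose frequency is below the local electron plasma frequency cannot propagate. The sketch below is this textbook criterion only, a drastic simplification of the paper's frequency-dependent FDTD treatment:

```python
import math

E_CHARGE = 1.602176634e-19   # C
E_MASS = 9.1093837015e-31    # kg
EPS0 = 8.8541878128e-12      # F/m

def plasma_frequency_hz(n_e_m3):
    """Electron plasma frequency f_p = sqrt(n_e e^2 / (eps0 m_e)) / (2 pi)."""
    return math.sqrt(n_e_m3 * E_CHARGE**2 / (EPS0 * E_MASS)) / (2.0 * math.pi)

def is_blacked_out(n_e_m3, link_freq_hz):
    """Collisionless cold-plasma cutoff: a wave below the local plasma
    frequency cannot propagate through the sheath."""
    return link_freq_hz < plasma_frequency_hz(n_e_m3)

# a peak electron density of 1e18 m^-3 cuts off roughly 9 GHz, so an
# S-band telemetry link near 2.3 GHz would be blacked out
blacked = is_blacked_out(1e18, 2.3e9)
```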

  3. High-accuracy CFD prediction methods for fluid and structure temperature fluctuations at T-junction for thermal fatigue evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Qian, Shaoxiang, E-mail: qian.shaoxiang@jgc.com [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kanamaru, Shinichiro [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kasahara, Naoto [Nuclear Engineering and Management, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-07-15

    Highlights: • Numerical methods for accurate prediction of thermal loading were proposed. • Predicted fluid temperature fluctuation (FTF) intensity is close to the experiment. • Predicted structure temperature fluctuation (STF) range is close to the experiment. • Predicted peak frequencies of FTF and STF also agree well with the experiment. • CFD results show the proposed numerical methods are of sufficiently high accuracy. - Abstract: Temperature fluctuations generated by the mixing of hot and cold fluids at a T-junction, which is widely used in nuclear power and process plants, can cause thermal fatigue failure. The conventional methods for evaluating thermal fatigue tend to provide insufficient accuracy, because they were developed based on limited experimental data and a simplified one-dimensional finite element analysis (FEA). CFD/FEA coupling analysis is expected to be a useful tool for the more accurate evaluation of thermal fatigue. The present paper aims to verify the accuracy of the proposed numerical methods for simulating fluid and structure temperature fluctuations at a T-junction for thermal fatigue evaluation. The dynamic Smagorinsky model (DSM) is used as the large eddy simulation (LES) sub-grid scale (SGS) turbulence model, and a hybrid scheme (HS) is adopted for the calculation of convective terms in the governing equations. Also, heat transfer between fluid and structure is calculated directly through thermal conduction by creating a mesh with near wall resolution (NWR), allocating grid points within the thermal boundary sub-layer. The simulation results show that the distribution of fluid temperature fluctuation intensity and the range of structure temperature fluctuation are remarkably close to the experimental results. Moreover, the peak frequencies of the power spectrum density (PSD) of both fluid and structure temperature fluctuations also agree well with the experimental results. Therefore, the numerical methods used in the present paper are of sufficiently high accuracy.

  4. Effects of source and receiver locations in predicting room transfer functions by a phased beam tracing method

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Ih, Jeong-Guon

    2012-01-01

    The accuracy of a phased beam tracing method in predicting transfer functions is investigated with a special focus on the positions of the source and receiver. Simulated transfer functions for various source-receiver pairs using the phased beam tracing method were compared with analytical Green’s...

  5. Prediction of Physicochemical Properties of Organic Molecules Using Semi-Empirical Methods

    International Nuclear Information System (INIS)

    Kim, Chan Kyung; Kim, Chang Kon; Kim, Miri; Lee, Hai Whang; Cho, Soo Gyeong

    2013-01-01

    Prediction of physicochemical properties of organic molecules is an important process in chemistry and chemical engineering. The MSEP approach developed in our lab calculates the molecular surface electrostatic potential (ESP) on van der Waals (vdW) surfaces of molecules. This approach includes geometry optimization and frequency calculation using hybrid density functional theory, B3LYP, with the 6-31G(d) basis set to find minima on the potential energy surface, and is known to give satisfactory QSPR results for various properties of organic molecules. However, the MSEP method is not applicable to screening large databases because geometry optimization and frequency calculation require considerable computing time. To develop a fast yet reliable approach, we have re-examined our previous work on organic molecules using two semi-empirical methods, AM1 and PM3. This new approach can be an efficient protocol for designing new molecules with improved properties.

  6. Prediction of allosteric sites on protein surfaces with an elastic-network-model-based thermodynamic method.

    Science.gov (United States)

    Su, Ji Guo; Qi, Li Sheng; Li, Chun Hua; Zhu, Yan Ying; Du, Hui Jing; Hou, Yan Xue; Hao, Rui; Wang, Ji Hua

    2014-08-01

    Allostery is a rapid and efficient way in many biological processes to regulate protein functions, where binding of an effector at the allosteric site alters the activity and function at a distant active site. Allosteric regulation of protein biological functions provides a promising strategy for novel drug design. However, how to effectively identify the allosteric sites remains one of the major challenges for allosteric drug design. In the present work, a thermodynamic method based on the elastic network model was proposed to predict the allosteric sites on the protein surface. In our method, the thermodynamic coupling between the allosteric and active sites was considered, and then the allosteric sites were identified as those where the binding of an effector molecule induces a large change in the binding free energy of the protein with its ligand. Using the proposed method, two proteins, i.e., the 70 kD heat shock protein (Hsp70) and GluA2 alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA) receptor, were studied and the allosteric sites on the protein surface were successfully identified. The predicted results are consistent with the available experimental data, which indicates that our method is a simple yet effective approach for the identification of allosteric sites on proteins.
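The elastic-network idea can be sketched with a Gaussian network model: build the Kirchhoff matrix from Cα contacts, then ask how stiffening the springs at a candidate site (a crude stand-in for effector binding there) changes the flexibility of the active site. This generic perturbation sketch is not the authors' binding-free-energy formula; the contact cutoff and stiffening factor are illustrative:

```python
import numpy as np

def kirchhoff(coords, cutoff=7.0):
    """Gaussian-network-model Kirchhoff (connectivity) matrix from CA coordinates."""
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    G = -(d < cutoff).astype(float)
    np.fill_diagonal(G, 0.0)
    np.fill_diagonal(G, -G.sum(axis=1))   # rows sum to zero
    return G

def msf(G):
    """Mean-square fluctuations ~ diagonal of the pseudo-inverse of Gamma."""
    return np.diag(np.linalg.pinv(G))

def coupling(coords, site, active, stiffen=5.0, cutoff=7.0):
    """Reduction of active-site flexibility when the springs touching `site`
    are stiffened -- a crude stand-in for effector binding at that site."""
    G0 = kirchhoff(coords, cutoff)
    G1 = G0.copy()
    for i in site:
        G1[i, :] *= stiffen
        G1[:, i] *= stiffen
    np.fill_diagonal(G1, 0.0)
    np.fill_diagonal(G1, -G1.sum(axis=1))
    return msf(G0)[active].sum() - msf(G1)[active].sum()
```

Ranking surface residues by such a coupling measure is one simple way to shortlist candidate allosteric sites before more detailed free-energy calculations.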

  7. Automatically explaining machine learning prediction results: a demonstration on type 2 diabetes risk prediction.

    Science.gov (United States)

    Luo, Gang

    2016-01-01

    Predictive modeling is a key component of solutions to many healthcare problems. Among all predictive modeling approaches, machine learning methods often achieve the highest prediction accuracy, but suffer from a long-standing open problem precluding their widespread use in healthcare. Most machine learning models give no explanation for their prediction results, whereas interpretability is essential for a predictive model to be adopted in typical healthcare settings. This paper presents the first complete method for automatically explaining results for any machine learning predictive model without degrading accuracy. We implemented the method in computer code. Using the electronic medical record data set from the Practice Fusion diabetes classification competition, containing patient records from all 50 states in the United States, we demonstrated the method on predicting type 2 diabetes diagnosis within the next year. For the champion machine learning model of the competition, our method explained prediction results for 87.4% of patients who were correctly predicted by the model to have a type 2 diabetes diagnosis within the next year. Our demonstration showed the feasibility of automatically explaining results for any machine learning predictive model without degrading accuracy.
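A minimal flavour of model-agnostic explanation can be given by perturbing inputs: for a patient predicted to be at risk, report how much the predicted risk drops when each present risk factor is switched off. This is a generic attribution sketch on synthetic data, not the rule-based method the paper develops; the feature names and coefficients are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# toy EHR-style data; binary risk factors (all names and effects invented)
names = ["age>60", "HbA1c_high", "BMI_high", "smoker"]
X = rng.integers(0, 2, size=(200, 4)).astype(float)
logit = 2.0 * X[:, 1] + 1.0 * X[:, 2] - 0.5
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
model = LogisticRegression().fit(X, y)

def explain(model, x, names):
    """Perturbation attribution: switch each present risk factor off and
    report how much the predicted risk drops (model-agnostic)."""
    base = model.predict_proba([x])[0, 1]
    out = []
    for j, name in enumerate(names):
        if x[j] == 1:
            x2 = x.copy()
            x2[j] = 0.0
            out.append((name, base - model.predict_proba([x2])[0, 1]))
    return sorted(out, key=lambda t: -t[1])   # largest contribution first

explanation = explain(model, np.array([1.0, 1.0, 1.0, 0.0]), names)
```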

  8. Use of predictive models and rapid methods to nowcast bacteria levels at coastal beaches

    Science.gov (United States)

    Francy, Donna S.

    2009-01-01

    The need for rapid assessments of recreational water quality to better protect public health is well accepted throughout the research and regulatory communities. Rapid analytical methods, such as quantitative polymerase chain reaction (qPCR) and immunomagnetic separation/adenosine triphosphate (ATP) analysis, are being tested but are not yet ready for widespread use.Another solution is the use of predictive models, wherein variable(s) that are easily and quickly measured are surrogates for concentrations of fecal-indicator bacteria. Rainfall-based alerts, the simplest type of model, have been used by several communities for a number of years. Deterministic models use mathematical representations of the processes that affect bacteria concentrations; this type of model is being used for beach-closure decisions at one location in the USA. Multivariable statistical models are being developed and tested in many areas of the USA; however, they are only used in three areas of the Great Lakes to aid in notifications of beach advisories or closings. These “operational” statistical models can result in more accurate assessments of recreational water quality than use of the previous day's Escherichia coli (E. coli)concentration as determined by traditional culture methods. The Ohio Nowcast, at Huntington Beach, Bay Village, Ohio, is described in this paper as an example of an operational statistical model. Because predictive modeling is a dynamic process, water-resource managers continue to collect additional data to improve the predictive ability of the nowcast and expand the nowcast to other Ohio beaches and a recreational river. Although predictive models have been shown to work well at some beaches and are becoming more widely accepted, implementation in many areas is limited by funding, lack of coordinated technical leadership, and lack of supporting epidemiological data.
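An "operational" statistical nowcast of this kind reduces to a regression fitted on historical monitoring data. The sketch below fits ordinary least squares to synthetic records; the surrogate variables, coefficients, and data are invented, while 235 CFU/100 mL is the common single-sample E. coli advisory threshold for US recreational waters:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
# synthetic stand-in for historical beach monitoring records (values invented)
turbidity = rng.gamma(2.0, 10.0, n)     # NTU
rainfall = rng.gamma(1.5, 5.0, n)       # mm in the previous 48 h
waves = rng.uniform(0.0, 2.0, n)        # m
log_ecoli = (1.0 + 0.02 * turbidity + 0.03 * rainfall + 0.4 * waves
             + 0.3 * rng.standard_normal(n))

# ordinary least squares fit of the multivariable nowcast model
A = np.column_stack([np.ones(n), turbidity, rainfall, waves])
coef, *_ = np.linalg.lstsq(A, log_ecoli, rcond=None)

def nowcast(turbidity_ntu, rainfall_mm, wave_m, threshold_cfu=235.0):
    """Predict E. coli (CFU/100 mL) from same-day surrogates and flag an
    advisory above the single-sample threshold."""
    pred = 10.0 ** (coef @ np.array([1.0, turbidity_ntu, rainfall_mm, wave_m]))
    return pred, pred > threshold_cfu
```

The appeal is operational: the surrogates are available within minutes, whereas the culture-based E. coli result arrives a day too late to protect that day's swimmers.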

  9. A METHOD OF PREDICTING BREAST CANCER USING QUESTIONNAIRES

    Directory of Open Access Journals (Sweden)

    V. N. Malashenko

    2017-01-01

    Full Text Available Purpose. To simplify and increase the accuracy of the questionnaire method of predicting breast cancer (BC) for subsequent computer processing and automated dispensary observation of risk groups without direct physician involvement. Materials and methods. The work was based on statistical data obtained by surveying 305 women. The questionnaire included 63 items: 17 open-ended questions and 46 with a choice of response. A multifactor model was established; in addition to the survey data, its development drew on materials from patients' medical histories and on immunohistochemical study data for the respondents. Data analysis was performed using the Statistica 10.0 and MedCalc 12.7.0 programs. Results. ROC analysis was performed, and the questionnaire data revealed 8 significant predictors of breast cancer. On their basis, a formula was created for calculating the prognostic factor of risk of breast cancer development, with a sensitivity of 83.12% and a specificity of 91.43%. Conclusions. The completed developments allow the creation of a computer program for automated processing of questionnaires, the formation of breast cancer risk groups, and their clinical supervision. The introduction of a screening questionnaire over the Internet, with subsequent computer processing of the results and without the direct involvement of doctors, will increase the coverage of the female population of the Russian Federation by activities related to the prevention of breast cancer. It can free up time for physicians to receive primary patients, as well as improve the oncological vigilance of the female population of the Russian Federation.
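The reported sensitivity/specificity pair corresponds to one operating point on the ROC curve of a risk score. A generic sketch (not the authors' 8-predictor formula) of evaluating a score at a cutoff and picking the cutoff by the Youden index:

```python
import numpy as np

def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of a risk score at a given cutoff."""
    pred = scores >= cutoff
    tp = np.sum(pred & (labels == 1))
    fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, labels):
    """Pick the cutoff maximizing the Youden index J = sens + spec - 1,
    a common way to choose the operating point reported in ROC analyses."""
    cuts = np.unique(scores)
    j = [sum(sens_spec(scores, labels, c)) - 1.0 for c in cuts]
    return cuts[int(np.argmax(j))]

# toy demonstration with perfectly separable scores
scores = np.array([0.12, 0.20, 0.31, 0.70, 0.82, 0.90])
labels = np.array([0, 0, 0, 1, 1, 1])
cut = best_cutoff(scores, labels)
```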

  10. The prediction of the incidence rate of upper limb musculoskeletal disorders, with CTD risk index method on potters of Meybod city

    Directory of Open Access Journals (Sweden)

    Reza Khani Jazani

    2012-02-01

    Full Text Available Background: The objective of this study was to predict the incidence of musculoskeletal disorders in potters of Meybod city by applying the CTD risk index method. Materials and Method: This is a descriptive cross-sectional study. The target population was all workers in pottery workshops located in Meybod. Information related to musculoskeletal disorders was obtained with the Nordic questionnaire, and the CTD risk index method was used to predict the incidence of musculoskeletal disorders. Results: We observed that 59.3% of the potters had symptoms of musculoskeletal disorders in at least one of their upper extremities. There was also a significant difference in mean CTD risk index between potters with and without symptoms of upper-limb musculoskeletal disorders (p=0.038). Conclusion: The CTD risk index method can serve as a suitable method for predicting the incidence of musculoskeletal disorders in potters.

  11. Variability of dose predictions for cesium-137 and radium-226 using the PRISM method

    International Nuclear Information System (INIS)

    Bergstroem, U.; Andersson, K.; Roejder, B.

    1984-01-01

    The uncertainty associated with dose predictions for cesium-137 and radium-226 in a specific ecosystem has been studied. The method used is PRISM, a systematic method for determining the effect of parameter uncertainties on model predictions. The ecosystems studied are different types of lakes, for which the following transport processes are included: runoff of water in the lake, irrigation, and transport in soil, in groundwater and in sediment. The ecosystems are modelled by the compartment principle, using the BIOPATH code. Seven different internal exposure pathways are included. The total dose commitment for both nuclides varies by about two orders of magnitude. For cesium-137 the total dose and the uncertainty are dominated by the consumption of fish. The most important contributor to the total uncertainty is the concentration factor water-fish. For radium-226 the largest contributions to the total dose come from the exposure pathways fish, milk and drinking-water. Half of the uncertainty lies in the milk dose. This uncertainty is dominated by the distribution factor for milk. (orig.)
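The effect of a dominant uncertain parameter, such as the water-fish concentration factor, can be illustrated with plain Monte Carlo sampling (PRISM itself uses a more structured sampling design). All distributions and values below are illustrative; the adult ingestion dose coefficient for Cs-137, about 1.3e-8 Sv/Bq, is the standard ICRP value:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# hypothetical Cs-137 fish-ingestion pathway (distributions illustrative)
c_water = 1.0                                     # Bq/L in lake water, fixed
cf_fish = rng.lognormal(np.log(2000.0), 1.0, N)   # L/kg, water-fish factor
intake = 20.0                                     # kg fish consumed per year
dcf = 1.3e-8                                      # Sv/Bq, ingestion dose coeff.

dose = c_water * cf_fish * intake * dcf           # Sv per year of intake
lo, hi = np.percentile(dose, [2.5, 97.5])
print(f"95% interval spans a factor of {hi / lo:.0f}")
```

A lognormal spread of one order of magnitude in a single multiplicative factor already yields roughly the two-orders-of-magnitude dose range reported above.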

  12. Method of predicting the mean lung dose based on a patient's anatomy and dose-volume histograms

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka, Anna, E-mail: a.zawadzka@zfm.coi.pl [Medical Physics Department, Centre of Oncology, Maria Sklodowska-Curie Memorial Cancer Center, Warsaw (Poland); Nesteruk, Marta [Faculty of Physics, University of Warsaw, Warsaw (Poland); Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich (Switzerland); Brzozowska, Beata [Faculty of Physics, University of Warsaw, Warsaw (Poland); Kukołowicz, Paweł F. [Medical Physics Department, Centre of Oncology, Maria Sklodowska-Curie Memorial Cancer Center, Warsaw (Poland)

    2017-04-01

    The aim of this study was to propose a method to predict the minimum achievable mean lung dose (MLD) and corresponding dosimetric parameters for organs-at-risk (OAR) based on individual patient anatomy. For each patient, the dose for 36 equidistant individual multileaf collimator shaped fields was calculated in the treatment planning system (TPS). Based on these dose matrices, the MLD for each patient was predicted by the in-house DosePredictor software, in which the solution of linear equations was implemented. The software prediction results were validated based on 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans previously prepared for 16 patients with stage III non–small-cell lung cancer (NSCLC). For each patient, dosimetric parameters derived from plans and the results calculated by DosePredictor were compared. The MLD, the maximum dose to the spinal cord (Dmax,cord) and the mean esophageal dose (MED) were analyzed. There was a strong correlation between the MLD calculated by DosePredictor and those obtained in treatment plans regardless of the technique used. The correlation coefficient was 0.96 for both 3D-CRT and VMAT techniques. In a similar manner, MED correlations of 0.98 and 0.96 were obtained for 3D-CRT and VMAT plans, respectively. The maximum dose to the spinal cord was not predicted very well. The correlation coefficient was 0.30 and 0.61 for 3D-CRT and VMAT, respectively. The presented method allows us to predict the minimum MLD and corresponding dosimetric parameters for OARs without the necessity of plan preparation. The method can serve as a guide during the treatment planning process, for example, as initial constraints in VMAT optimization. It allows the probability of lung pneumonitis to be predicted.
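The idea of predicting an achievable mean lung dose from per-field dose contributions can be sketched as a small optimization over beam weights. The geometry and numbers below, and the use of a linear program, are illustrative stand-ins for the paper's solution of linear equations in DosePredictor:

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic stand-in: each of 36 equispaced fields contributes t[i] Gy to the
# target and m[i] Gy to the lung per unit beam weight (numbers invented).
rng = np.random.default_rng(3)
angles = np.deg2rad(np.arange(0, 360, 10))
t = 1.0 + 0.1 * rng.random(36)            # target dose per unit weight
m = 0.5 + 0.4 * np.abs(np.sin(angles))    # lung dose per unit weight

prescription = 60.0                       # Gy required at the target
# minimum achievable mean lung dose subject to meeting the prescription
res = linprog(c=m, A_eq=[t], b_eq=[prescription], bounds=(0, None))
mld_min = res.fun
```

A lower bound of this kind is useful precisely because it is computed before any plan exists, for example as an initial optimization constraint.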

  13. Prediction of forces and moments for flight vehicle control effectors. Part 1: Validation of methods for predicting hypersonic vehicle controls forces and moments

    Science.gov (United States)

    Maughmer, Mark D.; Ozoroski, L.; Ozoroski, T.; Straussfogel, D.

    1990-01-01

    Many types of hypersonic aircraft configurations are currently being studied for feasibility of future development. Since the control of the hypersonic configurations throughout the speed range has a major impact on acceptable designs, it must be considered in the conceptual design stage. The ability of the aerodynamic analysis methods contained in an industry standard conceptual design system, APAS II, to estimate the forces and moments generated through control surface deflections from low subsonic to high hypersonic speeds is considered. Predicted control forces and moments generated by various control effectors are compared with previously published wind tunnel and flight test data for three configurations: the North American X-15, the Space Shuttle Orbiter, and a hypersonic research airplane concept. Qualitative summaries of the results are given for each longitudinal force and moment and each control derivative in the various speed ranges. Results show that all predictions of longitudinal stability and control derivatives are acceptable for use at the conceptual design stage. Results for most lateral/directional control derivatives are acceptable for conceptual design purposes; however, predictions at supersonic Mach numbers for the change in yawing moment due to aileron deflection and the change in rolling moment due to rudder deflection are found to be unacceptable. Including shielding effects in the analysis is shown to have little effect on lift and pitching moment predictions while improving drag predictions.

  14. Predicting welding distortion in a panel structure with longitudinal stiffeners using inherent deformations obtained by inverse analysis method.

    Science.gov (United States)

    Liang, Wei; Murakawa, Hidekazu

    2014-01-01

    Welding-induced deformation not only negatively affects dimensional accuracy but also degrades the performance of the product. If welding deformation can be accurately predicted beforehand, the predictions will be helpful for finding effective methods to improve manufacturing accuracy. To date, there are two kinds of finite element method (FEM) which can be used to simulate welding deformation. One is the thermal elastic plastic FEM and the other is elastic FEM based on inherent strain theory. The former can only be used to calculate welding deformation for small- or medium-scale welded structures due to the limitation of computing speed. On the other hand, the latter is an effective method to estimate the total welding distortion for large and complex welded structures even though it neglects the detailed welding process. When the elastic FEM is used to calculate the welding-induced deformation for a large structure, the inherent deformations in each typical joint should be obtained beforehand. In this paper, a new method based on inverse analysis was proposed to obtain the inherent deformations for weld joints. By introducing the inherent deformations obtained by the proposed method into the elastic FEM based on inherent strain theory, we predicted the welding deformation of a panel structure with two longitudinal stiffeners. In addition, experiments were carried out to verify the simulation results.
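The inverse-analysis step can be pictured as a linear least-squares problem: measured distortions are assumed linear in the unknown inherent deformation components, with the sensitivity matrix coming from unit-load elastic FEM runs. Everything below is a synthetic stand-in for that pipeline:

```python
import numpy as np

# Measured distortions d at sensor points are modeled as d = G q, where q
# holds the unknown inherent deformation components of the joint and G is
# the sensitivity matrix (here random; in practice from elastic FEM runs).
rng = np.random.default_rng(5)
n_sensors, n_components = 40, 4                  # e.g. shrinkage/bending terms
G = rng.random((n_sensors, n_components))
q_true = np.array([0.8, 0.3, 0.05, 0.02])        # "true" inherent deformations
d_meas = G @ q_true + 0.01 * rng.standard_normal(n_sensors)

# least-squares inverse analysis recovers the inherent deformations
q_est, *_ = np.linalg.lstsq(G, d_meas, rcond=None)
```

Once estimated for a typical joint, the same components can be reused in the elastic FEM of any large structure containing that joint.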

  15. Review of the status of reactor physics predictive methods for burnable poisons in CAGRs

    International Nuclear Information System (INIS)

    Edens, D.J.; McEllin, M.

    1983-01-01

    An essential component of the design of Commercial Advanced Gas Cooled Reactor fuel necessary to achieve higher discharge irradiations is the incorporation of burnable poisons. The poisons enable the more highly enriched fuel required to reach higher irradiation to be loaded without increasing the peak channel power. The optimum choice of fuel enrichment and poison loading will be made using reactor physics predictive methods developed by Berkeley Nuclear Laboratories. These methods and the evidence available to support them from theoretical comparisons, zero energy experiments, WAGR irradiations, and measurements on operating CAGRs are described. (author)

  16. Review of the status of reactor physics predictive methods for burnable poisons in CAGRs

    International Nuclear Information System (INIS)

    Edens, D.J.; McEllin, M.

    1983-01-01

    An essential component of the design of Commercial Advanced Gas Cooled Reactor fuel necessary to achieve higher discharge irradiations is the incorporation of burnable poisons. The poisons enable the more highly enriched fuel required to reach higher irradiation to be loaded without increasing the peak channel power. The optimum choice of fuel enrichment and poison loading will be made using reactor physics predictive methods developed by Berkeley Nuclear Laboratories. The paper describes these methods and the evidence available to support them from theoretical comparisons, zero energy experiments, WAGR irradiations, and measurements on operating CAGR's. (author)

  17. Predicting lattice thermal conductivity with help from ab initio methods

    Science.gov (United States)

    Broido, David

    2015-03-01

    The lattice thermal conductivity is a fundamental transport parameter that determines the utility of a material for specific thermal management applications. Materials with low thermal conductivity find application in thermoelectric cooling and energy harvesting. High thermal conductivity materials are urgently needed to help address the ever-growing heat dissipation problem in microelectronic devices. Predictive computational approaches can provide critical guidance in the search and development of new materials for such applications. Ab initio methods for calculating lattice thermal conductivity have demonstrated predictive capability, but while they are becoming increasingly efficient, they are still computationally expensive, particularly for complex crystals with large unit cells. In this talk, I will review our work on first-principles phonon transport, for which the intrinsic lattice thermal conductivity is limited only by phonon-phonon scattering arising from anharmonicity. I will examine the use of the phase space for anharmonic phonon scattering and the Grüneisen parameters as measures of the thermal conductivities for a range of materials, and compare these to the widely used guidelines stemming from the theory of Leibfried and Schlömann. This research was supported primarily by the NSF under Grant CBET-1402949, and by S3TEC, an Energy Frontier Research Center funded by the US DOE, Office of Basic Energy Sciences, under Award No. DE-SC0001299.
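For orientation, the Leibfried–Schlömann-type guideline referred to above is often quoted in Slack's form (prefactors and the precise exponents vary between sources, so treat this as a scaling relation only):

```latex
\kappa_L \;\propto\; \frac{\bar{M}\,\Theta_a^{3}\,\delta}{\gamma^{2}\,T}
```

where $\bar{M}$ is the mean atomic mass, $\Theta_a$ the acoustic-mode Debye temperature, $\delta^{3}$ the volume per atom, $\gamma$ the Grüneisen parameter, and $T$ the temperature; soft lattices (low $\Theta_a$) and strong anharmonicity (large $\gamma$) thus signal low lattice thermal conductivity.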

  18. Evaluation of NO2 predictions by the plume volume molar ratio method (PVMRM) and ozone limiting method (OLM) in AERMOD using new field observations.

    Science.gov (United States)

    Hendrick, Elizabeth M; Tino, Vincent R; Hanna, Steven R; Egan, Bruce A

    2013-07-01

    The U.S. Environmental Protection Agency (EPA) plume volume molar ratio method (PVMRM) and the ozone limiting method (OLM) are implemented in the AERMOD model to predict the 1-hr average NO2/NO(x) concentration ratio. These ratios are multiplied by the AERMOD-predicted NO(x) concentration to predict the 1-hr average NO2 concentration. This paper first briefly reviews PVMRM and OLM and points out some scientific parameterizations that could be improved (such as the specification of relative dispersion coefficients) and then discusses an evaluation of the PVMRM and OLM methods as implemented in AERMOD using a new data set. While AERMOD has undergone many model evaluation studies in its default mode, PVMRM and OLM are nondefault options, and to date only three NO2 field data sets have been used in their evaluations. Here the AERMOD/PVMRM and AERMOD/OLM codes are evaluated with a new data set from a northern Alaskan village with a small power plant. Hourly pollutant concentrations (NO, NO2, ozone) as well as meteorological variables were measured at a single monitor 500 m from the plant. Power plant operating parameters and emissions were calculated based on hourly operator logs. Hourly observations covering 1 yr were considered, but the evaluations only used hours when the wind was in a 60-degree sector including the monitor and when concentrations were above a threshold. PVMRM is found to have little bias in predictions of the C(NO2)/C(NO(x)) ratio, which mostly ranged from 0.2 to 0.4 at this site. OLM overpredicted the ratio. AERMOD overpredicts the maximum NO(x) concentration but has an underprediction bias for lower concentrations. AERMOD/PVMRM overpredicts the maximum C(NO2) by about 50%, while AERMOD/OLM overpredicts by a factor of 2. For the 381 hours evaluated, there is a relative mean bias in C(NO2) predictions of near zero for AERMOD/PVMRM, while the relative mean bias reflects a factor of 2 overprediction for AERMOD/OLM. This study was initiated because of the new stringent 1-hr NO2 National Ambient Air Quality Standard.
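    The OLM step evaluated above can be sketched from its published form in the EPA AERMOD documentation: the in-stack NO2 fraction (default 0.10) is counted as NO2 directly, and the remaining NO is assumed converted to NO2 only up to the NO2-equivalent mass of available ambient ozone. A minimal sketch, with all concentration values invented for illustration:

```python
def olm_no2(c_nox, c_o3, in_stack_ratio=0.10):
    """Ozone limiting method (OLM) estimate of the 1-hr NO2 concentration.

    c_nox: modeled NOx concentration (ug/m^3, expressed as NO2)
    c_o3:  ambient ozone concentration (ug/m^3)
    in_stack_ratio: assumed in-stack NO2/NOx ratio (EPA default 0.10)
    """
    # Directly emitted NO2, plus NO-to-NO2 conversion limited by available
    # ozone; the 46/48 factor converts an O3 mass to the equivalent NO2 mass.
    converted = min((1.0 - in_stack_ratio) * c_nox, (46.0 / 48.0) * c_o3)
    return in_stack_ratio * c_nox + converted

# Ozone-rich hour: essentially all modeled NOx is counted as NO2.
print(olm_no2(c_nox=100.0, c_o3=200.0))
# Ozone-limited hour: conversion is capped by the available O3.
print(olm_no2(c_nox=100.0, c_o3=20.0))
```

    PVMRM differs in that it limits conversion by the moles of ozone entrained into the modeled plume volume rather than by the full ambient ozone concentration.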

  19. Method and apparatus to predict the remaining service life of an operating system

    Science.gov (United States)

    Greitzer, Frank L.; Kangas, Lars J.; Terrones, Kristine M.; Maynard, Melody A.; Pawlowski, Ronald A.; Ferryman, Thomas A.; Skorpik, James R.; Wilson, Bary W.

    2008-11-25

    A method and computer-based apparatus for monitoring the degradation of, predicting the remaining service life of, and/or planning maintenance for, an operating system are disclosed. Diagnostic information on degradation of the operating system is obtained through measurement of one or more performance characteristics by one or more sensors onboard and/or proximate the operating system. Though not required, it is preferred that the sensor data are validated to improve the accuracy and reliability of the service life predictions. The condition or degree of degradation of the operating system is presented to a user by way of one or more calculated, numeric degradation figures of merit that are trended against one or more independent variables using one or more mathematical techniques. Furthermore, more than one trendline and uncertainty interval may be generated for a given degradation figure of merit/independent variable data set. The trendline(s) and uncertainty interval(s) are subsequently compared to one or more degradation figure of merit thresholds to predict the remaining service life of the operating system. The present invention enables multiple mathematical approaches in determining which trendline(s) to use to provide the best estimate of the remaining service life.
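    The trending-and-threshold idea in the abstract can be sketched with the simplest case: fit a linear trendline to a degradation figure of merit against an independent variable (here operating hours) and extrapolate to the threshold crossing. The sensor-derived data, threshold, and choice of a linear model below are illustrative assumptions, not the patented algorithm:

```python
import numpy as np

# Hypothetical degradation figure of merit (lower = more degraded) sampled
# over operating hours; in practice these would be computed from validated
# onboard sensor measurements.
hours = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
fom = np.array([1.00, 0.96, 0.91, 0.87, 0.83, 0.78])
threshold = 0.60  # assumed figure-of-merit threshold for end of service life

# Fit a linear trendline, then extrapolate to the threshold crossing.
slope, intercept = np.polyfit(hours, fom, 1)
t_fail = (threshold - intercept) / slope
remaining = t_fail - hours[-1]
print(f"predicted threshold crossing at {t_fail:.0f} h, "
      f"remaining life ~ {remaining:.0f} h")
```

    The patent's point is that several such trendlines (and their uncertainty intervals) can be generated with different mathematical techniques, and the approach giving the best-supported estimate is selected.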

  20. e-Bitter: Bitterant Prediction by the Consensus Voting From the Machine-Learning Methods.

    Science.gov (United States)

    Zheng, Suqing; Jiang, Mengying; Zhao, Chengwei; Zhu, Rui; Hu, Zhicheng; Xu, Yong; Lin, Fu

    2018-01-01

    In-silico bitterant prediction has received considerable attention due to the expensive and laborious experimental screening of bitterants. In this work, we collect a fully experimental dataset containing 707 bitterants and 592 non-bitterants, which is distinct from the fully or partially hypothetical non-bitterant datasets used in previous works. Based on this experimental dataset, we harness consensus votes from multiple machine-learning methods (e.g., deep learning) combined with molecular fingerprints to build bitter/bitterless classification models with five-fold cross-validation, which are further inspected by the Y-randomization test and applicability domain analysis. One of the best consensus models achieves an accuracy, precision, specificity, sensitivity, F1-score, and Matthews correlation coefficient (MCC) of 0.929, 0.918, 0.898, 0.954, 0.936, and 0.856, respectively, on our test set. For automatic bitterant prediction, a graphical program, "e-Bitter", is developed so that users can obtain predictions with a few mouse clicks. To the best of our knowledge, this is the first time a consensus model has been adopted for bitterant prediction, and e-Bitter is the first free stand-alone software of its kind for experimental food scientists.
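    The consensus-voting step can be sketched independently of the authors' e-Bitter implementation: each base classifier casts a bitter (1) or bitterless (0) vote per molecule, and the majority label wins. The models and predictions below are hypothetical placeholders for fingerprint-based classifiers:

```python
def consensus_vote(predictions):
    """Majority vote across classifiers.

    predictions: list of per-classifier label lists (one 0/1 label per molecule).
    Returns the consensus 0/1 label for each molecule.
    """
    n_models = len(predictions)
    n_samples = len(predictions[0])
    consensus = []
    for i in range(n_samples):
        votes = sum(p[i] for p in predictions)  # number of "bitter" votes
        consensus.append(1 if votes > n_models / 2 else 0)
    return consensus

# Three hypothetical models voting on four molecules.
preds = [
    [1, 0, 1, 0],  # e.g., a deep-learning model
    [1, 1, 0, 0],  # e.g., a gradient-boosting model
    [1, 0, 1, 1],  # e.g., an SVM on molecular fingerprints
]
print(consensus_vote(preds))  # [1, 0, 1, 0]
```

    With an odd number of voters, the strict majority rule never ties; an even ensemble would need a tie-breaking convention.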