WorldWideScience

Sample records for method predicts formaldonitrone

  1. Epitope prediction methods

    DEFF Research Database (Denmark)

    Karosiene, Edita

    Analysis. The chapter provides detailed explanations on how to use different methods for T cell epitope discovery research, explaining how input should be given as well as how to interpret the output. In the last chapter, I present the results of a bioinformatics analysis of epitopes from the yellow fever...... peptide-MHC interactions. Furthermore, using yellow fever virus epitopes, we demonstrated the power of the %Rank score when compared with the binding affinity score of MHC prediction methods, suggesting that this score should be considered for selecting potential T cell epitopes. In summary...... immune responses. Therefore, it is of great importance to be able to identify peptides that bind to MHC molecules, in order to understand the nature of immune responses and discover T cell epitopes useful for designing new vaccines and immunotherapies. MHC molecules in humans, referred to as human...

  2. Motor degradation prediction methods

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, J.R.; Kelly, J.F.; Delzingaro, M.J.

    1996-12-01

    Motor Operated Valve (MOV) squirrel cage AC motor rotors are susceptible to degradation under certain conditions. Premature failure can result due to high humidity/temperature environments, high running load conditions, extended periods at locked rotor conditions (i.e. > 15 seconds) or exceeding the motor's duty cycle by frequent starts or multiple valve stroking. Exposure to high heat and moisture due to packing leaks, pressure seal ring leakage or other causes can significantly accelerate the degradation. ComEd and Liberty Technologies have worked together to provide and validate a non-intrusive method using motor power diagnostics to evaluate MOV rotor condition and predict failure. These techniques have provided a quick, low radiation dose method to evaluate inaccessible motors, identify degradation and allow scheduled replacement of motors prior to catastrophic failures.

  3. Motor degradation prediction methods

    International Nuclear Information System (INIS)

    Arnold, J.R.; Kelly, J.F.; Delzingaro, M.J.

    1996-01-01

    Motor Operated Valve (MOV) squirrel cage AC motor rotors are susceptible to degradation under certain conditions. Premature failure can result due to high humidity/temperature environments, high running load conditions, extended periods at locked rotor conditions (i.e. > 15 seconds) or exceeding the motor's duty cycle by frequent starts or multiple valve stroking. Exposure to high heat and moisture due to packing leaks, pressure seal ring leakage or other causes can significantly accelerate the degradation. ComEd and Liberty Technologies have worked together to provide and validate a non-intrusive method using motor power diagnostics to evaluate MOV rotor condition and predict failure. These techniques have provided a quick, low radiation dose method to evaluate inaccessible motors, identify degradation and allow scheduled replacement of motors prior to catastrophic failures

  4. Empirical Flutter Prediction Method.

    Science.gov (United States)

    1988-03-05

    been used in this way to discover species or subspecies of animals, and to discover different types of voter or consumer requiring different persuasions...respect to behavior or performance or response variables. Once this were done, corresponding clusters might be sought among descriptive or predictive or...jump in a response. The first sort of usage does not apply to the flutter prediction problem. Here the types of behavior are the different kinds of

  5. Prediction method abstracts

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-31

    This conference was held December 4-8, 1994 in Asilomar, California. The purpose of this meeting was to provide a forum for exchange of state-of-the-art information concerning the prediction of protein structure. Attention is focused on the following: comparative modeling; sequence to fold assignment; and ab initio folding.

  6. Earthquake prediction by Kina Method

    International Nuclear Information System (INIS)

    Kianoosh, H.; Keypour, H.; Naderzadeh, A.; Motlagh, H.F.

    2005-01-01

    Earthquake prediction has been one of mankind's earliest desires, and scientists have worked hard at it for a long time. The results of these efforts can generally be divided into two methods of prediction: 1) statistical methods, and 2) empirical methods. In the first, earthquakes are predicted using statistics and probabilities, while the second utilizes a variety of precursors for earthquake prediction. The latter is time consuming and more costly. However, neither method has yet given fully satisfactory results. In this paper a new method, entitled the 'Kiana Method', is introduced for earthquake prediction. This method offers more accurate results at lower cost compared to other conventional methods. In the Kiana method the electrical and magnetic precursors are measured in an area. Then, the time and magnitude of a future earthquake are calculated using electrical formulas, in particular those for electrical capacitors. In this method, daily measurement of electrical resistance in an area establishes whether or not the area is capable of producing an earthquake in the future. If the result is positive, the occurrence time and the magnitude can be estimated from the measured quantities. This paper explains the procedure and details of this prediction method. (authors)

  7. Predictive Methods of Pople

    Indian Academy of Sciences (India)

    Chemistry for their pioneering contributions to the development of computational methods in quantum chemistry and density functional theory .... program of Pople for ab-initio electronic structure calculation of molecules. This ab-initio MO ...

  8. Rainfall prediction with backpropagation method

    Science.gov (United States)

    Wahyuni, E. G.; Fauzan, L. M. F.; Abriyani, F.; Muchlis, N. F.; Ulfa, M.

    2018-03-01

    Rainfall is an important factor in many fields, such as aviation and agriculture. Although forecasting is assisted by technology, its accuracy cannot reach 100% and there is still the possibility of error, even though up-to-date rainfall predictions are needed in these fields. In agriculture, farmers depend heavily on weather conditions, especially rainfall, to obtain abundant, high-quality yields; rainfall is also one of the factors that affect aircraft safety. To address these problems, a system that can accurately predict rainfall is required. In this research, artificial neural network modeling is applied to rainfall prediction using the backpropagation method. Backpropagation can achieve better performance through repeated training, meaning that the interconnection weights of the ANN can approach their proper values. Another advantage of this method is its adaptive learning process: the multilayer structure allows weight changes that minimize error (fault tolerance). The method can therefore provide good system resilience and consistently work well. The network is designed with 4 input variables, namely air temperature, air humidity, wind speed, and sunshine duration, and 3 output variables, i.e. low rainfall, medium rainfall, and high rainfall. Based on the research that has been done, the network can be used properly, as evidenced by the system's precipitation predictions matching the results of manual calculations.
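
    The record above describes a feed-forward network with four weather inputs and three rainfall classes trained by backpropagation. As a rough, self-contained illustration only (the paper's data, layer sizes and training settings are not given here, so the synthetic data and hyperparameters below are assumptions), such a network can be sketched in a few lines of NumPy:

```python
# Minimal sketch (not the authors' code): a 4-input, 3-class feed-forward
# network trained with backpropagation, as described in the abstract.
# The synthetic data, layer sizes and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "weather" data: temperature, humidity, wind speed, sunshine duration
X = rng.random((200, 4))
# Synthetic labels: 0 = low, 1 = medium, 2 = high rainfall
y = (X[:, 1] * 2.9).astype(int).clip(0, 2)
Y = np.eye(3)[y]                      # one-hot targets

n_hidden, lr = 8, 0.1
W1 = rng.normal(0, 0.5, (4, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 3)); b2 = np.zeros(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)           # softmax class probabilities

    # Backward pass (cross-entropy loss gradient)
    dlogits = (P - Y) / len(X)
    dW2 = H.T @ dlogits;  db2 = dlogits.sum(axis=0)
    dH = dlogits @ W2.T * H * (1 - H)           # sigmoid derivative
    dW1 = X.T @ dH;       db1 = dH.sum(axis=0)

    # Gradient-descent weight update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

pred = P.argmax(axis=1)
print("training accuracy:", (pred == y).mean())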

  9. Ensemble method for dengue prediction.

    Science.gov (United States)

    Buczak, Anna L; Baugher, Benjamin; Moniz, Linda J; Bagley, Thomas; Babin, Steven M; Guven, Erhan

    2018-01-01

    In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for transmission season and peak height for Iquitos, Peru.
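
    One of the component model families named above is additive seasonal Holt-Winters smoothing, combined with simpler historical models into an ensemble. The sketch below is not the authors' implementation: it applies the additive Holt-Winters recursions (with a simple initialisation) to a synthetic weekly case-count series and averages the forecast with a naive historical-mean model; the smoothing constants, season length and data are assumptions.

```python
# Illustrative sketch only: additive Holt-Winters recursions plus a naive
# historical model, combined by simple averaging as one possible "ensemble".
# Smoothing parameters, season length and the synthetic series are assumptions.
import numpy as np

def holt_winters_additive(x, m, alpha=0.3, beta=0.05, gamma=0.2, horizon=12):
    """Additive Holt-Winters with simple initialisation; returns point forecasts."""
    level = x[:m].mean()
    trend = (x[m:2 * m].mean() - x[:m].mean()) / m
    season = list(x[:m] - level)
    for t in range(m, len(x)):
        last_level = level
        level = alpha * (x[t] - season[t - m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season.append(gamma * (x[t] - level) + (1 - gamma) * season[t - m])
    return np.array([level + (h + 1) * trend + season[len(x) - m + (h % m)]
                     for h in range(horizon)])

rng = np.random.default_rng(1)
weeks = np.arange(4 * 52)
cases = 50 + 40 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 5, weeks.size)

hw_forecast = holt_winters_additive(cases, m=52, horizon=52)
historical = np.array([cases[week::52].mean() for week in range(52)])  # naive model
ensemble = (hw_forecast + historical) / 2.0                            # equal-weight ensemble

print("predicted peak height:", ensemble.max().round(1),
      "at week", int(ensemble.argmax()) + 1,
      "| season total:", ensemble.sum().round(0))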

  10. Ensemble method for dengue prediction.

    Directory of Open Access Journals (Sweden)

    Anna L Buczak

    Full Text Available In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for transmission season and peak height for Iquitos, Peru.

  11. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  12. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    OpenAIRE

    Jerzy Balicki; Piotr Dryja; Waldemar Korłub; Piotr Przybyłek; Maciej Tyszka; Marcin Zadroga; Marcin Zakidalski

    2016-01-01

    Artificial neural networks can be used to predict share investments on the stock market, assess the reliability of credit clients, or predict banking crises. Moreover, this paper discusses the principles of cooperation between neural network algorithms, evolutionary methods, and support vector machines. In addition, reference is made to other methods of artificial intelligence that are used in financial prediction.

  13. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2016-06-01

    Full Text Available Artificial neural networks can be used to predict share investments on the stock market, assess the reliability of credit clients, or predict banking crises. Moreover, this paper discusses the principles of cooperation between neural network algorithms, evolutionary methods, and support vector machines. In addition, reference is made to other methods of artificial intelligence that are used in financial prediction.

  14. Prediction methods and databases within chemoinformatics

    DEFF Research Database (Denmark)

    Jónsdóttir, Svava Osk; Jørgensen, Flemming Steen; Brunak, Søren

    2005-01-01

    MOTIVATION: To gather information about available databases and chemoinformatics methods for prediction of properties relevant to the drug discovery and optimization process. RESULTS: We present an overview of the most important databases with 2-dimensional and 3-dimensional structural information...... about drugs and drug candidates, and of databases with relevant properties. Access to experimental data and numerical methods for selecting and utilizing these data is crucial for developing accurate predictive in silico models. Many interesting predictive methods for classifying the suitability...

  15. Machine learning methods for metabolic pathway prediction

    Directory of Open Access Journals (Sweden)

    Karp Peter D

    2010-01-01

    Full Text Available Abstract Background A key challenge in systems biology is the reconstruction of an organism's metabolic network from its genome sequence. One strategy for addressing this problem is to predict which metabolic pathways, from a reference database of known pathways, are present in the organism, based on the annotated genome of the organism. Results To quantitatively validate methods for pathway prediction, we developed a large "gold standard" dataset of 5,610 pathway instances known to be present or absent in curated metabolic pathway databases for six organisms. We defined a collection of 123 pathway features, whose information content we evaluated with respect to the gold standard. Feature data were used as input to an extensive collection of machine learning (ML) methods, including naïve Bayes, decision trees, and logistic regression, together with feature selection and ensemble methods. We compared the ML methods to the previous PathoLogic algorithm for pathway prediction using the gold standard dataset. We found that ML-based prediction methods can match the performance of the PathoLogic algorithm. PathoLogic achieved an accuracy of 91% and an F-measure of 0.786. The ML-based prediction methods achieved accuracy as high as 91.2% and F-measure as high as 0.787. The ML-based methods output a probability for each predicted pathway, whereas PathoLogic does not, which provides more information to the user and facilitates filtering of predicted pathways. Conclusions ML methods for pathway prediction perform as well as existing methods, and have qualitative advantages in terms of extensibility, tunability, and explainability. More advanced prediction methods and/or more sophisticated input features may improve the performance of ML methods. However, pathway prediction performance appears to be limited largely by the ability to correctly match enzymes to the reactions they catalyze based on genome annotations.

  16. Machine learning methods for metabolic pathway prediction

    Science.gov (United States)

    2010-01-01

    Background A key challenge in systems biology is the reconstruction of an organism's metabolic network from its genome sequence. One strategy for addressing this problem is to predict which metabolic pathways, from a reference database of known pathways, are present in the organism, based on the annotated genome of the organism. Results To quantitatively validate methods for pathway prediction, we developed a large "gold standard" dataset of 5,610 pathway instances known to be present or absent in curated metabolic pathway databases for six organisms. We defined a collection of 123 pathway features, whose information content we evaluated with respect to the gold standard. Feature data were used as input to an extensive collection of machine learning (ML) methods, including naïve Bayes, decision trees, and logistic regression, together with feature selection and ensemble methods. We compared the ML methods to the previous PathoLogic algorithm for pathway prediction using the gold standard dataset. We found that ML-based prediction methods can match the performance of the PathoLogic algorithm. PathoLogic achieved an accuracy of 91% and an F-measure of 0.786. The ML-based prediction methods achieved accuracy as high as 91.2% and F-measure as high as 0.787. The ML-based methods output a probability for each predicted pathway, whereas PathoLogic does not, which provides more information to the user and facilitates filtering of predicted pathways. Conclusions ML methods for pathway prediction perform as well as existing methods, and have qualitative advantages in terms of extensibility, tunability, and explainability. More advanced prediction methods and/or more sophisticated input features may improve the performance of ML methods. However, pathway prediction performance appears to be limited largely by the ability to correctly match enzymes to the reactions they catalyze based on genome annotations. PMID:20064214
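
    The accuracy and F-measure figures quoted in the two records above are standard classification metrics. Purely as a reminder of how such numbers are produced (this is not the authors' pipeline; the "pathway feature" matrix and labels below are synthetic), a short scikit-learn sketch:

```python
# Sketch only: a naive Bayes pathway-presence classifier evaluated with the
# accuracy and F-measure metrics quoted in the abstract. The synthetic
# "pathway feature" matrix and labels are invented for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(42)
n_pathways, n_features = 1000, 20          # stand-ins for the 5,610 x 123 gold standard
X = rng.normal(size=(n_pathways, n_features))
# Label "present" when a weighted sum of features exceeds a threshold (synthetic rule)
y = (X @ rng.normal(size=n_features) + rng.normal(0, 1, n_pathways) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy :", round(accuracy_score(y_te, pred), 3))
print("F-measure:", round(f1_score(y_te, pred), 3))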

  17. Method for Predicting Thermal Buckling in Rails

    Science.gov (United States)

    2018-01-01

    A method is proposed herein for predicting the onset of thermal buckling in rails in such a way as to provide a means of avoiding this type of potentially devastating failure. The method consists of the development of a thermomechanical model of rail...

  18. Prediction Methods for Blood Glucose Concentration

    DEFF Research Database (Denmark)

    “Recent Results on Glucose–Insulin Predictions by Means of a State Observer for Time-Delay Systems” by Pasquale Palumbo et al. introduces a prediction model which in real time predicts the insulin concentration in blood which in turn is used in a control system. The method is tested in simulation...... EEG signals to predict upcoming hypoglycemic situations in real-time by employing artificial neural networks. The results of a 30-day long clinical study with the implanted device and the developed algorithm are presented. The chapter “Meta-Learning Based Blood Glucose Predictor for Diabetic......, but the insulin amount is chosen using factors that account for this expectation. The increasing availability of more accurate continuous blood glucose measurement (CGM) systems is attracting much interest to the possibilities of explicit prediction of future BG values. Against this background, in 2014 a two...

  19. A method for predicting monthly rainfall patterns

    International Nuclear Information System (INIS)

    Njau, E.C.

    1987-11-01

    A brief survey is made of previous methods that have been used to predict rainfall trends or drought spells in different parts of the earth. The basic methodologies or theoretical strategies used in these methods are compared with contents of a recent theory of Sun-Weather/Climate links (Njau, 1985a; 1985b; 1986; 1987a; 1987b; 1987c) which point towards the possibility of practical climatic predictions. It is shown that not only is the theoretical basis of each of these methodologies or strategies fully incorporated into the above-named theory, but also this theory may be used to develop a technique by which future monthly rainfall patterns can be predicted in further and finer details. We describe the latter technique and then illustrate its workability by means of predictions made on monthly rainfall patterns in some East African meteorological stations. (author). 43 refs, 11 figs, 2 tabs

  20. Investigation into Methods for Predicting Connection Temperatures

    Directory of Open Access Journals (Sweden)

    K. Anderson

    2009-01-01

    Full Text Available The mechanical response of connections in fire is largely based on material strength degradation and the interactions between the various components of the connection. In order to predict connection performance in fire, temperature profiles must initially be established in order to evaluate the material strength degradation over time. This paper examines two current methods for predicting connection temperatures: The percentage method, where connection temperatures are calculated as a percentage of the adjacent beam lower-flange, mid-span temperatures; and the lumped capacitance method, based on the lumped mass of the connection. Results from the percentage method do not correlate well with experimental results, whereas the lumped capacitance method shows much better agreement with average connection temperatures. A 3D finite element heat transfer model was also created in Abaqus, and showed good correlation with experimental results. 
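
    The lumped capacitance method mentioned above treats the connection as a single thermal mass exchanging heat with the fire gases. A minimal sketch of that energy balance, stepped explicitly in time against the ISO 834 standard fire curve, is shown below; the section factor, material properties and heat-transfer coefficients are illustrative assumptions rather than values from the paper.

```python
# Illustrative lumped-capacitance sketch (not the paper's model): the connection
# is treated as a single mass heated by convection and radiation from the fire
# gases. Section factor, steel properties and the ISO 834 gas curve are assumptions.
import numpy as np

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2K^4
h_c, eps = 25.0, 0.7     # convective coefficient W/m^2K, resultant emissivity
rho, c_p = 7850.0, 600.0 # steel density kg/m^3 and specific heat J/kgK (simplified)
Am_V = 100.0             # section factor A_m/V of the connection zone, 1/m

dt = 1.0                                     # time step, s
t = np.arange(0, 3600 + dt, dt)              # one hour of fire exposure
T_gas = 20.0 + 345.0 * np.log10(8.0 * t / 60.0 + 1.0)   # ISO 834 standard fire, deg C

T_steel = np.empty_like(t)
T_steel[0] = 20.0
for i in range(1, t.size):
    Tg, Ts = T_gas[i - 1] + 273.15, T_steel[i - 1] + 273.15      # to kelvin
    h_net = h_c * (Tg - Ts) + eps * SIGMA * (Tg**4 - Ts**4)      # net heat flux, W/m^2
    T_steel[i] = T_steel[i - 1] + dt * Am_V * h_net / (rho * c_p)

print("connection temperature after 30 min: %.0f C" % T_steel[int(1800 / dt)])
print("connection temperature after 60 min: %.0f C" % T_steel[-1])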

  1. Soft Computing Methods for Disulfide Connectivity Prediction.

    Science.gov (United States)

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.

  2. New prediction methods for collaborative filtering

    Directory of Open Access Journals (Sweden)

    Hasan BULUT

    2016-05-01

    Full Text Available Companies, in particular e-commerce companies, aim to increase customer satisfaction, and hence their profits, using recommender systems. Recommender systems are widely used nowadays and provide strategic advantages to the companies that use them. These systems consist of different stages. In the first stage, the similarities between the active user and other users are computed using the user-product ratings matrix. Then, the neighbors of the active user are found from these similarities. In the prediction calculation stage, the similarities computed at the first stage are used to generate the weight vector of the closer neighbors. Neighbors affect the prediction value by the corresponding value of the weight vector. In this study, we developed two new methods for the prediction calculation stage, which is the last stage of collaborative filtering. The performance of these methods is measured with evaluation metrics used in the literature and compared with other studies in this field.
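
    The prediction stage described above weights the ratings of the active user's neighbours by their similarity. The sketch below shows the classical user-based baseline that this stage builds on (cosine similarity with mean-centred, similarity-weighted neighbour ratings); the toy ratings matrix and neighbourhood size are assumptions, and the paper's two new prediction methods are not reproduced here.

```python
# Sketch of the classical user-based prediction step the abstract builds on
# (cosine similarity + weighted, mean-centred neighbour ratings). The toy
# ratings matrix and neighbourhood size k are invented; the paper's two new
# prediction methods are not reproduced here.
import numpy as np

R = np.array([[5, 3, 0, 1],      # 0 = not rated
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)

def predict(R, user, item, k=2):
    rated = R[:, item] > 0
    rated[user] = False
    candidates = np.where(rated)[0]
    # Cosine similarity between the active user and each candidate neighbour
    sims = np.array([
        R[user] @ R[v] / (np.linalg.norm(R[user]) * np.linalg.norm(R[v]) + 1e-12)
        for v in candidates])
    order = np.argsort(sims)[::-1][:k]        # closest k neighbours
    top, w = candidates[order], sims[order]
    user_mean = R[user][R[user] > 0].mean()
    # Weighted deviation of neighbours' ratings from their own means
    dev = np.array([R[v, item] - R[v][R[v] > 0].mean() for v in top])
    return user_mean + (w @ dev) / (np.abs(w).sum() + 1e-12)

print("predicted rating of user 0 for item 2: %.2f" % predict(R, user=0, item=2))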

  3. Novel hyperspectral prediction method and apparatus

    Science.gov (United States)

    Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf

    2009-05-01

    The power and the challenge of hyperspectral technologies both lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and by statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-Cal™ software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction Engine™ module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing and remote sensing.

  4. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network using four different technical indicators, which are based on the raw data, and the current day of the week is presented. The network developed is used for forecasting movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is an algorithm with back propagation of the error. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  5. Prediction methods environmental-effect reporting

    International Nuclear Information System (INIS)

    Jonker, R.J.; Koester, H.W.

    1987-12-01

    This report provides a survey of prediction methods which can be applied to the calculation of emissions in nuclear-reactor accidents, in the framework of environmental-effect reports (Dutch m.e.r.) or risk analyses. Emissions during normal operation are also important for m.e.r. These can be derived from measured emissions of power plants in operation; data concerning the latter are reported. The report consists of an introduction to reactor technology, including a description of some reactor types, the corresponding fuel cycle and dismantling scenarios; a discussion of risk analyses for nuclear power plants and the physical processes which can play a role during accidents; a discussion of prediction methods to be employed and the expected developments in this area; and some background information. (author). 145 refs.; 21 figs.; 20 tabs

  6. A comparison of methods for cascade prediction

    OpenAIRE

    Guo, Ruocheng; Shakarian, Paulo

    2016-01-01

    Information cascades exist in a wide variety of platforms on the Internet. A very important real-world problem is to identify which information cascades can go viral. A system addressing this problem can be used in a variety of applications including public health, marketing and counter-terrorism. A cascade can be considered as a compound of the social network and the time series. However, in the related literature where methods for solving the cascade prediction problem were proposed, the experimen...

  7. Hybrid methods for airframe noise numerical prediction

    Energy Technology Data Exchange (ETDEWEB)

    Terracol, M.; Manoha, E.; Herrero, C.; Labourasse, E.; Redonnet, S. [ONERA, Department of CFD and Aeroacoustics, BP 72, Chatillon (France); Sagaut, P. [Laboratoire de Modelisation en Mecanique - UPMC/CNRS, Paris (France)

    2005-07-01

    This paper describes some significant steps made towards the numerical simulation of the noise radiated by the high-lift devices of a plane. Since the full numerical simulation of such a configuration is still out of reach for present supercomputers, some hybrid strategies have been developed to reduce the overall cost of such simulations. The proposed strategy relies on the coupling of an unsteady nearfield CFD with an acoustic propagation solver based on the resolution of the Euler equations for midfield propagation in an inhomogeneous field, and the use of an integral solver for farfield acoustic predictions. In the first part of this paper, this CFD/CAA coupling strategy is presented. In particular, the numerical method used in the propagation solver is detailed, and two applications of this coupling method to the numerical prediction of the aerodynamic noise of an airfoil are presented. Then, a hybrid RANS/LES method is proposed in order to perform some unsteady simulations of complex noise sources. This method allows for a significant reduction of the cost of such a simulation by considerably reducing the extent of the LES zone. This method is described and some results of the numerical simulation of the three-dimensional unsteady flow in the slat cove of a high-lift profile are presented. While these results remain very difficult to validate with experiments on similar configurations, they represent up to now the first 3D computations of this kind of flow. (orig.)

  8. Mechatronics technology in predictive maintenance method

    Science.gov (United States)

    Majid, Nurul Afiqah A.; Muthalif, Asan G. A.

    2017-11-01

    This paper presents recent mechatronics technology that can help to implement predictive maintenance by combining intelligent instrumentation with predictive maintenance practice. The Vibration Fault Simulation System (VFSS) is an example of such a mechatronics system. The focus of this study is the use of vibration detection on critical machines for prediction. Vibration measurement is often used as the key indicator of the state of the machine. This paper shows the choice of the appropriate strategy in the vibration diagnostics of mechanical systems, especially rotating machines, to recognize failure during the working process. In this paper, vibration signature analysis is implemented to detect faults in rotating machinery, including imbalance, mechanical looseness, bent shaft, misalignment, missing blade, bearing fault, balancing mass and critical speed. In order to perform vibration signature analysis for rotating machinery faults, studies have been made on how mechatronics technology is used in predictive maintenance methods. A Vibration Faults Simulation Rig (VFSR) is designed to simulate and understand fault signatures. These techniques are based on the processing of vibration data in the frequency domain. A LabVIEW-based spectrum analyzer software is developed to acquire and extract the frequency contents of fault signals. This system is successfully tested based on the unique vibration fault signatures that always occur in rotating machinery.
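
    Vibration signature analysis of the kind described is typically carried out by transforming the accelerometer signal to the frequency domain and inspecting peaks at characteristic fault frequencies. The generic sketch below (not the authors' LabVIEW software) does this for a synthetic signal containing a 1x imbalance-like component and a 2x looseness-like component; the running speed, amplitudes, noise level and sampling rate are assumptions.

```python
# Generic frequency-domain sketch (not the authors' LabVIEW software): a
# synthetic vibration signal with 1x (imbalance-like) and 2x (looseness-like)
# components is transformed with an FFT and the dominant peaks are listed.
# Running speed, amplitudes, noise level and sampling rate are assumptions.
import numpy as np

fs = 5000.0                         # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)     # 2 s record
f_run = 29.5                        # running speed, Hz (~1770 rpm)

rng = np.random.default_rng(7)
signal = (1.0 * np.sin(2 * np.pi * f_run * t)          # 1x: imbalance-like
          + 0.4 * np.sin(2 * np.pi * 2 * f_run * t)    # 2x: looseness-like
          + 0.1 * rng.standard_normal(t.size))         # broadband noise

# Test tones fall exactly on FFT bins here, so no window is applied in this toy
spectrum = 2.0 * np.abs(np.fft.rfft(signal)) / t.size   # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

# Report the strongest spectral lines as fault-frequency candidates
for idx in np.argsort(spectrum)[::-1][:3]:
    print("peak at %.1f Hz (%.2f x running speed), amplitude %.3f"
          % (freqs[idx], freqs[idx] / f_run, spectrum[idx]))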

  9. Prediction Methods for Blood Glucose Concentration

    DEFF Research Database (Denmark)

    -day workshop on the design, use and evaluation of prediction methods for blood glucose concentration was held at the Johannes Kepler University Linz, Austria. One intention of the workshop was to bring together experts working in various fields on the same topic, in order to shed light from different angles...... discussions which allowed to receive direct feedback from the point of view of different disciplines. This book is based on the contributions of that workshop and is intended to convey an overview of the different aspects involved in the prediction. The individual chapters are based on the presentations given...... in the process of writing this book: All authors for their individual contributions, all reviewers of the book chapters, Daniela Hummer for the entire organization of the workshop, Boris Tasevski for helping with the typesetting, Florian Reiterer for his help editing the book, as well as Oliver Jackson and Karin...

  10. Computational predictive methods for fracture and fatigue

    Science.gov (United States)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-09-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damages developed during service remain below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  11. Seminal quality prediction using data mining methods.

    Science.gov (United States)

    Sahoo, Anoop J; Kumar, Yugal

    2014-01-01

    Nowadays, some new classes of diseases have come into existence which are known as lifestyle diseases. The main reasons behind these diseases are changes in the lifestyle of people, such as alcohol drinking, smoking, food habits etc. After going through the various lifestyle diseases, it has been found that the fertility rate (sperm quantity) in men has been decreasing considerably in the last two decades. Lifestyle factors as well as environmental factors are mainly responsible for the change in semen quality. The objective of this paper is to identify the lifestyle and environmental features that affect seminal quality and also the fertility rate in men using data mining methods. Five artificial intelligence techniques, namely Multilayer Perceptron (MLP), Decision Tree (DT), Naive Bayes (Kernel), Support Vector Machine plus Particle Swarm Optimization (SVM+PSO) and Support Vector Machine (SVM), have been applied to a fertility dataset to evaluate the seminal quality and also to predict whether a person is normal or has an altered fertility rate. Eight feature selection techniques, namely support vector machine (SVM), neural network (NN), evolutionary logistic regression (LR), support vector machine plus particle swarm optimization (SVM+PSO), principal component analysis (PCA), chi-square test, correlation and T-test methods, have been used to identify the more relevant features which affect the seminal quality. These techniques are applied to a fertility dataset which contains 100 instances with nine attributes and two classes. The experimental results show that SVM+PSO provides a higher accuracy and area under curve (AUC) rate (94% & 0.932) than multi-layer perceptron (MLP) (92% & 0.728), Support Vector Machines (91% & 0.758), Naive Bayes (Kernel) (89% & 0.850) and Decision Tree (89% & 0.735) for some of the seminal parameters. This paper also focuses on the feature selection process, i.e. how to select the features which are more important for prediction of

  12. New methods for fall risk prediction.

    Science.gov (United States)

    Ejupi, Andreas; Lord, Stephen R; Delbaere, Kim

    2014-09-01

    Accidental falls are the leading cause of injury-related death and hospitalization in old age, with over one-third of older adults experiencing one or more falls each year. Because of limited healthcare resources, regular objective fall risk assessments are not possible in the community on a large scale. New methods for fall prediction are necessary to identify and monitor those older people at high risk of falling who would benefit from participating in falls prevention programmes. Technological advances have enabled less expensive ways to quantify physical fall risk in clinical practice and in the homes of older people. Recently, several studies have demonstrated that sensor-based fall risk assessments of postural sway, functional mobility, stepping and walking can discriminate between fallers and nonfallers. Recent research has used low-cost, portable and objective measuring instruments to assess fall risk in older people. Future use of these technologies holds promise for assessing fall risk accurately in an unobtrusive manner in clinical and daily life settings.

  13. Analytical methods for predicting contaminant transport

    International Nuclear Information System (INIS)

    Pigford, T.H.

    1989-09-01

    This paper summarizes some of the previous and recent work at the University of California on analytical solutions for predicting contaminant transport in porous and fractured geologic media. Emphasis is given here to the theories for predicting near-field transport, needed to derive the time-dependent source term for predicting far-field transport and overall repository performance. New theories summarized include solubility-limited release rate with flow through backfill in rock, near-field transport of radioactive decay chains, interactive transport of colloid and solute, transport of carbon-14 as carbon dioxide in unsaturated rock, and flow of gases out of a waste container through cracks and penetrations. 28 refs., 4 figs

  14. Machine Learning Methods to Predict Diabetes Complications.

    Science.gov (United States)

    Dagliati, Arianna; Marini, Simone; Sacchi, Lucia; Cogni, Giulia; Teliti, Marsida; Tibollo, Valentina; De Cata, Pasquale; Chiovato, Luca; Bellazzi, Riccardo

    2018-03-01

    One of the areas where Artificial Intelligence is having more impact is machine learning, which develops algorithms able to learn patterns and decision rules from data. Machine learning algorithms have been embedded into data mining pipelines, which can combine them with classical statistical strategies, to extract knowledge from data. Within the EU-funded MOSAIC project, a data mining pipeline has been used to derive a set of predictive models of type 2 diabetes mellitus (T2DM) complications based on electronic health record data of nearly one thousand patients. Such pipeline comprises clinical center profiling, predictive model targeting, predictive model construction and model validation. After having dealt with missing data by means of random forest (RF) and having applied suitable strategies to handle class imbalance, we have used Logistic Regression with stepwise feature selection to predict the onset of retinopathy, neuropathy, or nephropathy, at different time scenarios, at 3, 5, and 7 years from the first visit at the Hospital Center for Diabetes (not from the diagnosis). Considered variables are gender, age, time from diagnosis, body mass index (BMI), glycated hemoglobin (HbA1c), hypertension, and smoking habit. Final models, tailored in accordance with the complications, provided an accuracy up to 0.838. Different variables were selected for each complication and time scenario, leading to specialized models easy to translate to the clinical practice.
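
    The modelling step described above is logistic regression with stepwise feature selection. As a sketch of that step only (the patient table below is synthetic and merely echoes the variable names in the abstract; this is not the MOSAIC pipeline), a greedy forward selection driven by cross-validated accuracy might look like this:

```python
# Sketch of the final modelling step described in the abstract: logistic
# regression with greedy forward (stepwise) feature selection, scored by
# cross-validated accuracy. The synthetic patient table is invented; only the
# variable names echo the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 800
features = ["gender", "age", "time_from_diagnosis", "bmi", "hba1c",
            "hypertension", "smoking"]
X = np.column_stack([
    rng.integers(0, 2, n),            # gender
    rng.normal(62, 10, n),            # age
    rng.exponential(6, n),            # years since diagnosis
    rng.normal(29, 5, n),             # BMI
    rng.normal(7.5, 1.2, n),          # HbA1c
    rng.integers(0, 2, n),            # hypertension
    rng.integers(0, 2, n),            # smoking habit
])
# Synthetic outcome: complication onset driven mainly by HbA1c and duration
logit = 0.8 * (X[:, 4] - 7.5) + 0.15 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

selected, remaining, best_score = [], list(range(len(features))), 0.0
while remaining:
    scores = {j: cross_val_score(LogisticRegression(max_iter=1000),
                                 X[:, selected + [j]], y, cv=5).mean()
              for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score:          # stop when no feature helps
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected features:", [features[j] for j in selected])
print("cross-validated accuracy: %.3f" % best_score)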

  15. Different Methods of Predicting Permeability in Shale

    DEFF Research Database (Denmark)

    Mbia, Ernest Ncha; Fabricius, Ida Lykke; Krogsbøll, Anette

    by two to five orders of magnitude at lower vertical effective stress, below 40 MPa, as the content of clay minerals increases, causing heterogeneity in the shale material. Indirect permeability from consolidation can give maximum and minimum values of shale permeability needed in simulating fluid flow......Permeability is often very difficult to measure or predict in shale lithology. In this work we are determining shale permeability from consolidation test data using the Wissa et al. (1971) approach and comparing the results with permeability predicted from Kozeny's model. Core and cuttings materials...... effective stress to 9 μD at a high vertical effective stress of 100 MPa. The indirect permeability calculated from consolidation tests falls in the same magnitude at higher vertical effective stress, above 40 MPa, as that of the Kozeny model for shale samples with high non-clay content ≥ 70% but are higher
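
    For orientation, one common textbook form of Kozeny's relation expresses permeability through porosity and specific surface. The exact variant and constant used in the paper cannot be read from this snippet, so the formula, Kozeny factor and input values below are labelled assumptions:

```python
# Hedged illustration only: one common form of Kozeny's relation,
#   k = c * phi**3 / S**2,
# with phi the porosity, S the specific surface per unit bulk volume and c an
# empirical Kozeny factor. Whether this is the exact variant and constant used
# in the paper cannot be read from the snippet; the numbers below are assumptions.

def kozeny_permeability(phi, S, c=0.25):
    """Permeability in m^2 from porosity (-), specific surface S (1/m), factor c."""
    return c * phi**3 / S**2

# Assumed shale-like inputs: porosity 5-25 %, specific surface 1e7-1e8 1/m
for phi in (0.05, 0.15, 0.25):
    for S in (1e7, 1e8):
        k = kozeny_permeability(phi, S)
        # 1 darcy ~ 9.87e-13 m^2, so 1 microdarcy ~ 9.87e-19 m^2
        print("phi=%.2f  S=%.0e 1/m  ->  k = %.2e m^2 (%.2f microdarcy)"
              % (phi, S, k, k / 9.87e-19))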

  16. Connecting clinical and actuarial prediction with rule-based methods

    NARCIS (Netherlands)

    Fokkema, M.; Smits, N.; Kelderman, H.; Penninx, B.W.J.H.

    2015-01-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction

  17. Can Morphing Methods Predict Intermediate Structures?

    Science.gov (United States)

    Weiss, Dahlia R.; Levitt, Michael

    2009-01-01

    Movement is crucial to the biological function of many proteins, yet crystallographic structures of proteins can give us only a static snapshot. The protein dynamics that are important to biological function often happen on a timescale that is unattainable through detailed simulation methods such as molecular dynamics as they often involve crossing high-energy barriers. To address this coarse-grained motion, several methods have been implemented as web servers in which a set of coordinates is usually linearly interpolated from an initial crystallographic structure to a final crystallographic structure. We present a new morphing method that does not extrapolate linearly and can therefore go around high-energy barriers and which can produce different trajectories between the same two starting points. In this work, we evaluate our method and other established coarse-grained methods according to an objective measure: how close a coarse-grained dynamics method comes to a crystallographically determined intermediate structure when calculating a trajectory between the initial and final crystal protein structure. We test this with a set of five proteins with at least three crystallographically determined on-pathway high-resolution intermediate structures from the Protein Data Bank. For simple hinging motions involving a small conformational change, segmentation of the protein into two rigid sections outperforms other more computationally involved methods. However, large-scale conformational change is best addressed using a nonlinear approach and we suggest that there is merit in further developing such methods. PMID:18996395

  18. Prediction Methods in Science and Technology

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    Presents the H-principle, the Heisenberg modelling principle. General properties of the Heisenberg modelling procedure is developed. The theory is applied to principal component analysis and linear regression analysis. It is shown that the H-principle leads to PLS regression in case the task...... is linear regression analysis. The book contains different methods to find the dimensions of linear models, to carry out sensitivity analysis in latent structure models, variable selection methods and presentation of results from analysis....

  19. Generic methods for aero-engine exhaust emission prediction

    NARCIS (Netherlands)

    Shakariyants, S.A.

    2008-01-01

    In the thesis, generic methods have been developed for aero-engine combustor performance, combustion chemistry, as well as airplane aerodynamics, airplane and engine performance. These methods specifically aim to support diverse emission prediction studies coupled with airplane and engine

  20. Force prediction in cold rolling mills by polynomial methods

    Directory of Open Access Journals (Sweden)

    Nicu ROMAN

    2007-12-01

    Full Text Available A method for steel and aluminium strip thickness control is provided, including a new technique for predictive rolling force estimation using a statistical model based on polynomial techniques.
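
    As a generic illustration of force estimation by a polynomial statistical model (the paper's actual inputs and coefficients are not available in this snippet, so the single-variable fit and the synthetic mill data below are assumptions):

```python
# Generic sketch: fitting a polynomial model that predicts rolling force from a
# process variable (here, thickness reduction). Data and polynomial degree are
# invented for illustration; the paper's actual model is multivariate.
import numpy as np

rng = np.random.default_rng(5)
reduction = np.linspace(0.05, 0.40, 40)                   # fractional thickness reduction
force = 800 + 5200 * reduction + 2600 * reduction**2 \
        + rng.normal(0, 40, reduction.size)               # synthetic mill data, kN

coeffs = np.polyfit(reduction, force, deg=2)              # least-squares polynomial fit
model = np.poly1d(coeffs)

print("fitted coefficients (quadratic, linear, constant):", np.round(coeffs, 1))
print("predicted force at 30 %% reduction: %.0f kN" % model(0.30))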

  1. An Approximate Method for Pitch-Damping Prediction

    National Research Council Canada - National Science Library

    Danberg, James

    2003-01-01

    ...) method for predicting the pitch-damping coefficients has been employed. The CFD method provides important details necessary to derive the correlation functions that are unavailable from the current experimental database...

  2. A Versatile Nonlinear Method for Predictive Modeling

    Science.gov (United States)

    Liou, Meng-Sing; Yao, Weigang

    2015-01-01

    As computational fluid dynamics techniques and tools become widely accepted for real-world practice today, it is intriguing to ask: in which areas can they be utilized to their full potential in the future? Some promising areas include design optimization and exploration of fluid dynamics phenomena (the concept of a numerical wind tunnel), both of which have the common feature that some parameters are varied repeatedly and the computation can be costly. We are especially interested in the need for an accurate and efficient approach for handling these applications: (1) capturing complex nonlinear dynamics inherent in a system under consideration and (2) versatility (robustness) to encompass a range of parametric variations. In our previous paper, we proposed to use first-order Taylor expansions collected at numerous sampling points along a trajectory and assembled together via nonlinear weighting functions. The validity and performance of this approach was demonstrated for a number of problems with vastly different input functions. In this study, we are especially interested in enhancing the method's accuracy; we extend it to include the second-order Taylor expansion, which however requires a complicated evaluation of Hessian matrices for a system of equations, as in fluid dynamics. We propose a method to avoid these Hessian matrices, while maintaining the accuracy. Results based on the method are presented to confirm its validity.
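
    The approach summarised above blends local first-order Taylor expansions through nonlinear weighting functions. A one-dimensional toy sketch of that idea is given below; the test function, sampling points and Gaussian kernel width are assumptions, and the paper itself extends the idea to second order for systems of equations.

```python
# Toy sketch of the idea described: local first-order Taylor expansions at a few
# sampling points, blended by nonlinear (here Gaussian) weights to form a
# surrogate prediction. The test function, sample locations and kernel width are
# assumptions; the paper extends this to second order for fluid-dynamic systems.
import numpy as np

def f(x):       return np.sin(3 * x) + 0.5 * x**2     # "expensive" model (stand-in)
def dfdx(x):    return 3 * np.cos(3 * x) + x          # its sensitivity/gradient

x_samples = np.linspace(-2.0, 2.0, 7)                 # sampling points along a "trajectory"
f_s, g_s = f(x_samples), dfdx(x_samples)

def surrogate(x, width=0.5):
    """Weighted sum of first-order Taylor expansions collected at the samples."""
    w = np.exp(-0.5 * ((x - x_samples) / width) ** 2)     # nonlinear weighting
    w /= w.sum()
    taylor = f_s + g_s * (x - x_samples)                  # local linear predictions
    return np.sum(w * taylor)

for x in (-1.3, 0.1, 1.7):
    print("x=%5.2f  exact=%7.3f  surrogate=%7.3f" % (x, f(x), surrogate(x)))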

  3. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-03-16

    Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To-date, many computational methods have been proposed for this purpose, but they suffer the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind
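
    DASPfind scores candidate drug-target pairs over simple paths in a graph that joins drug-drug similarities, known DTIs and target-target similarities. The sketch below illustrates that general idea on a toy graph with networkx; the path-length cutoff and the scoring rule (product of edge weights damped by path length) are illustrative assumptions, not the published DASPfind formula.

```python
# Toy illustration of the general idea behind simple-path DTI scoring: a graph
# joins drug-drug similarities, known drug-target interactions and
# target-target similarities, and a candidate pair is scored over the simple
# paths connecting it. The scoring rule and cutoff used here are assumptions,
# not the published DASPfind formula.
import networkx as nx

G = nx.Graph()
G.add_edge("drugA", "drugB", weight=0.8)     # drug-drug similarity
G.add_edge("drugB", "targetX", weight=1.0)   # known interaction
G.add_edge("targetX", "targetY", weight=0.6) # target-target similarity
G.add_edge("drugA", "targetY", weight=1.0)   # known interaction

def score(graph, drug, target, cutoff=4):
    total = 0.0
    for path in nx.all_simple_paths(graph, drug, target, cutoff=cutoff):
        w = 1.0
        for u, v in zip(path, path[1:]):
            w *= graph[u][v]["weight"]
        total += w / (len(path) - 1)          # damp longer paths (assumed rule)
    return total

# Rank an unobserved pair against another candidate for comparison
print("score(drugA, targetX) =", round(score(G, "drugA", "targetX"), 3))
print("score(drugB, targetY) =", round(score(G, "drugB", "targetY"), 3))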

  4. Deep learning methods for protein torsion angle prediction.

    Science.gov (United States)

    Li, Haiou; Hou, Jie; Adhikari, Badri; Lyu, Qiang; Cheng, Jianlin

    2017-09-18

    Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. We designed four different deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN) and a deep recurrent restricted Boltzmann machine (DReRBM), since protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30° on an independent dataset. The MAE of the phi angle is comparable to that of existing methods, but the MAE of the psi angle is 29°, 2° lower than that of existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Our experiment demonstrates that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
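
    The MAE values quoted above are computed on angles, so prediction errors should be wrapped into the periodic range before averaging. A small illustration with synthetic (not CASP) angle data:

```python
# Small illustration of the MAE metric used above for torsion angles, with the
# error wrapped into (-180, 180] degrees before averaging. The "true" and
# "predicted" angle arrays are synthetic, not CASP or dataset values.
import numpy as np

def angular_mae(true_deg, pred_deg):
    diff = (np.asarray(pred_deg) - np.asarray(true_deg) + 180.0) % 360.0 - 180.0
    return np.mean(np.abs(diff))

rng = np.random.default_rng(11)
phi_true = rng.uniform(-180, 180, 500)
phi_pred = phi_true + rng.normal(0, 25, 500)      # ~20 deg typical error, as quoted
psi_true = rng.uniform(-180, 180, 500)
psi_pred = psi_true + rng.normal(0, 37, 500)      # ~30 deg typical error

print("phi MAE: %.1f deg" % angular_mae(phi_true, phi_pred))
print("psi MAE: %.1f deg" % angular_mae(psi_true, psi_pred))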

  5. Computational methods in sequence and structure prediction

    Science.gov (United States)

    Lang, Caiyi

    This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks as the following: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our result shows there exists a putative cis-regulatory element "AC(C/G)TAC(C)" in the upstream of these enzyme genes. We propose this cis-regulatory element to be responsible for the genetic regulation of these three enzymes and this element might also be the binding site for MYB class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1 deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning the occurrence of several groups of known abscisic acid (ABA) related cis-regulatory elements in the upstream regions of 876 Arabidopsis genes; and (b) exhaustive scanning of all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, called "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to their occurrence within the genome. (c) We will introduce a new general purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework. With this framework we have developed

  6. Life prediction methods for the combined creep-fatigue endurance

    International Nuclear Information System (INIS)

    Wareing, J.; Lloyd, G.J.

    1980-09-01

    The basis and current status of development of the various approaches to the prediction of the combined creep-fatigue endurance are reviewed. It is concluded that an inadequate materials data base makes it difficult to draw sensible conclusions about the prediction capabilities of each of the available methods. Correlation with data for stainless steel 304 and 316 is presented. (U.K.)

  7. What Predicts Use of Learning-Centered, Interactive Engagement Methods?

    Science.gov (United States)

    Madson, Laura; Trafimow, David; Gray, Tara; Gutowitz, Michael

    2014-01-01

    What makes some faculty members more likely to use interactive engagement methods than others? We use the theory of reasoned action to predict faculty members' use of interactive engagement methods. Results indicate that faculty members' beliefs about the personal positive consequences of using these methods (e.g., "Using interactive…

  8. Method for Predicting Solubilities of Solids in Mixed Solvents

    DEFF Research Database (Denmark)

    Ellegaard, Martin Dela; Abildskov, Jens; O'Connell, J. P.

    2009-01-01

    A method is presented for predicting solubilities of solid solutes in mixed solvents, based on excess Henry's law constants. The basis is statistical mechanical fluctuation solution theory for composition derivatives of solute/solvent infinite dilution activity coefficients. Suitable approximatio...

  9. Fast Prediction Method for Steady-State Heat Convection

    KAUST Repository

    Wáng, Yì; Yu, Bo; Sun, Shuyu

    2012-01-01

    , the nonuniform POD-Galerkin projection method exhibits high accuracy, good suitability, and fast computation. It has universal significance for accurate and fast prediction. Also, the methodology can be applied to more complex modeling in chemical engineering

  10. Development of motion image prediction method using principal component analysis

    International Nuclear Information System (INIS)

    Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma

    2012-01-01

    Respiratory motion limits the accuracy of the area irradiated during lung cancer radiation therapy. Many methods have been introduced to minimize the irradiation of healthy tissue due to lung tumor motion. The purpose of this research is to develop an algorithm for the improvement of image-guided radiation therapy by the prediction of motion images. We predict the motion images by using principal component analysis (PCA) and the multi-channel singular spectrum analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. The implementation of this method in real time is believed to be significant for a higher level of tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)
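
    As a rough illustration of the PCA part of this idea, the frame sequence can be flattened into a snapshot matrix, projected onto a few leading modes, and the mode coefficients extrapolated one step ahead to synthesize the next frame. The sketch below uses a naive linear extrapolation of the coefficients where the paper uses MSSA, so it is an assumption-laden stand-in rather than the authors' algorithm.

    ```python
    import numpy as np

    def predict_next_frame(frames, n_components=3):
        """frames: array of shape (T, H, W). Project onto the leading PCA modes and
        linearly extrapolate each mode's time coefficient one step ahead."""
        T = frames.shape[0]
        X = frames.reshape(T, -1)
        mean = X.mean(axis=0)
        Xc = X - mean
        # PCA via SVD of the centred snapshot matrix
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        modes = Vt[:n_components]                 # spatial modes
        coeffs = Xc @ modes.T                     # time coefficients, shape (T, k)
        # naive linear extrapolation of each coefficient (the paper uses MSSA here)
        next_coeffs = 2 * coeffs[-1] - coeffs[-2]
        next_frame = mean + next_coeffs @ modes
        return next_frame.reshape(frames.shape[1:])

    frames = np.random.default_rng(0).random((30, 64, 64))   # stand-in image sequence
    predicted = predict_next_frame(frames)
    print(predicted.shape)
    ```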

  11. An assessment on epitope prediction methods for protozoa genomes

    Directory of Open Access Journals (Sweden)

    Resende Daniela M

    2012-11-01

    Full Text Available Abstract Background Epitope prediction using computational methods represents one of the most promising approaches to vaccine development. Reduction of time and cost, and the availability of completely sequenced genomes, are key points and highly motivating regarding the use of reverse vaccinology. Parasites of the genus Leishmania are widely spread and they are the etiologic agents of leishmaniasis. Currently, there is no efficient vaccine against this pathogen and the drug treatment is highly toxic. The lack of sufficiently large datasets of experimentally validated parasite epitopes represents a serious limitation, especially for trypanosomatid genomes. In this work we highlight the predictive performance of several algorithms that were evaluated through the development of a MySQL database built with the purpose of: (a) evaluating individual algorithm prediction performance, and the performance of their combination, for CD8+ T cell epitopes, B-cell epitopes and subcellular localization, by means of AUC (Area Under Curve) performance and a threshold-dependent method that employs a confusion matrix; (b) integrating data from experimentally validated and in silico predicted epitopes; and (c) integrating the subcellular localization predictions and experimental data. The NetCTL, NetMHC, BepiPred, BCPred12, and AAP12 algorithms were used for in silico epitope prediction, and WoLF PSORT, Sigcleave and TargetP for in silico subcellular localization prediction against trypanosomatid genomes. Results A database-driven epitope prediction method was developed with built-in functions that were capable of: (a) removing experimental data redundancy; (b) parsing algorithm predictions and storing experimentally validated and predicted data; and (c) evaluating algorithm performance. Results show that better performance is achieved when the combined prediction is considered. This is particularly true for B cell epitope predictors, where the combined prediction of AAP12 and BCPred12 reached an AUC value

  12. Assessment of a method for the prediction of mandibular rotation.

    Science.gov (United States)

    Lee, R S; Daniel, F J; Swartz, M; Baumrind, S; Korn, E L

    1987-05-01

    A new method to predict mandibular rotation developed by Skieller and co-workers on a sample of 21 implant subjects with extreme growth patterns has been tested against an alternative sample of 25 implant patients with generally similar mean values, but with less extreme facial patterns. The method, which had been highly successful in retrospectively predicting changes in the sample of extreme subjects, was much less successful in predicting individual patterns of mandibular rotation in the new, less extreme sample. The observation of a large difference in the strength of the predictions for these two samples, even though their mean values were quite similar, should serve to increase our awareness of the complexity of the problem of predicting growth patterns in individual cases.

  13. Performance prediction method for a multi-stage Knudsen pump

    Science.gov (United States)

    Kugimoto, K.; Hirota, Y.; Kizaki, Y.; Yamaguchi, H.; Niimi, T.

    2017-12-01

    In this study, a novel method to predict the performance of a multi-stage Knudsen pump is proposed. The performance prediction is carried out numerically in two steps with the assistance of a simple experimental result. In the first step, the performance of a single-stage Knudsen pump was measured experimentally under various pressure conditions, and the relationship of the mass flow rate to the average pressure between the inlet and outlet of the pump and the pressure difference between them was obtained. In the second step, the performance of a multi-stage pump was analyzed by a one-dimensional model derived from the mass conservation law. The performances predicted by the 1D model for 1-stage, 2-stage, 3-stage, and 4-stage pumps were validated by the experimental results for the corresponding number of stages. It was concluded that the proposed prediction method works properly.
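
    A minimal sketch of such a 1D series model is given below: if a single-stage characteristic (mass flow as a function of average pressure and pressure difference) is available, steady state requires the same mass flow through every stage, which fixes the intermediate pressures. The linear single-stage characteristic and its coefficients are hypothetical placeholders for the experimentally fitted relationship, not values from the paper.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    def single_stage_mdot(p_in, p_out, a=1.0e-9, b=2.0e-12):
        """Hypothetical fitted single-stage characteristic: mass flow vs. average
        pressure and pressure difference (placeholder form and coefficients)."""
        p_avg = 0.5 * (p_in + p_out)
        dp = p_out - p_in
        return a * p_avg - b * dp * p_avg    # flow drops as the adverse dp grows

    def multistage_mdot(p_inlet, p_outlet, n_stages):
        """1D steady-state model: equal mass flow through every stage fixes
        the intermediate pressures."""
        if n_stages == 1:
            return single_stage_mdot(p_inlet, p_outlet)

        def residuals(p_mid):
            p = np.concatenate([[p_inlet], p_mid, [p_outlet]])
            flows = [single_stage_mdot(p[i], p[i + 1]) for i in range(n_stages)]
            return np.diff(flows)            # zero when all stage flows are equal

        p0 = np.linspace(p_inlet, p_outlet, n_stages + 1)[1:-1]
        p_mid = fsolve(residuals, p0)
        return single_stage_mdot(p_inlet, p_mid[0])

    print(multistage_mdot(p_inlet=100.0, p_outlet=400.0, n_stages=4))  # illustrative
    ```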

  14. Connecting clinical and actuarial prediction with rule-based methods.

    Science.gov (United States)

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
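
    To make the "fast and frugal" flavor of such rule-based tools concrete, the sketch below encodes a two-rule decision function that inspects at most a few cues in sequence. The cues, thresholds and outcome labels are hypothetical placeholders, not the rules actually derived by RuleFit in this study.

    ```python
    def predict_chronic_course(symptom_severity, duration_months):
        """A two-rule, fast-and-frugal style decision function (illustrative only).
        Each rule is checked in turn; the first one that fires decides the outcome."""
        # Rule 1: very high baseline severity -> predict a chronic course immediately.
        if symptom_severity >= 30:
            return "chronic"
        # Rule 2: moderate severity combined with a long prior duration.
        if symptom_severity >= 15 and duration_months >= 24:
            return "chronic"
        # Otherwise predict remission; at most two cues were evaluated.
        return "remitting"

    print(predict_chronic_course(symptom_severity=18, duration_months=30))  # "chronic"
    print(predict_chronic_course(symptom_severity=10, duration_months=6))   # "remitting"
    ```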

  15. The trajectory prediction of spacecraft by grey method

    International Nuclear Information System (INIS)

    Wang, Qiyue; Wang, Zhongyu; Zhang, Zili; Wang, Yanqing; Zhou, Weihu

    2016-01-01

    The real-time and high-precision trajectory prediction of a moving object is a core technology in the field of aerospace engineering, and real-time monitoring and tracking technology is also a significant guarantee for aerospace equipment. A dynamic trajectory prediction method called the grey dynamic filter (GDF), which combines dynamic measurement theory and grey system theory, is proposed. GDF uses the coordinates of the current period to extrapolate the coordinates of the following period. At the same time, GDF keeps the measured coordinates current by means of its metabolic (rolling-window) model. In this paper the optimal model length of GDF is first selected to improve the prediction accuracy. Then a simulation for uniformly accelerated motion and variably accelerated motion is conducted. The simulation results indicate that the mean composite position error of the GDF prediction is one-fifth of that of the Kalman filter (KF). Using a spacecraft landing experiment, the prediction accuracy of GDF is compared with the KF method and the primitive grey method (GM). The results show that the motion trajectory of the spacecraft predicted by GDF is much closer to the actual trajectory than with the other two methods. The mean composite position error calculated by GDF is one-eighth of that of KF and one-fifth of that of GM, respectively. (paper)
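
    The grey core of such a filter is the GM(1,1) model: accumulate the series, fit the development coefficient by least squares, and extrapolate. The sketch below shows a generic GM(1,1) step with a rolling ("metabolic") window; the window length and the absence of the paper's filtering refinements are assumptions of this illustration.

    ```python
    import numpy as np

    def gm11_predict(x, n_ahead=1):
        """Fit a GM(1,1) grey model on the window x and extrapolate n_ahead steps."""
        x = np.asarray(x, dtype=float)
        x1 = np.cumsum(x)                         # accumulated generating operation
        z = 0.5 * (x1[1:] + x1[:-1])              # background values
        B = np.column_stack([-z, np.ones(len(z))])
        a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]   # development coeff., grey input
        k = np.arange(len(x) + n_ahead)
        x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a  # accumulated prediction
        x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))  # inverse accumulation
        return x0_hat[-n_ahead:]

    def gdf_step(history, window=6):
        """Rolling ('metabolic') use: always fit on the most recent `window` points."""
        return gm11_predict(history[-window:], n_ahead=1)[0]

    track = [10.0, 10.8, 11.9, 13.2, 14.8, 16.7, 18.9]   # illustrative 1D coordinates
    print(gdf_step(track))    # extrapolated coordinate for the next period
    ```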

  16. Predicting chaos in memristive oscillator via harmonic balance method.

    Science.gov (United States)

    Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai

    2012-12-01

    This paper studies the possible chaotic behaviors in a memristive oscillator with cubic nonlinearities via the harmonic balance method, which is also called the describing function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the considered memristive oscillator system into a Lur'e model and present the prediction of the existence of chaotic behaviors. To ensure the prediction result is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.

  17. Evaluation and comparison of mammalian subcellular localization prediction methods

    Directory of Open Access Journals (Sweden)

    Fink J Lynn

    2006-12-01

    Full Text Available Abstract Background Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole genome sequencing projects have been finished and many resulting protein sequences are still lacking detailed functional information. In order to address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently based on the sequences that are presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in the development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE, with a bias towards the subcellular localizations underrepresented in SwissProt, was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion No individual method had a sufficient level of sensitivity across both evaluation sets that would enable reliable application to hypothetical proteins. All methods showed lower performance on the LOCATE

  18. Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method

    Energy Technology Data Exchange (ETDEWEB)

    Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng; Pota, Hemanshu; Gadh, Rajit

    2016-05-02

    This paper proposes a new hybrid method for super short-term solar power prediction. Solar output power usually has complex, nonstationary, and nonlinear characteristics due to the intermittent and time-varying behavior of solar radiance. In addition, solar power dynamics are fast and essentially inertialess. An accurate super short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one-step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates the discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), where the RNN architecture is based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of better-behaved constituent series for prediction. The ARMA model is employed as a linear predictor, while NARX is used as a nonlinear pattern recognition tool to estimate and compensate for the error of the wavelet-ARMA prediction. The proposed method is applied to data captured from UCLA solar PV panels and the results are compared with some of the common and most recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in the prediction precision.
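
    A stripped-down sketch of the wavelet-plus-linear-predictor portion of such a hybrid is shown below: decompose the series, forecast each reconstructed sub-series with a low-order ARMA model, and sum the forecasts; the NARX error-correction stage is only indicated by a comment. The wavelet family, decomposition level, ARMA order and synthetic data are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np
    import pywt
    from statsmodels.tsa.arima.model import ARIMA

    def hybrid_one_step_forecast(power, wavelet="db4", level=2, order=(2, 0, 1)):
        """One-step-ahead forecast of a solar power series (illustrative sketch)."""
        coeffs = pywt.wavedec(power, wavelet, level=level)
        forecast = 0.0
        for i in range(len(coeffs)):
            # Rebuild the sub-series corresponding to this coefficient band only.
            masked = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            sub_series = pywt.waverec(masked, wavelet)[: len(power)]
            fit = ARIMA(sub_series, order=order).fit()
            forecast += fit.forecast(1)[0]
        # A NARX (recurrent) network would be trained on past forecast errors here
        # and its output added to `forecast` as the nonlinear correction term.
        return forecast

    if __name__ == "__main__":
        t = np.arange(200)
        power = np.clip(np.sin(2 * np.pi * t / 48), 0, None) + 0.05 * np.random.rand(200)
        print(hybrid_one_step_forecast(power))
    ```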

  19. Available Prediction Methods for Corrosion under Insulation (CUI): A Review

    OpenAIRE

    Burhani Nurul Rawaida Ain; Muhammad Masdi; Ismail Mokhtar Che

    2014-01-01

    Corrosion under insulation (CUI) is an increasingly important issue for the piping in industries especially petrochemical and chemical plants due to its unexpected catastrophic disaster. Therefore, attention towards the maintenance and prediction of CUI occurrence, particularly in the corrosion rates, has grown in recent years. In this study, a literature review in determining the corrosion rates by using various prediction models and method of the corrosion occurrence between the external su...

  20. Methods, apparatus and system for notification of predictable memory failure

    Energy Technology Data Exchange (ETDEWEB)

    Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2017-01-03

    A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
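
    The four claimed steps can be pictured with a small sketch: read the memory's operating conditions, turn them into a failure probability, derive a threshold, and raise a notification when the probability exceeds it. The logistic scoring of temperature and correctable-error rate below is a hypothetical stand-in for the patented probability model, not its actual form.

    ```python
    import math

    def check_memory_health(sensor_readings, history):
        """Sketch of the claimed steps: (1) obtain conditions, (2) estimate a failure
        probability, (3) compute a threshold, (4) signal when the threshold is exceeded."""
        temp = sensor_readings["temperature_c"]                     # step 1: conditions
        ce_rate = sensor_readings["correctable_errors_per_hour"]
        score = 0.08 * (temp - 60.0) + 0.5 * math.log1p(ce_rate)    # step 2: probability
        probability = 1.0 / (1.0 + math.exp(-score))
        baseline = sum(history) / len(history) if history else 0.5  # step 3: threshold
        threshold = min(0.95, baseline + 0.2)                       #   adapted to history
        return {"probability": probability,                         # step 4: notification
                "threshold": threshold,
                "notify": probability > threshold}

    print(check_memory_health({"temperature_c": 85, "correctable_errors_per_hour": 40},
                              history=[0.2, 0.3, 0.4]))
    ```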

  1. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as solutions to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Methods and techniques for prediction of environmental impact

    International Nuclear Information System (INIS)

    1992-04-01

    Environmental impact assessment (EIA) is the procedure that helps decision makers understand the environmental implications of their decisions. The prediction of environmental effects or impacts is an extremely important part of the EIA procedure, and improvements in existing capabilities are needed. Considerable attention is paid within environmental impact assessment, and in handbooks on EIA, to methods for identifying and evaluating environmental impacts. However, little attention is given to the distribution of information on impact prediction methods. Quantitative and qualitative methods for the prediction of environmental impacts appear to be the two basic approaches for incorporating environmental concerns into the decision-making process. Depending on the nature of the proposed activity and the environment likely to be affected, a combination of both quantitative and qualitative methods is used. Within environmental impact assessment, the accuracy of methods for the prediction of environmental impacts is of major importance, since it provides for sound and well-balanced decision making. Pertinent and effective action to deal with the problems of environmental protection, the rational use of natural resources and sustainable development is only possible given objective methods and techniques for the prediction of environmental impact. Therefore, the Senior Advisers to ECE Governments on Environmental and Water Problems decided to set up a task force, with the USSR as lead country, on methods and techniques for the prediction of environmental impacts in order to undertake a study to review and analyse existing methodological approaches and to elaborate recommendations to ECE Governments. The work of the task force was completed in 1990 and the resulting report, with all relevant background material, was approved by the Senior Advisers to ECE Governments on Environmental and Water Problems in 1991. The present report reflects the situation, state of

  3. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    Science.gov (United States)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge, the others being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity; the reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset dates of significant earthquakes, the assumption being that each earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes tend to occur near dates when the Moon is conjunct or opposite the Sun, or conjunct or opposite the North or South Nodes. In order to test improvement of the method we used all +8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds and examined the earthquake hit rates (for a window of 3, i.e. +-1 day of the target date) and for <6.5R. The successes are counted for each one of the 86 earthquake seeds and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is the planetary trigger date prior to the earthquake. We observe no improvement only when a planetary trigger coincided with the earthquake date, in which case the FDL method coincides with the MFDL. Based on the MFDL method we present a prediction method capable of predicting global events or localized earthquakes, and we discuss the accuracy of the method as far as the prediction and location parts are concerned. We show example calendar-style predictions for global events as well as for the Greek region using

  4. Methods for early prediction of lactation flow in Holstein heifers

    Directory of Open Access Journals (Sweden)

    Vesna Gantner

    2010-12-01

    Full Text Available The aim of this research was to define methods for early prediction (based on the I. milk control record) of lactation flow in Holstein heifers, as well as to choose the optimal one in terms of prediction fit and application simplicity. A total of 304,569 daily yield records automatically recorded on 1,136 first-lactation Holstein cows from March 2003 till August 2008 were included in the analysis. According to the test date, calving date, age at first calving, lactation stage at which the I. milk control occurred, and the average milk yield in the first 25 (T1) and the 25th-45th (T2) lactation days, measuring month-calving month-age-production-time-period subgroups were formed. The parameters of the analysed nonlinear and linear methods were estimated for each defined subgroup. As model evaluation measures, the adjusted coefficient of determination and the average and standard deviation of the error were used. Considering the obtained results, in terms of total variance explained (R2adj), the nonlinear Wood's method showed superiority over the linear ones (Wilmink's, Ali-Schaeffer's and Guo-Swalve's methods) in both time-period subgroups (T1 - 97.5% of explained variability; T2 - 98.1% of explained variability). Regarding the evaluation measures based on the prediction error (eavg±eSD), the lowest average error of daily milk yield prediction (less than 0.005 kg/day), as well as of lactation milk yield prediction (less than 50 kg/lactation in the T1 time-period subgroup and less than 30 kg/lactation in the T2 time-period subgroup), was obtained when Wood's nonlinear prediction method was applied. The obtained results indicate that the estimated Wood's regression parameters could be used in routine work for the early prediction of Holstein heifers' lactation flow.
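
    Wood's model referred to above is the incomplete-gamma lactation curve y(t) = a * t^b * exp(-c * t). The sketch below fits it to daily yields with nonlinear least squares and integrates the fitted curve for a 305-day yield; the data points and starting values are made up for illustration and do not come from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def wood(t, a, b, c):
        """Wood's incomplete-gamma lactation curve: y(t) = a * t**b * exp(-c * t)."""
        return a * np.power(t, b) * np.exp(-c * t)

    # days in milk and daily yields (kg) for one subgroup -- illustrative numbers only
    dim = np.array([10, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300])
    yield_kg = np.array([22.0, 27.5, 29.0, 28.0, 26.5, 25.0, 23.5, 22.0, 20.5, 19.0, 17.5])

    params, _ = curve_fit(wood, dim, yield_kg, p0=(15.0, 0.2, 0.004), maxfev=10000)
    a, b, c = params
    lactation_305d = np.trapz(wood(np.arange(1, 306), a, b, c))  # predicted 305-day yield
    print(a, b, c, lactation_305d)
    ```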

  5. Development of a regional ensemble prediction method for probabilistic weather prediction

    International Nuclear Information System (INIS)

    Nohara, Daisuke; Tamura, Hidetoshi; Hirakuchi, Hiromaru

    2015-01-01

    A regional ensemble prediction method has been developed to provide probabilistic weather prediction using a numerical weather prediction model. To obtain perturbations consistent with the synoptic weather pattern, both initial and lateral boundary perturbations were given by the differences between the control and ensemble members of the Japan Meteorological Agency (JMA)'s operational one-week ensemble forecast. The method provides multiple ensemble members with a horizontal resolution of 15 km for 48 hours, based on a downscaling of the JMA's operational global forecast accompanied by the perturbations. The ensemble prediction was examined for the heavy snowfall event in the Kanto area on January 14, 2013. The results showed that the predictions represent different features of the high-resolution spatiotemporal distribution of precipitation, affected by the intensity and location of the extra-tropical cyclone in each ensemble member. Although the ensemble prediction has model biases in the mean values and variances of some variables, such as wind speed and solar radiation, it has the potential to add probabilistic information to a deterministic prediction. (author)

  6. Towards a unified fatigue life prediction method for marine structures

    CERN Document Server

    Cui, Weicheng; Wang, Fang

    2014-01-01

    In order to apply the damage tolerance design philosophy to the design of marine structures, accurate prediction of fatigue crack growth under service conditions is required. More and more researchers have realized that only a fatigue life prediction method based on fatigue crack propagation (FCP) theory has the potential to explain the various fatigue phenomena observed. In this book, the issues leading towards the development of a unified fatigue life prediction (UFLP) method based on FCP theory are addressed. Based on the philosophy of the UFLP method, the current inconsistency between fatigue design and inspection of marine structures could be resolved. This book presents the state of the art and recent advances, including those by the authors, in fatigue studies. It is designed to lead future directions and to provide a useful tool in many practical applications. It is addressed to engineers, naval architects, research staff, professionals and graduates engaged in fatigue prevention design and survey ...
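
    To give a feel for what an FCP-based life estimate involves, the sketch below integrates the classical Paris relation da/dN = C (DeltaK)^m from an initial to a critical crack size. This is the textbook crack-propagation building block only, not the authors' unified (UFLP) model, and the material constants, geometry factor and stress range are illustrative assumptions.

    ```python
    import numpy as np

    def fatigue_life_paris(a0, a_crit, delta_sigma, C=1.0e-11, m=3.0, Y=1.0, n_steps=20000):
        """Integrate the Paris relation da/dN = C * (DeltaK)**m with
        DeltaK = Y * delta_sigma * sqrt(pi * a), from crack size a0 to a_crit.
        Returns the estimated number of cycles (illustrative constants only)."""
        a_grid = np.linspace(a0, a_crit, n_steps)                   # crack sizes, m
        delta_K = Y * delta_sigma * np.sqrt(np.pi * a_grid)         # MPa*sqrt(m)
        dN_da = 1.0 / (C * delta_K**m)                              # cycles per metre
        return np.trapz(dN_da, a_grid)                              # total cycles

    # e.g. a 1 mm initial flaw grown to 20 mm under an 80 MPa stress range
    print(fatigue_life_paris(a0=1e-3, a_crit=20e-3, delta_sigma=80.0))
    ```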

  7. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail; Soufan, Othman; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.

    2016-01-01

    DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show, over six different DTI datasets, that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of the necessary experimental verifications in the process of drug discovery. DASPfind can be accessed online at: http://www.cbrc.kaust.edu.sa/daspfind.

  8. Prediction of Protein–Protein Interactions by Evidence Combining Methods

    Directory of Open Access Journals (Sweden)

    Ji-Wei Chang

    2016-11-01

    Full Text Available Most cellular functions involve proteins acting through physical interactions with other partner proteins. Sketching a map of protein–protein interactions (PPIs) is therefore an important first step towards understanding the basics of cell functions. Several experimental techniques operating in vivo or in vitro have made significant contributions to screening a large number of protein interaction partners, especially high-throughput experimental methods. However, computational approaches for PPI prediction, supported by the rapid accumulation of data generated from experimental techniques, 3D structure definitions, and genome sequencing, have boosted the map sketching of PPIs. In this review, we shed light on in silico PPI prediction methods that integrate evidence from multiple sources, including evolutionary relationships, function annotation, sequence/structure features, network topology and text mining. These methods are developed for the integration of multi-dimensional evidence, for designing strategies to predict novel interactions, and for making the results consistent with the increase of prediction coverage and accuracy.

  9. Predicting Metabolic Syndrome Using the Random Forest Method

    Directory of Open Access Journals (Sweden)

    Apilak Worachartcheewan

    2015-01-01

    Full Text Available Aims. This study proposes a computational method for determining the prevalence of metabolic syndrome (MS) and for predicting its occurrence using the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) criteria. The Random Forest (RF) method is also applied to identify significant health parameters. Materials and Methods. We used data from 5,646 adults aged 18–78 years residing in Bangkok who had received an annual health check-up in 2008. MS was identified using the NCEP ATP III criteria. The RF method was applied to predict the occurrence of MS and to identify important health parameters surrounding this disorder. Results. The overall prevalence of MS was 23.70% (34.32% for males and 17.74% for females). RF accuracy for predicting MS in an adult Thai population was 98.11%. Further, based on RF, triglyceride levels were the most important health parameter associated with MS. Conclusion. RF was shown to predict MS in an adult Thai population with an accuracy >98%, and triglyceride levels were identified as the most informative variable associated with MS. Therefore, using RF to predict MS may be potentially beneficial in identifying MS status for preventing the development of diabetes mellitus and cardiovascular diseases.
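
    A workflow of this kind maps naturally onto a random forest classifier with feature importances, as in the sketch below. The synthetic data, column names, labeling rule and hyperparameters are placeholders assumed for illustration; only the general pattern (cross-validated accuracy plus an importance ranking, with triglycerides expected near the top) mirrors the study.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for the 2008 health check-up table; in the study the columns
    # would hold the measured NCEP ATP III components and 'MS' the criteria-based label.
    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "triglycerides":   rng.normal(150, 60, n),
        "hdl":             rng.normal(50, 12, n),
        "waist":           rng.normal(85, 12, n),
        "systolic_bp":     rng.normal(125, 15, n),
        "fasting_glucose": rng.normal(95, 15, n),
        "age":             rng.integers(18, 79, n),
    })
    score = ((df["triglycerides"] > 150).astype(int)
             + (df["hdl"] < 40).astype(int)
             + (df["waist"] > 90).astype(int)
             + (df["systolic_bp"] > 130).astype(int)
             + (df["fasting_glucose"] > 100).astype(int))
    df["MS"] = (score >= 3).astype(int)   # simplified ATP III style rule (3+ components)

    X, y = df.drop(columns="MS"), df["MS"]
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    print("10-fold CV accuracy:", cross_val_score(rf, X, y, cv=10).mean())

    rf.fit(X, y)
    ranking = sorted(zip(X.columns, rf.feature_importances_), key=lambda p: -p[1])
    print(ranking)   # in the study, triglycerides ranked as the most important parameter
    ```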

  10. Prediction of polymer flooding performance using an analytical method

    International Nuclear Information System (INIS)

    Tan Czek Hoong; Mariyamni Awang; Foo Kok Wai

    2001-01-01

    The study investigated the applicability to polymer flooding of an analytical method developed by El-Khatib. Results from the UTCHEM simulator and from experiments were compared with the El-Khatib prediction method. In general, by assuming constant-viscosity polymer injection, the method gave much higher recovery values than the simulation runs and the experiments. A modification of the method gave a better correlation, albeit only for oil production. Investigation is continuing on modifying the method so that a better overall fit can be obtained for polymer flooding. (Author)

  11. Preface to the Focus Issue: Chaos Detection Methods and Predictability

    International Nuclear Information System (INIS)

    Gottwald, Georg A.; Skokos, Charalampos

    2014-01-01

    This Focus Issue presents a collection of papers originating from the workshop Methods of Chaos Detection and Predictability: Theory and Applications held at the Max Planck Institute for the Physics of Complex Systems in Dresden, June 17–21, 2013. The main aim of this interdisciplinary workshop was to review comprehensively the theory and numerical implementation of the existing methods of chaos detection and predictability, as well as to report recent applications of these techniques to different scientific fields. The collection of twelve papers in this Focus Issue represents the wide range of applications, spanning mathematics, physics, astronomy, particle accelerator physics, meteorology and medical research. This Preface surveys the papers of this Issue

  12. Preface to the Focus Issue: chaos detection methods and predictability.

    Science.gov (United States)

    Gottwald, Georg A; Skokos, Charalampos

    2014-06-01

    This Focus Issue presents a collection of papers originating from the workshop Methods of Chaos Detection and Predictability: Theory and Applications held at the Max Planck Institute for the Physics of Complex Systems in Dresden, June 17-21, 2013. The main aim of this interdisciplinary workshop was to review comprehensively the theory and numerical implementation of the existing methods of chaos detection and predictability, as well as to report recent applications of these techniques to different scientific fields. The collection of twelve papers in this Focus Issue represents the wide range of applications, spanning mathematics, physics, astronomy, particle accelerator physics, meteorology and medical research. This Preface surveys the papers of this Issue.

  13. The energetic cost of walking: a comparison of predictive methods.

    Directory of Open Access Journals (Sweden)

    Patricia Ann Kramer

    Full Text Available BACKGROUND: The energy that animals devote to locomotion has been of intense interest to biologists for decades, and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic energy and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are (1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and (2) to investigate to what degree the prediction methods explain the variation in energy expenditure. METHODOLOGY/PRINCIPAL FINDINGS: We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches was assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. CONCLUSION: Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we

  14. The energetic cost of walking: a comparison of predictive methods.

    Science.gov (United States)

    Kramer, Patricia Ann; Sylvester, Adam D

    2011-01-01

    The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended

  15. Combining gene prediction methods to improve metagenomic gene annotation

    Directory of Open Access Journals (Sweden)

    Rosen Gail L

    2011-01-01

    Full Text Available Abstract Background Traditional gene annotation methods rely on characteristics that may not be available in short reads generated from next generation technology, resulting in suboptimal performance for metagenomic (environmental) samples. Therefore, in recent years, new programs have been developed that optimize performance on short reads. In this work, we benchmark three metagenomic gene prediction programs and combine their predictions to improve metagenomic read gene annotation. Results We not only analyze the programs' performance at different read-lengths as in similar studies, but also separate different types of reads, including intra- and intergenic regions, for analysis. The main deficiencies are in the algorithms' ability to predict non-coding regions and gene edges, resulting in more false positives and false negatives than desired. In fact, the specificities of the algorithms are notably worse than the sensitivities. By combining the programs' predictions, we show a significant improvement in specificity at minimal cost to sensitivity, resulting in a 4% improvement in accuracy for 100 bp reads and a ~1% improvement in accuracy for reads of 200 bp and above. To correctly annotate the start and stop of the genes, we find that a consensus of all the predictors performs best for shorter read lengths, while unanimous agreement is better for longer read lengths, boosting annotation accuracy by 1-8%. We also demonstrate the use of the classifier combinations on a real dataset, as sketched after this record. Conclusions To optimize the performance for both prediction and annotation accuracy, we conclude that the consensus of all methods (or a majority vote) is best for reads of 400 bp and shorter, while using the intersection of GeneMark and Orphelia predictions is best for reads of 500 bp and longer. We demonstrate that most methods predict over 80% coding (including partially coding) reads on a real human gut sample sequenced by Illumina technology.
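
    The majority-vote and unanimous combination rules discussed above reduce to a few lines, as sketched below. The predictor names are partly assumed (the abstract names GeneMark and Orphelia; the third program is a placeholder), and the per-read boolean calls stand in for whatever output format the real tools produce.

    ```python
    def consensus_coding(calls, min_votes=None):
        """Combine per-read coding/non-coding calls from several gene predictors.
        `calls` maps predictor name -> bool. A simple majority gives the 'consensus'
        rule discussed above, while min_votes=len(calls) gives the 'unanimous' rule."""
        votes = sum(bool(v) for v in calls.values())
        needed = (len(calls) // 2 + 1) if min_votes is None else min_votes
        return votes >= needed

    # hypothetical calls for one short read
    read_calls = {"GeneMark": True, "Orphelia": False, "ThirdPredictor": True}
    print(consensus_coding(read_calls))               # majority vote -> True
    print(consensus_coding(read_calls, min_votes=3))  # unanimous rule -> False
    ```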

  16. Orthology prediction methods: a quality assessment using curated protein families.

    Science.gov (United States)

    Trachana, Kalliopi; Larsson, Tomas A; Powell, Sean; Chen, Wei-Hua; Doerks, Tobias; Muller, Jean; Bork, Peer

    2011-10-01

    The increasing number of sequenced genomes has prompted the development of several automated orthology prediction methods. Tests to evaluate the accuracy of predictions and to explore biases caused by biological and technical factors are therefore required. We used 70 manually curated families to analyze the performance of five public methods in Metazoa. We analyzed the strengths and weaknesses of the methods and quantified the impact of biological and technical challenges. From the latter part of the analysis, genome annotation emerged as the largest single influence, affecting up to 30% of the performance. Generally, most methods did well in assigning orthologous groups, but they failed to assign the exact number of genes for half of the groups. The publicly available benchmark set (http://eggnog.embl.de/orthobench/) should facilitate the improvement of current orthology assignment protocols, which is of utmost importance for many fields of biology and should be tackled by a broad scientific community. Copyright © 2011 WILEY Periodicals, Inc.

  17. Fast Prediction Method for Steady-State Heat Convection

    KAUST Repository

    Wáng, Yì

    2012-03-14

    A reduced model by proper orthogonal decomposition (POD) and Galerkin projection methods for steady-state heat convection is established on a nonuniform grid. It was verified by thousands of examples that the results are in good agreement with the results obtained from the finite volume method. This model can also predict the cases where model parameters far exceed the sample scope. Moreover, the calculation time needed by the model is much shorter than that needed for the finite volume method. Thus, the nonuniform POD-Galerkin projection method exhibits high accuracy, good suitability, and fast computation. It has universal significance for accurate and fast prediction. Also, the methodology can be applied to more complex modeling in chemical engineering and technology, such as reaction and turbulence. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
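
    The POD step of such a reduced model can be sketched with a snapshot SVD, as below: extract the leading spatial modes from precomputed temperature fields and project full-order fields onto them. The Galerkin projection of the governing equations onto these modes, which produces the small system actually solved for new parameters, is deliberately omitted, and the snapshot layout is an assumption of this illustration.

    ```python
    import numpy as np

    def pod_basis(snapshots, n_modes):
        """POD step: `snapshots` is an (n_samples, n_dof) matrix of steady temperature
        fields on the (possibly nonuniform) grid. Returns the mean field and the
        leading spatial modes (rows)."""
        mean = snapshots.mean(axis=0)
        U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        return mean, Vt[:n_modes]

    def reduced_coordinates(field, mean, modes):
        """Project a full-order field onto the POD subspace."""
        return (field - mean) @ modes.T

    def reconstruct(coords, mean, modes):
        """Map reduced coordinates back to a full-order field."""
        return mean + coords @ modes

    # 40 precomputed fields on a grid with 10,000 unknowns (stand-in data)
    snaps = np.random.default_rng(0).random((40, 10_000))
    mean, modes = pod_basis(snaps, n_modes=5)
    coords = reduced_coordinates(snaps[0], mean, modes)
    approx = reconstruct(coords, mean, modes)
    print(coords.shape, np.linalg.norm(snaps[0] - approx))
    ```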

  18. Hybrid robust predictive optimization method of power system dispatch

    Science.gov (United States)

    Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY

    2011-08-02

    A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule different assets in order to achieve global optimization and maintain the system normal operation.

  19. Available Prediction Methods for Corrosion under Insulation (CUI): A Review

    Directory of Open Access Journals (Sweden)

    Burhani Nurul Rawaida Ain

    2014-07-01

    Full Text Available Corrosion under insulation (CUI) is an increasingly important issue for piping in industries, especially petrochemical and chemical plants, due to its unexpected catastrophic consequences. Therefore, attention towards the maintenance and prediction of CUI occurrence, particularly of the corrosion rates, has grown in recent years. In this study, a literature review was carried out on determining corrosion rates using various prediction models and methods for corrosion occurring between the external surface of the piping and its insulation. The available prediction models and methods are presented for future research reference. However, most of the available prediction methods are based only on local industrial data, which may differ with plant location, environment, temperature and many other factors that affect the reliability of the developed models. Thus, such models or methods are more reliable when supported by laboratory testing or simulation that includes the factors promoting CUI, such as environmental temperature, insulation type, operating temperature, and others.

  20. Predicting proteasomal cleavage sites: a comparison of available methods

    DEFF Research Database (Denmark)

    Saxova, P.; Buus, S.; Brunak, Søren

    2003-01-01

    The C-terminal, in particular, of CTL epitopes is cleaved precisely by the proteasome, whereas the N-terminal is produced with an extension and later trimmed by peptidases in the cytoplasm and in the endoplasmic reticulum. Recently, three publicly available methods have been developed for prediction of the specificity

  1. Customer churn prediction using a hybrid method and censored data

    Directory of Open Access Journals (Sweden)

    Reza Tavakkoli-Moghaddam

    2013-05-01

    Full Text Available Customers are believed to be the main part of any organization's assets, and customer retention as well as customer churn management are important responsibilities of organizations. In today's competitive environment, organizations must do their best to retain their existing customers, since attracting new customers costs significantly more than taking care of existing ones. In this paper, we present a hybrid method based on a neural network and Cox regression analysis, where the neural network is used to handle outlier data and the Cox regression method is implemented for the prediction of future events. The proposed model has been implemented on a dataset and the results are compared based on five criteria: prediction accuracy, type I and type II errors, root mean square error and mean absolute deviation. The preliminary results indicate that the proposed model performs better than alternative methods.

  2. Method of predicting surface deformation in the form of sinkholes

    Energy Technology Data Exchange (ETDEWEB)

    Chudek, M.; Arkuszewski, J.

    1980-06-01

    Proposes a method for predicting the probability of sinkhole-shaped subsidence, the number of funnel-shaped subsidences and the size of individual funnels. The following factors, which influence sudden subsidence of the surface in the form of funnels, are analyzed: the geologic structure of the strata between the mine workings and the surface, mining depth, the time factor, and geologic dislocations. Sudden surface subsidence is observed only in the case of workings situated up to a few dozen meters from the surface. Use of the proposed method is explained with examples. It is suggested that the method produces correct results which can be used in coal mining and in ore mining. (1 ref.) (In Polish)

  3. Polyadenylation site prediction using PolyA-iEP method.

    Science.gov (United States)

    Kavakiotis, Ioannis; Tzanis, George; Vlahavas, Ioannis

    2014-01-01

    This chapter presents a method called PolyA-iEP that has been developed for the prediction of polyadenylation sites. More precisely, PolyA-iEP is a method that recognizes mRNA 3'ends which contain polyadenylation sites. It is a modular system which consists of two main components. The first exploits the advantages of emerging patterns and the second is a distance-based scoring method. The outputs of the two components are finally combined by a classifier. The final results reach very high scores of sensitivity and specificity.

  4. Lattice gas methods for predicting intrinsic permeability of porous media

    Energy Technology Data Exchange (ETDEWEB)

    Santos, L.O.E.; Philippi, P.C. [Santa Catarina Univ., Florianopolis, SC (Brazil). Dept. de Engenharia Mecanica. Lab. de Propriedades Termofisicas e Meios Porosos)]. E-mail: emerich@lmpt.ufsc.br; philippi@lmpt.ufsc.br; Damiani, M.C. [Engineering Simulation and Scientific Software (ESSS), Florianopolis, SC (Brazil). Parque Tecnologico]. E-mail: damiani@lmpt.ufsc.br

    2000-07-01

    This paper presents a method for predicting the intrinsic permeability of porous media based on lattice gas cellular automata methods. Two methods are presented. The first is based on a Boolean model (LGA). The second is a lattice Boltzmann (LB) method based on the Boltzmann relaxation equation. LGA is a relatively recent method developed to perform hydrodynamic calculations. The method, in its simplest form, consists of a regular lattice populated with particles that hop from site to site in discrete time steps in a process called propagation. After propagation, the particles in each site interact with each other in a process called collision, in which the number of particles and momentum are conserved. An exclusion principle is imposed in order to achieve better computational efficiency. Despite its simplicity, this model evolves in agreement with the Navier-Stokes equation for low Mach numbers. LB methods were developed more recently for the numerical integration of the Navier-Stokes equation based on the discrete Boltzmann transport equation. Derived from LGA, LB is a powerful alternative to the standard methods in computational fluid dynamics. In recent years, it has received much attention and has been used in several applications such as simulations of flows through porous media, turbulent flows and multiphase flows. It is important to emphasize some aspects that make lattice gas cellular automata methods very attractive for simulating flows through porous media. In fact, boundary conditions in flows through complex geometric structures are very easy to describe in simulations using these methods. In LGA methods, simulations are performed with integers, requiring less resident memory, and Boolean arithmetic reduces running time. The two methods are used to simulate flows through several Brazilian petroleum reservoir rocks, leading to intrinsic permeability predictions. The simulations are compared with experimental results. (author)
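
    Whichever lattice method produces the flow field, the permeability itself is usually backed out with Darcy's law from the simulated flow rate and the imposed pressure drop, as in the short sketch below. The numerical values are illustrative only, and the unit bookkeeping (lattice units to SI) is assumed to have been done beforehand.

    ```python
    def intrinsic_permeability(total_flow, mu, length, area, dp):
        """Back out intrinsic permeability from a simulated (LGA/LB) creeping flow
        through a sample using Darcy's law: k = Q * mu * L / (A * dP).
        Inputs must share a consistent unit system."""
        return total_flow * mu * length / (area * dp)

    # illustrative values only
    k = intrinsic_permeability(total_flow=2.0e-9,   # m^3/s through the sample
                               mu=1.0e-3,           # Pa.s (water)
                               length=1.0e-2,       # m, sample length in flow direction
                               area=1.0e-4,         # m^2, cross-section
                               dp=1.0e3)            # Pa, imposed pressure drop
    print(k, "m^2 =", k / 9.869e-13, "darcy")
    ```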

  5. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    Science.gov (United States)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model the final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression model based on a response surface equation (RSE). Data obtained from the operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.

  6. The Dissolved Oxygen Prediction Method Based on Neural Network

    Directory of Open Access Journals (Sweden)

    Zhong Xiao

    2017-01-01

    Full Text Available Dissolved oxygen (DO) is oxygen dissolved in water and is an important factor for aquaculture. A BP neural network method combining the purelin, logsig, and tansig activation functions is proposed for the prediction of dissolved oxygen in aquaculture. The input layer, hidden layer, and output layer are introduced in detail, including the weight adjustment process. Breeding data from three ponds over 10 consecutive days were used for the experiments; these ponds are located in Beihai, Guangxi, a traditional aquaculture base in southern China. The data of the first 7 days are used for training, and the data of the latter 3 days are used for testing. Compared with the common prediction models of curve fitting (CF), autoregression (AR), grey model (GM), and support vector machines (SVM), the experimental results show that the prediction accuracy of the neural network is the highest, and all the predicted values are within the 5% error limit, which can meet the needs of practical applications, followed by AR, GM, SVM, and CF. The prediction model can help to improve the water quality monitoring level of aquaculture, which will help prevent the deterioration of water quality and outbreaks of disease.
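
    A back-propagation network of the sort described maps onto a few lines of a standard toolkit, as sketched below with a single hidden layer and a 7-day/3-day train/test split. The synthetic features, their number, and the tanh hidden activation are illustrative assumptions standing in for the paper's purelin/logsig/tansig architecture and real pond measurements.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic hourly records over 10 days: 3 water-quality inputs and a DO target.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(240, 3))
    y = 6.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=240)

    X_train, y_train = X[:168], y[:168]     # first 7 days for training
    X_test, y_test = X[168:], y[168:]       # last 3 days for testing

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                                       max_iter=5000, random_state=0))
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print("max relative error:", np.max(np.abs(pred - y_test) / y_test))
    ```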

  7. A method of predicting the reliability of CDM coil insulation

    International Nuclear Information System (INIS)

    Kytasty, A.; Ogle, C.; Arrendale, H.

    1992-01-01

    This paper presents a method of predicting the reliability of the Collider Dipole Magnet (CDM) coil insulation design. The method proposes a probabilistic treatment of electrical test data, stress analysis, material properties variability and loading uncertainties to give the reliability estimate. The approach taken to predict reliability of design related failure modes of the CDM is to form analytical models of the various possible failure modes and their related mechanisms or causes, and then statistically assess the contributions of the various contributing variables. The probability of the failure mode occurring is interpreted as the number of times one would expect certain extreme situations to combine and randomly occur. One of the more complex failure modes of the CDM will be used to illustrate this methodology

  8. Drug-Target Interactions: Prediction Methods and Applications.

    Science.gov (United States)

    Anusuya, Shanmugam; Kesherwani, Manish; Priya, K Vishnu; Vimala, Antonydhason; Shanmugam, Gnanendra; Velmurugan, Devadasan; Gromiha, M Michael

    2018-01-01

    Identifying the interactions between drugs and target proteins is a key step in drug discovery. This not only aids to understand the disease mechanism, but also helps to identify unexpected therapeutic activity or adverse side effects of drugs. Hence, drug-target interaction prediction becomes an essential tool in the field of drug repurposing. The availability of heterogeneous biological data on known drug-target interactions enabled many researchers to develop various computational methods to decipher unknown drug-target interactions. This review provides an overview on these computational methods for predicting drug-target interactions along with available webservers and databases for drug-target interactions. Further, the applicability of drug-target interactions in various diseases for identifying lead compounds has been outlined. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  9. Risk prediction, safety analysis and quantitative probability methods - a caveat

    International Nuclear Information System (INIS)

    Critchley, O.H.

    1976-01-01

    Views are expressed on the use of quantitative techniques for the determination of value judgements in nuclear safety assessments, hazard evaluation, and risk prediction. Caution is urged when attempts are made to quantify value judgements in the field of nuclear safety. Criteria are given for the meaningful application of reliability methods, but doubts are expressed about their application to safety analysis, risk prediction and design guidance for experimental or prototype plants. Doubts are also expressed about some concomitant methods of population dose evaluation. The complexities of new designs of nuclear power plants make the problem of safety assessment more difficult, but some possible approaches are suggested as alternatives to the quantitative techniques criticized. (U.K.)

  10. Water hammer prediction and control: the Green's function method

    Science.gov (United States)

    Xuan, Li-Jun; Mao, Feng; Wu, Jie-Zhi

    2012-04-01

    By the Green's function method we show that the water hammer (WH) can be analytically predicted for both laminar and turbulent flows (for the latter, with an eddy viscosity depending solely on the space coordinates), and thus its hazardous effect can be rationally controlled and minimized. To this end, we generalize a laminar water hammer equation of Wang et al. (J. Hydrodynamics, B2, 51, 1995) to include arbitrary initial conditions and variable viscosity, and obtain its solution by the Green's function method. The characteristic WH behaviors predicted by the solutions are in excellent agreement with both direct numerical simulation of the original governing equations and, by adjusting the eddy viscosity coefficient, experimentally measured turbulent flow data. An optimal WH control principle is thereby constructed and demonstrated.

  11. River Flow Prediction Using the Nearest Neighbor Probabilistic Ensemble Method

    Directory of Open Access Journals (Sweden)

    H. Sanikhani

    2016-02-01

    Full Text Available Introduction: In recent years, researchers have become interested in the probabilistic forecasting of hydrologic variables such as river flow. A probabilistic approach aims at quantifying the prediction reliability through a probability distribution function or a prediction interval for the unknown future value. The evaluation of the uncertainty associated with the forecast is seen as fundamental information, not only to correctly assess the prediction, but also to compare forecasts from different methods and to evaluate actions and decisions conditionally on the expected values. Several probabilistic approaches have been proposed in the literature, including (1) methods that use resampling techniques to assess parameter and model uncertainty, such as the Metropolis algorithm or the Generalized Likelihood Uncertainty Estimation (GLUE) methodology for an application to runoff prediction; (2) methods based on processing the forecast errors of past data to produce the probability distributions of future values; and (3) methods that evaluate how the uncertainty propagates from the rainfall forecast to the river discharge prediction, such as the Bayesian forecasting system. Materials and Methods: In this study, two different probabilistic methods are used for river flow prediction, and the uncertainty related to the forecast is quantified. One approach is based on linear predictors; in the other, nearest neighbors are used. The nonlinear probabilistic ensemble can be used for nonlinear time series analysis using locally linear predictors, while NNPE utilizes a method adapted for one-step-ahead nearest neighbor prediction. In this regard, daily river discharge (twelve years) of the Dizaj and Mashin stations on the Baranduz-Chay basin in West Azerbaijan and the Zard-River basin in Khouzestan provinces were used, respectively. The first six years of data were applied for fitting the model. The next three years were used for calibration and the remaining three years for testing the models
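
    The nearest-neighbor side of such an ensemble can be pictured with the sketch below: embed the discharge series with a few lags, find the k most similar historical patterns, and use the spread of their successors as an empirical prediction interval. The lag depth, k and percentile bounds are assumptions of this generic illustration, not the NNPE settings of the paper.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def knn_probabilistic_forecast(history, k=10, lags=3):
        """One-step-ahead nearest-neighbour forecast of daily discharge with an
        empirical prediction interval taken from the spread of the neighbours'
        successors (generic k-NN ensemble sketch)."""
        q = np.asarray(history, dtype=float)
        # embed the series: each row is (q[t-lags], ..., q[t-1]) with target q[t]
        X = np.array([q[i - lags:i] for i in range(lags, len(q))])
        y = q[lags:]
        current = q[-lags:].reshape(1, -1)
        nn = NearestNeighbors(n_neighbors=k).fit(X)
        _, idx = nn.kneighbors(current)
        successors = y[idx[0]]
        point = successors.mean()
        lower, upper = np.percentile(successors, [5, 95])   # 90% empirical interval
        return point, (lower, upper)

    # illustrative synthetic daily discharges (m^3/s)
    q = 50 + 10 * np.sin(np.arange(400) / 20.0) + np.random.default_rng(1).normal(0, 2, 400)
    print(knn_probabilistic_forecast(q))
    ```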

  12. Improving protein function prediction methods with integrated literature data

    Directory of Open Access Journals (Sweden)

    Gabow Aaron P

    2008-04-01

    Full Text Available Abstract Background Determining the function of uncharacterized proteins is a major challenge in the post-genomic era due to the problem's complexity and scale. Identifying a protein's function contributes to an understanding of its role in the involved pathways, its suitability as a drug target, and its potential for protein modifications. Several graph-theoretic approaches predict unidentified functions of proteins by using the functional annotations of better-characterized proteins in protein-protein interaction networks. We systematically consider the use of literature co-occurrence data, introduce a new method for quantifying the reliability of co-occurrence and test how performance differs across species. We also quantify changes in performance as the prediction algorithms annotate with increased specificity. Results We find that including information on the co-occurrence of proteins within an abstract greatly boosts performance in the Functional Flow graph-theoretic function prediction algorithm in yeast, fly and worm. This increase in performance is not simply due to the presence of additional edges since supplementing protein-protein interactions with co-occurrence data outperforms supplementing with a comparably-sized genetic interaction dataset. Through the combination of protein-protein interactions and co-occurrence data, the neighborhood around unknown proteins is quickly connected to well-characterized nodes which global prediction algorithms can exploit. Our method for quantifying co-occurrence reliability shows superior performance to the other methods, particularly at threshold values around 10% which yield the best trade off between coverage and accuracy. In contrast, the traditional way of asserting co-occurrence when at least one abstract mentions both proteins proves to be the worst method for generating co-occurrence data, introducing too many false positives. Annotating the functions with greater specificity is harder

  13. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  14. Comparison of RF spectrum prediction methods for dynamic spectrum access

    Science.gov (United States)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
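    As a minimal, purely illustrative sketch of Markov-style channel-state prediction of the kind compared above (a two-state occupancy chain rather than a full HMM, trained on synthetic data), the transition probabilities can be estimated from past busy/idle observations and the next state predicted as the more probable successor:

```python
import numpy as np

def fit_transition_matrix(states):
    """Estimate a 2-state (0 = idle, 1 = busy) Markov transition matrix
    from an observed sequence of channel states."""
    counts = np.ones((2, 2))                  # Laplace smoothing
    for prev, curr in zip(states[:-1], states[1:]):
        counts[prev, curr] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next(states, P):
    """Predict the next channel state given the last observed state."""
    return int(np.argmax(P[states[-1]]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic occupancy with "sticky" busy/idle periods (alternating renewal-like)
    true_P = np.array([[0.9, 0.1], [0.2, 0.8]])
    states = [0]
    for _ in range(5000):
        states.append(int(rng.random() < true_P[states[-1], 1]))
    P_hat = fit_transition_matrix(states[:4000])
    test = states[4000:]
    correct = sum(predict_next(test[:i], P_hat) == test[i] for i in range(1, len(test)))
    print("estimated transition matrix:\n", P_hat.round(2))
    print("one-step prediction accuracy:", correct / (len(test) - 1))
```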

  15. Methods for predicting isochronous stress-strain curves

    International Nuclear Information System (INIS)

    Kiyoshige, Masanori; Shimizu, Shigeki; Satoh, Keisuke.

    1976-01-01

    Isochronous stress-strain curves show the relation between stress and total strain at a certain temperature with time as a parameter, and they are drawn up from the creep test results at various stress levels at a definite temperature. The concept of isochronous stress-strain curves was proposed by McVetty in the 1930s, and has been used for the design of aero-engines. Recently the high temperature characteristics of materials have been presented as isochronous stress-strain curves in the design guides for nuclear energy equipment and structures used in the high temperature creep region. It is prescribed that these curves be used as the criteria for determining design stress intensity or as the data for analyzing the superposed effects of creep and fatigue. In the case of isochronous stress-strain curves used for the design of nuclear energy equipment with a very long service life, it is impractical to determine the curves directly from the results of long-time creep tests; accordingly, a method of predicting long-time stress-strain curves from short-time creep test results must be established. The method proposed by the authors, in which creep constitutive equations taking the first and second creep stages into account are used, and the method using the Larson-Miller parameter were studied, and it was found that both methods were reliable for the prediction. (Kako, I.)
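    For reference, a commonly used form of the Larson-Miller parameter mentioned above is shown below; the material constant C (often taken near 20) and the exact formulation adopted by the authors are not given in the abstract, so this is the textbook version only.

    $$
    \mathrm{LMP} \;=\; T\left(C + \log_{10} t_r\right),
    $$

    where T is the absolute temperature and t_r the time to rupture (or to a specified creep strain); short-time, higher-temperature creep data are mapped through the same parameter value to estimate long-time behaviour.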

  16. A Lifetime Prediction Method for LEDs Considering Real Mission Profiles

    DEFF Research Database (Denmark)

    Qu, Xiaohui; Wang, Huai; Zhan, Xiaoqing

    2017-01-01

    The Light-Emitting Diode (LED) has become a very promising alternative lighting source with the advantages of longer lifetime and higher efficiency than traditional ones. The lifetime prediction of LEDs is important to guide LED system designers to fulfill the design specifications. Significant lifetime discrepancies may be observed in field operations due to the varying operational and environmental conditions during the entire service time (i.e., mission profiles). To overcome the challenge, this paper proposes an advanced lifetime prediction method, which takes into account the field operation mission profiles and also the statistical properties of the life data available from accelerated degradation testing. The electrical and thermal characteristics of LEDs are measured by a T3Ster system, used for the electro-thermal modeling. It also identifies key variables (e.g., heat sink parameters) that can be designed to achieve a specified

  17. Long-Term Prediction of Satellite Orbit Using Analytical Method

    Directory of Open Access Journals (Sweden)

    Jae-Cheol Yoon

    1997-12-01

    Full Text Available A long-term prediction algorithm for geostationary orbits was developed using the analytical method. The perturbation force models include the geopotential up to fifth order and degree, luni-solar gravitation, and solar radiation pressure. All of the perturbation effects were analyzed by secular variations, short-period variations, and long-period variations for equinoctial elements such as the semi-major axis, eccentricity vector, inclination vector, and mean longitude of the satellite. The result of the analytical orbit propagator was compared with that of the Cowell orbit propagator for KOREASAT. The comparison indicated that the analytical solution could predict the semi-major axis with an accuracy of better than ~35 meters over a period of 3 months.

  18. Prediction of Chloride Diffusion in Concrete Structure Using Meshless Methods

    Directory of Open Access Journals (Sweden)

    Ling Yao

    2016-01-01

    Full Text Available Degradation of RC structures due to chloride penetration followed by reinforcement corrosion is a serious problem in civil engineering. The numerical simulation methods at present mainly involve finite element methods (FEM), which are based on mesh generation. In this study, element-free Galerkin (EFG) and meshless weighted least squares (MWLS) methods are used to solve the problem of simulation of chloride diffusion in concrete. The range of a scaling parameter is presented using numerical examples based on meshless methods. One- and two-dimensional numerical examples validated the effectiveness and accuracy of the two meshless methods by comparing results obtained by MWLS with results computed by EFG and FEM and results calculated by an analytical method. A good agreement is obtained among MWLS and EFG numerical simulations and the experimental data obtained from an existing marine concrete structure. These results indicate that MWLS and EFG are reliable meshless methods that can be used for the prediction of chloride ingress in concrete structures.
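    For context, chloride ingress of this kind is usually modelled as Fickian diffusion; a one-dimensional analytical benchmark often used for comparison, assuming a constant surface concentration C_s and constant diffusivity D (which may differ from the boundary conditions actually used in the paper), is:

    $$
    \frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2},
    \qquad
    C(x,t) = C_s\left[1 - \operatorname{erf}\!\left(\frac{x}{2\sqrt{D\,t}}\right)\right],
    $$

    with C(x,0) = 0 in the bulk and C(0,t) = C_s at the exposed surface.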

  19. Fingerprint image reconstruction for swipe sensor using Predictive Overlap Method

    Directory of Open Access Journals (Sweden)

    Mardiansyah Ahmad Zafrullah

    2018-01-01

    Full Text Available The swipe sensor is one of many biometric authentication sensor types that is widely applied to embedded devices. The sensor produces an overlap on every pixel block of the image, so the image requires a reconstruction process before heading to the feature extraction process. Conventional reconstruction methods require extensive computation, making them difficult to apply to embedded devices that have limited computing capability. In this paper, image reconstruction is proposed using the predictive overlap method, which determines the image block shift from the previous set of change data. The experiments were performed using 36 images generated by a swipe sensor with an area of 128 x 8 pixels, where each image has an overlap in each block. The results reveal that computation performance can increase by up to 86.44% compared with conventional methods, with accuracy decreasing by only 0.008% on average.

  20. Bicycle Frame Prediction Techniques with Fuzzy Logic Method

    Directory of Open Access Journals (Sweden)

    Rafiuddin Syam

    2015-03-01

    Full Text Available In general, an appropriately sized bike frame provides comfort to the rider while biking. This study aims to build a simulation system that predicts bike frame sizes with fuzzy logic. The testing method used is simulation. In this study, the fuzzy logic is simulated using the Matlab language to test its performance. The Mamdani fuzzy logic uses three input variables and one output variable. Triangular membership functions are used for the inputs and the output. The controller is designed as the Mamdani type with max-min composition, and defuzzification uses the center-of-gravity method. The results showed that height, inseam and crank size generate an appropriate frame size for the rider associated with comfort. Height has a range between 142 cm and 201 cm. Inseam has a range between 64 cm and 97 cm. Crank size has a range between 175 mm and 180 mm. The simulation results have a range of frame sizes between 13 inches and 22 inches. Using fuzzy logic, the bicycle frame size suitable for the rider can be predicted.
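    The sketch below illustrates the general Mamdani workflow described above (triangular memberships, min-max inference, centroid defuzzification) for a single input; the membership breakpoints, the one-input rule base and the output sets are invented for illustration and are not the authors' actual rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b (a < b < c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def frame_size(height_cm):
    """Toy Mamdani inference: rider height (142-201 cm) -> frame size (13-22 in)."""
    # Input membership degrees (hypothetical breakpoints)
    short = tri(height_cm, 130, 142, 170)
    medium = tri(height_cm, 150, 171, 192)
    tall = tri(height_cm, 172, 201, 215)

    y = np.linspace(13, 22, 1000)             # output universe (frame size, inches)
    # Output fuzzy sets (hypothetical): small, medium, large frames
    small_f = tri(y, 11, 13, 17)
    med_f = tri(y, 14, 17.5, 21)
    large_f = tri(y, 18, 22, 24)

    # Rule firing: clip each output set at its rule strength (min), then aggregate (max)
    aggregated = np.maximum.reduce([
        np.minimum(short, small_f),
        np.minimum(medium, med_f),
        np.minimum(tall, large_f),
    ])
    # Centroid (center-of-gravity) defuzzification
    return float(np.sum(y * aggregated) / np.sum(aggregated))

if __name__ == "__main__":
    for h in (150, 170, 195):
        print(f"height {h} cm -> frame of about {frame_size(h):.1f} in")
```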

  2. Alternative Testing Methods for Predicting Health Risk from Environmental Exposures

    Directory of Open Access Journals (Sweden)

    Annamaria Colacci

    2014-08-01

    Full Text Available Alternative methods to animal testing are considered promising tools to support the prediction of toxicological risks from environmental exposure. Among the alternative testing methods, the cell transformation assay (CTA) appears to be one of the most appropriate approaches to predict the carcinogenic properties of single chemicals, complex mixtures and environmental pollutants. The BALB/c 3T3 CTA shows a good degree of concordance with the in vivo rodent carcinogenesis tests. Whole-genome transcriptomic profiling is performed to identify genes that are transcriptionally regulated by different kinds of exposures. Its use in cell models representative of target organs may help in understanding the mode of action and predicting the risk for human health. Aiming at associating environmental exposure with adverse health outcomes, we used an integrated approach including the 3T3 CTA and transcriptomics on target cells, in order to evaluate the effects of airborne particulate matter (PM) on complex toxicological endpoints. Organic extracts obtained from PM2.5 and PM1 samples were evaluated in the 3T3 CTA in order to identify effects possibly associated with different aerodynamic diameters or airborne chemical components. The effects of the PM2.5 extracts on human health were assessed by using whole-genome 44 K oligo-microarray slides. Statistical analysis by GeneSpring GX identified genes whose expression was modulated in response to the cell treatment. Then, modulated genes were associated with pathways, biological processes and diseases through an extensive biological analysis. Data derived from in vitro methods and omics techniques could be valuable for monitoring the exposure to toxicants, understanding the modes of action via exposure-associated gene expression patterns and highlighting the role of genes in key events related to adversity.

  3. Method for predicting peptide detection in mass spectrometry

    Science.gov (United States)

    Kangas, Lars [West Richland, WA; Smith, Richard D [Richland, WA; Petritis, Konstantinos [Richland, WA

    2010-07-13

    A method of predicting whether a peptide present in a biological sample will be detected by analysis with a mass spectrometer. The method uses at least one mass spectrometer to perform repeated analysis of a sample containing peptides from proteins with known amino acids. The method then generates a data set of peptides identified as contained within the sample by the repeated analysis. The method then calculates the probability that a specific peptide in the data set was detected in the repeated analysis. The method then creates a plurality of vectors, where each vector has a plurality of dimensions, and each dimension represents a property of one or more of the amino acids present in each peptide and adjacent peptides in the data set. Using these vectors, the method then generates an algorithm from the plurality of vectors and the calculated probabilities that specific peptides in the data set were detected in the repeated analysis. The algorithm is thus capable of calculating the probability that a hypothetical peptide represented as a vector will be detected by a mass spectrometry based proteomic platform, given that the peptide is present in a sample introduced into a mass spectrometer.

  4. A lifetime prediction method for LEDs considering mission profiles

    DEFF Research Database (Denmark)

    Qu, Xiaohui; Wang, Huai; Zhan, Xiaoqing

    2016-01-01

    and to benchmark the cost-competitiveness of different lighting technologies. The existing lifetime data released by LED manufacturers or standard organizations are usually applicable only for specific temperature and current levels. Significant lifetime discrepancies may be observed in field operations due to the varying operational and environmental conditions during the entire service time (i.e., mission profiles). To overcome the challenge, this paper proposes an advanced lifetime prediction method, which takes into account the field operation mission profiles and the statistical properties of the life data...

  5. Prediction strategies in a TV recommender system - Method and experiments

    NARCIS (Netherlands)

    van Setten, M.J.; Veenstra, M.; van Dijk, Elisabeth M.A.G.; Nijholt, Antinus; Isaísas, P.; Karmakar, N.

    2003-01-01

    Predicting the interests of a user in information is an important process in personalized information systems. In this paper, we present a way to create prediction engines that allow prediction techniques to be easily combined into prediction strategies. Prediction strategies choose one or a

  6. Data Based Prediction of Blood Glucose Concentrations Using Evolutionary Methods.

    Science.gov (United States)

    Hidalgo, J Ignacio; Colmenar, J Manuel; Kronberger, Gabriel; Winkler, Stephan M; Garnica, Oscar; Lanchares, Juan

    2017-08-08

    Predicting glucose values on the basis of insulin and food intakes is a difficult task that people with diabetes need to do daily. This is necessary as it is important to maintain glucose levels at appropriate values to avoid not only short-term, but also long-term complications of the illness. Artificial intelligence in general and machine learning techniques in particular have already led to promising results in modeling and predicting glucose concentrations. In this work, several machine learning techniques are used for the modeling and prediction of glucose concentrations, using as inputs the values measured by a continuous glucose monitoring system as well as previous and estimated future carbohydrate intakes and insulin injections. In particular, we use the following four techniques: genetic programming, random forests, k-nearest neighbors, and grammatical evolution. We propose two new enhanced modeling algorithms for glucose prediction, namely (i) a variant of grammatical evolution which uses an optimized grammar, and (ii) a variant of tree-based genetic programming which uses a three-compartment model for carbohydrate and insulin dynamics. The predictors were trained and tested using data of ten patients from a public hospital in Spain. We analyze our experimental results using the Clarke error grid metric and see that 90% of the forecasts are correct (i.e., Clarke error categories A and B), but still even the best methods produce 5 to 10% of serious errors (category D) and approximately 0.5% of very serious errors (category E). We also propose an enhanced genetic programming algorithm that incorporates a three-compartment model into symbolic regression models to create smoothed time series of the original carbohydrate and insulin time series.

  7. Decision tree methods: applications for classification and prediction.

    Science.gov (United States)

    Song, Yan-Yan; Lu, Ying

    2015-04-25

    Decision tree methodology is a commonly used data mining method for establishing classification systems based on multiple covariates or for developing prediction algorithms for a target variable. This method classifies a population into branch-like segments that construct an inverted tree with a root node, internal nodes, and leaf nodes. The algorithm is non-parametric and can efficiently deal with large, complicated datasets without imposing a complicated parametric structure. When the sample size is large enough, study data can be divided into training and validation datasets: the training dataset is used to build a decision tree model, and the validation dataset is used to decide on the appropriate tree size needed to achieve the optimal final model. This paper introduces frequently used algorithms for developing decision trees (including CART, C4.5, CHAID, and QUEST) and describes the SPSS and SAS programs that can be used to visualize tree structure.
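    The abstract points to SPSS and SAS tooling; as a rough stand-in, the same train/validate workflow for choosing tree size can be sketched with scikit-learn (the public dataset and the depth grid below are assumptions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a public dataset as a stand-in for study data
X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Use the validation set to choose the tree size (depth), as described above
best_depth, best_score = None, -1.0
for depth in range(1, 11):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    score = tree.score(X_valid, y_valid)
    if score > best_score:
        best_depth, best_score = depth, score

print(f"best max_depth={best_depth}, validation accuracy={best_score:.3f}")
```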

  8. Use of simplified methods for predicting natural resource damages

    International Nuclear Information System (INIS)

    Loreti, C.P.; Boehm, P.D.; Gundlach, E.R.; Healy, E.A.; Rosenstein, A.B.; Tsomides, H.J.; Turton, D.J.; Webber, H.M.

    1995-01-01

    To reduce transaction costs and save time, the US Department of the Interior (DOI) and the National Oceanic and Atmospheric Administration (NOAA) have developed simplified methods for assessing natural resource damages from oil and chemical spills. DOI has proposed the use of two computer models, the Natural Resource Damage Assessment Model for Great Lakes Environments (NRDAM/GLE) and a revised Natural Resource Damage Assessment Model for Coastal and Marine Environments (NRDAM/CME) for predicting monetary damages for spills of oils and chemicals into the Great Lakes and coastal and marine environments. NOAA has used versions of these models to create Compensation Formulas, which it has proposed for calculating natural resource damages for oil spills of up to 50,000 gallons anywhere in the US. Based on a review of the documentation supporting the methods, the results of hundreds of sample runs of DOI's models, and the outputs of the thousands of model runs used to create NOAA's Compensation Formulas, this presentation discusses the ability of these simplified assessment procedures to make realistic damage estimates. The limitations of these procedures are described, and the need for validating the assumptions used in predicting natural resource injuries is discussed

  9. VAN method of short-term earthquake prediction shows promise

    Science.gov (United States)

    Uyeda, Seiya

    Although optimism prevailed in the 1970s, the present consensus on earthquake prediction appears to be quite pessimistic. However, short-term prediction based on geoelectric potential monitoring has stood the test of time in Greece for more than a decade [Varotsos and Kulhanek, 1993; Lighthill, 1996]. The method used is called the VAN method. The geoelectric potential changes constantly due to causes such as magnetotelluric effects, lightning, rainfall, leakage from manmade sources, and electrochemical instabilities of electrodes. All of this noise must be eliminated before preseismic signals are identified, if they exist at all. The VAN group apparently accomplished this task for the first time. They installed multiple short (100-200 m) dipoles with different lengths in both north-south and east-west directions and long (1-10 km) dipoles in appropriate orientations at their stations (one of their mega-stations, Ioannina, for example, now has 137 dipoles in operation) and found that practically all of the noise could be eliminated by applying a set of criteria to the data.

  10. Predictive ability of machine learning methods for massive crop yield prediction

    Directory of Open Access Journals (Sweden)

    Alberto Gonzalez-Sanchez

    2014-04-01

    Full Text Available An important issue for agricultural planning purposes is the accurate yield estimation for the numerous crops involved in the planning. Machine learning (ML) is an essential approach for achieving practical and effective solutions for this problem. Many comparisons of ML methods for yield prediction have been made, seeking the most accurate technique. Generally, the number of evaluated crops and techniques is too low and does not provide enough information for agricultural planning purposes. This paper compares the predictive accuracy of ML and linear regression techniques for crop yield prediction in ten crop datasets. Multiple linear regression, M5-Prime regression trees, perceptron multilayer neural networks, support vector regression and k-nearest neighbor methods were ranked. Four accuracy metrics were used to validate the models: the root mean square error (RMSE), root relative square error (RRSE), normalized mean absolute error (MAE), and correlation factor (R). Real data from an irrigation zone of Mexico were used for building the models. Models were tested with samples of two consecutive years. The results show that the M5-Prime and k-nearest neighbor techniques obtain the lowest average RMSE errors (5.14 and 4.91), the lowest RRSE errors (79.46% and 79.78%), the lowest average MAE errors (18.12% and 19.42%), and the highest average correlation factors (0.41 and 0.42). Since M5-Prime achieves the largest number of crop yield models with the lowest errors, it is a very suitable tool for massive crop yield prediction in agricultural planning.
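    A compressed scikit-learn sketch of the kind of comparison reported above, using linear regression and k-nearest neighbors on synthetic data with RMSE, MAE and R metrics; M5-Prime, the neural and support vector models, and the real irrigation-zone data are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(42)
# Synthetic "crop yield" data: yield depends nonlinearly on a few covariates
X = rng.uniform(0, 1, size=(500, 4))
y = 3 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "k-nearest neighbors": KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    mae = mean_absolute_error(y_te, pred)
    r = np.corrcoef(y_te, pred)[0, 1]
    print(f"{name:>20}: RMSE={rmse:.3f}  MAE={mae:.3f}  R={r:.3f}")
```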

  11. A highly accurate predictive-adaptive method for lithium-ion battery remaining discharge energy prediction in electric vehicle applications

    International Nuclear Information System (INIS)

    Liu, Guangming; Ouyang, Minggao; Lu, Languang; Li, Jianqiu; Hua, Jianfeng

    2015-01-01

    Highlights: • An energy prediction (EP) method is introduced for battery E_RDE determination. • EP determines E_RDE through coupled prediction of future states, parameters, and output. • The PAEP combines parameter adaptation and prediction to update model parameters. • The PAEP provides improved E_RDE accuracy compared with DC and other EP methods. - Abstract: In order to estimate the remaining driving range (RDR) in electric vehicles, the remaining discharge energy (E_RDE) of the applied battery system needs to be precisely predicted. Strongly affected by the load profiles, the available E_RDE varies largely in real-world applications and requires specific determination. However, the commonly-used direct calculation (DC) method might result in certain energy prediction errors by relating the E_RDE directly to the current state of charge (SOC). To enhance the E_RDE accuracy, this paper presents a battery energy prediction (EP) method based on predictive control theory, in which a coupled prediction of future battery state variation, battery model parameter change, and voltage response is implemented on the E_RDE prediction horizon, and the E_RDE is subsequently accumulated and optimized in real time. Three EP approaches with different model parameter updating routes are introduced, and the predictive-adaptive energy prediction (PAEP) method, combining real-time parameter identification and future parameter prediction, offers the best potential. Based on a large-format lithium-ion battery, the performance of different E_RDE calculation methods is compared under various dynamic profiles. Results imply that the EP methods provide much better accuracy than the traditional DC method, and the PAEP could reduce the E_RDE error by more than 90% and guarantee the relative energy prediction error under 2%, proving to be a proper choice for online E_RDE prediction. The correlation of SOC estimation and E_RDE calculation is then discussed to illustrate the
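    In generic terms (the paper's formulation is more elaborate, coupling predicted states, parameters and voltage response), the remaining discharge energy accumulated over the prediction horizon can be written as:

    $$
    E_{\mathrm{RDE}}(t_0) \;=\; \int_{t_0}^{\,t_{\mathrm{end}}} \hat{V}(t)\,\hat{I}(t)\,\mathrm{d}t ,
    $$

    where the hatted voltage and current denote their predicted values over the remaining discharge and t_end is the predicted end-of-discharge time; this expression is a generic definition, not the authors' exact algorithm.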

  12. Predicting human height by Victorian and genomic methods.

    Science.gov (United States)

    Aulchenko, Yurii S; Struchalin, Maksim V; Belonogova, Nadezhda M; Axenovich, Tatiana I; Weedon, Michael N; Hofman, Albert; Uitterlinden, Andre G; Kayser, Manfred; Oostra, Ben A; van Duijn, Cornelia M; Janssens, A Cecile J W; Borodin, Pavel M

    2009-08-01

    In the Victorian era, Sir Francis Galton showed that 'when dealing with the transmission of stature from parents to children, the average height of the two parents, ... is all we need care to know about them' (1886). One hundred and twenty-two years after Galton's work was published, 54 loci showing strong statistical evidence for association to human height were described, providing us with potential genomic means of human height prediction. In a population-based study of 5748 people, we find that a 54-loci genomic profile explained 4-6% of the sex- and age-adjusted height variance, and had limited ability to discriminate tall/short people, as characterized by the area under the receiver-operating characteristic curve (AUC). In a family-based study of 550 people, with both parents having height measurements, we find that the Galtonian mid-parental prediction method explained 40% of the sex- and age-adjusted height variance, and showed high discriminative accuracy. We have also explored how much variance a genomic profile should explain to reach certain AUC values. For highly heritable traits such as height, we conclude that in applications in which parental phenotypic information is available (eg, medicine), the Victorian Galton's method will long stay unsurpassed, in terms of both discriminative accuracy and costs. For less heritable traits, and in situations in which parental information is not available (eg, forensics), genomic methods may provide an alternative, given that the variants determining an essential proportion of the trait's variation can be identified.
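    A minimal sketch of the mid-parental idea, fitting a simple regression of offspring height on mid-parental height; the data are synthetic and the coefficients are illustrative, not Galton's published values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 550
father = rng.normal(178, 7, n)
mother = rng.normal(165, 6, n)
midparent = (father + mother) / 2
# Synthetic offspring heights: regression toward the mean plus noise (assumed coefficients)
child = 171.5 + 0.7 * (midparent - midparent.mean()) + rng.normal(0, 5, n)

model = LinearRegression().fit(midparent.reshape(-1, 1), child)
r2 = model.score(midparent.reshape(-1, 1), child)
print(f"slope={model.coef_[0]:.2f}, explained variance R^2={r2:.2f}")
```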

  13. Methods and approaches to prediction in the meat industry

    Directory of Open Access Journals (Sweden)

    A. B. Lisitsyn

    2016-01-01

    Full Text Available The modern stage of the agro-industrial complex is characterized by increasing complexity and intensification of the technological processes for the complex processing of materials of animal origin, as well as the need for a systematic analysis of the variety of determining factors and the relationships between them, the complexity of the objective function of product quality, and severe restrictions on technological regimes. One of the main tasks facing the employees of agro-industrial enterprises engaged in processing biotechnological raw materials is, besides an increase in production volume, the further organizational improvement of work at all stages of the food chain. The meat industry, as a part of the agro-industrial complex, has to use biological raw materials with maximum efficiency, while reducing and even eliminating losses at all stages of processing; rationally use raw material when selecting a type of processing; steadily increase the quality, biological and food value of products; and broaden the assortment of manufactured products in order to satisfy increasing consumer requirements and extend the market for their sale under conditions of external uncertainty caused by the uneven receipt of raw materials, variations in their properties and parameters, limited sales time and fluctuations in demand. The challenges facing the meat industry cannot be solved without changes to the strategy for scientific and technological development of the industry. To achieve these tasks, it is necessary to use prediction as a method of constant improvement of all technological processes and their performance under rational and optimal regimes, while constantly controlling the quality of raw material, semi-prepared products and finished products at all stages of technological processing by physico-chemical, physico-mechanical (rheological), microbiological and organoleptic methods. The paper

  14. FREEZING AND THAWING TIME PREDICTION METHODS OF FOODS II: NUMERICAL METHODS

    Directory of Open Access Journals (Sweden)

    Yahya TÜLEK

    1999-03-01

    Full Text Available Freezing is one of the excellent methods for the preservation of foods. If the freezing and thawing processes and the frozen storage method are carried out correctly, the original characteristics of the foods can remain almost unchanged over an extended period of time. It is very important to determine the freezing and thawing times of foods, as they strongly influence both the quality of the food material and the process productivity and economy. For developing a simple and effectively usable mathematical model, a smaller number of process parameters and physical properties should be involved in the calculations. But it is difficult to have all of these in one prediction method. For this reason, various freezing and thawing time prediction methods have been proposed in the literature, and research studies are ongoing.

  15. Method of predicting Splice Sites based on signal interactions

    Directory of Open Access Journals (Sweden)

    Deogun Jitender S

    2006-04-01

    Full Text Available Abstract Background Predicting and properly ranking canonical splice sites (SSs) is a challenging problem in the bioinformatics and machine learning communities. Any progress in SS recognition will lead to better understanding of the splicing mechanism. We introduce several new approaches for combining a priori knowledge for improved SS detection. First, we design our new Bayesian SS sensor based on oligonucleotide counting. To further enhance prediction quality, we applied our new de novo motif detection tool MHMMotif to intronic ends and exons. We combine elements found with sensor information using a Naive Bayesian Network, as implemented in our new tool SpliceScan. Results According to our tests, the Bayesian sensor outperforms the contemporary Maximum Entropy sensor for 5' SS detection. We report a number of putative Exonic (ESE) and Intronic (ISE) Splicing Enhancers found by the MHMMotif tool. T-test statistics on mouse/rat intronic alignments indicate that the detected elements are on average more conserved compared to other oligos, which supports our assumption of their functional importance. The tool has been shown to outperform the SpliceView, GeneSplicer, NNSplice, Genio and NetUTR tools for the test set of human genes. SpliceScan outperforms all contemporary ab initio gene structural prediction tools on the set of 5' UTR gene fragments. Conclusion The designed methods have many attractive properties compared to existing approaches. The Bayesian sensor, MHMMotif program and SpliceScan tools are freely available on our web site. Reviewers This article was reviewed by Manyuan Long, Arcady Mushegian and Mikhail Gelfand.

  16. Evaluation of mathematical methods for predicting optimum dose of gamma radiation in sugarcane (Saccharum sp.)

    International Nuclear Information System (INIS)

    Wu, K.K.; Siddiqui, S.H.; Heinz, D.J.; Ladd, S.L.

    1978-01-01

    Two mathematical methods - the reversed logarithmic method and the regression method - were used to compare the predicted and the observed optimum gamma radiation dose (OD50) in vegetative propagules of sugarcane. The reversed logarithmic method, usually used in sexually propagated crops, showed the largest difference between the predicted and observed optimum dose. The regression method resulted in a better prediction of the observed values and is suggested as a better method for the prediction of optimum dose for vegetatively propagated crops. (author)

  17. PREDICTION OF MEAT PRODUCT QUALITY BY THE MATHEMATICAL PROGRAMMING METHODS

    Directory of Open Access Journals (Sweden)

    A. B. Lisitsyn

    2016-01-01

    Full Text Available Abstract Use of prediction technologies is one of the directions of the research work carried out both in Russia and abroad. Meat processing is accompanied by complex physico-chemical, biochemical and mechanical processes. To predict the behavior of meat raw material during technological processing, a complex of physico-technological and structural-mechanical indicators, which objectively reflects its quality, is used. Among these indicators are pH value, water binding and fat holding capacities, water activity, adhesiveness, viscosity, plasticity and so on. The paper demonstrates the influence of animal proteins (beef and pork) on the physico-chemical and functional properties before and after thermal treatment of minced meat made from meat raw material with different contents of connective and fat tissues. On the basis of the experimental data, the model (stochastic dependence) parameters linking the quantitative resultant and factor variables were obtained using regression analysis, and the degree of correlation with the experimental data was assessed. The maximum allowable levels of meat raw material replacement with animal proteins (beef and pork) were established by the methods of mathematical programming. Use of information technologies will significantly reduce the costs of the experimental search for, and substantiation of, the optimal level of replacement of meat raw material with animal proteins (beef, pork), and will also allow establishing a relationship of product quality indicators with the quantity and quality of minced meat ingredients.

  18. A comparison of different methods for predicting coal devolatilisation kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Arenillas, A.; Rubiera, F.; Pevida, C.; Pis, J.J. [Instituto Nacional del Carbon, CSIC, Apartado 73, 33080 Oviedo (Spain)

    2001-04-01

    Knowledge of the coal devolatilisation rate is of great importance because it exerts a marked effect on the overall combustion behaviour. Different approaches can be used to obtain the kinetics of the complex devolatilisation process. The simplest are empirical and employ global kinetics, where the Arrhenius expression is used to correlate rates of mass loss with temperature. In this study a high volatile bituminous coal was devolatilised at four different heating rates in a thermogravimetric analyser (TG) linked to a mass spectrometer (MS). As a first approach, the Arrhenius kinetic parameters (k and A) were calculated from the experimental results, assuming a single step process. Another approach is the distributed-activation energy model, which is more complex due to the assumption that devolatilisation occurs through several first-order reactions, which occur simultaneously. Recent advances in the understanding of coal structure have led to more fundamental approaches for modelling devolatilisation behaviour, such as network models. These are based on a physico-chemical description of coal structure. In the present study the FG-DVC (Functional Group-Depolymerisation, Vaporisation and Crosslinking) computer code was used as the network model and the FG-DVC predicted evolution of volatile compounds was compared with the experimental results. In addition, the predicted rate of mass loss from the FG-DVC model was used to obtain a third devolatilisation kinetic approach. The three methods were compared and discussed, with the experimental results as a reference.
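    For reference, the global single-step approach mentioned above rests on the Arrhenius expression for the rate constant; a common first-order formulation (the exact reaction order and parameter set used in the study are not restated here) is:

    $$
    k(T) = A\,\exp\!\left(-\frac{E_a}{R\,T}\right),
    \qquad
    \frac{\mathrm{d}X}{\mathrm{d}t} = k(T)\,\bigl(1 - X\bigr),
    $$

    where A is the pre-exponential factor, E_a the apparent activation energy, R the gas constant and X the devolatilised mass fraction; in the distributed-activation-energy model, E_a is instead treated as a distribution over many parallel first-order reactions.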

  19. Predicting lattice thermal conductivity with help from ab initio methods

    Science.gov (United States)

    Broido, David

    2015-03-01

    The lattice thermal conductivity is a fundamental transport parameter that determines the utility of a material for specific thermal management applications. Materials with low thermal conductivity find applicability in thermoelectric cooling and energy harvesting. High thermal conductivity materials are urgently needed to help address the ever-growing heat dissipation problem in microelectronic devices. Predictive computational approaches can provide critical guidance in the search for and development of new materials for such applications. Ab initio methods for calculating lattice thermal conductivity have demonstrated predictive capability, but while they are becoming increasingly efficient, they are still computationally expensive, particularly for complex crystals with large unit cells. In this talk, I will review our work on first principles phonon transport for which the intrinsic lattice thermal conductivity is limited only by phonon-phonon scattering arising from anharmonicity. I will examine use of the phase space for anharmonic phonon scattering and the Grüneisen parameters as measures of the thermal conductivities for a range of materials and compare these to the widely used guidelines stemming from the theory of Leibfried and Schlömann. This research was supported primarily by the NSF under Grant CBET-1402949, and by S3TEC, an Energy Frontier Research Center funded by the US DOE, Office of Basic Energy Sciences under Award No. DE-SC0001299.

  20. Development of nondestructive method for prediction of crack instability

    International Nuclear Information System (INIS)

    Schroeder, J.L.; Eylon, D.; Shell, E.B.; Matikas, T.E.

    2000-01-01

    A method to characterize the deformation zone at a crack tip and predict upcoming fracture under load using white light interference microscopy was developed and studied. Cracks were initiated in notched Ti-6Al-4V specimens through fatigue loading. Following crack initiation, specimens were subjected to static loading during in-situ observation of the deformation area ahead of the crack. Nondestructive in-situ observations were performed using white light interference microscopy. Profilometer measurements quantified the area, volume, and shape of the deformation ahead of the crack front. Results showed an exponential relationship between the area and volume of deformation and the stress intensity factor of the cracked alloy. These findings also indicate that it is possible to determine a critical rate of change in deformation versus the stress intensity factor that can predict oncoming catastrophic failure. In addition, crack front deformation zones were measured as a function of time under sustained load, and crack tip deformation zone enlargement over time was observed

  1. Extremely Randomized Machine Learning Methods for Compound Activity Prediction

    Directory of Open Access Journals (Sweden)

    Wojciech M. Czarnecki

    2015-11-01

    Full Text Available Speed, a relatively low requirement for computational resources and the high effectiveness of the evaluation of the bioactivity of compounds have caused a rapid growth of interest in the application of machine learning methods to virtual screening tasks. However, due to the growth of the amount of data in cheminformatics and related fields, the aim of research has shifted not only towards the development of algorithms of high predictive power but also towards the simplification of previously existing methods to obtain results more quickly. In the study, we tested two approaches belonging to the group of so-called 'extremely randomized methods' (Extreme Entropy Machine and Extremely Randomized Trees) for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their 'non-extreme' competitors, i.e., Support Vector Machine and Random Forest. The extreme approaches were not only found to improve the efficiency of the classification of bioactive compounds, but were also proved to be less computationally complex, requiring fewer steps to perform an optimization procedure.
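    A brief scikit-learn stand-in for the kind of comparison described above, pitting Extremely Randomized Trees against Random Forest on a synthetic activity-classification task; the real fingerprint data and the Extreme Entropy Machine are not reproduced here.

```python
from time import perf_counter
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for compound fingerprints labelled active/inactive
X, y = make_classification(n_samples=3000, n_features=200, n_informative=30,
                           random_state=0)

candidates = [
    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("Extremely Randomized Trees", ExtraTreesClassifier(n_estimators=200, random_state=0)),
]
for name, clf in candidates:
    start = perf_counter()
    acc = cross_val_score(clf, X, y, cv=3).mean()   # 3-fold cross-validated accuracy
    print(f"{name}: accuracy={acc:.3f}, time={perf_counter() - start:.1f}s")
```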

  2. Assessment method to predict the rate of unresolved false alarms

    International Nuclear Information System (INIS)

    Reardon, P.T.; Eggers, R.F.; Heaberlin, S.W.

    1982-06-01

    A method has been developed to predict the rate of unresolved false alarms of material loss in a nuclear facility. The computer program DETRES-1 was developed. The program first assigns the true values of the control unit components: receipts, shipments, and beginning and ending inventories. A normal random number generator is used to generate measured values of each component. A loss estimator is calculated from the control unit's measured values. If the loss estimator triggers a detection alarm, a response is simulated. The response simulation is divided into two phases. The first phase of response is to simulate remeasurement of the components of the detection loss estimator using the same or better measurement methods or inferences from surrounding control units. If this phase of response continues to indicate a material loss, a second phase of response, simulating a production shutdown and comprehensive cleanout, is initiated. A new loss estimator is found and tested against the alarm thresholds. If the estimator value is below the threshold, the original detection alarm is considered resolved; if above the threshold, an unresolved alarm has occurred. A tally is kept of valid alarms, unresolved false alarms, and failures to alarm upon a true loss.

  3. A novel time series link prediction method: Learning automata approach

    Science.gov (United States)

    Moradabadi, Behnaz; Meybodi, Mohammad Reza

    2017-09-01

    Link prediction is a main social network challenge that uses the network structure to predict future links. Common link prediction approaches use a static graph representation in which a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a common traditional approach that calculates a similarity metric for each non-connected link, sorts the links based on their similarity metrics and labels the links with higher similarity scores as the future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the networks changes over time, using deterministic graphs for modeling and analysis of the social network may not be appropriate. In the time-series link prediction problem, the time series of link occurrences is used to predict the future links. In this paper, we propose a new time series link prediction method based on learning automata. In the proposed algorithm, for each link that must be predicted there is one learning automaton, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series link occurrences are considered.
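    A minimal sketch of a two-action linear reward-inaction (L_RI) automaton of the kind the abstract alludes to, applied to guessing whether a single link appears at the next time step; the reward scheme, learning rate and synthetic link series are assumptions, not the authors' exact design.

```python
import numpy as np

class LinearRewardInactionLA:
    """Two-action learning automaton: action 1 = 'link exists', action 0 = 'absent'."""
    def __init__(self, learning_rate=0.1):
        self.p = np.array([0.5, 0.5])         # action probabilities
        self.a = learning_rate

    def choose(self, rng):
        return int(rng.random() < self.p[1])

    def update(self, action, rewarded):
        if rewarded:                           # L_RI: update probabilities only on reward
            self.p[action] += self.a * (1 - self.p[action])
            self.p[1 - action] = 1 - self.p[action]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Synthetic time series of one link's occurrences (present about 70% of the time)
    occurrences = (rng.random(500) < 0.7).astype(int)
    la, hits = LinearRewardInactionLA(0.05), 0
    for actual in occurrences:
        guess = la.choose(rng)
        la.update(guess, rewarded=(guess == actual))
        hits += guess == actual
    print(f"final P(link exists) = {la.p[1]:.2f}, accuracy = {hits / len(occurrences):.2f}")
```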

  4. Genomic prediction based on data from three layer lines: a comparison between linear methods

    NARCIS (Netherlands)

    Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.

    2014-01-01

    Background The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we

  5. Prediction of residual stress using explicit finite element method

    Directory of Open Access Journals (Sweden)

    W.A. Siswanto

    2015-12-01

    Full Text Available This paper presents the residual stress behaviour under various values of friction coefficient and scratching displacement amplitude. The investigation is based on a numerical solution using the explicit finite element method under quasi-static conditions. Two different aeroengine materials, i.e. Super CMV (Cr-Mo-V) and Titanium alloy (Ti-6Al-4V), are examined. The usage of FEM analysis for a plate under normal contact is validated against the Hertzian theoretical solution in terms of contact pressure distributions. The residual stress distributions along with normal and shear stresses in the elastic and plastic regimes of the materials are studied for a simple cylinder-on-flat contact configuration model subjected to normal loading, scratching and followed by unloading. The investigated friction coefficients are 0.3, 0.6 and 0.9, while the scratching displacement amplitudes are 0.05 mm, 0.10 mm and 0.20 mm, respectively. It is found that a friction coefficient of 0.6 results in higher residual stress for both materials. Meanwhile, the predicted residual stress is proportional to the scratching displacement amplitude; a higher displacement amplitude results in higher residual stress. Less residual stress is predicted for the Super CMV material compared to the Ti-6Al-4V material because of its high yield stress and ultimate strength. The Super CMV material with a friction coefficient of 0.3 and a scratching displacement amplitude of 0.10 mm is recommended for use in contact engineering applications due to its minimum possibility of fatigue.

  6. Analysis of the uranium price predicted to 24 months, implementing neural networks and the Monte Carlo method like predictive tools

    International Nuclear Information System (INIS)

    Esquivel E, J.; Ramirez S, J. R.; Palacios H, J. C.

    2011-11-01

    The present work shows predicted prices of uranium, using a neural network. Predicting the financial indexes of an energy resource is important because, in this case, it allows budgetary measures to be established, as well as the costs of the resource over the medium term. Uranium is one of the main energy-generating fuels and, as such, its price has an impact on financial analyses; for this reason, predictive methods are used to obtain an outline of the financial behaviour it will have over a certain time. In this study, two methodologies are used for the prediction of the uranium price: the Monte Carlo method and neural networks. These methods allow the monthly cost indexes to be predicted for a two-year period, starting from the second bimester of 2011. For the prediction, uranium costs recorded since 2005 are used. (Author)

  7. Experimental method to predict avalanches based on neural networks

    Directory of Open Access Journals (Sweden)

    V. V. Zhdanov

    2016-01-01

    Full Text Available The article presents the results of the experimental use of currently available statistical methods to classify avalanche-dangerous precipitation and snowfalls in the Kishi Almaty river basin. The avalanche service of Kazakhstan uses graphical methods for the prediction of avalanches developed by I.V. Kondrashov and E.I. Kolesnikov. The main objective of this work was to develop a modern model that could be used directly at the avalanche stations. Classification of winter precipitation into dangerous and non-dangerous snowfalls was performed in two ways: with a linear discriminant function (canonical analysis) and with artificial neural networks. Observational data on weather and avalanches in the Kishi Almaty gorge were used as a training sample. Coefficients for the canonical variables were calculated with the software «Statistica» (Russian version 6.0), and then the necessary formula was constructed. The accuracy of the above classification was 96%. A simulator by the authors L.N. Yasnitsky and F.M. Cherepanov was used to train the neural networks. The trained neural network demonstrated 98% classification accuracy. The prepared statistical models are recommended to be tested at the snow-avalanche stations. Results of the tests will be used for estimation of the model quality and its readiness for operational work. In the future, we plan to apply these models for classification of the avalanche danger on the five-point international scale.

  8. A METHOD OF PREDICTING BREAST CANCER USING QUESTIONNAIRES

    Directory of Open Access Journals (Sweden)

    V. N. Malashenko

    2017-01-01

    Full Text Available Purpose. To simplify and increase the accuracy of the questionnaire method of predicting breast cancer (BC) for subsequent computer processing and automated dispensary observation of risk groups without the direct involvement of a doctor. Materials and methods. The work was based on statistical data obtained by surveying 305 women. The questionnaire included 63 items: 17 open-ended questions and 46 with a choice of response. A multifactor model was established; in addition to the survey data, its development used materials from the medical histories of the patients and data from immunohistochemical studies of the respondents. Data analysis was performed using the Statistica 10.0 and MedCalc 12.7.0 programs. Results. ROC analysis was performed, and the questionnaire data revealed 8 significant predictors of breast cancer. On their basis we created a formula for calculating the prognostic factor of the risk of development of breast cancer, with a sensitivity of 83.12% and a specificity of 91.43%. Conclusions. The completed developments make it possible to create a computer program for automated processing of questionnaires for the formation of breast cancer risk groups and their clinical supervision. The introduction of a screening questionnaire over the Internet, with subsequent computer processing of the results without the direct involvement of doctors, will increase the coverage of the female population of the Russian Federation with activities related to the prevention of breast cancer. It can also free up physicians' time for primary patient appointments, as well as improve the oncological vigilance of the female population of the Russian Federation.

  9. The Comparison Study of Short-Term Prediction Methods to Enhance the Model Predictive Controller Applied to Microgrid Energy Management

    Directory of Open Access Journals (Sweden)

    César Hernández-Hernández

    2017-06-01

    Full Text Available Electricity load forecasting, optimal power system operation and energy management play key roles that can bring significant operational advantages to microgrids. This paper studies how methods based on time series and neural networks can be used to predict energy demand and production, allowing them to be combined with model predictive control. Comparisons of different prediction methods and different optimum energy distribution scenarios are provided, permitting us to determine when short-term energy prediction models should be used. The proposed prediction models in addition to the model predictive control strategy appear as a promising solution to energy management in microgrids. The controller has the task of performing the management of electricity purchase and sale to the power grid, maximizing the use of renewable energy sources and managing the use of the energy storage system. Simulations were performed with different weather conditions of solar irradiation. The obtained results are encouraging for future practical implementation.

  10. Skill forecasting from different wind power ensemble prediction methods

    International Nuclear Information System (INIS)

    Pinson, Pierre; Nielsen, Henrik A; Madsen, Henrik; Kariniotakis, George

    2007-01-01

    This paper presents an investigation of alternative approaches to providing uncertainty estimates associated with point predictions of wind generation. Focus is given to skill forecasts in the form of prediction risk indices, aiming at giving a comprehensive signal on the expected level of forecast uncertainty. Ensemble predictions of wind generation are used as input. A proposal for the definition of prediction risk indices is given. Such skill forecasts are based on the dispersion of ensemble members for a single prediction horizon, or over a set of successive look-ahead times. It is shown on the test case of a Danish offshore wind farm how prediction risk indices may be related to several levels of forecast uncertainty (and energy imbalances). Wind power ensemble predictions are derived from the transformation of ECMWF and NCEP ensembles of meteorological variables to power, as well as by an alternative lagged average approach. The ability of risk indices calculated from the various types of ensemble forecasts to resolve among situations with different levels of uncertainty is discussed.

  11. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm used to optimize the predictive model parameters, while an autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictive values of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused predictions is smaller than that of any single model, so the prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz and Mackey–Glass series. The simulation results show that the prediction method in this paper achieves better prediction accuracy.
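
    The sketch below illustrates only the fusion step named in the abstract, combining two unbiased predictors by inverse-variance (Gauss–Markov) weighting so that the fused error variance is smaller than either input's. The wavelet decomposition and the LSSVM/ARIMA component models are not reproduced; the synthetic series, the two surrogate predictors, and the use of a known truth to estimate the error variances are assumptions for illustration (in practice the variances would be estimated from past residuals).

```python
# Sketch of the fusion step only (assuming two unbiased predictors with
# independent errors).  Gauss–Markov / inverse-variance weighting combines
# their outputs so the fused error variance is no larger than either one's.
import numpy as np

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 20, 500))

pred_a = truth + rng.normal(0, 0.10, truth.size)   # surrogate for one model's output
pred_b = truth + rng.normal(0, 0.25, truth.size)   # surrogate for the other model's output

var_a = np.var(pred_a - truth)                      # here estimated against a known truth
var_b = np.var(pred_b - truth)
w_a = (1 / var_a) / (1 / var_a + 1 / var_b)         # inverse-variance weights
w_b = 1 - w_a
fused = w_a * pred_a + w_b * pred_b

for name, p in [("model A", pred_a), ("model B", pred_b), ("fused", fused)]:
    print(name, "error variance:", round(np.var(p - truth), 5))
```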

  12. PROXIMAL: a method for Prediction of Xenobiotic Metabolism.

    Science.gov (United States)

    Yousofshahi, Mona; Manteiga, Sara; Wu, Charmian; Lee, Kyongbum; Hassoun, Soha

    2015-12-22

    Contamination of the environment with bioactive chemicals has emerged as a potential public health risk. These substances that may cause distress or disease in humans can be found in air, water and food supplies. An open question is whether these chemicals transform into potentially more active or toxic derivatives via xenobiotic metabolizing enzymes expressed in the body. We present a new prediction tool, which we call PROXIMAL (Prediction of Xenobiotic Metabolism) for identifying possible transformation products of xenobiotic chemicals in the liver. Using reaction data from DrugBank and KEGG, PROXIMAL builds look-up tables that catalog the sites and types of structural modifications performed by Phase I and Phase II enzymes. Given a compound of interest, PROXIMAL searches for substructures that match the sites cataloged in the look-up tables, applies the corresponding modifications to generate a panel of possible transformation products, and ranks the products based on the activity and abundance of the enzymes involved. PROXIMAL generates transformations that are specific for the chemical of interest by analyzing the chemical's substructures. We evaluate the accuracy of PROXIMAL's predictions through case studies on two environmental chemicals with suspected endocrine disrupting activity, bisphenol A (BPA) and 4-chlorobiphenyl (PCB3). Comparisons with published reports confirm 5 out of 7 and 17 out of 26 of the predicted derivatives for BPA and PCB3, respectively. We also compare biotransformation predictions generated by PROXIMAL with those generated by METEOR and Metaprint2D-react, two other prediction tools. PROXIMAL can predict transformations of chemicals that contain substructures recognizable by human liver enzymes. It also has the ability to rank the predicted metabolites based on the activity and abundance of enzymes involved in xenobiotic transformation.
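
    A toy illustration of the look-up-table idea, not the PROXIMAL code: a small rule table maps a SMARTS substructure to a named transformation, and any rule whose pattern matches the query molecule yields a candidate product label. The rule set is invented; the RDKit calls and the bisphenol A SMILES are standard.

```python
# Toy sketch of substructure-driven transformation look-up (rule set invented).
from rdkit import Chem

rules = {
    "aromatic hydroxyl": ("c[OX2H]", "Phase II glucuronidation (UGT)"),
    "aromatic ring":     ("c1ccccc1", "Phase I aromatic hydroxylation (CYP)"),
}

bpa = Chem.MolFromSmiles("CC(C)(c1ccc(O)cc1)c1ccc(O)cc1")   # bisphenol A

for name, (smarts, transformation) in rules.items():
    if bpa.HasSubstructMatch(Chem.MolFromSmarts(smarts)):
        print(f"site: {name:18s} -> candidate: {transformation}")
```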

  13. Different protein-protein interface patterns predicted by different machine learning methods.

    Science.gov (United States)

    Wang, Wei; Yang, Yongxiao; Yin, Jianxin; Gong, Xinqi

    2017-11-22

    Different types of protein-protein interactions make different protein-protein interface patterns, and different machine learning methods are suited to different types of data. Is it then also the case that different interface patterns are preferentially predicted by different machine learning methods? Here, four machine learning methods were employed to predict protein-protein interface residue pairs for different interface patterns. The performance of the methods differs across protein types, which suggests that different machine learning methods tend to predict different protein-protein interface patterns. We used ANOVA and variable selection to support this result. Our proposed methods, which take advantage of several single methods, also achieved good prediction results compared with the single methods. Beyond the prediction of protein-protein interactions, this idea can be extended to other research areas such as protein structure prediction and design.

  14. Predicting Solar Activity Using Machine-Learning Methods

    Science.gov (United States)

    Bobra, M.

    2017-12-01

    Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections. However, we do not, as of yet, fully understand the physical mechanism that triggers solar eruptions. A machine-learning algorithm, which is favorable in cases where the amount of data is large, is one way to [1] empirically determine the signatures of this mechanism in solar image data and [2] use them to predict solar activity. In this talk, we discuss the application of various machine learning algorithms - specifically, a Support Vector Machine, a sparse linear regression (Lasso), and Convolutional Neural Network - to image data from the photosphere, chromosphere, transition region, and corona taken by instruments aboard the Solar Dynamics Observatory in order to predict solar activity on a variety of time scales. Such an approach may be useful since, at the present time, there are no physical models of flares available for real-time prediction. We discuss our results (Bobra and Couvidat, 2015; Bobra and Ilonidis, 2016; Jonas et al., 2017) as well as other attempts to predict flares using machine-learning (e.g. Ahmed et al., 2013; Nishizuka et al. 2017) and compare these results with the more traditional techniques used by the NOAA Space Weather Prediction Center (Crown, 2012). We also discuss some of the challenges in using machine-learning algorithms for space science applications.

  15. Predicting Plasma Glucose From Interstitial Glucose Observations Using Bayesian Methods

    DEFF Research Database (Denmark)

    Hansen, Alexander Hildenbrand; Duun-Henriksen, Anne Katrine; Juhl, Rune

    2014-01-01

    One way of constructing a control algorithm for an artificial pancreas is to identify a model capable of predicting plasma glucose (PG) from interstitial glucose (IG) observations. Stochastic differential equations (SDEs) make it possible to account both for the unknown influence of the continuous...... glucose monitor (CGM) and for unknown physiological influences. Combined with prior knowledge about the measurement devices, this approach can be used to obtain a robust predictive model. A stochastic-differential-equation-based gray box (SDE-GB) model is formulated on the basis of an identifiable...

  16. A comparison of methods of predicting maximum oxygen uptake.

    OpenAIRE

    Grant, S; Corbett, K; Amjad, A M; Wilson, J; Aitchison, T

    1995-01-01

    The aim of this study was to compare the results from a Cooper walk run test, a multistage shuttle run test, and a submaximal cycle test with the direct measurement of maximum oxygen uptake on a treadmill. Three predictive tests of maximum oxygen uptake--linear extrapolation of heart rate of VO2 collected from a submaximal cycle ergometer test (predicted L/E), the Cooper 12 min walk, run test, and a multi-stage progressive shuttle run test (MST)--were performed by 22 young healthy males (mean...

  17. What Predicts Method Effects in Child Behavior Ratings

    Science.gov (United States)

    Low, Justin A.; Keith, Timothy Z.; Jensen, Megan

    2015-01-01

    The purpose of this research was to determine whether child, parent, and teacher characteristics such as sex, socioeconomic status (SES), parental depressive symptoms, the number of years of teaching experience, number of children in the classroom, and teachers' disciplinary self-efficacy predict deviations from maternal ratings in a…

  18. A method for predicting the probability of business network profitability

    NARCIS (Netherlands)

    Johnson, P.; Iacob, Maria Eugenia; Välja, M.; van Sinderen, Marten J.; Magnusson, C; Ladhe, T.

    2014-01-01

    In the design phase of business collaboration, it is desirable to be able to predict the profitability of the business-to-be. Therefore, techniques to assess qualities such as costs, revenues, risks, and profitability have been previously proposed. However, they do not allow the modeler to properly

  19. Statistical tests for equal predictive ability across multiple forecasting methods

    DEFF Research Database (Denmark)

    Borup, Daniel; Thyrsgaard, Martin

    We develop a multivariate generalization of the Giacomini-White tests for equal conditional predictive ability. The tests are applicable to a mixture of nested and non-nested models, incorporate estimation uncertainty explicitly, and allow for misspecification of the forecasting model as well as ...

  20. Genomic breeding value prediction: methods and procedures

    NARCIS (Netherlands)

    Calus, M.P.L.

    2010-01-01

    Animal breeding faces one of the most significant changes of the past decades – the implementation of genomic selection. Genomic selection uses dense marker maps to predict the breeding value of animals with reported accuracies that are up to 0.31 higher than those of pedigree indexes, without the

  1. Link Prediction Methods and Their Accuracy for Different Social Networks and Network Metrics

    Directory of Open Access Journals (Sweden)

    Fei Gao

    2015-01-01

    Full Text Available Currently, we are experiencing a rapid growth of the number of social-based online systems. The availability of the vast amounts of data gathered in those systems brings new challenges that we face when trying to analyse it. One of the intensively researched topics is the prediction of social connections between users. Although a lot of effort has been made to develop new prediction approaches, the existing methods are not comprehensively analysed. In this paper we investigate the correlation between network metrics and accuracy of different prediction methods. We selected six time-stamped real-world social networks and ten most widely used link prediction methods. The results of the experiments show that the performance of some methods has a strong correlation with certain network metrics. We managed to distinguish “prediction friendly” networks, for which most of the prediction methods give good performance, as well as “prediction unfriendly” networks, for which most of the methods result in high prediction error. Correlation analysis between network metrics and prediction accuracy of prediction methods may form the basis of a metalearning system where based on network characteristics it will be able to recommend the right prediction method for a given network.
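
    As a concrete example of the similarity indices such studies compare, the sketch below scores a few candidate node pairs of a toy graph with three of networkx's built-in link-prediction functions; the choice of graph and node pairs is arbitrary.

```python
# Score candidate node pairs with three standard link-prediction indices.
import networkx as nx

G = nx.karate_club_graph()
candidates = [(0, 9), (2, 30), (5, 16)]     # arbitrary node pairs to score

for name, scorer in [
    ("Jaccard",                 nx.jaccard_coefficient),
    ("Adamic-Adar",             nx.adamic_adar_index),
    ("preferential attachment", nx.preferential_attachment),
]:
    scores = {(u, v): round(s, 3) for u, v, s in scorer(G, candidates)}
    print(name, scores)
```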

  2. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

    2012-01-01

    PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator

  3. Hybrid Prediction Method for Aircraft Interior Noise, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the project is research and development of methods for application of the Hybrid FE-SEA method to aircraft vibro-acoustic problems. This proposal...

  4. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    Science.gov (United States)

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  5. Prediction of Solvent Physical Properties using the Hierarchical Clustering Method

    Science.gov (United States)

    Recently a QSAR (Quantitative Structure Activity Relationship) method, the hierarchical clustering method, was developed to estimate acute toxicity values for large, diverse datasets. This methodology has now been applied to estimate solvent physical properties including sur...

  6. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
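
    A minimal sketch of the modelling set-up described, assuming synthetic data: a Bagging ensemble of regression trees predicts log10(Vss) from placeholder descriptors, and the result is scored with the geometric mean fold error, 10 raised to the mean absolute log10 prediction error (one common definition; the paper's exact protocol may differ).

```python
# Minimal sketch (synthetic data): Bagging of regression trees for log10(Vss),
# evaluated with a geometric mean fold error, 10**mean(|log10(pred/obs)|).
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 12))                 # descriptors + predicted Kt:p (placeholders)
log_vss = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, 300)   # synthetic log10 Vss

X_tr, X_te, y_tr, y_te = train_test_split(X, log_vss, random_state=0)
model = BaggingRegressor(DecisionTreeRegressor(max_depth=6), n_estimators=100,
                         random_state=0).fit(X_tr, y_tr)

fold_errors = np.abs(model.predict(X_te) - y_te)        # |log10(pred/obs)|
print("mean fold error:", round(10 ** fold_errors.mean(), 2))
```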

  7. Prediction of Human Drug Targets and Their Interactions Using Machine Learning Methods: Current and Future Perspectives.

    Science.gov (United States)

    Nath, Abhigyan; Kumari, Priyanka; Chaube, Radha

    2018-01-01

    Identification of drug targets and drug target interactions are important steps in the drug-discovery pipeline. Successful computational prediction methods can reduce the cost and time demanded by the experimental methods. Knowledge of putative drug targets and their interactions can be very useful for drug repurposing. Supervised machine learning methods have been very useful in drug target prediction and in prediction of drug target interactions. Here, we describe the details for developing prediction models using supervised learning techniques for human drug target prediction and their interactions.

  8. Theoretical prediction method of subcooled flow boiling CHF

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min; Chang, Soon Heung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1999-12-31

    A theoretical critical heat flux (CHF) model, based on lateral bubble coalescence on the heated wall, is proposed to predict the subcooled flow boiling CHF in a uniformly heated vertical tube. The model is based on the concept that a single layer of bubbles contacted to the heated wall prevents a bulk liquid from reaching the wall at near CHF condition. Comparisons between the model predictions and experimental data result in satisfactory agreement within less than 9.73% root-mean-square error by the appropriate choice of the critical void fraction in the bubbly layer. The present model shows comparable performance with the CHF look-up table of Groeneveld et al. 28 refs., 11 figs., 1 tab. (Author)

  9. Machine learning methods in predicting the student academic motivation

    Directory of Open Access Journals (Sweden)

    Ivana Đurđević Babić

    2017-01-01

    Full Text Available Academic motivation is closely related to academic performance. For educators, it is equally important to detect early students with a lack of academic motivation as it is to detect those with a high level of academic motivation. In endeavouring to develop a classification model for predicting student academic motivation based on their behaviour in learning management system (LMS) courses, this paper intends to establish links between the predicted student academic motivation and their behaviour in the LMS course. Students from all years at the Faculty of Education in Osijek participated in this research. Three machine learning classifiers (neural networks, decision trees, and support vector machines) were used. To establish whether a significant difference in the performance of the models exists, a t-test of the difference in proportions was used. Although all classifiers were successful, the neural network model was shown to be the most successful in detecting student academic motivation based on their behaviour in the LMS course.

  10. Theoretical prediction method of subcooled flow boiling CHF

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min; Chang, Soon Heung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A theoretical critical heat flux (CHF) model, based on lateral bubble coalescence on the heated wall, is proposed to predict the subcooled flow boiling CHF in a uniformly heated vertical tube. The model is based on the concept that a single layer of bubbles contacted to the heated wall prevents a bulk liquid from reaching the wall at near CHF condition. Comparisons between the model predictions and experimental data result in satisfactory agreement within less than 9.73% root-mean-square error by the appropriate choice of the critical void fraction in the bubbly layer. The present model shows comparable performance with the CHF look-up table of Groeneveld et al. 28 refs., 11 figs., 1 tab. (Author)

  11. Improved Methods for Pitch Synchronous Linear Prediction Analysis of Speech

    OpenAIRE

    劉, 麗清

    2015-01-01

    Linear prediction (LP) analysis has been applied to speech system over the last few decades. LP technique is well-suited for speech analysis due to its ability to model speech production process approximately. Hence LP analysis has been widely used for speech enhancement, low-bit-rate speech coding in cellular telephony, speech recognition, characteristic parameter extraction (vocal tract resonances frequencies, fundamental frequency called pitch) and so on. However, the performance of the co...

  12. Development of an integrated method for long-term water quality prediction using seasonal climate forecast

    Directory of Open Access Journals (Sweden)

    J. Cho

    2016-10-01

    Full Text Available The APEC Climate Center (APCC) produces climate prediction information utilizing a multi-climate model ensemble (MME) technique. In this study, four different downscaling methods, in accordance with the degree to which the seasonal climate prediction information is utilized, were developed in order to improve predictability and to refine the spatial scale. These methods include: (1) the Simple Bias Correction (SBC) method, which directly uses APCC's dynamic prediction data with a 3 to 6 month lead time; (2) the Moving Window Regression (MWR) method, which indirectly utilizes dynamic prediction data; (3) the Climate Index Regression (CIR) method, which predominantly uses observation-based climate indices; and (4) the Integrated Time Regression (ITR) method, which uses predictors selected from both CIR and MWR. Then, a sampling-based temporal downscaling was conducted using the Mahalanobis distance method in order to create daily weather inputs to the Soil and Water Assessment Tool (SWAT) model. Long-term predictability of water quality within the Wecheon watershed of the Nakdong River Basin was evaluated. According to the Korean Ministry of Environment's Provisions of Water Quality Prediction and Response Measures, modeling-based predictability was evaluated by using 3-month lead prediction data issued in February, May, August, and November as model input to SWAT. Finally, an integrated approach, which takes into account various climate information and downscaling methods for water quality prediction, was presented. This integrated approach can be used to prevent potential problems caused by extreme climate in advance.
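
    Of the four downscaling methods, the Simple Bias Correction is the easiest to illustrate. The sketch below applies a mean bias correction: the new forecast is shifted by the average difference between hindcasts and observations over a training period. All numbers are placeholders, not APCC or SWAT data.

```python
# Hedged sketch of a mean bias correction (SBC-like).  Values are placeholders.
import numpy as np

obs_train = np.array([120., 135., 110., 150., 128.])   # observed seasonal rainfall (mm)
hindcast  = np.array([100., 118., 102., 131., 115.])   # model hindcasts, same seasons
forecast  = np.array([109., 125., 140.])               # new raw forecasts

bias = (obs_train - hindcast).mean()                   # systematic model bias
corrected = forecast + bias
print("estimated bias:", round(bias, 1), "mm")
print("bias-corrected forecast:", corrected)
```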

  13. Simple methods for predicting gas leakage flows through cracks

    International Nuclear Information System (INIS)

    Ewing, D.J.F.

    1989-01-01

    This report presents closed-form approximate analytical formulae with which the flow rate out of a through-wall crack can be estimated. The crack is idealised as a rough, tapering, wedge-shaped channel and the fluid is idealised as an isothermal or polytropically-expanding perfect gas. In practice, uncertainties about the wall friction factor dominate over uncertainties caused by the fluid-dynamics simplifications. The formulae take account of crack taper, and for outwardly-diverging cracks they predict flows within 12% of mathematically more accurate one-dimensional numerical models. Upper and lower estimates of wall friction are discussed. (author)

  14. Underwater Sound Propagation Modeling Methods for Predicting Marine Animal Exposure.

    Science.gov (United States)

    Hamm, Craig A; McCammon, Diana F; Taillefer, Martin L

    2016-01-01

    The offshore exploration and production (E&P) industry requires comprehensive and accurate ocean acoustic models for determining the exposure of marine life to the high levels of sound used in seismic surveys and other E&P activities. This paper reviews the types of acoustic models most useful for predicting the propagation of undersea noise sources and describes current exposure models. The severe problems caused by model sensitivity to the uncertainty in the environment are highlighted to support the conclusion that it is vital that risk assessments include transmission loss estimates with statistical measures of confidence.

  15. Specification and prediction of nickel mobilization using artificial intelligence methods

    Science.gov (United States)

    Gholami, Raoof; Ziaii, Mansour; Ardejani, Faramarz Doulati; Maleki, Shahoo

    2011-12-01

    Groundwater and soil pollution from pyrite oxidation, acid mine drainage generation, and the release and transport of toxic metals are common environmental problems associated with the mining industry. Nickel is a toxic metal considered to be a key pollutant in some mining settings; to date, its formation mechanism has not been fully evaluated. The goals of this study are 1) to describe the process of nickel mobilization in waste dumps by introducing a novel conceptual model, and 2) to predict nickel concentration using two algorithms, namely the support vector machine (SVM) and the general regression neural network (GRNN). The results obtained from this study show that a considerable amount of nickel can enter the water flow system during the oxidation of pyrite and the subsequent generation of acid mine drainage (AMD). It was concluded that pyrite, water, and oxygen are the most important factors for nickel pollution generation, while pH, SO4, HCO3, TDS, EC, Mg, Fe, Zn, and Cu are measured quantities that play a significant role in nickel mobilization. SVM and GRNN predicted nickel concentration with a high degree of accuracy. Hence, SVM and GRNN can be considered appropriate tools for environmental risk assessment.

  16. Verifying a computational method for predicting extreme ground motion

    Science.gov (United States)

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  17. Using deuterated PAH amendments to validate chemical extraction methods to predict PAH bioavailability in soils

    International Nuclear Information System (INIS)

    Gomez-Eyles, Jose L.; Collins, Chris D.; Hodson, Mark E.

    2011-01-01

    Validating chemical methods to predict bioavailable fractions of polycyclic aromatic hydrocarbons (PAHs) by comparison with accumulation bioassays is problematic. Concentrations accumulated in soil organisms not only depend on the bioavailable fraction but also on contaminant properties. A historically contaminated soil was freshly spiked with deuterated PAHs (dPAHs). dPAHs have a similar fate to their respective undeuterated analogues, so chemical methods that give good indications of bioavailability should extract the fresh more readily available dPAHs and historic more recalcitrant PAHs in similar proportions to those in which they are accumulated in the tissues of test organisms. Cyclodextrin and butanol extractions predicted the bioavailable fraction for earthworms (Eisenia fetida) and plants (Lolium multiflorum) better than the exhaustive extraction. The PAHs accumulated by earthworms had a larger dPAH:PAH ratio than that predicted by chemical methods. The isotope ratio method described here provides an effective way of evaluating other chemical methods to predict bioavailability. - Research highlights: → Isotope ratios can be used to evaluate chemical methods to predict bioavailability. → Chemical methods predicted bioavailability better than exhaustive extractions. → Bioavailability to earthworms was still far from that predicted by chemical methods. - A novel method using isotope ratios to assess the ability of chemical methods to predict PAH bioavailability to soil biota.

  18. Using deuterated PAH amendments to validate chemical extraction methods to predict PAH bioavailability in soils

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Eyles, Jose L., E-mail: j.l.gomezeyles@reading.ac.uk [University of Reading, School of Human and Environmental Sciences, Soil Research Centre, Reading, RG6 6DW Berkshire (United Kingdom); Collins, Chris D.; Hodson, Mark E. [University of Reading, School of Human and Environmental Sciences, Soil Research Centre, Reading, RG6 6DW Berkshire (United Kingdom)

    2011-04-15

    Validating chemical methods to predict bioavailable fractions of polycyclic aromatic hydrocarbons (PAHs) by comparison with accumulation bioassays is problematic. Concentrations accumulated in soil organisms not only depend on the bioavailable fraction but also on contaminant properties. A historically contaminated soil was freshly spiked with deuterated PAHs (dPAHs). dPAHs have a similar fate to their respective undeuterated analogues, so chemical methods that give good indications of bioavailability should extract the fresh more readily available dPAHs and historic more recalcitrant PAHs in similar proportions to those in which they are accumulated in the tissues of test organisms. Cyclodextrin and butanol extractions predicted the bioavailable fraction for earthworms (Eisenia fetida) and plants (Lolium multiflorum) better than the exhaustive extraction. The PAHs accumulated by earthworms had a larger dPAH:PAH ratio than that predicted by chemical methods. The isotope ratio method described here provides an effective way of evaluating other chemical methods to predict bioavailability. - Research highlights: > Isotope ratios can be used to evaluate chemical methods to predict bioavailability. > Chemical methods predicted bioavailability better than exhaustive extractions. > Bioavailability to earthworms was still far from that predicted by chemical methods. - A novel method using isotope ratios to assess the ability of chemical methods to predict PAH bioavailability to soil biota.

  19. Method to predict process signals to learn for SVM

    International Nuclear Information System (INIS)

    Minowa, Hirotsugu; Gofuku, Akio

    2013-01-01

    Research on diagnostic systems that use machine learning to reduce plant incidents is advancing, because an accident causes large human, economic and social losses. A known problem is that two performance measures of a diagnostic machine, classification performance and generalization performance, tend to be mutually exclusive. A multi-agent diagnostic system, however, makes it possible to use diagnostic machines that are each specialized in one of these performances. We propose a method to select optimized variables in order to improve classification performance. The method can also be used for supervised learning machines other than the Support Vector Machine. This paper reports our method and the results of an evaluation experiment in which the method was applied to the 40% output of Monju. (author)

  20. Kinetic mesh-free method for flutter prediction in turbomachines

    Indian Academy of Sciences (India)

    -based mesh-free method for unsteady flows. ... Council for Scientific and Industrial Research, National Aerospace Laboratories, Computational and Theoretical Fluid Dynamics Division, Bangalore 560 017, India; Engineering Mechanics Unit, ...

  1. Computer prediction of subsurface radionuclide transport: an adaptive numerical method

    International Nuclear Information System (INIS)

    Neuman, S.P.

    1983-01-01

    Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of one- and two-dimensional dispersion in a uniform steady state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1.

  2. Comparison of four statistical and machine learning methods for crash severity prediction.

    Science.gov (United States)

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash-cost-based approach for comparison of crash severity prediction methods; and investigating the effects of data clustering methods, comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States was obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate, and a proposed crash-cost-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM had the next best performances, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate showed almost exactly the opposite results compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
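
    A sketch of the comparison set-up, using synthetic data and sklearn stand-ins for the four methods (multinomial logistic regression for MNL, k-nearest neighbours for NNC, plus SVM and RF): each model is trained on the same split and scored both by overall accuracy and by a crash-cost-weighted measure. The three-level cost vector is purely illustrative.

```python
# Sketch (synthetic data): train four classifiers on the same split and score
# them by overall accuracy and by a hypothetical cost-weighted accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

cost = np.array([1.0, 10.0, 100.0])     # hypothetical cost per severity level

models = {
    "MNL-like (logistic)": LogisticRegression(max_iter=1000),
    "NNC (k-NN)":          KNeighborsClassifier(),
    "SVM":                 SVC(),
    "RF":                  RandomForestClassifier(random_state=0),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    acc = (pred == y_te).mean()
    # correct predictions weighted by the cost of the true severity class
    cw = (cost[y_te] * (pred == y_te)).sum() / cost[y_te].sum()
    print(f"{name:20s} accuracy={acc:.3f} cost-weighted={cw:.3f}")
```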

  3. Performance prediction of electrohydrodynamic thrusters by the perturbation method

    International Nuclear Information System (INIS)

    Shibata, H.; Watanabe, Y.; Suzuki, K.

    2016-01-01

    In this paper, we present a novel method for analyzing electrohydrodynamic (EHD) thrusters. The method is based on a perturbation technique applied to a set of drift-diffusion equations, similar to the one introduced in our previous study on estimating breakdown voltage. The thrust-to-current ratio is generalized to represent the performance of EHD thrusters. We have compared the thrust-to-current ratio obtained theoretically with that obtained from the proposed method under atmospheric air conditions, and we have obtained good quantitative agreement. Also, we have conducted a numerical simulation in more complex thruster geometries, such as the dual-stage thruster developed by Masuyama and Barrett [Proc. R. Soc. A 469, 20120623 (2013)]. We quantitatively clarify the fact that if the magnitude of a third electrode voltage is low, the effective gap distance shortens, whereas if the magnitude of the third electrode voltage is sufficiently high, the effective gap distance lengthens.

  4. Predicting and explaining inflammation in Crohn's disease patients using predictive analytics methods and electronic medical record data.

    Science.gov (United States)

    Reddy, Bhargava K; Delen, Dursun; Agrawal, Rupesh K

    2018-01-01

    Crohn's disease is among the chronic inflammatory bowel diseases that impact the gastrointestinal tract. Understanding and predicting the severity of inflammation in real-time settings is critical to disease management. Extant literature has primarily focused on studies that are conducted in clinical trial settings to investigate the impact of a drug treatment on the remission status of the disease. This research proposes an analytics methodology where three different types of prediction models are developed to predict and to explain the severity of inflammation in patients diagnosed with Crohn's disease. The results show that machine-learning-based analytic methods such as gradient boosting machines can predict the inflammation severity with a very high accuracy (area under the curve = 92.82%), followed by regularized regression and logistic regression. According to the findings, a combination of baseline laboratory parameters, patient demographic characteristics, and disease location are among the strongest predictors of inflammation severity in Crohn's disease patients.

  5. A critical pressure based panel method for prediction of unsteady loading of marine propellers under cavitation

    International Nuclear Information System (INIS)

    Liu, P.; Bose, N.; Colbourne, B.

    2002-01-01

    A simple numerical procedure is established and implemented into a time domain panel method to predict hydrodynamic performance of marine propellers with sheet cavitation. This paper describes the numerical formulations and procedures to construct this integration. Predicted hydrodynamic loads were compared with both a previous numerical model and experimental measurements for a propeller in steady flow. The current method gives a substantial improvement in thrust and torque coefficient prediction over a previous numerical method at low cavitation numbers of less than 2.0, where severe cavitation occurs. Predicted pressure coefficient distributions are also presented. (author)

  6. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    Science.gov (United States)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and Pole motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The obtained results show that the proposed method provides better accuracy at different prediction lengths.

  7. Methods to compute reliabilities for genomic predictions of feed intake

    Science.gov (United States)

    For new traits without historical reference data, cross-validation is often the preferred method to validate reliability (REL). Time truncation is less useful because few animals gain substantial REL after the truncation point. Accurate cross-validation requires separating genomic gain from pedigree...

  8. Prediction of IRI in short and long terms for flexible pavements: ANN and GMDH methods

    NARCIS (Netherlands)

    Ziari, H.; Sobhani, J.; Ayoubinejad, J.; Hartmann, Timo

    2015-01-01

    Prediction of pavement condition is one of the most important issues in pavement management systems. In this paper, capabilities of artificial neural networks (ANNs) and group method of data handling (GMDH) methods in predicting flexible pavement conditions were analysed in three levels: in 1 year,

  9. Ensemble approach combining multiple methods improves human transcription start site prediction.

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-01-01

    The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.

  10. Modification of an Existing In vitro Method to Predict Relative Bioavailable Arsenic in Soils

    Science.gov (United States)

    The soil matrix can sequester arsenic (As) and reduce exposure via soil ingestion. In vivo dosing studies and in vitro gastrointestinal (IVG) methods have been used to predict relative bioavailable (RBA) As. Originally, the Ohio State University (OSU-IVG) method predicted R...

  11. Signal predictions for a proposed fast neutron interrogation method

    International Nuclear Information System (INIS)

    Sale, K.E.

    1992-12-01

    We have applied the Monte Carlo radiation transport code (COG) to assess the utility of a proposed explosives detection scheme based on neutron emission. In this scheme a pulsed neutron beam is generated by an approximately seven MeV deuteron beam incident on a thick Be target. A scintillation detector operating in current mode measures the neutrons transmitted through the object as a function of time. The flight time of unscattered neutrons from the source to the detector is simply related to the neutron energy. This information, along with neutron cross section excitation functions, is used to infer the densities of H, C, N and O in the volume sampled. The code we have chosen to use enables us to create very detailed and realistic models of the geometrical configuration of the system, the neutron source and the detector response. By calculating the signals that will be observed for several configurations and compositions of the interrogated object, we can investigate and begin to understand how a system that could actually be fielded will perform. Using this modeling capability, many questions can be addressed early on, with substantial savings in time and cost and with improvements in performance. We will present our signal predictions for simple single-element test cases and for explosive compositions. From these studies it is clear that the interpretation of the signals from such an explosives identification system will pose a substantial challenge.

  12. An SEU rate prediction method for microprocessors of space applications

    International Nuclear Information System (INIS)

    Gao Jie; Li Qiang

    2012-01-01

    In this article, the relationship between the static SEU (Single Event Upset) rate and the dynamic SEU rate in microprocessors for satellites is studied using the process duty cycle concept and the fault injection technique. The results are compared to in-orbit flight monitoring data. The results show that the dynamic SEU rate obtained using the process duty cycle gives a reasonable estimate of the in-orbit SEU rate of a microprocessor, and that the fault injection technique is a workable method for estimating the SEU rate. (authors)

  13. Experimentally aided development of a turbine heat transfer prediction method

    International Nuclear Information System (INIS)

    Forest, A.E.; White, A.J.; Lai, C.C.; Guo, S.M.; Oldfield, M.L.G.; Lock, G.D.

    2004-01-01

    In the design of cooled turbomachinery blading a central role is played by the computer methods used to optimise the aerodynamic and thermal performance of the turbine aerofoils. Estimates of the heat load on the turbine blading should be as accurate as possible, in order that adequate life may be obtained with the minimum cooling air requirement. Computer methods are required which are able to model transonic flows, which are a mixture of high temperature combustion gases and relatively cool air injected through holes in the aerofoil surface. These holes may be of complex geometry, devised after empirical studies of the optimum shape and the most cost effective manufacturing technology. The method used here is a further development of the heat transfer design code (HTDC), originally written by Rolls-Royce plc under subcontract to Rolls-Royce Inc for the United States Air Force. The physical principles of the modelling employed in the code are explained without extensive mathematical details. The paper describes the calibration of the code in conjunction with a series of experimental measurements on a scale model of a high-pressure nozzle guide vane at non-dimensionally correct engine conditions. The results are encouraging, although indicating that some further work is required in modelling highly accelerated pressure surface flow

  14. Prediction of periodically correlated processes by wavelet transform and multivariate methods with applications to climatological data

    Science.gov (United States)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2015-05-01

    This article studies the prediction of periodically correlated process using wavelet transform and multivariate methods with applications to climatological data. Periodically correlated processes can be reformulated as multivariate stationary processes. Considering this fact, two new prediction methods are proposed. In the first method, we use stepwise regression between the principal components of the multivariate stationary process and past wavelet coefficients of the process to get a prediction. In the second method, we propose its multivariate version without principal component analysis a priori. Also, we study a generalization of the prediction methods dealing with a deterministic trend using exponential smoothing. Finally, we illustrate the performance of the proposed methods on simulated and real climatological data (ozone amounts, flows of a river, solar radiation, and sea levels) compared with the multivariate autoregressive model. The proposed methods give good results as we expected.

  15. An Electrochemical Method to Predict Corrosion Rates in Soils

    Energy Technology Data Exchange (ETDEWEB)

    Dafter, M. R. [Hunter Water Australia Pty Ltd, Newcastle (Australia)

    2016-10-15

    Linear polarization resistance (LPR) testing of soils has been used extensively by a number of water utilities across Australia for many years now to determine the condition of buried ferrous water mains. The LPR test itself is a relatively simple, inexpensive test that serves as a substitute for actual exhumation and physical inspection of buried water mains to determine corrosion losses. LPR testing results (and the corresponding pit depth estimates) in combination with proprietary pipe failure algorithms can provide a useful predictive tool in determining the current and future conditions of an asset [1]. A number of LPR tests have been developed on soil by various researchers over the years [1], but few have gained widespread commercial use, partly due to the difficulty in replicating the results. This author developed an electrochemical cell that was suitable for LPR soil testing and utilized this cell to test a series of soil samples obtained through an extensive program of field exhumations. The objective of this testing was to examine the relationship between short-term electrochemical testing and long-term in-situ corrosion of buried water mains, utilizing an LPR test that could be robustly replicated. Forty-one soil samples and related corrosion data were obtained from ad hoc condition assessments of buried water mains located throughout the Hunter region of New South Wales, Australia. Each sample was subjected to the electrochemical test developed by the author, and the resulting polarization data were compared with long-term pitting data obtained from each water main. The results of this testing program enabled the author to undertake a comprehensive review of the LPR technique as it is applied to soils and to examine whether correlations can be made between LPR testing results and long-term field corrosion.

  16. [Predictive methods versus clinical titration for the initiation of lithium therapy. A systematic review].

    Science.gov (United States)

    Geeraerts, I; Sienaert, P

    2013-01-01

    When lithium is administered, the clinician needs to know when the lithium in the patient’s blood has reached a therapeutic level. At the initiation of treatment the level is usually achieved gradually through the application of the titration method. In order to increase the efficacy of this procedure several methods for dosing lithium and for predicting lithium levels have been developed. To conduct a systematic review of the publications relating to the various methods for dosing lithium or predicting lithium levels at the initiation of therapy. We searched Medline systematically for articles published in English, French or Dutch between 1966 and April 2012 which described or studied a method for dosing lithium or for predicting the lithium level reached following a specific dosage. We screened the reference lists of relevant articles in order to locate additional papers. We found 38 lithium prediction methods, in addition to the clinical titration method. These methods can be divided into two categories: the ‘a priori’ methods and the ‘test-dose’ methods, the latter requiring the administration of a test dose of lithium. The lithium prediction methods generally achieve a therapeutic blood level faster than the clinical titration method, but none of the methods achieves convincing results. On the basis of our review, we propose that the titration method should be used as the standard method in clinical practice.

  17. Ensemble Methods in Data Mining Improving Accuracy Through Combining Predictions

    CERN Document Server

    Seni, Giovanni

    2010-01-01

    This book is aimed at novice and advanced analytic researchers and practitioners -- especially in Engineering, Statistics, and Computer Science. Those with little exposure to ensembles will learn why and how to employ this breakthrough method, and advanced practitioners will gain insight into building even more powerful models. Throughout, snippets of code in R are provided to illustrate the algorithms described and to encourage the reader to try the techniques. The authors are industry experts in data mining and machine learning who are also adjunct professors and popular speakers. Although e

  18. A Novel Method to Predict Circulation Control Noise

    Science.gov (United States)

    2016-03-17

    [Abstract garbled in the source record; the recoverable fragments note that continuity and incompressibility of the flow are automatically satisfied (Sirovich, 1987), and refer to figure panels at 1, 2, 4, 8, 12, and 16 kHz comparing blowing conditions (Cμ = 0, 0.004, and 0.017), whose levels are quite similar at low frequencies.]

  19. Shelf life prediction of apple brownies using accelerated method

    Science.gov (United States)

    Pulungan, M. H.; Sukmana, A. D.; Dewi, I. A.

    2018-03-01

    The aim of this research was to determine the shelf life of apple brownies. Shelf life was determined with the Accelerated Shelf Life Testing (ASLT) method and the Arrhenius equation. The experiment was conducted at 25, 35, and 45°C for 30 days. Every five days, the samples were analysed for free fatty acid (FFA), water activity (Aw), and organoleptic acceptance (flavour, aroma, and texture). The shelf lives of the apple brownies based on FFA were 110, 54, and 28 days at temperatures of 25, 35, and 45°C, respectively.
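
    A worked sketch of the ASLT/Arrhenius calculation implied by the abstract: assuming the reaction rate constant is inversely proportional to the FFA-based shelf life at each storage temperature, ln k is regressed against 1/T and the fitted line is used to extrapolate shelf life at another temperature. The 30 °C target temperature is an arbitrary example.

```python
# Arrhenius extrapolation of shelf life.  The shelf lives are the values quoted
# in the abstract; the assumption k ~ 1/shelf-life is the usual ASLT shortcut.
import numpy as np

T = np.array([25.0, 35.0, 45.0]) + 273.15        # storage temperatures (K)
shelf_life_days = np.array([110.0, 54.0, 28.0])  # FFA-based shelf lives
k = 1.0 / shelf_life_days                        # rate assumed inversely proportional to shelf life

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)   # ln k = ln A - Ea/(R T)
Ea = -slope * 8.314                                     # activation energy, J/mol

T_target = 30.0 + 273.15
k_target = np.exp(intercept + slope / T_target)
print("activation energy ~", round(Ea / 1000, 1), "kJ/mol")
print("predicted shelf life at 30 degC ~", round(1.0 / k_target), "days")
```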

  20. Short-term prediction method of wind speed series based on fractal interpolation

    International Nuclear Information System (INIS)

    Xiu, Chunbo; Wang, Tiantian; Tian, Meng; Li, Yanqing; Cheng, Yi

    2014-01-01

    Highlights: • An improved fractal interpolation prediction method is proposed. • The chaos optimization algorithm is used to obtain the iterated function system. • The fractal extrapolate interpolation prediction of wind speed series is performed. - Abstract: In order to improve the prediction performance for wind speed series, rescaled range analysis is used to analyze the fractal characteristics of the wind speed series. An improved fractal interpolation prediction method is proposed to predict wind speed series whose Hurst exponents are close to 1. An optimization function is designed which is composed of the interpolation error and the constraint items on the vertical scaling factors in the fractal interpolation iterated function system. The chaos optimization algorithm is used to optimize this function and to resolve the optimal vertical scaling factors. According to the self-similarity characteristic and scale invariance, the fractal extrapolate interpolation prediction can be performed by extending the fractal characteristic from the internal interval to the external interval. Simulation results show that the fractal interpolation prediction method obtains better prediction results than other methods for wind speed series with a fractal characteristic, and that the prediction performance of the proposed method can be improved further because the fractal characteristic of its iterated function system is similar to that of the predicted wind speed series.
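
    The sketch below renders an affine fractal interpolation function by the chaos game for a fixed set of interpolation points and vertical scaling factors d_i. It illustrates only the iterated function system itself; the paper's chaos optimization of the d_i and the extrapolation step are not reproduced, and the data points are invented.

```python
# Minimal sketch of affine fractal interpolation (chaos-game rendering).
import numpy as np

# interpolation points (e.g. a short wind-speed record) and vertical scaling factors
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([5.0, 7.5, 6.0, 8.0])
d = np.array([0.3, -0.2, 0.4])          # |d_i| < 1, one per interval (fixed here)

N = len(x) - 1
a = (x[1:] - x[:-1]) / (x[-1] - x[0])
e = (x[-1] * x[:-1] - x[0] * x[1:]) / (x[-1] - x[0])
c = (y[1:] - y[:-1] - d * (y[-1] - y[0])) / (x[-1] - x[0])
f = (x[-1] * y[:-1] - x[0] * y[1:] - d * (x[-1] * y[0] - x[0] * y[-1])) / (x[-1] - x[0])

# chaos game: repeatedly apply a randomly chosen affine map w_i(x, y)
rng = np.random.default_rng(0)
px, py, points = x[0], y[0], []
for _ in range(20000):
    i = rng.integers(N)
    px, py = a[i] * px + e[i], c[i] * px + d[i] * py + f[i]
    points.append((px, py))
points = np.array(points)
print("attractor sample:", points[:3].round(3))
```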

  1. A Method for Driving Route Predictions Based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Ning Ye

    2015-01-01

    Full Text Available We present a driving route prediction method that is based on a Hidden Markov Model (HMM). This method can accurately predict a vehicle's entire route as early in a trip's lifetime as possible, without requiring origins and destinations to be input beforehand. Firstly, we propose the route recommendation system architecture, in which route predictions play an important role. Secondly, we define a road network model, normalize each driving route in a rectangular coordinate system, and build the HMM to prepare for route predictions, using a training-set extension method based on K-means++ and the add-one (Laplace) smoothing technique. Thirdly, we present the route prediction algorithm. Finally, experimental results demonstrating the effectiveness of the HMM-based route predictions are shown.
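
    A simplified sketch of the idea, assuming a first-order Markov chain over road segments in place of the paper's full HMM: transition counts from historical routes are smoothed with add-one (Laplace) pseudo-counts and the most probable next segment is chosen greedily. The toy road segments and routes are invented.

```python
# Simplified sketch: first-order Markov chain with add-one (Laplace) smoothing.
from collections import defaultdict

historical_routes = [              # each route is a sequence of road-segment ids
    ["A", "B", "C", "D"],
    ["A", "B", "C", "E"],
    ["F", "B", "C", "D"],
]
segments = sorted({s for r in historical_routes for s in r})

counts = defaultdict(lambda: defaultdict(int))
for route in historical_routes:
    for cur, nxt in zip(route, route[1:]):
        counts[cur][nxt] += 1

def next_segment(cur):
    total = sum(counts[cur].values())
    # add-one smoothing: every segment gets at least one pseudo-count
    probs = {s: (counts[cur].get(s, 0) + 1) / (total + len(segments)) for s in segments}
    return max(probs, key=probs.get)

route, cur = ["A"], "A"
for _ in range(3):                 # greedily extend the route three segments
    cur = next_segment(cur)
    route.append(cur)
print("predicted route:", route)
```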

  2. Machine Learning Methods for Prediction of CDK-Inhibitors

    Science.gov (United States)

    Ramana, Jayashree; Gupta, Dinesh

    2010-01-01

    Progression through the cell cycle involves the coordinated activities of a suite of cyclin/cyclin-dependent kinase (CDK) complexes. The activities of the complexes are regulated by CDK inhibitors (CDKIs). Apart from its role as cell cycle regulators, CDKIs are involved in apoptosis, transcriptional regulation, cell fate determination, cell migration and cytoskeletal dynamics. As the complexes perform crucial and diverse functions, these are important drug targets for tumour and stem cell therapeutic interventions. However, CDKIs are represented by proteins with considerable sequence heterogeneity and may fail to be identified by simple similarity search methods. In this work we have evaluated and developed machine learning methods for identification of CDKIs. We used different compositional features and evolutionary information in the form of PSSMs, from CDKIs and non-CDKIs for generating SVM and ANN classifiers. In the first stage, both the ANN and SVM models were evaluated using Leave-One-Out Cross-Validation and in the second stage these were tested on independent data sets. The PSSM-based SVM model emerged as the best classifier in both the stages and is publicly available through a user-friendly web interface at http://bioinfo.icgeb.res.in/cdkipred. PMID:20967128

  3. Prediction of skin sensitizers using alternative methods to animal experimentation.

    Science.gov (United States)

    Johansson, Henrik; Lindstedt, Malin

    2014-07-01

    Regulatory frameworks within the European Union demand that chemical substances are investigated for their ability to induce sensitization, an adverse health effect caused by the human immune system in response to chemical exposure. A recent ban on the use of animal tests within the cosmetics industry has led to an urgent need for alternative animal-free test methods that can be used for assessment of chemical sensitizers. To date, no such alternative assay has yet completed formal validation. However, a number of assays are in development and the understanding of the biological mechanisms of chemical sensitization has greatly increased during the last decade. In this MiniReview, we aim to summarize and give our view on the recent progress of method development for alternative assessment of chemical sensitizers. We propose that integrated testing strategies should comprise complementary assays, providing measurements of a wide range of mechanistic events, to perform well-educated risk assessments based on weight of evidence. © 2014 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).

  4. A simple method for improving predictions of nuclear masses

    International Nuclear Information System (INIS)

    Yamada, Masami; Tsuchiya, Susumu; Tachibana, Takahiro

    1991-01-01

    No atomic mass formula exactly reproduces the masses of all nuclides, and none can be expected in the foreseeable future. At present the masses of many nuclides are known experimentally with good accuracy, but the values given by any mass formula differ more or less from these experimental values, apart from a small number of accidental coincidences. Under these circumstances, how should the mass of an unknown nuclide best be forecast? Generally speaking, taking the value of a mass formula as it stands is not the best approach. It is better to take the difference between the mass formula and experiment for nuclides close to the one under consideration and to use it to correct the forecast value of the mass formula. In this report, a simple method for making this correction is proposed. A formula is proposed that interpolates between two extreme cases: one in which the difference between the true mass and the mass-formula value is the sum of a proton part and a neutron part, and one in which the difference is distributed randomly around zero. The procedure for its concrete application is explained. The method can also be applied to physical quantities other than mass, for example the beta-decay half-life. (K.I.)
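
    The report's own interpolation formula is not reproduced in the abstract; purely as an illustration of the underlying idea, the Python sketch below corrects a mass-formula value by the average experiment-minus-formula residual of nearby nuclides (the neighbourhood rule, the averaging and the numbers are assumptions for the example):

      def corrected_mass(Z, N, formula, experimental, n_neighbors=4):
          """Correct a mass-formula prediction for nuclide (Z, N) by the average
          discrepancy (experiment - formula) of nearby nuclides with known masses.
          `formula` is a function (Z, N) -> mass; `experimental` maps (Z, N) -> mass."""
          # residuals of the mass formula on experimentally known nuclides
          residuals = {zn: m - formula(*zn) for zn, m in experimental.items()}
          # pick the closest known nuclides in the (Z, N) plane
          nearest = sorted(residuals, key=lambda zn: (zn[0] - Z) ** 2 + (zn[1] - N) ** 2)[:n_neighbors]
          correction = sum(residuals[zn] for zn in nearest) / len(nearest)
          return formula(Z, N) + correction

      # Hypothetical example: a crude placeholder formula and a few "measured" masses (arbitrary units)
      toy_formula = lambda Z, N: 1.0 * Z + 1.0 * N
      measured = {(50, 70): 120.8, (50, 71): 121.9, (51, 70): 121.7, (51, 71): 122.8}
      print(round(corrected_mass(52, 71, toy_formula, measured), 2))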

  5. Maximum Likelihood Method for Predicting Environmental Conditions from Assemblage Composition: The R Package bio.infer

    Directory of Open Access Journals (Sweden)

    Lester L. Yuan

    2007-06-01

    Full Text Available This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from only biological data, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.

  6. A prediction method of natural gas hydrate formation in deepwater gas well and its application

    Directory of Open Access Journals (Sweden)

    Yanli Guo

    2016-09-01

    Full Text Available To prevent the deposition of natural gas hydrate in deepwater gas wells, the hydrate formation area in the wellbore must be predicted. Herein, by comparing four methods for predicting the temperature in the pipe against field data and five methods for predicting hydrate formation against experimental data, a method based on OLGA & PVTsim for predicting the hydrate formation area in the wellbore was proposed. Hydrate formation under the conditions of steady production, throttling and shut-in was then predicted using this method, based on data from a well in the South China Sea. The results indicate that the hydrate formation area decreases with increasing gas production, inhibitor concentration and insulation thickness, and increases with increasing thermal conductivity of the insulation materials and shutdown time. The throttling effect causes a plunge in temperature and pressure in the wellbore, thus leading to an increase of the hydrate formation area.

  7. A noninvasive method for the prediction of fetal hemolytic disease

    Directory of Open Access Journals (Sweden)

    E. N. Kravchenko

    2017-01-01

    Full Text Available Objective: to improve the diagnosis of fetal hemolytic disease.Subjects and methods. A study group consisted of 42 pregnant women whose newborn infants had varying degrees of hemolytic disease. The women were divided into 3 subgroups according to the severity of neonatal hemolytic disease: 1 pregnant women whose neonates were born with severe hemolytic disease (n = 14; 2 those who gave birth to babies with moderate hemolytic disease (n = 11; 3 those who delivered infants with mild hemolytic disease (n = 17. A comparison group included 42 pregnant women whose babies were born without signs of hemolytic disease. Curvesfor blood flow velocity in the middle cerebral artery were analyzed in a fetus of 25 to 39 weeks’ gestation.Results. The peak systolic blood flow velocity was observed in Subgroup 1; however, the indicator did not exceed 1.5 MoM even in severe fetal anemic syndrome. The fetal middle artery blood flow velocity rating scale was divided into 2 zones: 1 the boundary values of peak systolic blood flow velocity from the median to the obtained midscore; 2 the boundary values of peak systolic blood flow velocity of the obtained values of as high as 1.5 MoM.Conclusion. The value of peak systolic blood flow velocity being in Zone 2, or its dynamic changes by transiting to this zone can serve as a prognostic factor in the development of severe fetal hemolytic disease. 

  8. Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods

    OpenAIRE

    Shan, Min

    2017-01-01

    With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted on churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods to define churn prediction models and apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...

  9. A method of quantitative prediction for sandstone type uranium deposit in Russia and its application

    International Nuclear Information System (INIS)

    Chang Shushuai; Jiang Minzhong; Li Xiaolu

    2008-01-01

    The paper presents the foundational principles of quantitative prediction for sandstone-type uranium deposits in Russia. Some key methods, such as physical-mathematical model construction and deposit prediction, are described. The method has been applied to deposit prediction in the Dahongshan region of the Chaoshui basin. It is concluded that the technique can strengthen the methodology of quantitative prediction for sandstone-type uranium deposits, and that it could be used as a new technique in China. (authors)

  10. Validity of a Manual Soft Tissue Profile Prediction Method Following Mandibular Setback Osteotomy

    OpenAIRE

    Kolokitha, Olga-Elpis

    2007-01-01

    Objectives The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery and compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism taken at the end of pre-surgical orthodontics and approximately one year after surgery were used. Methods To test the validity of the manu...

  11. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
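
    A hedged sketch of this kind of comparison in Python, using synthetic data, an L1-penalised logistic model as the LASSO-type learner and scikit-learn's forward SequentialFeatureSelector as a stand-in for stepwise selection (Bayesian model averaging is omitted):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Synthetic stand-in for dose/clinical predictors and a binary complication outcome
      X, y = make_classification(n_samples=200, n_features=30, n_informative=5, random_state=0)
      cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)  # repeated CV for a fair comparison

      # LASSO-type model: L1-penalised logistic regression
      lasso = make_pipeline(StandardScaler(),
                            LogisticRegression(penalty="l1", solver="liblinear", C=0.5))

      # Simple forward selection followed by an unpenalised logistic model
      stepwise = make_pipeline(StandardScaler(),
                               SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                                         n_features_to_select=5, direction="forward"),
                               LogisticRegression(max_iter=1000))

      for name, model in [("lasso", lasso), ("stepwise", stepwise)]:
          auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
          print(name, round(auc.mean(), 3))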

  12. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  13. Real-time prediction of respiratory motion based on local regression methods

    International Nuclear Information System (INIS)

    Ruan, D; Fessler, J A; Balter, J M

    2007-01-01

    Recent developments in modulation techniques enable conformal delivery of radiation doses to small, localized target volumes. One of the challenges in using these techniques is real-time tracking and predicting target motion, which is necessary to accommodate system latencies. For image-guided-radiotherapy systems, it is also desirable to minimize sampling rates to reduce imaging dose. This study focuses on predicting respiratory motion, which can significantly affect lung tumours. Predicting respiratory motion in real-time is challenging, due to the complexity of breathing patterns and the many sources of variability. We propose a prediction method based on local regression. There are three major ingredients of this approach: (1) forming an augmented state space to capture system dynamics, (2) local regression in the augmented space to train the predictor from previous observation data using semi-periodicity of respiratory motion, (3) local weighting adjustment to incorporate fading temporal correlations. To evaluate prediction accuracy, we computed the root mean square error between predicted tumor motion and its observed location for ten patients. For comparison, we also investigated commonly used predictive methods, namely linear prediction, neural networks and Kalman filtering to the same data. The proposed method reduced the prediction error for all imaging rates and latency lengths, particularly for long prediction lengths
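
    The following Python sketch illustrates ingredients (1) and (2) in a simplified form: a delay-embedded (augmented) state is compared with historical states, and the prediction is a kernel-weighted average of their observed continuations. The published method uses local regression with fading temporal weights, which is richer than this toy version; the trace and parameters below are invented.

      import numpy as np

      def predict_local(signal, horizon=5, dim=6, k=20, bandwidth=1.0):
          """Predict signal[t + horizon] from the last `dim` samples by kernel-weighted
          averaging of the continuations of the most similar historical states."""
          s = np.asarray(signal, dtype=float)
          # build augmented (delay-embedded) states and their future values
          states, futures = [], []
          for t in range(dim - 1, len(s) - horizon):
              states.append(s[t - dim + 1 : t + 1])
              futures.append(s[t + horizon])
          states, futures = np.array(states), np.array(futures)
          query = s[-dim:]                              # current augmented state
          d = np.linalg.norm(states - query, axis=1)    # distance to past states
          idx = np.argsort(d)[:k]                       # k most similar states
          w = np.exp(-(d[idx] / bandwidth) ** 2)        # Gaussian weights
          return float(np.dot(w, futures[idx]) / w.sum())

      # Example on a noisy quasi-periodic trace standing in for respiratory motion
      t = np.arange(0, 60, 0.2)
      trace = np.sin(2 * np.pi * t / 4.0) + 0.05 * np.random.randn(t.size)
      print(predict_local(trace, horizon=5))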

  14. Improvement of gas entrainment prediction method. Introduction of surface tension effect

    International Nuclear Information System (INIS)

    Ito, Kei; Sakai, Takaaki; Ohshima, Hiroyuki; Uchibori, Akihiro; Eguchi, Yuzuru; Monji, Hideaki; Xu, Yongze

    2010-01-01

    A gas entrainment (GE) prediction method has been developed to establish design criteria for the large-scale sodium-cooled fast reactor (JSFR) systems. The prototype of the GE prediction method was already confirmed to give reasonable gas core lengths by simple calculation procedures. However, for simplification, the surface tension effects were neglected. In this paper, the evaluation accuracy of gas core lengths is improved by introducing the surface tension effects into the prototype GE prediction method. First, the mechanical balance between gravitational, centrifugal, and surface tension forces is considered. Then, the shape of a gas core tip is approximated by a quadratic function. Finally, using the approximated gas core shape, the authors determine the gas core length satisfying the mechanical balance. This improved GE prediction method is validated by analyzing the gas core lengths observed in simple experiments. Results show that the analytical gas core lengths calculated by the improved GE prediction method become shorter in comparison to the prototype GE prediction method, and are in good agreement with the experimental data. In addition, the experimental data under different temperature and surfactant concentration conditions are reproduced by the improved GE prediction method. (author)

  15. Machine learning methods to predict child posttraumatic stress: a proof of concept study.

    Science.gov (United States)

    Saxe, Glenn N; Ma, Sisi; Ren, Jiwen; Aliferis, Constantin

    2017-07-10

    The care of traumatized children would benefit significantly from accurate predictive models for Posttraumatic Stress Disorder (PTSD), using information available around the time of trauma. Machine Learning (ML) computational methods have yielded strong results in recent applications across many diseases and data types, yet they have not been previously applied to childhood PTSD. Since these methods have not been applied to this complex and debilitating disorder, there is a great deal that remains to be learned about their application. The first step is to prove the concept: Can ML methods - as applied in other fields - produce predictive classification models for childhood PTSD? Additionally, we seek to determine if specific variables can be identified - from the aforementioned predictive classification models - with putative causal relations to PTSD. ML predictive classification methods - with causal discovery feature selection - were applied to a data set of 163 children hospitalized with an injury and PTSD was determined three months after hospital discharge. At the time of hospitalization, 105 risk factor variables were collected spanning a range of biopsychosocial domains. Seven percent of subjects had a high level of PTSD symptoms. A predictive classification model was discovered with significant predictive accuracy. A predictive model constructed based on subsets of potentially causally relevant features achieves similar predictivity compared to the best predictive model constructed with all variables. Causal Discovery feature selection methods identified 58 variables of which 10 were identified as most stable. In this first proof-of-concept application of ML methods to predict childhood Posttraumatic Stress we were able to determine both predictive classification models for childhood PTSD and identify several causal variables. This set of techniques has great potential for enhancing the methodological toolkit in the field and future studies should seek to

  16. Validity of a manual soft tissue profile prediction method following mandibular setback osteotomy.

    Science.gov (United States)

    Kolokitha, Olga-Elpis

    2007-10-01

    The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery and compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism taken at the end of pre-surgical orthodontics and approximately one year after surgery were used. To test the validity of the manual method the prediction tracings were compared to the actual post-operative tracings. The Dentofacial Planner software was used to develop the computerized post-surgical prediction tracings. Both manual and computerized prediction printouts were analyzed by using the cephalometric system PORDIOS. Statistical analysis was performed by means of t-test. Comparison between manual prediction tracings and the actual post-operative profile showed that the manual method results in more convex soft tissue profiles; the upper lip was found in a more prominent position, upper lip thickness was increased and, the mandible and lower lip were found in a less posterior position than that of the actual profiles. Comparison between computerized and manual prediction methods showed that in the manual method upper lip thickness was increased, the upper lip was found in a more anterior position and the lower anterior facial height was increased as compared to the computerized prediction method. Cephalometric simulation of post-operative soft tissue profile following orthodontic-surgical management of mandibular prognathism imposes certain limitations related to the methods implied. However, both manual and computerized prediction methods remain a useful tool for patient communication.

  17. Bayesian Methods for Predicting the Shape of Chinese Yam in Terms of Key Diameters

    Directory of Open Access Journals (Sweden)

    Mitsunori Kayano

    2017-01-01

    Full Text Available This paper proposes Bayesian methods for the shape estimation of Chinese yam (Dioscorea opposita) using a few key diameters of the yam. Shape prediction of yam is applicable to determining the optimal cutoff positions of a yam for producing seed yams. Our Bayesian method, which is a combination of a Bayesian estimation model and a predictive model, enables automatic, rapid, and low-cost processing of yam. After the construction of the proposed models using a sample data set in Japan, the models provide whole-shape prediction of yam based on only a few key diameters. The Bayesian method performed well on the shape prediction in terms of minimizing the mean squared error between the measured shape and the prediction. In particular, a multiple regression method with key diameters at two fixed positions attained the highest performance for shape prediction. We have developed automatic, rapid, and low-cost yam-processing machines based on the Bayesian estimation model and predictive model. Development of such shape prediction approaches, including our Bayesian method, can be a valuable aid in reducing the cost and time in food processing.
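
    A minimal sketch of the "multiple regression from two key diameters" idea in Python, using synthetic diameter profiles and assumed key positions (the paper's Bayesian estimation and predictive models are not reproduced):

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)

      # Synthetic training data: diameters measured at 10 equally spaced positions per yam
      n_yams, n_positions = 50, 10
      profiles = (3.0 + rng.normal(0, 0.3, (n_yams, 1))
                  + 0.5 * np.sin(np.linspace(0, np.pi, n_positions))
                  + rng.normal(0, 0.1, (n_yams, n_positions)))

      key = profiles[:, [2, 7]]                      # key diameters at two fixed positions (assumed indices)
      model = LinearRegression().fit(key, profiles)  # predict the whole profile from the two keys

      new_key = np.array([[3.4, 3.1]])               # measurements for a new yam
      predicted_profile = model.predict(new_key)[0]
      print(np.round(predicted_profile, 2))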

  18. A Novel Grey Wave Method for Predicting Total Chinese Trade Volume

    Directory of Open Access Journals (Sweden)

    Kedong Yin

    2017-12-01

    Full Text Available The total trade volume of a country is an important way of appraising its international trade situation. A prediction based on trade volume will help enterprises arrange production efficiently and promote the sustainability of international trade. Because the total Chinese trade volume fluctuates over time, this paper proposes a Grey wave forecasting model with a Hodrick–Prescott filter (HP filter) to forecast it. This novel model first parses the time series into a long-term trend and a short-term cycle. Second, the model uses a general GM(1,1) to predict the trend term and the Grey wave forecasting model to predict the cycle term. Empirical analysis shows that the improved Grey wave prediction method provides a much more accurate forecast than the basic Grey wave prediction method, achieving better prediction results than the autoregressive moving average (ARMA) model.
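
    For the trend term, a textbook GM(1,1) forecast can be sketched as follows in Python; the HP-filter decomposition and the Grey wave treatment of the cycle term are omitted, and the input values are invented:

      import numpy as np

      def gm11_forecast(x0, steps=3):
          """Textbook GM(1,1) grey forecast of the next `steps` values of a
          positive time series x0 (used here for the long-term trend component)."""
          x0 = np.asarray(x0, dtype=float)
          n = len(x0)
          x1 = np.cumsum(x0)                            # accumulated generating operation (AGO)
          z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
          B = np.column_stack([-z1, np.ones(n - 1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

          def x1_hat(k):                                # 1-based index into the AGO series
              return (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a

          # restored (inverse-AGO) predictions for k = n+1, ..., n+steps
          return [x1_hat(k) - x1_hat(k - 1) for k in range(n + 1, n + 1 + steps)]

      # Hypothetical annual trade-volume trend values
      print(np.round(gm11_forecast([26.4, 29.7, 31.2, 34.6, 38.9], steps=2), 2))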

  19. An influence function method based subsidence prediction program for longwall mining operations in inclined coal seams

    Energy Technology Data Exchange (ETDEWEB)

    Yi Luo; Jian-wei Cheng [West Virginia University, Morgantown, WV (United States). Department of Mining Engineering

    2009-09-15

    The distribution of the final surface subsidence basin induced by longwall operations in an inclined coal seam can be significantly different from that in a flat coal seam and demands special prediction methods. Though many empirical prediction methods have been developed, they are inflexible for varying geological and mining conditions. An influence function method has been developed to take advantage of its fundamentally sound nature and flexibility. In developing this method, significant modifications have been made to the original Knothe function to produce an asymmetrical influence function. Empirical equations for the final subsidence parameters derived from US subsidence data and Chinese empirical values have been incorporated into the mathematical models to improve the prediction accuracy. A corresponding computer program has been developed. A number of subsidence cases for longwall mining operations in coal seams with varying inclination angles have been used to demonstrate the applicability of the developed subsidence prediction model. 9 refs., 8 figs.

  20. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. An Influence Function Method for Predicting Store Aerodynamic Characteristics during Weapon Separation,

    Science.gov (United States)

    1981-05-14

    [Scanned DTIC report; header text largely illegible.] An Influence Function Method for Predicting Store Aerodynamic Characteristics — Grumman Aerospace Corp., Bethpage, NY; R. Meyer, A. Cenko, S. Yards; unclassified. Recoverable abstract fragment: "... extended to their logical conclusion one is led quite naturally to consideration of an 'Influence Function Method' for predicting store aerodynamic ..."

  2. Studies of the Raman Spectra of Cyclic and Acyclic Molecules: Combination and Prediction Spectrum Methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taijin; Assary, Rajeev S.; Marshall, Christopher L.; Gosztola, David J.; Curtiss, Larry A.; Stair, Peter C.

    2012-04-02

    A combination of Raman spectroscopy and density functional methods was employed to investigate the spectral features of selected molecules: furfural, 5-hydroxymethyl furfural (HMF), methanol, acetone, acetic acid, and levulinic acid. The computed spectra and measured spectra are in excellent agreement, consistent with previous studies. Using the combination and prediction spectrum method (CPSM), we were able to predict the important spectral features of two platform chemicals, HMF and levulinic acid. The results have shown that CPSM is a useful alternative method for predicting vibrational spectra of complex molecules in the biomass transformation process.

  3. Development of laboratory acceleration test method for service life prediction of concrete structures

    International Nuclear Information System (INIS)

    Cho, M. S.; Song, Y. C.; Bang, K. S.; Lee, J. S.; Kim, D. K.

    1999-01-01

    Service life prediction of nuclear power plants depends on the operating history of the structures, field inspections and tests, and the development of laboratory acceleration tests together with their analysis methods and predictive models. In this study, a laboratory acceleration test method for the service life prediction of concrete structures and the application of the experimental results are introduced. The study addresses the environmental conditions of concrete structures and develops acceleration test methods for durability factors of concrete structures, e.g. carbonation, sulfate attack, freeze-thaw cycles and shrinkage-expansion.

  4. Ensemble approach combining multiple methods improves human transcription start site prediction

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-11-30

    Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques, and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.

  5. Prediction of Human Phenotype Ontology terms by means of hierarchical ensemble methods.

    Science.gov (United States)

    Notaro, Marco; Schubach, Max; Robinson, Peter N; Valentini, Giorgio

    2017-10-12

    The prediction of human gene-abnormal phenotype associations is a fundamental step toward the discovery of novel genes associated with human disorders, especially when no genes are known to be associated with a specific disease. In this context the Human Phenotype Ontology (HPO) provides a standard categorization of the abnormalities associated with human diseases. While the problem of the prediction of gene-disease associations has been widely investigated, the related problem of gene-phenotypic feature (i.e., HPO term) associations has been largely overlooked, even if for most human genes no HPO term associations are known and despite the increasing application of the HPO to relevant medical problems. Moreover most of the methods proposed in literature are not able to capture the hierarchical relationships between HPO terms, thus resulting in inconsistent and relatively inaccurate predictions. We present two hierarchical ensemble methods that we formally prove to provide biologically consistent predictions according to the hierarchical structure of the HPO. The modular structure of the proposed methods, that consists in a "flat" learning first step and a hierarchical combination of the predictions in the second step, allows the predictions of virtually any flat learning method to be enhanced. The experimental results show that hierarchical ensemble methods are able to predict novel associations between genes and abnormal phenotypes with results that are competitive with state-of-the-art algorithms and with a significant reduction of the computational complexity. Hierarchical ensembles are efficient computational methods that guarantee biologically meaningful predictions that obey the true path rule, and can be used as a tool to improve and make consistent the HPO terms predictions starting from virtually any flat learning method. The implementation of the proposed methods is available as an R package from the CRAN repository.
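
    As a simplified illustration of the second, hierarchical step, the Python sketch below caps every term's flat score by the scores of its ancestors so that predictions respect the true path rule. This top-down min rule is only one simple consistency correction, not the specific ensemble algorithms of the paper, and the ontology fragment and scores are invented.

      import networkx as nx

      def hierarchically_consistent(dag, flat_scores):
          """Adjust flat HPO-term scores so that no term scores higher than any of
          its ancestors (a simple way to respect the true path rule).
          `dag` has edges parent -> child; `flat_scores` maps term -> score."""
          consistent = {}
          for term in nx.topological_sort(dag):        # parents are visited first
              parents = list(dag.predecessors(term))
              cap = min((consistent[p] for p in parents), default=1.0)
              consistent[term] = min(flat_scores.get(term, 0.0), cap)
          return consistent

      # Toy ontology fragment and flat predictions for one gene (hypothetical scores)
      dag = nx.DiGraph([("HP:root", "HP:A"), ("HP:A", "HP:B"), ("HP:A", "HP:C")])
      flat = {"HP:root": 0.9, "HP:A": 0.4, "HP:B": 0.7, "HP:C": 0.2}
      print(hierarchically_consistent(dag, flat))      # HP:B is capped at its parent's 0.4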

  6. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    International Nuclear Information System (INIS)

    Ernst, Floris; Schweikard, Achim

    2008-01-01

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)

  7. Predictive probability methods for interim monitoring in clinical trials with longitudinal outcomes.

    Science.gov (United States)

    Zhou, Ming; Tang, Qi; Lang, Lixin; Xing, Jun; Tatsuoka, Kay

    2018-04-17

    In clinical research and development, interim monitoring is critical for better decision-making and minimizing the risk of exposing patients to possible ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from only completers may not be most efficient, and data from on-going subjects can be utilized to improve efficiency. On the other hand, leveraging information from on-going subjects could allow an interim analysis to be potentially conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed-form formulas for predictive probabilities, including Bayesian predictive probability, predictive power, and conditional power and also give closed-form solutions for predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss their analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than that using information from completers only. To illustrate their practical application for longitudinal data, we analyze 2 real data examples from clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
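
    For a single binary endpoint, the Bayesian predictive probability of eventual success has a simple closed form via the beta-binomial distribution; the Python sketch below illustrates that basic calculation (the longitudinal formulas derived in the paper are not reproduced, and the interim numbers are hypothetical):

      from scipy.stats import beta, betabinom

      def predictive_probability(x, n, m, a=1.0, b=1.0, p0=0.3, gamma=0.95):
          """Bayesian predictive probability of final success for a binary endpoint.
          x successes in n enrolled so far; m patients still to be observed.
          Final success: posterior P(p > p0) > gamma under a Beta(a, b) prior."""
          pp = 0.0
          for y in range(m + 1):                       # possible numbers of future successes
              post_success = 1.0 - beta.cdf(p0, a + x + y, b + n - x + m - y)
              if post_success > gamma:
                  pp += betabinom.pmf(y, m, a + x, b + n - x)
          return pp

      # Interim look: 12 responders out of 30, with 30 patients remaining (hypothetical numbers)
      print(round(predictive_probability(x=12, n=30, m=30), 3))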

  8. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    Energy Technology Data Exchange (ETDEWEB)

    Ernst, Floris; Schweikard, Achim [University of Luebeck, Institute for Robotics and Cognitive Systems, Luebeck (Germany)

    2008-06-15

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)

  9. Analysis of deep learning methods for blind protein contact prediction in CASP12.

    Science.gov (United States)

    Wang, Sheng; Sun, Siqi; Xu, Jinbo

    2018-03-01

    Here we present the results of protein contact prediction achieved in CASP12 by our RaptorX-Contact server, which is an early implementation of our deep learning method for contact prediction. On a set of 38 free-modeling target domains with a median family size of around 58 effective sequences, our server obtained an average top L/5 long- and medium-range contact accuracy of 47% and 44%, respectively (L = length). A complete implementation has an average accuracy of 59% and 57%, respectively. Our deep learning method formulates contact prediction as a pixel-level image labeling problem and simultaneously predicts all residue pairs of a protein using a combination of two deep residual neural networks, taking as input the residue conservation information, predicted secondary structure and solvent accessibility, contact potential, and coevolution information. Our approach differs from existing methods mainly in (1) formulating contact prediction as a pixel-level image labeling problem instead of an image-level classification problem; (2) simultaneously predicting all contacts of an individual protein to make effective use of contact occurrence patterns; and (3) integrating both one-dimensional and two-dimensional deep convolutional neural networks to effectively learn complex sequence-structure relationship including high-order residue correlation. This paper discusses the RaptorX-Contact pipeline, both contact prediction and contact-based folding results, and finally the strength and weakness of our method. © 2017 Wiley Periodicals, Inc.

  10. NetMHCpan, a method for MHC class I binding prediction beyond humans

    DEFF Research Database (Denmark)

    Hoof, Ilka; Peters, B; Sidney, J

    2009-01-01

    molecules. We show that the NetMHCpan-2.0 method can accurately predict binding to uncharacterized HLA molecules, including HLA-C and HLA-G. Moreover, NetMHCpan-2.0 is demonstrated to accurately predict peptide binding to chimpanzee and macaque MHC class I molecules. The power of NetMHCpan-2.0 to guide...

  11. Novel computational methods to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-12-01

    Computational drug repurposing aims at finding new medical uses for existing drugs. The identification of novel drug-target interactions (DTIs) can be a useful part of such a task. Computational determination of DTIs is a convenient strategy for systematic screening of a large number of drugs in the attempt to identify new DTIs at low cost and with reasonable accuracy. This necessitates the development of accurate computational methods that can help focus the follow-up experimental validation on a smaller number of highly likely targets for a drug. Although many methods have been proposed for computational DTI prediction, they suffer from a high false positive prediction rate or they do not predict the effect that drugs exert on targets in DTIs. In this report, first, we present a comprehensive review of the recent progress in the field of DTI prediction from data-centric and algorithm-centric perspectives. The aim is to provide a comprehensive review of computational methods for identifying DTIs, which could help in constructing more reliable methods. Then, we present DDR, an efficient method to predict the existence of DTIs. DDR achieves significantly more accurate results compared to the other state-of-the-art methods. As supported by independent evidence, we verified as correct 22 out of the top 25 DDR DTI predictions. This validation proves the practical utility of DDR, suggesting that DDR can be used as an efficient method to identify correct DTIs. Finally, we present the DDR-FE method, which predicts the effect types of a drug on its target. On different representative datasets, under various test setups, and using different performance measures, we show that DDR-FE achieves extremely good performance. Using blind test data, we verified as correct 2,300 out of 3,076 DTI effects predicted by DDR-FE. This suggests that DDR-FE can be used as an efficient method to identify correct effects of a drug on its target.

  12. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    Science.gov (United States)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

    Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis, and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques. We can note that clustering information plays an important role in solving the link prediction problem. In previous literatures, we find node clustering coefficient appears frequently in many link prediction methods. However, node clustering coefficient is limited to describe the role of a common-neighbor in different local networks, because it cannot distinguish different clustering abilities of a node to different node pairs. In this paper, we shift our focus from nodes to links, and propose the concept of asymmetric link clustering (ALC) coefficient. Further, we improve three node clustering based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node clustering based methods, especially achieving remarkable improvements on food web, hamster friendship and Internet networks. Besides, comparing with other methods, the performance of ALC-based methods are very stable in both globalized and personalized top-L link prediction tasks.
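
    The sketch below scores candidate links by summing the clustering coefficients of their common neighbours, i.e. the node-clustering weighting that the paper's asymmetric link clustering (ALC) coefficient refines; it is meant only to illustrate how clustering information enters a top-L link prediction score, not to reproduce the ALC index itself.

      import networkx as nx

      def clustering_weighted_scores(G):
          """Score non-adjacent node pairs by summing the clustering coefficients of
          their common neighbours (a node-clustering-based index; the paper's
          asymmetric link-clustering refinement replaces this node-level weight)."""
          cc = nx.clustering(G)
          scores = {}
          for u, v in nx.non_edges(G):
              common = set(G[u]) & set(G[v])
              if common:
                  scores[(u, v)] = sum(cc[w] for w in common)
          return sorted(scores.items(), key=lambda kv: -kv[1])

      G = nx.karate_club_graph()                 # standard benchmark network
      print(clustering_weighted_scores(G)[:5])   # top-L candidate links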

  13. A method for uncertainty quantification in the life prediction of gas turbine components

    Energy Technology Data Exchange (ETDEWEB)

    Lodeby, K.; Isaksson, O.; Jaervstraat, N. [Volvo Aero Corporation, Trolhaettan (Sweden)

    1998-12-31

    A failure in an aircraft jet engine can have severe consequences which cannot be accepted, and high requirements are therefore placed on engine reliability. Consequently, assessment of the reliability of the life predictions used in design and maintenance is important. To assess the validity of the predicted life, a method has been developed to quantify the contribution to the total uncertainty in the life prediction from different uncertainty sources. The method is a structured approach to uncertainty quantification that uses a generic description of the life prediction process. It is based on an approximate error propagation theory combined with a unified treatment of random and systematic errors. The result is an approximate statistical distribution for the predicted life. The method is applied to life predictions for three different jet engine components. The total uncertainty was of a reasonable order of magnitude, and a good qualitative picture of the distribution of the uncertainty contributions from the different sources was obtained. The relative importance of the uncertainty sources differs between the three components. It is also highly dependent on the methods and assumptions used in the life prediction. Advantages and disadvantages of this method are discussed. (orig.) 11 refs.

  14. Advanced Materials Test Methods for Improved Life Prediction of Turbine Engine Components

    National Research Council Canada - National Science Library

    Stubbs, Jack

    2000-01-01

    Phase I final report developed under SBIR contract for Topic # AF00-149, "Durability of Turbine Engine Materials/Advanced Material Test Methods for Improved Use Prediction of Turbine Engine Components...

  15. Prediction methods and databases within chemoinformatics: emphasis on drugs and drug candidates

    DEFF Research Database (Denmark)

    Jonsdottir, Svava Osk; Jorgensen, FS; Brunak, Søren

    2005-01-01

    MOTIVATION: To gather information about available databases and chemoinformatics methods for prediction of properties relevant to the drug discovery and optimization process. RESULTS: We present an overview of the most important databases with 2-dimensional and 3-dimensional structural information about drugs and drug candidates, and of databases with relevant properties. Access to experimental data and numerical methods for selecting and utilizing these data is crucial for developing accurate predictive in silico models. Many interesting predictive methods for classifying the suitability of chemical compounds as potential drugs, as well as for predicting their physico-chemical and ADMET properties, have been proposed in recent years. These methods are discussed, and some possible future directions in this rapidly developing field are described.

  16. Application of the backstepping method to the prediction of increase or decrease of infected population.

    Science.gov (United States)

    Kuniya, Toshikazu; Sano, Hideki

    2016-05-10

    In mathematical epidemiology, age-structured epidemic models have usually been formulated as boundary-value problems of partial differential equations. On the other hand, in engineering, the backstepping method has recently been developed and widely studied by many authors. Using the backstepping method, we obtained a boundary feedback control which plays the role of a threshold criterion for predicting the increase or decrease of the newly infected population. Under the assumption that the period of infectiousness is the same for all infected individuals (that is, the recovery rate is given by the Dirac delta function multiplied by a sufficiently large positive constant), the prediction method simplifies to a comparison of the numbers of reported cases at the current and previous time steps. Our prediction method was applied to the reported cases per sentinel of influenza in Japan from 2006 to 2015 and its accuracy was 0.81 (404 correct predictions out of 500). This was higher than that of ARIMA models with different orders of the autoregressive part, differencing and moving-average process. In addition, a proposed method for estimating the number of reported cases, which is consistent with our prediction method, performed better than the best-fitted ARIMA model, ARIMA(1,1,0), in the sense of mean square error. Our prediction method based on the backstepping method can be simplified to a comparison of the numbers of reported cases at the current and previous time steps. In spite of its simplicity, it can provide a good prediction for the spread of influenza in Japan.
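
    The simplified rule ("predict an increase whenever the current count exceeds the previous one") is easy to state in code; the Python sketch below evaluates it against made-up weekly sentinel counts, purely as an illustration of the comparison described in the abstract:

      def predict_increase(cases):
          """Simplified rule from the abstract: predict that next week's reported
          cases will increase iff the current count exceeds the previous one."""
          return [cases[t] > cases[t - 1] for t in range(1, len(cases) - 1)]

      def accuracy(cases):
          predictions = predict_increase(cases)
          actual = [cases[t + 1] > cases[t] for t in range(1, len(cases) - 1)]
          hits = sum(p == a for p, a in zip(predictions, actual))
          return hits / len(actual)

      # Hypothetical weekly reported influenza cases per sentinel
      weekly = [1.2, 1.5, 2.1, 2.0, 1.6, 1.1, 0.9, 1.3, 1.8, 2.4]
      print(accuracy(weekly))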

  17. Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik

    1993-01-01

    Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed.

  18. NetMHCcons: a consensus method for the major histocompatibility complex class I predictions

    DEFF Research Database (Denmark)

    Karosiene, Edita; Lundegaard, Claus; Lund, Ole

    2012-01-01

    A key role in cell-mediated immunity is dedicated to the major histocompatibility complex (MHC) molecules that bind peptides for presentation on the cell surface. Several in silico methods capable of predicting peptide binding to MHC class I have been developed. The accuracy of these methods depe...... at www.cbs.dtu.dk/services/NetMHCcons, and allows the user in an automatic manner to obtain the most accurate predictions for any given MHC molecule....

  19. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    Directory of Open Access Journals (Sweden)

    Nielsen Morten

    2009-07-01

    Full Text Available Abstract Background Estimation of the reliability of specific real value predictions is nontrivial and the efficacy of this is often questionable. It is important to know if you can trust a given prediction and therefore the best methods associate a prediction with a reliability score or index. For discrete qualitative predictions, the reliability is conventionally estimated as the difference between output scores of selected classes. Such an approach is not feasible for methods that predict a biological feature as a single real value rather than a classification. As a solution to this challenge, we have implemented a method that predicts the relative surface accessibility of an amino acid and simultaneously predicts the reliability for each prediction, in the form of a Z-score. Results An ensemble of artificial neural networks has been trained on a set of experimentally solved protein structures to predict the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures where reliabilities are obtained by post-processing the output. Conclusion The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best public available method, Real-SPINE. Both methods associate a reliability score with the individual predictions. However, our implementation of reliability scores in the form of a Z-score is shown to be the more informative measure for discriminating good predictions from bad ones in the entire range from completely buried to fully exposed amino acids. This is evident when comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0

  20. The Use of Data Mining Methods to Predict the Result of Infertility Treatment Using the IVF ET Method

    Directory of Open Access Journals (Sweden)

    Malinowski Paweł

    2014-12-01

    Full Text Available The IVF ET method is a scientifically recognized infertility treatment method. The problem, however, is this method’s unsatisfactory efficiency. This calls for a more thorough analysis of the information available in the treatment process, in order to detect the factors that have an effect on the results, as well as to effectively predict the result of treatment. Classical statistical methods have proven to be inadequate for this issue. Only the use of modern data mining methods gives hope for a more effective analysis of the collected data. This work provides an overview of the new methods used for the analysis of data on infertility treatment, and formulates a proposal for further research directions aimed at increasing the effectiveness of predicting the result of the treatment process.

  1. Methods of developing core collections based on the predicted genotypic value of rice ( Oryza sativa L.).

    Science.gov (United States)

    Li, C T; Shi, C H; Wu, J G; Xu, H M; Zhang, H Z; Ren, Y L

    2004-04-01

    The selection of an appropriate sampling strategy and a clustering method is important in the construction of core collections based on predicted genotypic values in order to retain the greatest degree of genetic diversity of the initial collection. In this study, methods of developing rice core collections were evaluated based on the predicted genotypic values for 992 rice varieties with 13 quantitative traits. The genotypic values of the traits were predicted by the adjusted unbiased prediction (AUP) method. Based on the predicted genotypic values, Mahalanobis distances were calculated and employed to measure the genetic similarities among the rice varieties. Six hierarchical clustering methods, including the single linkage, median linkage, centroid, unweighted pair-group average, weighted pair-group average and flexible-beta methods, were combined with random, preferred and deviation sampling to develop 18 core collections of rice germplasm. The results show that the deviation sampling strategy in combination with the unweighted pair-group average method of hierarchical clustering retains the greatest degree of genetic diversities of the initial collection. The core collections sampled using predicted genotypic values had more genetic diversity than those based on phenotypic values.
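
    A rough Python sketch of the distance and clustering machinery, using synthetic genotypic values, average-linkage (UPGMA) clustering on Mahalanobis distances, and a crude farthest-from-centroid rule standing in for deviation sampling (the AUP prediction of genotypic values is not reproduced):

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(1)
      genotypic = rng.normal(size=(200, 13))          # predicted genotypic values, 13 traits

      # Mahalanobis distances between accessions, then UPGMA (average-linkage) clustering
      VI = np.linalg.inv(np.cov(genotypic, rowvar=False))
      d = pdist(genotypic, metric="mahalanobis", VI=VI)
      tree = linkage(d, method="average")
      clusters = fcluster(tree, t=20, criterion="maxclust")   # 20 clusters -> core of ~20 entries

      # Crude "deviation"-style sampling: keep the entry farthest from its cluster centroid
      core = []
      for c in np.unique(clusters):
          idx = np.where(clusters == c)[0]
          centroid = genotypic[idx].mean(axis=0)
          core.append(idx[np.argmax(np.linalg.norm(genotypic[idx] - centroid, axis=1))])
      print(sorted(core))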

  2. An auxiliary optimization method for complex public transit route network based on link prediction

    Science.gov (United States)

    Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian

    2018-02-01

    Inspired by the missing (new) link prediction and the spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route network (PTRN) based on link prediction. First, link prediction applied to PTRN is described, and based on reviewing the previous studies, the summary indices set and its algorithms set are collected for the link prediction experiment. Second, through analyzing the topological properties of Jinan’s PTRN established by the Space R method, we found that this is a typical small-world network with a relatively large average clustering coefficient. This phenomenon indicates that the structural similarity-based link prediction will show a good performance in this network. Then, based on the link prediction experiment of the summary indices set, three indices with maximum accuracy are selected for auxiliary optimization of Jinan’s PTRN. Furthermore, these link prediction results show that the overall layout of Jinan’s PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction. The above pattern conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on the missing (new) link prediction and the spurious existing link identification, we propose optimization schemes that can be used not only to optimize current PTRN but also to evaluate PTRN planning.

  3. Prediction of the solubility of selected pharmaceuticals in water and alcohols with a group contribution method

    International Nuclear Information System (INIS)

    Pelczarska, Aleksandra; Ramjugernath, Deresh; Rarey, Jurgen; Domańska, Urszula

    2013-01-01

    Highlights: ► The prediction of solubility of pharmaceuticals in water and alcohols was presented. ► Improved group contribution method UNIFAC was proposed for 42 binary mixtures. ► Infinite activity coefficients were used in a model. ► A semi-predictive model with one experimental point was proposed. ► This model qualitatively describes the temperature dependency of Pharms. -- Abstract: An improved group contribution approach using activity coefficients at infinite dilution, which has been proposed by our group, was used for the prediction of the solubility of selected pharmaceuticals in water and alcohols [B. Moller, Activity of complex multifunctional organic compounds in common solvents, PhD Thesis, Chemical Engineering, University of KwaZulu-Natal, 2009]. The solubility of 16 different pharmaceuticals in water, ethanol and octan-1-ol was predicted over a fairly wide range of temperature with this group contribution model. The predicted values, along with values computed with the Schroeder-van Laar equation, are compared to experimental results published by us previously for 42 binary mixtures. The predicted solubility values were lower than those from the experiments for most of the mixtures. In order to improve the prediction method, a semi-predictive calculation using one experimental solubility value was implemented. This one point prediction has given acceptable results when comparison is made to experimental values
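
    For reference, the Schroeder-van Laar relation mentioned above is commonly written, neglecting the heat-capacity difference between the solid and the subcooled liquid, as

      \ln\left(x_1\gamma_1\right) = -\frac{\Delta_{\mathrm{fus}}H}{R}\left(\frac{1}{T}-\frac{1}{T_{\mathrm{fus}}}\right)

    where x_1 is the mole-fraction solubility, \gamma_1 the activity coefficient of the solute in the saturated solution (supplied here by the group contribution model), \Delta_{\mathrm{fus}}H the enthalpy of fusion and T_{\mathrm{fus}} the melting temperature. This simplified form is given only as contextual background and is not necessarily the exact expression used in the paper.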

  4. MAPPIN: a method for annotating, predicting pathogenicity and mode of inheritance for nonsynonymous variants.

    Science.gov (United States)

    Gosalia, Nehal; Economides, Aris N; Dewey, Frederick E; Balasubramanian, Suganthi

    2017-10-13

    Nonsynonymous single nucleotide variants (nsSNVs) constitute about 50% of known disease-causing mutations and understanding their functional impact is an area of active research. Existing algorithms predict pathogenicity of nsSNVs; however, they are unable to differentiate heterozygous, dominant disease-causing variants from heterozygous carrier variants that lead to disease only in the homozygous state. Here, we present MAPPIN (Method for Annotating, Predicting Pathogenicity, and mode of Inheritance for Nonsynonymous variants), a prediction method which utilizes a random forest algorithm to distinguish between nsSNVs with dominant, recessive, and benign effects. We apply MAPPIN to a set of Mendelian disease-causing mutations and accurately predict pathogenicity for all mutations. Furthermore, MAPPIN predicts mode of inheritance correctly for 70.3% of nsSNVs. MAPPIN also correctly predicts pathogenicity for 87.3% of mutations from the Deciphering Developmental Disorders Study with a 78.5% accuracy for mode of inheritance. When tested on a larger collection of mutations from the Human Gene Mutation Database, MAPPIN is able to significantly discriminate between mutations in known dominant and recessive genes. Finally, we demonstrate that MAPPIN outperforms CADD and Eigen in predicting disease inheritance modes for all validation datasets. To our knowledge, MAPPIN is the first nsSNV pathogenicity prediction algorithm that provides mode of inheritance predictions, adding another layer of information for variant prioritization. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
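
    A minimal Python sketch of the random-forest formulation of the three-class (dominant / recessive / benign) problem, with entirely synthetic features and labels in place of the curated training data used by MAPPIN:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Synthetic per-variant features standing in for conservation scores, allele
      # frequencies, gene-level constraint metrics, etc.
      X = rng.normal(size=(600, 12))
      y = rng.choice(["dominant", "recessive", "benign"], size=600)   # random labels, demo only

      clf = RandomForestClassifier(n_estimators=300, random_state=0)
      print(cross_val_score(clf, X, y, cv=5).mean())    # ~chance level on random labels

      clf.fit(X, y)
      print(clf.predict_proba(X[:2]))    # per-class probabilities for two variants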

  5. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately...... calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM...... is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper-bound to the true predictive distribution. As the global minimum of this upper-bound exists, the problem is reduced to seek an approximation to the true predictive distribution...

  6. Supplementary Material for: DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-01-01

    Abstract Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, which necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To date, many computational methods have been proposed for this purpose, but they suffer the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of the necessary experimental verifications in the process of drug discovery.
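
    The path-scoring idea can be sketched roughly as follows; this is an approximation of the description above, not the published DASPfind implementation, and the toy graph and weights are invented:

      import networkx as nx

      # Edge weights stand for drug-drug / target-target similarities and known interactions
      G = nx.Graph()
      G.add_edge("drugA", "drugB", w=0.8)      # drug-drug similarity
      G.add_edge("targetX", "targetY", w=0.6)  # target-target similarity
      G.add_edge("drugB", "targetX", w=1.0)    # known interaction
      G.add_edge("drugA", "targetY", w=1.0)

      def score(G, drug, target, max_len=3):
          """Sum over simple paths (up to max_len edges) of the product of edge weights."""
          total = 0.0
          for path in nx.all_simple_paths(G, drug, target, cutoff=max_len):
              w = 1.0
              for u, v in zip(path, path[1:]):
                  w *= G[u][v]["w"]
              total += w
          return total

      print(score(G, "drugA", "targetX"))  # candidate pairs would be ranked by this score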

  7. Prediction of intestinal absorption and blood-brain barrier penetration by computational methods.

    Science.gov (United States)

    Clark, D E

    2001-09-01

    This review surveys the computational methods that have been developed with the aim of identifying drug candidates likely to fail later on the road to market. The specifications for such computational methods are outlined, including factors such as speed, interpretability, robustness and accuracy. Then, computational filters aimed at predicting "drug-likeness" in a general sense are discussed before methods for the prediction of more specific properties--intestinal absorption and blood-brain barrier penetration--are reviewed. Directions for future research are discussed and, in concluding, the impact of these methods on the drug discovery process, both now and in the future, is briefly considered.

  8. A prediction method based on grey system theory in equipment condition based maintenance

    International Nuclear Information System (INIS)

    Yan, Shengyuan; Yan, Shengyuan; Zhang, Hongguo; Zhang, Zhijian; Peng, Minjun; Yang, Ming

    2007-01-01

    Grey prediction is a modeling method based on historical or present, known or indefinite information, which can be used to forecast the development of the eigenvalues of a targeted equipment system and to set up the model using limited information. In this paper, the fundamentals of grey system theory, including grey generating, the types of grey generating and the grey forecasting model, are introduced first. The concrete application process, which includes grey prediction modeling, grey prediction, error calculation, and the equal-dimension and new-information approach, is introduced second. The so-called 'Equal Dimension and New Information' (EDNI) technology of grey system theory is adopted in an application case, aiming at improving the accuracy of prediction without increasing the amount of calculation by replacing old data with new ones. The proposed method provides an effective new way of handling the problem of ever-growing eigenvalue data and supports equal-interval, short-interval and real-time prediction. The proposed method, which is based on historical or present, known or indefinite information, was verified by the vibration prediction of an induced draft fan of a boiler at the Yantai Power Station in China, and the results show that the method based on grey system theory is simple and provides high prediction accuracy, which makes it useful and significant for the control and management of production safety. (authors)
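
    For readers unfamiliar with grey forecasting, a minimal GM(1,1) sketch is given below; it follows one common textbook formulation rather than the paper's EDNI variant, and the input series is illustrative:

      import numpy as np

      def gm11_predict(x0, k_ahead=1):
          """Classic GM(1,1): fit the whitening equation dx1/dt + a*x1 = b and forecast."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                          # accumulated generating operation
          z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          def x1_hat(k):                              # fitted accumulated series (k is 0-based)
              return (x0[0] - b / a) * np.exp(-a * k) + b / a
          n = len(x0)
          ks = np.arange(n, n + k_ahead)
          return x1_hat(ks) - x1_hat(ks - 1)          # inverse accumulation -> forecasts

      # e.g. a short history of a monitored vibration eigenvalue
      print(gm11_predict([2.1, 2.3, 2.6, 2.8, 3.2], k_ahead=3))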

  9. Method for simulating predictive control of building systems operation in the early stages of building design

    DEFF Research Database (Denmark)

    Petersen, Steffen; Svendsen, Svend

    2011-01-01

    A method for simulating predictive control of building systems operation in the early stages of building design is presented. The method uses building simulation based on weather forecasts to predict whether there is a future heating or cooling requirement. This information enables the thermal...... control systems of the building to respond proactively to keep the operational temperature within the thermal comfort range with the minimum use of energy. The method is implemented in an existing building simulation tool designed to inform decisions in the early stages of building design through...... parametric analysis. This enables building designers to predict the performance of the method and include it as a part of the solution space. The method furthermore facilitates the task of configuring appropriate building systems control schemes in the tool, and it eliminates time consuming manual...

  10. Study on model current predictive control method of PV grid- connected inverters systems with voltage sag

    Science.gov (United States)

    Jin, N.; Yang, F.; Shang, S. Y.; Tao, T.; Liu, J. S.

    2016-08-01

    Considering the limitations of the low voltage ride through (LVRT) technology of traditional photovoltaic inverters, this paper proposes an LVRT control method based on model current predictive control (MCPC). This method can effectively improve the output characteristics and response speed of the photovoltaic inverter. In the MCPC design for the photovoltaic grid-connected inverter, the sum of the absolute values of the errors between the predicted and the reference currents is adopted as the cost function, and the optimal space voltage vector is selected accordingly. The photovoltaic inverter automatically switches between two control modes, giving priority to either active or reactive power according to the operating state, which effectively improves the LVRT capability of the inverter. The simulation and experimental results prove that the proposed method is correct and effective.
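
    A much-simplified sketch of the vector-selection step is shown below for a two-level inverter with an L-R grid filter; the model, parameters and cost function are generic finite-control-set MPC assumptions, not the authors' implementation, and the LVRT active/reactive mode switching is omitted:

      import numpy as np

      Ts, L, R = 1e-4, 5e-3, 0.1        # sample time [s], filter inductance [H], resistance [ohm]
      Vdc = 400.0
      # Eight voltage vectors of a two-level inverter in the alpha-beta frame
      vectors = [2.0 / 3.0 * Vdc * np.exp(1j * np.pi / 3.0 * k) for k in range(6)] + [0j, 0j]

      def best_vector(i_now, i_ref, e_grid):
          """Pick the vector whose one-step-ahead predicted current is closest to the reference."""
          best, best_cost = None, np.inf
          for v in vectors:
              i_pred = i_now + Ts / L * (v - e_grid - R * i_now)   # forward-Euler current prediction
              cost = abs(i_ref - i_pred)                           # |current error| cost function
              if cost < best_cost:
                  best, best_cost = v, cost
          return best

      print(best_vector(i_now=1.0 + 0.2j, i_ref=2.0 + 0.0j, e_grid=230.0 + 0j))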

  11. A Prediction Method of Airport Noise Based on Hybrid Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Tao XU

    2014-05-01

    Full Text Available Using monitoring history data to build and train a prediction model for airport noise is a common approach in recent years. However, single models built in different ways vary in storage requirements, efficiency and accuracy. In order to predict the noise accurately in the complex environment around an airport, this paper presents a prediction method based on hybrid ensemble learning. The proposed method ensembles three algorithms: an artificial neural network as an active learner, nearest neighbour as a passive learner, and nonlinear regression as a synthesized learner. The experimental results show that the three learners can meet on-line, near-line and off-line forecasting demands, respectively, and that the accuracy of prediction is improved by integrating the three learners’ results.
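
    A rough stand-in for the three-learner combination (with simple averaging in place of the paper's ensembling scheme, and synthetic data in place of monitoring history) might look like this:

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.neighbors import KNeighborsRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures, StandardScaler
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      X = rng.uniform(size=(400, 3))                 # e.g. flight events, weather, time of day (invented)
      y = 60 + 20 * X[:, 0] + 5 * np.sin(6 * X[:, 1]) + rng.normal(0, 1, 400)   # noise level [dB]

      learners = [
          make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)),  # "active"
          KNeighborsRegressor(n_neighbors=5),                                                    # "passive"
          make_pipeline(PolynomialFeatures(2), LinearRegression()),                              # nonlinear regression
      ]
      for m in learners:
          m.fit(X[:300], y[:300])
      pred = np.mean([m.predict(X[300:]) for m in learners], axis=0)
      print(np.sqrt(np.mean((pred - y[300:]) ** 2)))  # RMSE of the averaged prediction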

  12. A study on the fatigue life prediction of tire belt-layers using probabilistic method

    International Nuclear Information System (INIS)

    Lee, Dong Woo; Park, Jong Sang; Lee, Tae Won; Kim, Seong Rae; Sung, Ki Deug; Huh, Sun Chul

    2013-01-01

    Tire belt separation failure is caused by internal cracks generated in the No. 1 and No. 2 belt layers and by their growth, and belt failure seriously affects tire endurance. Therefore, to improve tire endurance, it is necessary to analyze tire crack growth behavior and predict fatigue life. Generally, the prediction of tire endurance is performed experimentally using a tire test machine, but this takes considerable cost and time. In this paper, to predict tire fatigue life, we applied a deterministic fracture mechanics approach based on finite element analysis. A probabilistic analysis method based on statistics using Monte Carlo simulation is also presented. Both methods include a global-local finite element analysis to provide the detail necessary to model an internal crack explicitly and to calculate the J-integral for tire life prediction.
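
    As a hedged illustration of how a Monte Carlo life estimate can be layered on a fracture-mechanics growth law, the sketch below integrates a generic Paris-type crack-growth relation with sampled inputs; all numbers are placeholders rather than tire data, and the paper's global-local FE/J-integral step is not reproduced:

      import numpy as np

      rng = np.random.default_rng(0)

      def cycles_to_failure(a0, a_c, C, m, d_sigma, Y=1.0, n_steps=2000):
          """Integrate da/dN = C * (Y * d_sigma * sqrt(pi * a))**m from a0 to a_c (dK in MPa*sqrt(m))."""
          a = np.linspace(a0, a_c, n_steps)
          dK = Y * d_sigma * np.sqrt(np.pi * a)
          integrand = 1.0 / (C * dK ** m)
          return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a))  # trapezoidal rule

      lives = []
      for _ in range(5000):
          a0 = rng.lognormal(mean=np.log(0.2e-3), sigma=0.3)   # initial crack size [m]
          C = rng.lognormal(mean=np.log(3e-12), sigma=0.2)     # Paris constant (illustrative)
          lives.append(cycles_to_failure(a0, a_c=2e-3, C=C, m=3.0, d_sigma=30.0))

      print(np.percentile(lives, [5, 50, 95]))   # spread of predicted fatigue life (cycles)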

  13. A summary of methods of predicting reliability life of nuclear equipment with small samples

    International Nuclear Information System (INIS)

    Liao Weixian

    2000-03-01

    Some nuclear equipment is manufactured in small batches, e.g., 1-3 sets, and its service life may be very difficult to determine experimentally for economic and technical reasons. A method combining theoretical analysis with material tests to predict the life of equipment is put forward, based on the fact that equipment consists of parts or elements made of different materials. The whole life of an equipment part consists of the crack forming life (i.e., the fatigue life or the damage accumulation life) and the crack extension life. Methods of predicting machine life are systematically summarized, with emphasis on those that use theoretical analysis as a substitute for large-scale prototype experiments. Meanwhile, methods and steps of predicting reliability life are described, taking into consideration the randomness of various variables and parameters in engineering. Finally, the latest advances and trends in machine life prediction are discussed.

  14. A noise level prediction method based on electro-mechanical frequency response function for capacitors.

    Science.gov (United States)

    Zhu, Lingyu; Ji, Shengchang; Shen, Qi; Liu, Yuan; Li, Jinyu; Liu, Hao

    2013-01-01

    The capacitors in high-voltage direct-current (HVDC) converter stations radiate a great deal of audible noise, which can exceed 100 dB. The existing noise level prediction methods are not sufficiently accurate. In this paper, a new noise level prediction method is proposed based on a frequency response function considering both the electrical and mechanical characteristics of capacitors. The electro-mechanical frequency response function (EMFRF) is defined as the frequency-domain quotient of the vibration response and the squared capacitor voltage, and it is obtained from an impulse current experiment. Under given excitations, the vibration response of the capacitor tank is the product of the EMFRF and the square of the given capacitor voltage in the frequency domain, and the radiated audible noise is calculated by structure-acoustic coupling formulas. The noise level under the same excitations was also measured in the laboratory, and the results are compared with the prediction. The comparison proves that the noise prediction method is effective.

  15. A comparison of radiosity with current methods of sound level prediction in commercial spaces

    Science.gov (United States)

    Beamer, C. Walter, IV; Muehleisen, Ralph T.

    2002-11-01

    The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]

  16. An ensemble method for predicting subnuclear localizations from primary protein structures.

    Directory of Open Access Journals (Sweden)

    Guo Sheng Han

    Full Text Available BACKGROUND: Predicting protein subnuclear localization is a challenging problem. Some previous works based on non-sequence information including Gene Ontology annotations and kernel fusion have respective limitations. The aim of this work is twofold: one is to propose a novel individual feature extraction method; the other is to develop an ensemble method to improve prediction performance using comprehensive information represented in the form of a high dimensional feature vector obtained by 11 feature extraction methods. METHODOLOGY/PRINCIPAL FINDINGS: A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It only considers those feature extraction methods based on amino acid classifications and physicochemical properties. In order to speed up our system, an automatic search method for the kernel parameter is used. The prediction performance of our method is evaluated on four datasets: the Lei dataset, the multi-localization dataset, the SNL9 dataset and a new independent dataset. The overall accuracy of prediction for 6 localizations on the Lei dataset is 75.2% and that for 9 localizations on the SNL9 dataset is 72.1% in the leave-one-out cross validation, 71.7% for the multi-localization dataset and 69.8% for the new independent dataset, respectively. Comparisons with existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of our classification model are further confirmed by permutation analysis. CONCLUSIONS: It can be concluded that our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method.

  17. A comparison of methods to predict historical daily streamflow time series in the southeastern United States

    Science.gov (United States)

    Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.

    2015-01-01

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB).
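
    The standardization idea behind NN-SMS12L can be sketched compactly: log streamflows at a donor gage are standardized by that gage's 12 monthly means and standard deviations, and the resulting series is rescaled with the (regionally regressed) monthly parameters of the ungaged site. The donor series and target parameters below are synthetic placeholders:

      import numpy as np
      import pandas as pd

      idx = pd.date_range("2020-01-01", "2020-12-31", freq="D")
      donor_q = pd.Series(np.exp(np.random.default_rng(0).normal(3.0, 0.5, len(idx))), index=idx)

      log_q = np.log(donor_q)
      donor_mu = log_q.groupby(idx.month).mean()          # 12 monthly means at the donor gage
      donor_sd = log_q.groupby(idx.month).std()           # 12 monthly standard deviations
      z = (log_q - donor_mu.loc[idx.month].values) / donor_sd.loc[idx.month].values

      # Monthly parameters at the ungaged site would come from regional regressions;
      # here they are simply shifted copies of the donor values.
      target_mu, target_sd = donor_mu + 0.3, donor_sd * 1.1
      predicted_q = np.exp(z * target_sd.loc[idx.month].values + target_mu.loc[idx.month].values)
      print(predicted_q.head())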

  18. An ensemble method to predict target genes and pathways in uveal melanoma

    Directory of Open Access Journals (Sweden)

    Wei Chao

    2018-04-01

    Full Text Available This work proposes to predict target genes and pathways for uveal melanoma (UM) based on an ensemble method and pathway analyses. Methods: The ensemble method integrated a correlation method (Pearson correlation coefficient, PCC), a causal inference method (IDA) and a regression method (Lasso) utilizing the Borda count election method. Subsequently, to validate the performance of the PIL method, comparisons between the confirmed database and predicted miRNA targets were performed. Ultimately, pathway enrichment analysis was conducted on the target genes in the top 1000 miRNA-mRNA interactions to identify target pathways for UM patients. Results: Thirty-eight of the predicted interactions matched the confirmed interactions, indicating that the ensemble method is a suitable and feasible approach to predict miRNA targets. We obtained 50 seed miRNA-mRNA interactions of UM patients and extracted target genes from these interactions, such as ASPG, BSDC1 and C4BP. The 601 target genes in the top 1,000 miRNA-mRNA interactions were enriched in 12 target pathways, of which Phototransduction was the most significant one. Conclusion: The target genes and pathways might provide a new way to reveal the molecular mechanism of UM and provide guidance for targeted treatment and prevention of this malignant tumor.
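
    The Borda count step is straightforward to illustrate; the candidate interactions and the three rankings below are invented, and the real pipeline ranks genome-scale lists produced by PCC, IDA and Lasso:

      candidates = ["pairA", "pairB", "pairC", "pairD"]
      rankings = {
          "PCC":   ["pairB", "pairA", "pairD", "pairC"],
          "IDA":   ["pairA", "pairB", "pairC", "pairD"],
          "Lasso": ["pairB", "pairD", "pairA", "pairC"],
      }

      n = len(candidates)
      borda = {c: 0 for c in candidates}
      for ranking in rankings.values():
          for position, c in enumerate(ranking):
              borda[c] += n - 1 - position          # top rank gets the most points

      consensus = sorted(candidates, key=lambda c: -borda[c])
      print(consensus)   # ['pairB', 'pairA', 'pairD', 'pairC']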

  19. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    Science.gov (United States)

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at the confined levels of density, and there is no specific computational method for calculating thermal conductivity in the wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152 was predicted by these methods and the effectiveness of models was specified and compared. Results: The results show that the computational method is a usable method for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density. It means that this model cannot be used at density levels which are higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of ANN is achieved when the number of units is increased in the hidden layer. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density at higher densities is eliminated. It can develop a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity in wide ranges of density. Instead, a nonlinear approach such as, ANN is a valuable method for this purpose.

  20. The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network

    Science.gov (United States)

    Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.

    2017-05-01

    The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is a key problem in precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are preprocessed based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by IGS (International Global Navigation Satellite System Service) to realize the short-term prediction experiment, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, the GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-term SCB prediction and performs well for different types of clocks, with prediction results clearly better than those of the conventional methods.

  1. PREDICTION OF DROUGHT IMPACT ON RICE PADDIES IN WEST JAVA USING ANALOGUE DOWNSCALING METHOD

    Directory of Open Access Journals (Sweden)

    Elza Surmaini

    2015-09-01

    Full Text Available Indonesia consistently experiences dry climatic conditions and droughts during El Niño, with significant consequences for rice production. To mitigate the impacts of such droughts, a robust, simple and timely rainfall forecast is critically important for predicting drought prior to planting time over rice growing areas in Indonesia. The main objective of this study was to predict drought in rice growing areas using ensemble seasonal prediction. The skill of the National Oceanic and Atmospheric Administration’s (NOAA’s) seasonal prediction model, Climate Forecast System version 2 (CFSv2), for predicting rice drought in West Java was investigated in a series of hindcast experiments for 1989-2010. The Constructed Analogue (CA) method was employed to produce downscaled local rainfall predictions with the stream function (ψ) and velocity potential (χ) at 850 hPa as predictors and observed rainfall as the predictand. We used forty-two rain gauges in the northern part of West Java in the Indramayu, Cirebon, Sumedang and Majalengka Districts. To be able to quantify the uncertainties, a multi-window scheme for the predictors was applied to obtain an ensemble rainfall prediction. Drought events in dry season planting were predicted by rainfall thresholds. The skill of the downscaled rainfall prediction was assessed using the Relative Operating Characteristics (ROC) method. Results of the study showed that the skill of the probabilistic seasonal prediction for early detection of rice area drought ranged from 62% to 82%, with an improved lead time of 2-4 months. The lead time of 2-4 months provides sufficient time for policy makers, extension workers and farmers to cope with drought by preparing suitable farming practices and equipment.

  2. Predicting human splicing branchpoints by combining sequence-derived features and multi-label learning methods.

    Science.gov (United States)

    Zhang, Wen; Zhu, Xiaopeng; Fu, Yu; Tsuji, Junko; Weng, Zhiping

    2017-12-01

    Alternative splicing is a critical process in gene expression that removes introns and joins exons, and splicing branchpoints are indicators of alternative splicing. Wet experiments have identified a great number of human splicing branchpoints, but many branchpoints are still unknown. In order to guide wet experiments, we develop computational methods to predict human splicing branchpoints. Considering the fact that an intron may have multiple branchpoints, we transform branchpoint prediction into a multi-label learning problem and attempt to predict branchpoint sites from intron sequences. First, we investigate a variety of intron sequence-derived features, such as the sparse profile, dinucleotide profile, position weight matrix profile, Markov motif profile and polypyrimidine tract profile. Second, we consider several multi-label learning methods: partial least squares regression, canonical correlation analysis and regularized canonical correlation analysis, and use them as the basic classification engines. Third, we propose two ensemble learning schemes which integrate different features and different classifiers to build ensemble learning systems for branchpoint prediction. One is a genetic algorithm-based weighted average ensemble method; the other is a logistic regression-based ensemble method. In the computational experiments, the two ensemble learning methods outperform benchmark branchpoint prediction methods and produce high-accuracy results on the benchmark dataset.

  3. SGC method for predicting the standard enthalpy of formation of pure compounds from their molecular structures

    International Nuclear Information System (INIS)

    Albahri, Tareq A.; Aljasmi, Abdulla F.

    2013-01-01

    Highlights: • ΔH°f is predicted from the molecular structure of the compounds alone. • The ANN-SGC model predicts ΔH°f with a correlation coefficient of 0.99. • The ANN-MNLR model predicts ΔH°f with a correlation coefficient of 0.90. • A better definition of the atom-type molecular groups is presented. • The method is better than others in terms of combined simplicity, accuracy and generality. - Abstract: A theoretical method for predicting the standard enthalpy of formation of pure compounds from various chemical families is presented. Back propagation artificial neural networks were used to investigate several structural group contribution (SGC) methods available in the literature. The networks were used to probe the structural groups that have significant contributions to the overall enthalpy of formation of pure compounds and to arrive at the set of groups that can best represent the enthalpy of formation for about 584 substances. The 51 atom-type structural groups listed provide better definitions of group contributions than others in the literature. The proposed method can predict the standard enthalpy of formation of pure compounds with an AAD of 11.38 kJ/mol and a correlation coefficient of 0.9934 from only their molecular structure. The results are further compared with those of the traditional SGC method based on MNLR as well as other methods in the literature.
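
    Whatever the fitting engine (ANN or MNLR), the prediction step of a structural-group-contribution model reduces to a weighted sum of group increments; the sketch below uses invented group values, not the contributions fitted in the paper:

      # Hypothetical group increments in kJ/mol per occurrence (placeholders only)
      group_contributions = {
          "CH3": -45.0,
          "CH2": -21.0,
          "OH":  -180.0,
      }

      def estimate_dHf(group_counts):
          """Sum n_i * g_i over the structural groups of the molecule."""
          return sum(n * group_contributions[g] for g, n in group_counts.items())

      # e.g. a simple alcohol decomposed as 1xCH3 + 2xCH2 + 1xOH
      print(estimate_dHf({"CH3": 1, "CH2": 2, "OH": 1}), "kJ/mol")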

  4. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231

  5. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction.

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M

    2013-04-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.

  6. Comparison of selected methods of prediction of wine exports and imports

    Directory of Open Access Journals (Sweden)

    Radka Šperková

    2008-01-01

    Full Text Available For the prediction of future events, there exist a number of methods usable in managerial practice. The decision on which of them should be used in a particular situation depends not only on the amount and quality of the input information, but also on subjective managerial judgement. This paper performs a practical application and subsequent comparison of the results of two selected methods, a statistical method and a deductive method. Both methods were used for predicting wine exports and imports in (from) the Czech Republic. The prediction was made in 2003 and related to the economic years 2003/2004, 2004/2005, 2005/2006, and 2006/2007, for which it was compared with the real values of the given indicators. Within the deductive method, the most important factors of the external environment were characterized, including what the authors consider the most important influence, the integration of the Czech Republic into the EU from 1st May, 2004. On the contrary, the statistical method of time-series analysis did not regard the integration, which follows from its principle: statistics only calculates from data of the past and cannot incorporate the influence of irregular future conditions such as the EU integration. Because of this, the prediction based on the deductive method was more optimistic and more precise in terms of its difference from the real development in the given field.

  7. Effectiveness of the cervical vertebral maturation method to predict postpeak circumpubertal growth of craniofacial structures.

    NARCIS (Netherlands)

    Fudalej, P.S.; Bollen, A.M.

    2010-01-01

    INTRODUCTION: Our aim was to assess the effectiveness of the cervical vertebral maturation (CVM) method to predict circumpubertal craniofacial growth in the postpeak period. METHODS: The CVM stage was determined in 176 subjects (51 adolescent boys and 125 adolescent girls) on cephalograms taken at the

  8. Statistical Analysis of a Method to Predict Drug-Polymer Miscibility

    DEFF Research Database (Denmark)

    Knopp, Matthias Manne; Olesen, Niels Erik; Huang, Yanbin

    2016-01-01

    In this study, a method proposed to predict drug-polymer miscibility from differential scanning calorimetry measurements was subjected to statistical analysis. The method is relatively fast and inexpensive and has gained popularity as a result of the increasing interest in the formulation of drug...... as provided in this study. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association J Pharm Sci....

  9. Deep learning versus traditional machine learning methods for aggregated energy demand prediction

    NARCIS (Netherlands)

    Paterakis, N.G.; Mocanu, E.; Gibescu, M.; Stappers, B.; van Alst, W.

    2018-01-01

    In this paper the more advanced, in comparison with traditional machine learning approaches, deep learning methods are explored with the purpose of accurately predicting the aggregated energy consumption. Despite the fact that a wide range of machine learning methods have been applied to

  10. Some new results on correlation-preserving factor scores prediction methods

    NARCIS (Netherlands)

    Ten Berge, J.M.F.; Krijnen, W.P.; Wansbeek, T.J.; Shapiro, A.

    1999-01-01

    Anderson and Rubin and McDonald have proposed a correlation-preserving method of factor scores prediction which minimizes the trace of a residual covariance matrix for variables. Green has proposed a correlation-preserving method which minimizes the trace of a residual covariance matrix for factors.

  11. A Simple Microsoft Excel Method to Predict Antibiotic Outbreaks and Underutilization.

    Science.gov (United States)

    Miglis, Cristina; Rhodes, Nathaniel J; Avedissian, Sean N; Zembower, Teresa R; Postelnick, Michael; Wunderink, Richard G; Sutton, Sarah H; Scheetz, Marc H

    2017-07-01

    Benchmarking strategies are needed to promote the appropriate use of antibiotics. We have adapted a simple regressive method in Microsoft Excel that is easily implementable and creates predictive indices. This method trends consumption over time and can identify periods of over- and underuse at the hospital level. Infect Control Hosp Epidemiol 2017;38:860-862.

  12. Reliable B cell epitope predictions: impacts of method development and improved benchmarking

    DEFF Research Database (Denmark)

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole

    2012-01-01

    biomedical applications such as; rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping...... evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performances suggesting these tools to be a powerful asset in rational epitope discovery. The updated version...

  13. A computational method to predict fluid-structure interaction of pressure relief valves

    Energy Technology Data Exchange (ETDEWEB)

    Kang, S. K.; Lee, D. H.; Park, S. K.; Hong, S. R. [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    An effective CFD (computational fluid dynamics) method to predict important performance parameters, such as blowdown and chattering, for pressure relief valves in NPPs is provided in the present study. To calculate the valve motion, a 6DOF (six degrees of freedom) model is used. A chimera overset grid method is utilized in this study to eliminate the grid remeshing problem when the disk moves. Further, CFD-Fastran, which is developed by CFD-RC for compressible flow analysis, is applied to a 1' safety valve. The prediction results confirm the applicability of the method presented in this study.

  14. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in prediction of time series is proposed. Apart from the conventional criterion of minimizing RMS error, the method also minimizes the error on the distribution of singularities, evaluated through the local Hoelder estimates and its probability density spectrum. Predictions of two simulated and one real time series have been done using kernel principal component regression (KPCR) and model parameters of KPCR have been selected employing the proposed as well as the conventional method. Results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)

  15. Improving Allergen Prediction in Main Crops Using a Weighted Integrative Method.

    Science.gov (United States)

    Li, Jing; Wang, Jing; Li, Jing

    2017-12-01

    As a public health problem, food allergy is frequently caused by food allergen proteins, which trigger a type-I hypersensitivity reaction in the immune system of atopic individuals. The food allergens in our daily lives mainly come from crops including rice, wheat, soybean and maize. However, the allergens in these main crops are far from fully uncovered. Although some bioinformatics tools or methods for predicting the potential allergenicity of proteins have been proposed, each method has its limitations. In this paper, we built a novel algorithm, PREALW, which integrates PREAL, the FAO/WHO criteria and a motif-based method through a weighted average score, to combine the advantages of the different methods. Our results illustrate that PREALW performs significantly better in crop allergen prediction. This integrative allergen prediction algorithm could be useful for critical food safety matters. PREALW can be accessed at http://lilab.life.sjtu.edu.cn:8080/prealw.

  16. Prediction of protein post-translational modifications: main trends and methods

    Science.gov (United States)

    Sobolev, B. N.; Veselovsky, A. V.; Poroikov, V. V.

    2014-02-01

    The review summarizes main trends in the development of methods for the prediction of protein post-translational modifications (PTMs) by considering the three most common types of PTMs — phosphorylation, acetylation and glycosylation. Considerable attention is given to general characteristics of regulatory interactions associated with PTMs. Different approaches to the prediction of PTMs are analyzed. Most of the methods are based only on the analysis of the neighbouring environment of modification sites. The related software is characterized by relatively low accuracy of PTM predictions, which may be due both to the incompleteness of training data and the features of PTM regulation. Advantages and limitations of the phylogenetic approach are considered. The prediction of PTMs using data on regulatory interactions, including the modular organization of interacting proteins, is a promising field, provided that a more carefully selected training data will be used. The bibliography includes 145 references.

  17. In silico toxicology: computational methods for the prediction of chemical toxicity

    KAUST Repository

    Raies, Arwa B.; Bajic, Vladimir B.

    2016-01-01

    Determining the toxicity of chemicals is necessary to identify their harmful effects on humans, animals, plants, or the environment. It is also one of the main steps in drug design. Animal models have been used for a long time for toxicity testing. However, in vivo animal tests are constrained by time, ethical considerations, and financial burden. Therefore, computational methods for estimating the toxicity of chemicals are considered useful. In silico toxicology is one type of toxicity assessment that uses computational methods to analyze, simulate, visualize, or predict the toxicity of chemicals. In silico toxicology aims to complement existing toxicity tests to predict toxicity, prioritize chemicals, guide toxicity tests, and minimize late-stage failures in drugs design. There are various methods for generating models to predict toxicity endpoints. We provide a comprehensive overview, explain, and compare the strengths and weaknesses of the existing modeling methods and algorithms for toxicity prediction with a particular (but not exclusive) emphasis on computational tools that can implement these methods and refer to expert systems that deploy the prediction models. Finally, we briefly review a number of new research directions in in silico toxicology and provide recommendations for designing in silico models.

  18. In silico toxicology: computational methods for the prediction of chemical toxicity

    KAUST Repository

    Raies, Arwa B.

    2016-01-06

    Determining the toxicity of chemicals is necessary to identify their harmful effects on humans, animals, plants, or the environment. It is also one of the main steps in drug design. Animal models have been used for a long time for toxicity testing. However, in vivo animal tests are constrained by time, ethical considerations, and financial burden. Therefore, computational methods for estimating the toxicity of chemicals are considered useful. In silico toxicology is one type of toxicity assessment that uses computational methods to analyze, simulate, visualize, or predict the toxicity of chemicals. In silico toxicology aims to complement existing toxicity tests to predict toxicity, prioritize chemicals, guide toxicity tests, and minimize late-stage failures in drugs design. There are various methods for generating models to predict toxicity endpoints. We provide a comprehensive overview, explain, and compare the strengths and weaknesses of the existing modeling methods and algorithms for toxicity prediction with a particular (but not exclusive) emphasis on computational tools that can implement these methods and refer to expert systems that deploy the prediction models. Finally, we briefly review a number of new research directions in in silico toxicology and provide recommendations for designing in silico models.

  19. Multiplier method may be unreliable to predict the timing of temporary hemiepiphysiodesis for coronal angular deformity.

    Science.gov (United States)

    Wu, Zhenkai; Ding, Jing; Zhao, Dahang; Zhao, Li; Li, Hai; Liu, Jianlin

    2017-07-10

    The multiplier method was introduced by Paley to calculate the timing of temporary hemiepiphysiodesis. However, this method has not been verified in terms of clinical outcome measures. We aimed to (1) predict the rate of angular correction per year (ACPY) at the various corresponding ages by means of the multiplier method and verify its reliability based on data from published studies, and (2) screen for risk factors associated with deviation of prediction. A comprehensive search was performed in the following electronic databases: Cochrane, PubMed, and EMBASE™. A total of 22 studies met the inclusion criteria. If the actual value of ACPY from the collected data fell outside the range of the value predicted by the multiplier method, this was considered a deviation of prediction (DOP). The associations of patient characteristics with DOP were assessed using univariate logistic regression. Only one article was evaluated as moderate evidence; the remaining articles were evaluated as poor quality. The rate of DOP was 31.82%. In the detailed individual data of the included studies, the rate of DOP was 55.44%. The multiplier method is not reliable for predicting the timing of temporary hemiepiphysiodesis, even though it tends to be more reliable for younger patients with idiopathic coronal deformity of the knee.

  20. A GPS Satellite Clock Offset Prediction Method Based on Fitting Clock Offset Rates Data

    Directory of Open Access Journals (Sweden)

    WANG Fuhong

    2016-12-01

    Full Text Available A satellite atomic clock offset prediction method based on fitting and modeling clock offset rate data is proposed. This method builds a quadratic or linear model combined with periodic terms to fit the time series of clock offset rates, and estimates the trend coefficients of the model. The clock offset precisely estimated at the initial prediction epoch is directly adopted to determine the constant term of the model. The clock offsets in the rapid ephemeris (IGR) provided by IGS are used as the modeling data sets to perform experiments for different types of GPS satellite clocks. The results show that the clock prediction accuracies of the proposed method for 3, 6, 12 and 24 h reach 0.43, 0.58, 0.90 and 1.47 ns respectively, outperforming the traditional prediction method based on fitting the original clock offsets by 69.3%, 61.8%, 50.5% and 37.2%. Compared with the IGU real-time clock products provided by IGS, the prediction accuracies of the new method are improved by about 15.7%, 23.7%, 27.4% and 34.4% respectively.
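
    The overall workflow can be sketched as follows, under the assumption that the fitted rate model is integrated forward from the precisely known initial offset; the rate series, the single 12-hour periodic term and all numbers are illustrative, not IGS data or the authors' exact estimator:

      import numpy as np

      dt = 900.0                                    # sampling interval [s] (15 min)
      t = np.arange(96) * dt                        # one day of clock offset rate samples
      rng = np.random.default_rng(0)
      rates = 2e-10 + 1e-15 * t + 5e-11 * np.sin(2 * np.pi * t / 43200.0) + rng.normal(0, 1e-11, t.size)

      period = 43200.0                              # assumed periodic term (12 h)
      A = np.column_stack([np.ones_like(t), t,
                           np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)])
      coef, *_ = np.linalg.lstsq(A, rates, rcond=None)   # trend + periodic coefficients

      t_pred = t[-1] + dt * np.arange(1, 13)        # predict the next 3 hours of rates
      A_pred = np.column_stack([np.ones_like(t_pred), t_pred,
                                np.sin(2 * np.pi * t_pred / period), np.cos(2 * np.pi * t_pred / period)])
      rate_pred = A_pred @ coef

      offset0 = 1.23e-4                             # precisely estimated offset at the initial epoch [s]
      offset_pred = offset0 + np.cumsum(rate_pred) * dt   # integrate fitted rates forward
      print(offset_pred[-1])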

  1. A postprocessing method in the HMC framework for predicting gene function based on biological instrumental data

    Science.gov (United States)

    Feng, Shou; Fu, Ping; Zheng, Wenbin

    2018-03-01

    Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When using local approach methods to solve this problem, a method for processing the preliminary results is usually needed. This paper proposes a novel preliminary-results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. In its first phase, this method exploits the label dependency and considers the hierarchical interaction between nodes when making decisions based on a Bayesian network. In the second phase, this method further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances the HMC performance for solving the gene function prediction problem based on the Gene Ontology (GO), whose hierarchy is a directed acyclic graph and is therefore more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated by the GO.
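
    The hierarchy constraint itself (a parent term's score may not be lower than any descendant's) can be enforced in many ways; the sketch below shows one simple upward propagation on a toy DAG, which only illustrates the constraint and is not the paper's Bayesian-network interaction step:

      import networkx as nx

      dag = nx.DiGraph([("root", "A"), ("root", "B"), ("A", "A1"), ("A", "A2")])  # parent -> child
      scores = {"root": 0.4, "A": 0.3, "B": 0.1, "A1": 0.7, "A2": 0.2}            # invented raw scores

      for node in reversed(list(nx.topological_sort(dag))):        # visit children before parents
          for parent in dag.predecessors(node):
              scores[parent] = max(scores[parent], scores[node])   # push scores upward

      print(scores)   # 'A' and 'root' are raised to 0.7 so they cover 'A1'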

  2. Prediction of critical heat flux in fuel assemblies using a CHF table method

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Tae Hyun; Hwang, Dae Hyun; Bang, Je Geon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Baek, Won Pil; Chang, Soon Heung [Korea Advance Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    A CHF table method has been assessed in this study for rod bundle CHF predictions. At the conceptual design stage for a new reactor, a general critical heat flux (CHF) prediction method with a wide applicable range and reasonable accuracy is essential to the thermal-hydraulic design and safety analysis. In many aspects, a CHF table method (i.e., the use of a round tube CHF table with appropriate bundle correction factors) can be a promising way to fulfill this need. So the assessment of the CHF table method has been performed with the bundle CHF data relevant to pressurized water reactors (PWRs). For comparison purposes, W-3R and EPRI-1 were also applied to the same data base. Data analysis has been conducted with the subchannel code COBRA-IV-I. The CHF table method shows the best predictions based on the direct substitution method. Improvements of the bundle correction factors, especially for the spacer grid and cold wall effects, are desirable for better predictions. Though the present assessment is somewhat limited in both fuel geometries and operating conditions, the CHF table method clearly shows potential to be a general CHF predictor. 8 refs., 3 figs., 3 tabs. (Author)

  3. Prediction of critical heat flux in fuel assemblies using a CHF table method

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Tae Hyun; Hwang, Dae Hyun; Bang, Je Geon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Baek, Won Pil; Chang, Soon Heung [Korea Advance Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    A CHF table method has been assessed in this study for rod bundle CHF predictions. At the conceptual design stage for a new reactor, a general critical heat flux (CHF) prediction method with a wide applicable range and reasonable accuracy is essential to the thermal-hydraulic design and safety analysis. In many aspects, a CHF table method (i.e., the use of a round tube CHF table with appropriate bundle correction factors) can be a promising way to fulfill this need. So the assessment of the CHF table method has been performed with the bundle CHF data relevant to pressurized water reactors (PWRs). For comparison purposes, W-3R and EPRI-1 were also applied to the same data base. Data analysis has been conducted with the subchannel code COBRA-IV-I. The CHF table method shows the best predictions based on the direct substitution method. Improvements of the bundle correction factors, especially for the spacer grid and cold wall effects, are desirable for better predictions. Though the present assessment is somewhat limited in both fuel geometries and operating conditions, the CHF table method clearly shows potential to be a general CHF predictor. 8 refs., 3 figs., 3 tabs. (Author)

  4. Creep-fatigue life prediction method using Diercks equation for Cr-Mo steel

    International Nuclear Information System (INIS)

    Sonoya, Keiji; Nonaka, Isamu; Kitagawa, Masaki

    1990-01-01

    When creep-fatigue life properties of a material are not available, a simple method to predict them is needed. A method to predict the creep-fatigue life properties of Cr-Mo steels is proposed on the basis of the Diercks equation, which correlates the creep-fatigue lives of SUS 304 steels under various temperatures, strain ranges, strain rates and hold times. The accuracy of the proposed method was compared with that of existing methods, and the following results were obtained. (1) The fatigue strength and creep rupture strength of Cr-Mo steel differ from those of SUS 304 steel. Therefore, in order to apply the Diercks equation to creep-fatigue prediction for Cr-Mo steel, the difference in fatigue strength is corrected by the fatigue life ratio of the two steels, and the difference in creep rupture strength is corrected by the equivalent temperature corresponding to equal strength of the two steels. (2) Creep-fatigue life can be predicted by the modified Diercks equation within a factor of 2, which is nearly as precise as the strain range partitioning method, while the test and analysis procedures required by this method are not as complicated as those of the strain range partitioning method. (author)

  5. A deep learning-based multi-model ensemble method for cancer prediction.

    Science.gov (United States)

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
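
    In spirit, the approach is a stacking ensemble whose meta-learner is a neural network; the sketch below uses synthetic data, five standard scikit-learn base classifiers and a small MLP as a stand-in for the deep-learning ensembler, so it illustrates the architecture rather than reproducing the paper's models:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_predict, train_test_split

      X, y = make_classification(n_samples=600, n_features=50, n_informative=10, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      bases = [RandomForestClassifier(random_state=0), GradientBoostingClassifier(random_state=0),
               LogisticRegression(max_iter=2000), SVC(probability=True, random_state=0),
               KNeighborsClassifier()]

      # Out-of-fold class probabilities become the meta-features
      meta_tr = np.column_stack([cross_val_predict(m, X_tr, y_tr, cv=5, method="predict_proba")[:, 1]
                                 for m in bases])
      for m in bases:
          m.fit(X_tr, y_tr)
      meta_te = np.column_stack([m.predict_proba(X_te)[:, 1] for m in bases])

      ensembler = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
      ensembler.fit(meta_tr, y_tr)
      print(ensembler.score(meta_te, y_te))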

  6. PatchSurfers: Two methods for local molecular property-based binding ligand prediction.

    Science.gov (United States)

    Shin, Woong-Hee; Bures, Mark Gregory; Kihara, Daisuke

    2016-01-15

    Protein function prediction is an active area of research in computational biology. Function prediction can help biologists make hypotheses for characterization of genes and help interpret biological assays, and thus is a productive area for collaboration between experimental and computational biologists. Among various function prediction methods, predicting binding ligand molecules for a target protein is an important class because ligand binding events for a protein are usually closely intertwined with the proteins' biological function, and also because predicted binding ligands can often be directly tested by biochemical assays. Binding ligand prediction methods can be classified into two types: those which are based on protein-protein (or pocket-pocket) comparison, and those that compare a target pocket directly to ligands. Recently, our group proposed two computational binding ligand prediction methods, Patch-Surfer, which is a pocket-pocket comparison method, and PL-PatchSurfer, which compares a pocket to ligand molecules. The two programs apply surface patch-based descriptions to calculate similarity or complementarity between molecules. A surface patch is characterized by physicochemical properties such as shape, hydrophobicity, and electrostatic potentials. These properties on the surface are represented using three-dimensional Zernike descriptors (3DZD), which are based on a series expansion of a 3 dimensional function. Utilizing 3DZD for describing the physicochemical properties has two main advantages: (1) rotational invariance and (2) fast comparison. Here, we introduce Patch-Surfer and PL-PatchSurfer with an emphasis on PL-PatchSurfer, which is more recently developed. Illustrative examples of PL-PatchSurfer performance on binding ligand prediction as well as virtual drug screening are also provided. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Basic study on dynamic reactive-power control method with PV output prediction for solar inverter

    Directory of Open Access Journals (Sweden)

    Ryunosuke Miyoshi

    2016-01-01

    Full Text Available To effectively utilize a photovoltaic (PV) system, reactive-power control methods for solar inverters have been considered. Among the various methods, the constant-voltage control outputs less reactive power compared with the other methods. We have developed a constant-voltage control to reduce the reactive-power output. However, the developed constant-voltage control still outputs unnecessary reactive power because the control parameter is constant for every waveform of the PV output. To reduce the reactive-power output, we propose a dynamic reactive-power control method with a PV output prediction. In the proposed method, the control parameter is varied according to the properties of the predicted PV waveform. In this study, we performed numerical simulations using a distribution system model, and we confirmed that the proposed method reduces the reactive-power output within the voltage constraint.

  8. Fluvial facies reservoir productivity prediction method based on principal component analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    Pengyu Gao

    2016-03-01

    Full Text Available It is difficult to forecast well productivity because of the complexity of vertical and horizontal developments in fluvial facies reservoirs. This paper proposes a method based on Principal Component Analysis and Artificial Neural Networks to predict the well productivity of fluvial facies reservoirs. The method compiles the statistical reservoir factors and engineering factors that affect well productivity, extracts the essential information by applying principal component analysis, and exploits the neural network's ability to approximate arbitrary functions to obtain an accurate and efficient prediction of fluvial facies reservoir well productivity. The method provides an effective way of forecasting the productivity of fluvial facies reservoirs, which is governed by multiple factors and complex mechanisms. The study results show that the method is a practical, effective and accurate indirect productivity forecasting approach suitable for field application.
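
    A minimal sketch of the PCA-plus-neural-network workflow is given below; the reservoir and engineering factors, the network size and the synthetic productivity values are placeholders chosen for illustration only.

```python
# Sketch of a PCA + neural-network productivity predictor. The factors and
# productivity values are synthetic stand-ins; the network size is an assumption.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_wells = 120
# e.g. porosity, permeability, net pay, skin, choke size, drawdown, ...
factors = rng.normal(size=(n_wells, 8))
productivity = factors @ rng.normal(size=8) + 0.1 * rng.normal(size=n_wells)

model = make_pipeline(
    StandardScaler(),               # put factors on a common scale
    PCA(n_components=0.95),         # keep components explaining 95% of variance
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)

scores = cross_val_score(model, factors, productivity, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```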

  9. RSARF: Prediction of residue solvent accessibility from protein sequence using random forest method

    KAUST Repository

    Ganesan, Pugalenthi; Kandaswamy, Krishna Kumar Umar; Chou, Kuo-Chen; Vivekanandan, Saravanan; Kolatkar, Prasanna R.

    2012-01-01

    Prediction of protein structure from its amino acid sequence is still a challenging problem. A complete physicochemical understanding of protein folding is essential for accurate structure prediction. Knowledge of residue solvent accessibility gives useful insights into protein structure prediction and function prediction. In this work, we propose a random forest method, RSARF, to predict residue accessible surface area from protein sequence information. The training and testing were performed using 120 proteins containing 22006 residues. For each residue, the buried or exposed state was computed using five thresholds (0%, 5%, 10%, 25%, and 50%). The prediction accuracies for the 0%, 5%, 10%, 25%, and 50% thresholds are 72.9%, 78.25%, 78.12%, 77.57% and 72.07%, respectively. Further, comparison of RSARF with other methods using a benchmark dataset containing 20 proteins shows that our approach is useful for prediction of residue solvent accessibility from protein sequence without using structural information. The RSARF program, datasets and supplementary data are available at http://caps.ncbs.res.in/download/pugal/RSARF/.
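
    The sketch below shows the general shape of such a random-forest predictor of buried/exposed state; the one-hot sliding-window encoding, the toy sequences and the single 25% threshold are assumptions made for illustration and do not reproduce RSARF's published feature set.

```python
# Sketch of random-forest prediction of buried/exposed state from sequence.
# The sliding-window one-hot encoding, toy data and 25% threshold are
# illustrative assumptions, not the published RSARF features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AA)}

def window_features(sequence, position, half_window=7):
    """One-hot encode a window of residues centred on `position`."""
    feats = np.zeros((2 * half_window + 1, len(AA)))
    for k, offset in enumerate(range(-half_window, half_window + 1)):
        j = position + offset
        if 0 <= j < len(sequence) and sequence[j] in AA_INDEX:
            feats[k, AA_INDEX[sequence[j]]] = 1.0
    return feats.ravel()

# Toy data: random sequences with random "relative accessibility" labels.
rng = np.random.default_rng(0)
sequences = ["".join(rng.choice(list(AA), size=80)) for _ in range(50)]
X, y = [], []
for seq in sequences:
    rsa = rng.uniform(0, 1, size=len(seq))      # stand-in for DSSP-derived values
    for pos in range(len(seq)):
        X.append(window_features(seq, pos))
        y.append(int(rsa[pos] > 0.25))          # exposed at the 25% threshold
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=3).mean())
```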

  10. An improved method for predicting the effects of flight on jet mixing noise

    Science.gov (United States)

    Stone, J. R.

    1979-01-01

    A method for predicting the effects of flight on jet mixing noise has been developed on the basis of the jet noise theory of Ffowcs-Williams (1963) and data derived from model-jet/free-jet simulated flight tests. Predicted and experimental values are compared for the J85 turbojet engine on the Bertin Aerotrain, the low-bypass refanned JT8D engine on a DC-9, and the high-bypass JT9D engine on a DC-10. Over the jet velocity range from 280 to 680 m/sec, the predictions show a standard deviation of 1.5 dB.

  11. Prediction method for thermal ratcheting of a cylinder subjected to axially moving temperature distribution

    International Nuclear Information System (INIS)

    Wada, Hiroshi; Igari, Toshihide; Kitade, Shoji.

    1989-01-01

    A prediction method was proposed for plastic ratcheting of a cylinder, which was subjected to axially moving temperature distribution without primary stress. First, a mechanism of this ratcheting was proposed, which considered the movement of temperature distribution as a driving force of this phenomenon. Predictive equations of the ratcheting strain for two representative temperature distributions were proposed based on this mechanism by assuming the elastic-perfectly-plastic material behavior. Secondly, an elastic-plastic analysis was made on a cylinder subjected to the representative two temperature distributions. Analytical results coincided well with the predicted results, and the applicability of the proposed equations was confirmed. (author)

  12. MU-LOC: A Machine-Learning Method for Predicting Mitochondrially Localized Proteins in Plants

    DEFF Research Database (Denmark)

    Zhang, Ning; Rao, R Shyama Prasad; Salvato, Fernanda

    2018-01-01

    -sequence or a multitude of internal signals. Compared with experimental approaches, computational predictions provide an efficient way to infer subcellular localization of a protein. However, it is still challenging to predict plant mitochondrially localized proteins accurately due to various limitations. Consequently......, the performance of current tools can be improved with new data and new machine-learning methods. We present MU-LOC, a novel computational approach for large-scale prediction of plant mitochondrial proteins. We collected a comprehensive dataset of plant subcellular localization, extracted features including amino...

  13. Accurate approximation method for prediction of class I MHC affinities for peptides of length 8, 10 and 11 using prediction tools trained on 9mers

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2008-01-01

    Several accurate prediction systems have been developed for prediction of class I major histocompatibility complex (MHC):peptide binding. Most of these are trained on binding affinity data of primarily 9mer peptides. Here, we show how prediction methods trained on 9mer data can be used for accurate...

  14. A dynamic particle filter-support vector regression method for reliability prediction

    International Nuclear Information System (INIS)

    Wei, Zhao; Tao, Tao; ZhuoShu, Ding; Zio, Enrico

    2013-01-01

    Support vector regression (SVR) has been applied to time series prediction and some works have demonstrated the feasibility of its use to forecast system reliability. For accuracy of reliability forecasting, the selection of SVR's parameters is important. The existing research works on SVR's parameters selection divide the example dataset into training and test subsets, and tune the parameters on the training data. However, these fixed parameters can lead to poor prediction capabilities if the data of the test subset differ significantly from those of training. Differently, the novel method proposed in this paper uses particle filtering to estimate the SVR model parameters according to the whole measurement sequence up to the last observation instance. By treating the SVR training model as the observation equation of a particle filter, our method allows updating the SVR model parameters dynamically when a new observation comes. Because of the adaptability of the parameters to dynamic data pattern, the new PF–SVR method has superior prediction performance over that of standard SVR. Four application results show that PF–SVR is more robust than SVR to the decrease of the number of training data and the change of initial SVR parameter values. Also, even if there are trends in the test data different from those in the training data, the method can capture the changes, correct the SVR parameters and obtain good predictions. -- Highlights: •A dynamic PF–SVR method is proposed to predict the system reliability. •The method can adjust the SVR parameters according to the change of data. •The method is robust to the size of training data and initial parameter values. •Some cases based on both artificial and real data are studied. •PF–SVR shows superior prediction performance over standard SVR
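
    A minimal sketch of the underlying idea, treating the SVR hyperparameters as particle-filter states that are re-weighted and resampled as each new observation arrives, is shown below; the synthetic reliability series, the Gaussian likelihood, the jitter step and the parameter ranges are assumptions, and the paper's exact state and observation equations are not reproduced.

```python
# Minimal sketch: a bootstrap particle filter over SVR hyperparameters that are
# re-weighted as each new reliability observation arrives. All data and settings
# below are assumptions for illustration.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def log_likelihood(params, hist_t, hist_y, t_new, y_new, sigma=0.05):
    """Score a hyperparameter particle by how well its SVR predicts the new point."""
    C, gamma, eps = params
    svr = SVR(C=C, gamma=gamma, epsilon=eps).fit(hist_t, hist_y)
    resid = y_new - svr.predict(t_new)[0]
    return -0.5 * (resid / sigma) ** 2

# Synthetic degrading-reliability series.
t = np.arange(40, dtype=float).reshape(-1, 1)
y = np.exp(-0.03 * t.ravel()) + 0.01 * rng.normal(size=40)

n_particles = 50
particles = np.column_stack([
    rng.uniform(1.0, 100.0, n_particles),    # C
    rng.uniform(0.01, 1.0, n_particles),     # gamma
    rng.uniform(0.001, 0.05, n_particles),   # epsilon
])

for k in range(10, len(t)):                  # sequential updating
    logw = np.array([log_likelihood(p, t[:k], y[:k], t[k:k + 1], y[k]) for p in particles])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)                          # resample
    particles = particles[idx] * rng.lognormal(0.0, 0.05, particles.shape)   # jitter

# Use the particle mean as the final hyperparameter estimate and predict one step ahead.
C_hat, gamma_hat, eps_hat = particles.mean(axis=0)
svr = SVR(C=C_hat, gamma=gamma_hat, epsilon=eps_hat).fit(t, y)
print("one-step-ahead prediction:", svr.predict([[40.0]])[0])
```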

  15. Disorder Prediction Methods, Their Applicability to Different Protein Targets and Their Usefulness for Guiding Experimental Studies

    Directory of Open Access Journals (Sweden)

    Jennifer D. Atkins

    2015-08-01

    Full Text Available The role and function of a given protein is dependent on its structure. In recent years, however, numerous studies have highlighted the importance of unstructured, or disordered regions in governing a protein’s function. Disordered proteins have been found to play important roles in pivotal cellular functions, such as DNA binding and signalling cascades. Studying proteins with extended disordered regions is often problematic as they can be challenging to express, purify and crystallise. This means that interpretable experimental data on protein disorder is hard to generate. As a result, predictive computational tools have been developed with the aim of predicting the level and location of disorder within a protein. Currently, over 60 prediction servers exist, utilizing different methods for classifying disorder and different training sets. Here we review several good performing, publicly available prediction methods, comparing their application and discussing how disorder prediction servers can be used to aid the experimental solution of protein structure. The use of disorder prediction methods allows us to adopt a more targeted approach to experimental studies by accurately identifying the boundaries of ordered protein domains so that they may be investigated separately, thereby increasing the likelihood of their successful experimental solution.

  16. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    International Nuclear Information System (INIS)

    Nedic, Vladimir; Despotovic, Danijela; Cvetanovic, Slobodan; Despotovic, Milan; Babic, Sasa

    2014-01-01

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the proposed structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period, Leq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise using the originally developed user friendly software package. It is shown that the artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior in traffic noise level prediction to any other statistical method. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user friendly software package. • The results are compared with classical statistical methods. • The results show the much better predictive capabilities of the ANN model.

  17. SVM-PB-Pred: SVM based protein block prediction method using sequence profiles and secondary structures.

    Science.gov (United States)

    Suresh, V; Parthasarathy, S

    2014-01-01

    We developed a support vector machine based web server called SVM-PB-Pred to predict the Protein Block for any given amino acid sequence. The input features of SVM-PB-Pred include (i) sequence profiles (PSSM) and (ii) actual secondary structures (SS) from the DSSP method or predicted secondary structures from the NPS@ and GOR4 methods. Three combined input features, PSSM+SS(DSSP), PSSM+SS(NPS@) and PSSM+SS(GOR4), were used to train and test the SVM models. Similarly, four datasets, RS90, DB433, LI1264 and SP1577, were used to develop the SVM models. The four SVM models were evaluated using three different benchmarking tests, namely (i) self-consistency, (ii) seven-fold cross-validation and (iii) independent case tests. The maximum prediction accuracy of ~70% was observed in the self-consistency test for the SVM models of both the LI1264 and SP1577 datasets, where the PSSM+SS(DSSP) input features were used. The prediction accuracies were reduced to ~53% for PSSM+SS(NPS@) and ~43% for PSSM+SS(GOR4) in the independent case test for the SVM models of the same two datasets. Using our method, it is possible to predict the protein block letters for any query protein sequence with ~53% accuracy when the SP1577 dataset and predicted secondary structures from the NPS@ server are used. The SVM-PB-Pred server can be freely accessed through http://bioinfo.bdu.ac.in/~svmpbpred.

  18. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    Science.gov (United States)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October–January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March–June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March–June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968–99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.

  19. A novel method for improved accuracy of transcription factor binding site prediction

    KAUST Repository

    Khamis, Abdullah M.; Motwalli, Olaa Amin; Oliva, Romina; Jankovic, Boris R.; Medvedeva, Yulia; Ashoor, Haitham; Essack, Magbubah; Gao, Xin; Bajic, Vladimir B.

    2018-01-01

    Identifying transcription factor (TF) binding sites (TFBSs) is important in the computational inference of gene regulation. Widely used computational methods of TFBS prediction based on position weight matrices (PWMs) usually have high false positive rates. Moreover, computational studies of transcription regulation in eukaryotes frequently require numerous PWM models of TFBSs due to a large number of TFs involved. To overcome these problems we developed DRAF, a novel method for TFBS prediction that requires only 14 prediction models for 232 human TFs, while at the same time significantly improves prediction accuracy. DRAF models use more features than PWM models, as they combine information from TFBS sequences and physicochemical properties of TF DNA-binding domains into machine learning models. Evaluation of DRAF on 98 human ChIP-seq datasets shows on average 1.54-, 1.96- and 5.19-fold reduction of false positives at the same sensitivities compared to models from HOCOMOCO, TRANSFAC and DeepBind, respectively. This observation suggests that one can efficiently replace the PWM models for TFBS prediction by a small number of DRAF models that significantly improve prediction accuracy. The DRAF method is implemented in a web tool and in a stand-alone software freely available at http://cbrc.kaust.edu.sa/DRAF.

  20. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    Energy Technology Data Exchange (ETDEWEB)

    Nedic, Vladimir, E-mail: vnedic@kg.ac.rs [Faculty of Philology and Arts, University of Kragujevac, Jovana Cvijića bb, 34000 Kragujevac (Serbia); Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs [Faculty of Economics, University of Kragujevac, Djure Pucara Starog 3, 34000 Kragujevac (Serbia); Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs [Faculty of Economics, University of Niš, Trg kralja Aleksandra Ujedinitelja, 18000 Niš (Serbia); Despotovic, Milan, E-mail: mdespotovic@kg.ac.rs [Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, 34000 Kragujevac (Serbia); Babic, Sasa, E-mail: babicsf@yahoo.com [College of Applied Mechanical Engineering, Trstenik (Serbia)

    2014-11-15

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the proposed structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period, Leq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise using the originally developed user friendly software package. It is shown that the artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior in traffic noise level prediction to any other statistical method. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user friendly software package. • The results are compared with classical statistical methods. • The results show the much better predictive capabilities of the ANN model.

  1. A novel method for improved accuracy of transcription factor binding site prediction

    KAUST Repository

    Khamis, Abdullah M.

    2018-03-20

    Identifying transcription factor (TF) binding sites (TFBSs) is important in the computational inference of gene regulation. Widely used computational methods of TFBS prediction based on position weight matrices (PWMs) usually have high false positive rates. Moreover, computational studies of transcription regulation in eukaryotes frequently require numerous PWM models of TFBSs due to a large number of TFs involved. To overcome these problems we developed DRAF, a novel method for TFBS prediction that requires only 14 prediction models for 232 human TFs, while at the same time significantly improves prediction accuracy. DRAF models use more features than PWM models, as they combine information from TFBS sequences and physicochemical properties of TF DNA-binding domains into machine learning models. Evaluation of DRAF on 98 human ChIP-seq datasets shows on average 1.54-, 1.96- and 5.19-fold reduction of false positives at the same sensitivities compared to models from HOCOMOCO, TRANSFAC and DeepBind, respectively. This observation suggests that one can efficiently replace the PWM models for TFBS prediction by a small number of DRAF models that significantly improve prediction accuracy. The DRAF method is implemented in a web tool and in a stand-alone software freely available at http://cbrc.kaust.edu.sa/DRAF.

  2. Machine learning-based methods for prediction of linear B-cell epitopes.

    Science.gov (United States)

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction helps immunologists design peptide-based vaccines and diagnostic tests, support disease prevention and treatment, and produce antibodies. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction remains unsatisfactory. Fortunately, thanks to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to the curated data to design improved prediction tools for biomedical researchers. Here, we review related epitope prediction papers, especially those for linear B-cell epitope prediction. A combination of selected propensity scales and statistics of epitope residues with machine learning-based tools has become the general way of constructing linear B-cell epitope prediction systems. Most comparison results also show that the kernel-based support vector machine (SVM) classifier outperforms other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. An example of a linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is also illustrated in detail.
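
    A minimal sketch of such an SVM-based linear epitope classifier is given below; peptides are encoded with amino acid composition plus a Kyte-Doolittle hydrophobicity feature, and the random peptides and labels are placeholders, since real systems train on curated epitope databases such as IEDB.

```python
# Sketch of an SVM classifier for fixed-length peptide windows encoded with
# amino acid composition plus a hydrophobicity propensity feature.
# The peptides and labels below are random placeholders for illustration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"
KD = dict(zip(AA, [1.8, 2.5, -3.5, -3.5, 2.8, -0.4, -3.2, 4.5, -3.9, 3.8,
                   1.9, -3.5, -1.6, -3.5, -4.5, -0.8, -0.7, 4.2, -0.9, -1.3]))

def encode(peptide):
    comp = [peptide.count(a) / len(peptide) for a in AA]      # composition
    hydro = [np.mean([KD[a] for a in peptide])]               # propensity feature
    return np.array(comp + hydro)

rng = np.random.default_rng(0)
peptides = ["".join(rng.choice(list(AA), size=15)) for _ in range(400)]
labels = rng.integers(0, 2, size=400)          # placeholder epitope labels

X = np.array([encode(p) for p in peptides])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```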

  3. A novel method for predicting the power outputs of wave energy converters

    Science.gov (United States)

    Wang, Yingguang

    2018-03-01

    This paper focuses on realistically predicting the power outputs of wave energy converters operating in shallow water nonlinear waves. A heaving two-body point absorber is utilized as a specific calculation example, and the generated power of the point absorber has been predicted by using a novel method (a nonlinear simulation method) that incorporates a second order random wave model into a nonlinear dynamic filter. It is demonstrated that the second order random wave model in this article can be utilized to generate irregular waves with realistic crest-trough asymmetries, and consequently, more accurate generated power can be predicted by subsequently solving the nonlinear dynamic filter equation with the nonlinearly simulated second order waves as inputs. The research findings demonstrate that the novel nonlinear simulation method in this article can be utilized as a robust tool for ocean engineers in their design, analysis and optimization of wave energy converters.

  4. Benchmarking pKa prediction methods for Lys115 in acetoacetate decarboxylase.

    Science.gov (United States)

    Liu, Yuli; Patel, Anand H G; Burger, Steven K; Ayers, Paul W

    2017-05-01

    Three different pKa prediction methods were used to calculate the pKa of Lys115 in acetoacetate decarboxylase (AADase): the empirical method PROPKA, the multiconformation continuum electrostatics (MCCE) method, and the molecular dynamics/thermodynamic integration (MD/TI) method with implicit solvent. As expected, accurate pKa prediction of Lys115 depends on the protonation patterns of other ionizable groups, especially the nearby Glu76. However, since the prediction methods do not explicitly sample the protonation patterns of nearby residues, this must be done manually. When Glu76 is deprotonated, all three methods give an incorrect pKa value for Lys115. If protonated Glu76 is used in an MD/TI calculation, the pKa of Lys115 is predicted to be 5.3, which agrees well with the experimental value of 5.9. This result agrees with previous site-directed mutagenesis studies, where the mutation of Glu76 (negative charge when deprotonated) to Gln (neutral) causes no change in Km, suggesting that Glu76 has no effect on the pKa shift of Lys115. Thus, we postulate that the pKa of Glu76 is also shifted so that Glu76 is protonated (neutral) in AADase. Graphical abstract: Simulated abundances of protonated species as pH is varied.

  5. HomPPI: a class of sequence homology based protein-protein interface prediction methods

    Directory of Open Access Journals (Sweden)

    Dobbs Drena

    2011-06-01

    Full Text Available Background: Although homology-based methods are among the most widely used methods for predicting the structure and function of proteins, the question as to whether interface sequence conservation can be effectively exploited in predicting protein-protein interfaces has been a subject of debate. Results: We studied more than 300,000 pair-wise alignments of protein sequences from structurally characterized protein complexes, including both obligate and transient complexes. We identified sequence similarity criteria required for accurate homology-based inference of interface residues in a query protein sequence. Based on these analyses, we developed HomPPI, a class of sequence homology-based methods for predicting protein-protein interface residues. We present two variants of HomPPI: (i) NPS-HomPPI (non-partner-specific HomPPI), which can be used to predict interface residues of a query protein in the absence of knowledge of the interaction partner; and (ii) PS-HomPPI (partner-specific HomPPI), which can be used to predict the interface residues of a query protein with a specific target protein. Our experiments on a benchmark dataset of obligate homodimeric complexes show that NPS-HomPPI can reliably predict protein-protein interface residues in a given protein, with an average correlation coefficient (CC) of 0.76, sensitivity of 0.83, and specificity of 0.78, when sequence homologs of the query protein can be reliably identified. NPS-HomPPI also reliably predicts the interface residues of intrinsically disordered proteins. Our experiments suggest that NPS-HomPPI is competitive with several state-of-the-art interface prediction servers including those that exploit the structure of the query proteins. The partner-specific classifier, PS-HomPPI, can, on a large dataset of transient complexes, predict the interface residues of a query protein with a specific target, with a CC of 0.65, sensitivity of 0.69, and specificity of 0.70, when homologs of

  6. Validation of a simple method for predicting the disinfection performance in a flow-through contactor.

    Science.gov (United States)

    Pfeiffer, Valentin; Barbeau, Benoit

    2014-02-01

    Despite its shortcomings, the T10 method introduced by the United States Environmental Protection Agency (USEPA) in 1989 is currently the method most frequently used in North America to calculate disinfection performance. Other methods (e.g., the Integrated Disinfection Design Framework, IDDF) have been advanced as replacements, and more recently, the USEPA suggested the Extended T10 and Extended CSTR (Continuous Stirred-Tank Reactor) methods to improve the inactivation calculations within ozone contactors. To develop a method that fully considers the hydraulic behavior of the contactor, two models (Plug Flow with Dispersion and N-CSTR) were successfully fitted with five tracer tests results derived from four Water Treatment Plants and a pilot-scale contactor. A new method based on the N-CSTR model was defined as the Partially Segregated (Pseg) method. The predictions from all the methods mentioned were compared under conditions of poor and good hydraulic performance, low and high disinfectant decay, and different levels of inactivation. These methods were also compared with experimental results from a chlorine pilot-scale contactor used for Escherichia coli inactivation. The T10 and Extended T10 methods led to large over- and under-estimations. The Segregated Flow Analysis (used in the IDDF) also considerably overestimated the inactivation under high disinfectant decay. Only the Extended CSTR and Pseg methods produced realistic and conservative predictions in all cases. Finally, a simple implementation procedure of the Pseg method was suggested for calculation of disinfection performance. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. A Practical Radiosity Method for Predicting Transmission Loss in Urban Environments

    Directory of Open Access Journals (Sweden)

    Liang Ming

    2004-01-01

    Full Text Available The ability to predict transmission loss or field strength distribution is crucial for determining coverage in planning personal communication systems. This paper presents a practical method to accurately predict entire average transmission loss distribution in complicated urban environments. The method uses a 3D propagation model based on radiosity and a simplified city information database including surfaces of roads and building groups. Narrowband validation measurements with line-of-sight (LOS and non-line-of-sight (NLOS cases at 1800 MHz give excellent agreement in urban environments.

  8. Large-scale validation of methods for cytotoxic T-lymphocyte epitope prediction

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Lundegaard, Claus; Lamberth, K.

    2007-01-01

    BACKGROUND: Reliable predictions of Cytotoxic T lymphocyte (CTL) epitopes are essential for rational vaccine design. Most importantly, they can minimize the experimental effort needed to identify epitopes. NetCTL is a web-based tool designed for predicting human CTL epitopes in any given protein....... of the other methods achieved a sensitivity of 0.64. The NetCTL-1.2 method is available at http://www.cbs.dtu.dk/services/NetCTL.All used datasets are available at http://www.cbs.dtu.dk/suppl/immunology/CTL-1.2.php....

  9. Application of artificial intelligence methods for prediction of steel mechanical properties

    Directory of Open Access Journals (Sweden)

    Z. Jančíková

    2008-10-01

    Full Text Available The aim of this contribution is to outline the possibilities of applying artificial neural networks to the prediction of the mechanical properties of steel after heat treatment and to assess their prospective use in this field. The resulting models enable the prediction of final mechanical material properties from the decisive parameters influencing those properties. Combining artificial intelligence methods with mathematical-physical analysis methods should make it possible to build a system for the continuous rationalization of existing and newly developed industrial technologies.

  10. Machine learning and statistical methods for the prediction of maximal oxygen uptake: recent advances

    Directory of Open Access Journals (Sweden)

    Abut F

    2015-08-01

    Full Text Available Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume per minute in a state of intense exercise. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating the disease risk of a person. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite a high level of accuracy, practical limitations associated with the direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment or trained staff, have led to the development of various regression models for predicting VO2max. Consequently, many studies have been conducted in recent years to predict the VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers and cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview of the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of various VO2max prediction models reported in the related literature in terms of two well-known metrics, namely, the multiple correlation coefficient (R) and the standard error of estimate. The survey results reveal that, with respect to the regression methods used to develop prediction models, the support vector machine in general shows better performance than the other methods, whereas multiple linear regression exhibits the worst performance.

  11. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    Science.gov (United States)

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires large numbers of reference substances to identify chromatographic peaks accurately, but reference substances are costly. The relative retention (RR) method has therefore been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of compounds whose reference substances are unavailable. The problem is that the RR is difficult to reproduce on different columns because of the error between measured retention time (tR) and predicted tR in some cases. It is therefore useful to develop a simple alternative method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from the standard retention times and a linear relationship. The method was validated on two medicines and 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust across different HPLC columns than the RR method. Hence, quality standards based on the LCTRS method are easy to reproduce in different laboratories with a lower cost of reference substances.
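
    A small numerical illustration of the two-point calibration step is given below; all retention-time values are invented, and the validation-by-regression and sequential-matching steps of the full LCTRS procedure are not reproduced.

```python
# Minimal numerical illustration of two-point linear calibration of retention
# times. All retention-time values below are invented for illustration.

# Standard (published) retention times, in minutes, for two reference substances.
t_std_ref1, t_std_ref2 = 8.2, 21.5
# Retention times of the same two substances measured on the analyst's column.
t_loc_ref1, t_loc_ref2 = 7.9, 22.4

# Linear calibration: t_local = a * t_standard + b
a = (t_loc_ref2 - t_loc_ref1) / (t_std_ref2 - t_std_ref1)
b = t_loc_ref1 - a * t_std_ref1

def predict_local_rt(t_standard):
    """Predict the retention time on the local column from the standard value."""
    return a * t_standard + b

# Predict local retention times for analytes whose reference substances are
# unavailable, given their standard retention times.
for name, t_std in [("impurity A", 12.6), ("impurity B", 17.3)]:
    print(f"{name}: predicted t_R = {predict_local_rt(t_std):.2f} min")
```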

  12. Urban Link Travel Time Prediction Based on a Gradient Boosting Method Considering Spatiotemporal Correlations

    Directory of Open Access Journals (Sweden)

    Faming Zhang

    2016-11-01

    Full Text Available The prediction of travel times is challenging because of the sparseness of real-time traffic data and the intrinsic uncertainty of travel on congested urban road networks. We propose a new gradient-boosted regression tree method to accurately predict travel times. The model accounts for spatiotemporal correlations extracted from historical and real-time traffic data for adjacent and target links. The method delivers high prediction accuracy by combining many simple regression trees, each of which performs poorly on its own. It corrects the errors of existing models for improved prediction accuracy. Our spatiotemporal gradient-boosted regression tree model was verified in experiments. The training data were obtained from big data reflecting historic traffic conditions collected by probe vehicles in Wuhan from January to May 2014. Real-time data were extracted from 11 weeks of GPS records collected in Wuhan from 5 May 2014 to 20 July 2014. Based on these data, we predicted link travel time for the period from 21 July 2014 to 25 July 2014. Experiments showed that our proposed spatiotemporal gradient-boosted regression tree model obtained better results than gradient boosting, random forest, or autoregressive integrated moving average approaches. Furthermore, these results indicate the advantages of our model for urban link travel time prediction.
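
    The sketch below shows a gradient-boosted regression tree fitted to simple spatiotemporal features (recent travel times on the target link and on adjacent links, plus time of day and day of week); the synthetic data and the feature layout are assumptions for illustration and differ from the paper's feature engineering.

```python
# Sketch of a gradient-boosted regression tree for link travel time using simple
# spatiotemporal features. The synthetic data and feature layout are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
features = np.column_stack([
    rng.uniform(30, 120, n),   # target link travel time, previous interval (s)
    rng.uniform(30, 120, n),   # target link, two intervals back
    rng.uniform(30, 120, n),   # upstream adjacent link, previous interval
    rng.uniform(30, 120, n),   # downstream adjacent link, previous interval
    rng.integers(0, 24, n),    # hour of day
    rng.integers(0, 7, n),     # day of week
])
# Synthetic "next interval" travel time with temporal and spatial dependence.
target = (0.5 * features[:, 0] + 0.2 * features[:, 1] + 0.2 * features[:, 2]
          + 5 * np.sin(features[:, 4] / 24 * 2 * np.pi) + rng.normal(0, 3, n))

X_tr, X_te, y_tr, y_te = train_test_split(features, target, random_state=0)
gbrt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
gbrt.fit(X_tr, y_tr)
print("R^2 on held-out data:", gbrt.score(X_te, y_te))
```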

  13. Fatigue Life Prediction of High Modulus Asphalt Concrete Based on the Local Stress-Strain Method

    Directory of Open Access Journals (Sweden)

    Mulian Zheng

    2017-03-01

    Full Text Available Previously published studies have proposed fatigue life prediction models for dense graded asphalt pavement based on flexural fatigue tests. This study focused on the fatigue life prediction of High Modulus Asphalt Concrete (HMAC) pavement using the local stress-strain method and the direct tension fatigue test. First, the direct tension fatigue test at various strain levels was conducted on HMAC prism samples cut from plate specimens. Afterwards, their true stress-strain loop curves were obtained and modified to develop the strain-fatigue life equation. Then the nominal strain of the HMAC course, determined using the finite element method, was converted into local strain using the Neuber method. Finally, based on the established fatigue equation and the converted local strain, a method to predict the pavement fatigue crack initiation life was proposed, and the fatigue life of a typical HMAC overlay pavement which runs a risk of bottom-up cracking was predicted and validated. Results show that the proposed method was able to produce satisfactory crack initiation life predictions.
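
    The sketch below illustrates the nominal-to-local conversion with Neuber's rule combined with a Ramberg-Osgood cyclic curve, followed by inversion of a strain-life power law; all material constants are invented placeholders rather than measured HMAC properties, and the paper fits its own strain-fatigue-life equation from direct tension tests.

```python
# Sketch of nominal-to-local strain conversion via Neuber's rule plus a
# strain-life lookup. The Ramberg-Osgood and strain-life constants are invented
# placeholders, not measured HMAC properties.
import numpy as np
from scipy.optimize import brentq

E = 9000.0        # stiffness modulus, MPa (placeholder)
K_prime = 60.0    # cyclic strength coefficient, MPa (placeholder)
n_prime = 0.2     # cyclic strain-hardening exponent (placeholder)

def local_stress_neuber(nominal_stress, Kt=2.0):
    """Solve Neuber's rule sigma*eps = (Kt*S)^2 / E with a Ramberg-Osgood curve."""
    target = (Kt * nominal_stress) ** 2 / E
    def residual(sigma):
        eps = sigma / E + (sigma / K_prime) ** (1.0 / n_prime)
        return sigma * eps - target
    return brentq(residual, 1e-6, 10 * Kt * nominal_stress)

def local_strain(sigma):
    return sigma / E + (sigma / K_prime) ** (1.0 / n_prime)

def cycles_to_crack_initiation(eps, C=6.0e-3, m=-0.25):
    """Strain-life power law eps = C * N^m, inverted for N (placeholder fit)."""
    return (eps / C) ** (1.0 / m)

S_nominal = 1.5                                # nominal stress amplitude, MPa
sigma_loc = local_stress_neuber(S_nominal)
eps_loc = local_strain(sigma_loc)
print(f"local strain = {eps_loc:.4e}, predicted N = {cycles_to_crack_initiation(eps_loc):.3e}")
```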

  14. Method for estimating capacity and predicting remaining useful life of lithium-ion battery

    International Nuclear Information System (INIS)

    Hu, Chao; Jain, Gaurav; Tamirisa, Prabhakar; Gorka, Tom

    2014-01-01

    Highlights: • We develop an integrated method for the capacity estimation and RUL prediction. • A state projection scheme is derived for capacity estimation. • The Gauss–Hermite particle filter technique is used for the RUL prediction. • Results with 10 years’ continuous cycling data verify the effectiveness of the method. - Abstract: Reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as of high importance from a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure Li-ion batteries in these devices operate reliably, it is important to be able to assess the capacity of Li-ion battery and predict the remaining useful life (RUL) throughout the whole life-time. This paper presents an integrated method for the capacity estimation and RUL prediction of Li-ion battery used in implantable medical devices. A state projection scheme from the author’s previous study is used for the capacity estimation. Then, based on the capacity estimates, the Gauss–Hermite particle filter technique is used to project the capacity fade to the end-of-service (EOS) value (or the failure limit) for the RUL prediction. Results of 10 years’ continuous cycling test on Li-ion prismatic cells in the lab suggest that the proposed method achieves good accuracy in the capacity estimation and captures the uncertainty in the RUL prediction. Post-explant weekly cycling data obtained from field cells with 4–7 implant years further verify the effectiveness of the proposed method in the capacity estimation
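
    The sketch below shows the general idea of projecting capacity fade to an end-of-service threshold with a simple bootstrap particle filter; the exponential fade model, its priors, the measurement noise and the 80% limit are illustrative assumptions, and the paper's state-projection estimator and Gauss-Hermite particle filter are not reproduced.

```python
# Sketch: project capacity fade to an end-of-service (EOS) threshold with a
# simple bootstrap particle filter. The fade model, priors, noise level and
# 80% EOS limit are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly capacity estimates (fraction of rated capacity).
weeks = np.arange(200)
measured = np.exp(-0.0008 * weeks) + 0.005 * rng.normal(size=weeks.size)

# Particles: fade-rate parameter k of the assumed model C(t) = exp(-k * t).
n_particles = 2000
k_particles = rng.uniform(0.0001, 0.003, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for t, c_meas in zip(weeks, measured):       # sequential Bayesian updating
    pred = np.exp(-k_particles * t)
    weights *= np.exp(-0.5 * ((c_meas - pred) / 0.005) ** 2)
    weights /= weights.sum()

# Resample and project each particle forward to the EOS threshold (80%).
k_post = k_particles[rng.choice(n_particles, n_particles, p=weights)]
t_eos = -np.log(0.80) / k_post               # solve exp(-k * t) = 0.80 for t
rul = t_eos - weeks[-1]                      # remaining useful life in weeks
print("median RUL (weeks):", np.median(rul))
print("90% interval:", np.percentile(rul, [5, 95]))
```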

  15. Prediction method for cavitation erosion based on measurement of bubble collapse impact loads

    International Nuclear Information System (INIS)

    Hattori, S; Hirose, T; Sugiyama, K

    2009-01-01

    The prediction of cavitation erosion rates is important in order to evaluate the exact life of components. The measurement of impact loads in bubble collapses helps to predict the life under cavitation erosion. In this study, we carried out erosion tests and the measurements of impact loads in bubble collapses with a vibratory apparatus. We evaluated the incubation period based on a cumulative damage rule by measuring the impact loads of cavitation acting on the specimen surface and by using the 'constant impact load - number of impact loads curve', similar to the modified Miner's rule which is employed for fatigue life prediction. We found that the parameter Σ(Fi^α × ni) (Fi: impact load, ni: number of impacts and α: constant) is suitable for the evaluation of the erosion life. Moreover, we propose a new method that can predict the incubation period under various cavitation conditions.

  16. Comparison and validation of statistical methods for predicting power outage durations in the event of hurricanes.

    Science.gov (United States)

    Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M

    2011-12-01

    This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning, and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard models (Cox PH)) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate additive regression splines). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.

  17. Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.

    Science.gov (United States)

    Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam

    2015-01-01

    Hepatitis B (HB) is a major cause of global mortality. Accurately predicting the trend of the disease can give health policy makers an appropriate basis for disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The Weighted Markov Chain (WMC) method, based on Markov chain theory, and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. The results of the different methods were compared in terms of the percentage of correctly predicted incidence rates. The monthly incidence rates were grouped into two clusters serving as the states of the Markov chain. The correctly predicted percentages for the first and second clusters were (100, 0) for WMC, (84, 67) for HES and (79, 47) for SARIMA. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the three models indicated that, with respect to the existing seasonality and non-stationarity, HES gave the most accurate prediction of the incidence rates.

  18. DomPep--a general method for predicting modular domain-mediated protein-protein interactions.

    Directory of Open Access Journals (Sweden)

    Lei Li

    Full Text Available Protein-protein interactions (PPIs are frequently mediated by the binding of a modular domain in one protein to a short, linear peptide motif in its partner. The advent of proteomic methods such as peptide and protein arrays has led to the accumulation of a wealth of interaction data for modular interaction domains. Although several computational programs have been developed to predict modular domain-mediated PPI events, they are often restricted to a given domain type. We describe DomPep, a method that can potentially be used to predict PPIs mediated by any modular domains. DomPep combines proteomic data with sequence information to achieve high accuracy and high coverage in PPI prediction. Proteomic binding data were employed to determine a simple yet novel parameter Ligand-Binding Similarity which, in turn, is used to calibrate Domain Sequence Identity and Position-Weighted-Matrix distance, two parameters that are used in constructing prediction models. Moreover, DomPep can be used to predict PPIs for both domains with experimental binding data and those without. Using the PDZ and SH2 domain families as test cases, we show that DomPep can predict PPIs with accuracies superior to existing methods. To evaluate DomPep as a discovery tool, we deployed DomPep to identify interactions mediated by three human PDZ domains. Subsequent in-solution binding assays validated the high accuracy of DomPep in predicting authentic PPIs at the proteome scale. Because DomPep makes use of only interaction data and the primary sequence of a domain, it can be readily expanded to include other types of modular domains.

  19. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Methods of accurate modeling and power capability predicting for ultracapacitors are of great significance in management and application of lithium-ion battery/ultracapacitor hybrid energy storage system. To overcome the simulation error coming from constant capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, where the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulating results under different temperatures, and the effectiveness of the designed observer is proved by various test conditions. Additionally, the power capability prediction results of different time scales and temperatures are compared, to study their effects on ultracapacitor's power capability.

  20. Application of statistical classification methods for predicting the acceptability of well-water quality

    Science.gov (United States)

    Cameron, Enrico; Pilla, Giorgio; Stella, Fabio A.

    2018-01-01

    The application of statistical classification methods is investigated—in comparison also to spatial interpolation methods—for predicting the acceptability of well-water quality in a situation where an effective quantitative model of the hydrogeological system under consideration cannot be developed. In the example area in northern Italy, in particular, the aquifer is locally affected by saline water and the concentration of chloride is the main indicator of both saltwater occurrence and groundwater quality. The goal is to predict if the chloride concentration in a water well will exceed the allowable concentration so that the water is unfit for the intended use. A statistical classification algorithm achieved the best predictive performances and the results of the study show that statistical classification methods provide further tools for dealing with groundwater quality problems concerning hydrogeological systems that are too difficult to describe analytically or to simulate effectively.

  1. Comparative Study of Different Methods for the Prediction of Drug-Polymer Solubility

    DEFF Research Database (Denmark)

    Knopp, Matthias Manne; Tajber, Lidia; Tian, Yiwei

    2015-01-01

    monomer weight ratios. The drug–polymer solubility at 25 °C was predicted using the Flory–Huggins model, from data obtained at elevated temperature using thermal analysis methods based on the recrystallization of a supersaturated amorphous solid dispersion and two variations of the melting point......, which suggests that this method can be used as an initial screening tool if a liquid analogue is available. The learnings of this important comparative study provided general guidance for the selection of the most suitable method(s) for the screening of drug–polymer solubility....

  2. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts

    Science.gov (United States)

    Yang, Hongbin; Sun, Lixia; Li, Weihua; Liu, Guixia; Tang, Yun

    2018-02-01

    For a drug, safety is always the most important issue, including a variety of toxicities and adverse drug effects, which should be evaluated in preclinical and clinical trial phases. This review article at first simply introduced the computational methods used in prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of this article was put on the recent progress of predictive models built for various toxicities. Available databases and web servers were also provided. Though the methods and models are very helpful for drug design, there are still some challenges and limitations to be improved for drug safety assessment in the future.

  3. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts

    Directory of Open Access Journals (Sweden)

    Hongbin Yang

    2018-02-01

    Full Text Available During drug development, safety is always the most important issue, including a variety of toxicities and adverse drug effects, which should be evaluated in preclinical and clinical trial phases. This review article at first simply introduced the computational methods used in prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of this article was put on the recent progress of predictive models built for various toxicities. Available databases and web servers were also provided. Though the methods and models are very helpful for drug design, there are still some challenges and limitations to be improved for drug safety assessment in the future.

  4. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts.

    Science.gov (United States)

    Yang, Hongbin; Sun, Lixia; Li, Weihua; Liu, Guixia; Tang, Yun

    2018-01-01

    During drug development, safety is always the most important issue, including a variety of toxicities and adverse drug effects, which should be evaluated in preclinical and clinical trial phases. This review article at first simply introduced the computational methods used in prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of this article was put on the recent progress of predictive models built for various toxicities. Available databases and web servers were also provided. Though the methods and models are very helpful for drug design, there are still some challenges and limitations to be improved for drug safety assessment in the future.

  5. Anisotropic Elastoplastic Damage Mechanics Method to Predict Fatigue Life of the Structure

    Directory of Open Access Journals (Sweden)

    Hualiang Wan

    2016-01-01

    Full Text Available A new damage mechanics method is proposed to predict the low-cycle fatigue life of metallic structures under multiaxial loading. A microstructural mechanical model is proposed to simulate anisotropic elastoplastic damage evolution. As the micromodel depends on only a few material parameters, the present method is very concise and suitable for engineering application. The material parameters in the damage evolution equation are determined from fatigue test data of standard specimens. Through further development on the ANSYS platform, an anisotropic elastoplastic damage mechanics finite element method is implemented. The fatigue crack propagation life of a satellite structure is predicted using the present method, and the computational results agree very well with the experimental data.

  6. Tracking Maneuvering Group Target with Extension Predicted and Best Model Augmentation Method Adapted

    Directory of Open Access Journals (Sweden)

    Linhai Gan

    2017-01-01

    Full Text Available The random matrix (RM) method is widely applied to group target tracking. The conventional RM method assumes that the group extension remains invariant, which does not hold because the orientation of the group varies rapidly while it is maneuvering; thus, a new approach in which the group extension is predicted is derived here. To match the group maneuvering, a best model augmentation (BMA) method is introduced. The existing BMA method uses a fixed basic model set, which may lead to poor performance when it cannot ensure coverage of the true motion modes. Here, a maneuvering group target tracking algorithm is proposed that exploits both the group extension prediction and the BMA adaptation. The performance of the proposed algorithm is illustrated by simulation.

  7. An introduction to the application of relaxation method in numerical weather prediction

    International Nuclear Information System (INIS)

    Aquino, E.M.

    1984-11-01

    This paper is intended to help workers in the field of numerical weather prediction acquire experience and gain insight into the use of the relaxation method. Two approaches are taken: the first explains the method through hand calculations applied to a given problem, and the second discusses how the calculations can be carried out on a digital computer. (author)
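
    In the spirit of the hand-calculation approach described above, the sketch below applies Gauss-Seidel relaxation to a small Poisson-type problem of the kind encountered in numerical weather prediction (for example, recovering a streamfunction from a vorticity field); the grid size, forcing and tolerance are arbitrary choices.

```python
# Gauss-Seidel relaxation on a small grid for a Poisson-type equation
# (nabla^2 psi = zeta). Grid size, forcing and tolerance are arbitrary.
import numpy as np

n, d = 21, 1.0                      # grid points per side, grid spacing
zeta = np.zeros((n, n))
zeta[n // 2, n // 2] = 1.0          # point "vorticity" forcing
psi = np.zeros((n, n))              # first guess; boundary values stay 0

for iteration in range(10000):
    psi_old = psi.copy()
    # Gauss-Seidel sweep: replace each interior value by the average of its
    # neighbours minus the forcing contribution.
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            psi[i, j] = 0.25 * (psi[i + 1, j] + psi[i - 1, j]
                                + psi[i, j + 1] + psi[i, j - 1]
                                - d * d * zeta[i, j])
    if np.max(np.abs(psi - psi_old)) < 1e-6:    # convergence criterion
        break

print(f"converged after {iteration + 1} sweeps, psi at centre = {psi[n // 2, n // 2]:.4f}")
```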

  8. Ground-State Gas-Phase Structures of Inorganic Molecules Predicted by Density Functional Theory Methods

    KAUST Repository

    Minenkov, Yury; Cavallo, Luigi

    2017-01-01

    -GGA approximations with B3PW91, APF, TPSSh, mPW1PW91, PBE0, mPW1PBE, B972, and B98 functionals, resulting in lowest errors. We recommend using these methods to predict accurate three-dimensional structures of inorganic molecules when intramolecular dispersion

  9. The Relevance Voxel Machine (RVoxM): A Bayesian Method for Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2011-01-01

    This paper presents the Relevance VoxelMachine (RVoxM), a Bayesian multivariate pattern analysis (MVPA) algorithm that is specifically designed for making predictions based on image data. In contrast to generic MVPA algorithms that have often been used for this purpose, the method is designed to ...

  10. Method for predicting future developments of traffic noise in urban areas in Europe

    NARCIS (Netherlands)

    Salomons, E.; Hout, D. van den; Janssen, S.; Kugler, U.; MacA, V.

    2010-01-01

    Traffic noise in urban areas in Europe is a major environmental stressor. In this study we present a method for predicting how environmental noise can be expected to develop in the future. In the project HEIMTSA scenarios were developed for all relevant environmental stressors to health, for all

  11. Early Diagnosis of Breast Cancer Dissemination by Tumor Markers Follow-Up and Method of Prediction

    Czech Academy of Sciences Publication Activity Database

    Nekulová, M.; Šimíčková, M.; Pecen, Ladislav; Eben, Kryštof; Vermousek, I.; Stratil, P.; Černoch, M.; Lang, B.

    1994-01-01

    Roč. 41, č. 2 (1994), s. 113-118 ISSN 0028-2685 R&D Projects: GA AV ČR IAA230106 Keywords : breast cancer * progression * CEA * CA 15-3 * MCA * TPA * mathematical method of prediction Impact factor: 0.354, year: 1994

  12. A Practical and Fast Method To Predict the Thermodynamic Preference of omega-Transaminase-Based Transformations

    DEFF Research Database (Denmark)

    Meier, Robert J.; Gundersen Deslauriers, Maria; Woodley, John

    2015-01-01

    A simple, easy-to-use, and fast approach method is proposed and validated that can predict whether a transaminase reaction is thermodynamically unfavourable. This allowed us to de-select, in the present case, at least 50% of the reactions because they were thermodynamically unfavourable as confir...

  13. A Novel Method to Predict Genomic Islands Based on Mean Shift Clustering Algorithm.

    Directory of Open Access Journals (Sweden)

    Daniel M de Brito

    Full Text Available Genomic Islands (GIs) are regions of bacterial genomes that are acquired from other organisms by the phenomenon of horizontal transfer. These regions are often responsible for many important acquired adaptations of the bacteria, with great impact on their evolution and behavior. Notably, these adaptations are usually associated with pathogenicity, antibiotic resistance, degradation and metabolism. Identification of such regions is of medical and industrial interest. For this reason, different approaches for genomic island prediction have been proposed. However, none of them is capable of predicting precisely the complete repertoire of GIs in a genome. The difficulties arise due to the changes in performance of different algorithms in the face of the variety of nucleotide distributions in different species. In this paper, we present a novel method to predict GIs that is built upon the mean shift clustering algorithm. It does not require any information regarding the number of clusters, and the bandwidth parameter is automatically calculated based on a heuristic approach. The method was implemented in a new user-friendly tool named MSGIP--Mean Shift Genomic Island Predictor. Genomes of bacteria with GIs discussed in other papers were used to evaluate the proposed method. The application of this tool revealed the same GIs predicted by other methods and also different novel unpredicted islands. A detailed investigation of the different features related to typical GI elements inserted in these new regions confirmed its effectiveness. Stand-alone and user-friendly versions for this new methodology are available at http://msgip.integrativebioinformatics.me.
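    MSGIP's exact feature set and bandwidth heuristic are not given in the record, so the following Python sketch only illustrates the general idea: represent fixed-size genome windows by compositional features, cluster them with mean shift using an automatically estimated bandwidth, and flag windows outside the dominant (host-like) cluster as island candidates. The window size, features, and quantile are assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def candidate_islands(window_features, window_starts, window_size=5000):
    """Cluster genome windows by composition and flag atypical clusters as GI candidates.

    window_features: (n_windows, n_features) array, e.g. GC content and
    dinucleotide frequencies per window (an illustrative feature choice).
    window_starts:   genomic start coordinate of each window.
    """
    bandwidth = estimate_bandwidth(window_features, quantile=0.2)  # heuristic stand-in
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(window_features)

    # Windows falling outside the dominant (host-like) cluster become candidates.
    host_label = np.bincount(labels).argmax()
    return [(start, start + window_size)
            for start, label in zip(window_starts, labels) if label != host_label]
```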

  14. Study (Prediction) of Main Pipes Break Rates in Water Distribution Systems Using Intelligent and Regression Methods

    Directory of Open Access Journals (Sweden)

    Massoud Tabesh

    2011-07-01

    Full Text Available Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the issues of increasing efficiency and decreasing water losses. One of the key subjects in optimum operational management of water distribution systems is preparing rehabilitation and replacement schemes, predicting pipe break rates and evaluating their reliability. Several approaches to the prediction of pipe failure rates have been presented in recent years, each requiring its own data set. Deterministic models based on age, deterministic multivariable models, and stochastic group models are examples of solutions that relate pipe break rates to parameters such as age, material and diameter. In this paper, besides the mentioned parameters, further factors such as pipe depth and hydraulic pressure are considered. Pipe burst rates are then predicted using a multivariable regression method, intelligent approaches (artificial neural network and neuro-fuzzy models) and the evolutionary polynomial regression (EPR) method. To evaluate the results of the different approaches, a case study is carried out on a part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods for predicting pipe break rates, in comparison with the neuro-fuzzy and multivariable regression methods.

  15. Selection of Prediction Methods for Thermophysical Properties for Process Modeling and Product Design of Biodiesel Manufacturing

    DEFF Research Database (Denmark)

    Su, Yung-Chieh; Liu, Y. A.; Díaz Tovar, Carlos Axel

    2011-01-01

    To optimize biodiesel manufacturing, many reported studies have built simulation models to quantify the relationship between operating conditions and process performance. For mass and energy balance simulations, it is essential to know the four fundamental thermophysical properties of the feed oil...... prediction methods on our group Web site (www.design.che.vt.edu) for the reader to download without charge....

  16. Simplified Method for Predicting a Functional Class of Proteins in Transcription Factor Complexes

    KAUST Repository

    Piatek, Marek J.; Schramm, Michael C.; Burra, Dharani Dhar; BinShbreen, Abdulaziz; Jankovic, Boris R.; Chowdhary, Rajesh; Archer, John A.C.; Bajic, Vladimir B.

    2013-01-01

    initiation. Such information is not fully available, since not all proteins that act as TFs or TcoFs are yet annotated as such, due to generally partial functional annotation of proteins. In this study we have developed a method to predict, using only

  17. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    Science.gov (United States)

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where a method for selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and then predictions were made for 37 and 33 single years in these periods, respectively. The performance of three different fixed prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through the GoF-optimal method reached the highest COV and the lowest DR and (ii) the best alternative strategy to GoF-optimal was the one using a prediction base of 5 years. The GoF-optimal approach can be used as a selection criterion in order to find an adequate prediction base. Copyright © 2015 Elsevier Ltd. All rights reserved.
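    As a rough illustration of the idea of choosing a prediction base by goodness of fit, the sketch below scans candidate base lengths, fits a simple log-linear trend to each, keeps the best-fitting window, and projects five years ahead. The fit criterion (mean squared residual on the log scale), the candidate range, and the toy count series are assumptions, not the paper's exact GoF statistic or data.

```python
import numpy as np

def choose_prediction_base(years, counts, candidate_bases=range(5, 31)):
    """Pick the historical window whose log-linear trend fits best, then project 5 years."""
    best = None
    for b in candidate_bases:
        if b > len(years):
            break
        yrs = np.asarray(years[-b:], dtype=float)
        log_counts = np.log(np.asarray(counts[-b:], dtype=float))
        slope, intercept = np.polyfit(yrs, log_counts, 1)
        gof = np.mean((log_counts - (slope * yrs + intercept)) ** 2)  # fit criterion
        if best is None or gof < best[0]:
            best = (gof, b, slope, intercept)
    _, base, slope, intercept = best
    future_years = np.arange(years[-1] + 1, years[-1] + 6)
    return base, np.exp(slope * future_years + intercept)

# Toy incidence series growing ~2% per year (illustrative numbers only)
years = list(range(1990, 2011))
counts = [round(1000 * 1.02 ** (y - 1990)) for y in years]
base, projection = choose_prediction_base(years, counts)
```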

  18. Method to predict fatigue lifetimes of GRP wind turbine blades and comparison with experiments

    Energy Technology Data Exchange (ETDEWEB)

    Echtermeyer, A.T. [Det Norske Veritas Research AS, Hoevik (Norway); Kensche, C. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Stuttgart (Germany, F.R); Bach, P. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Poppen, M. [Aeronautical Research Inst. of Sweden, Bromma (Sweden); Lilholt, H.; Andersen, S.I.; Broendsted, P. [Risoe National Lab., Roskilde (Denmark)

    1996-12-01

    This paper describes a method to predict fatigue lifetimes of fiber reinforced plastics in wind turbine blades. It is based on extensive testing within the EU-Joule program. The method takes the measured fatigue properties of a material into account so that credit can be given to materials with improved fatigue properties. The large number of test results should also give confidence in the fatigue calculation method for fiber reinforced plastics. The method uses the Palmgren-Miner sum to predict lifetimes and is verified by tests using well defined load sequences. Even though this approach is generally well known in fatigue analysis, many details in the interpretation and extrapolation of the measurements need to be clearly defined, since they can influence the results considerably. The following subjects will be described: Method to measure SN curves and to obtain tolerance bounds, development of a constant lifetime diagram, evaluation of the load sequence, use of Palmgren-Miner sum, requirements for load sequence testing. The fatigue lifetime calculation method has been compared against measured data for simple loading sequences and the more complex WISPERX loading sequence for blade roots. The comparison is based on predicted mean lifetimes, using the same materials to obtain the basic SN curves and to measure laminates under complicated loading sequences. 24 refs, 7 figs, 5 tabs
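    The Palmgren-Miner sum itself is simple to state: the damage from each load block is the ratio of applied cycles to allowable cycles at that stress amplitude, and failure is predicted when the sum of these ratios reaches one. The sketch below uses an assumed power-law S-N relation and a made-up load spectrum purely for illustration; the constants are not taken from the measured GRP data.

```python
def miner_damage(load_blocks, allowable_cycles):
    """Palmgren-Miner damage sum D = sum(n_i / N_i); D >= 1 signals predicted failure.

    load_blocks:      iterable of (stress_amplitude, applied_cycles) pairs
    allowable_cycles: callable returning the allowable cycles N at a given amplitude
    """
    return sum(n / allowable_cycles(s) for s, n in load_blocks)

# Assumed power-law S-N relation N = C * S**(-m) and a made-up load spectrum
sn_curve = lambda s: 1e12 * s ** -9
blocks = [(40.0, 2e5), (60.0, 5e4), (80.0, 1e4)]
damage_fraction = miner_damage(blocks, sn_curve)   # fraction of lifetime consumed
```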

  19. Method of critical power prediction based on film flow model coupled with subchannel analysis

    International Nuclear Information System (INIS)

    Tomiyama, Akio; Yokomizo, Osamu; Yoshimoto, Yuichiro; Sugawara, Satoshi.

    1988-01-01

    A new method was developed to predict critical powers for a wide variety of BWR fuel bundle designs. This method couples subchannel analysis with a liquid film flow model, instead of following the conventional approach of coupling subchannel analysis with critical heat flux correlations. Flow and quality distributions in a bundle are estimated by the subchannel analysis. Using these distributions, film flow rates along fuel rods are then calculated with the film flow model. Dryout is assumed to occur where one of the film flows disappears. This method is expected to give much better adaptability to variations in geometry, heat flux, flow rate and quality distributions than the conventional methods. In order to verify the method, critical power data under BWR conditions were analyzed. Measured and calculated critical powers agreed to within ±7%. Furthermore, critical power data for a tight-latticed bundle obtained by LeTourneau et al. were compared with critical powers calculated by the present method and two conventional methods, the CISE correlation and subchannel analysis coupled with the CISE correlation. It was confirmed that the present method can predict critical powers more accurately than the conventional methods. (author)

  20. A NEW METHOD FOR PREDICTING SURVIVAL AND ESTIMATING UNCERTAINTY IN TRAUMA PATIENTS

    Directory of Open Access Journals (Sweden)

    V. G. Schetinin

    2017-01-01

    Full Text Available The Trauma and Injury Severity Score (TRISS) is the current “gold” standard for screening a patient's condition for purposes of predicting survival probability. More than 40 years of TRISS practice have revealed a number of problems, particularly (1) unexplained fluctuation of predicted values caused by aggregation of screening tests, and (2) low accuracy of uncertainty interval estimates. We developed a new method and made it available for practitioners as a web calculator to reduce the negative effect of the factors given above. The method involves Bayesian methodology of statistical inference which, being computationally expensive, in theory provides the most accurate predictions. We implemented and tested this approach on a data set including 571,148 patients registered in the US National Trauma Data Bank (NTDB) with 1–20 injuries. These patients were distributed over the following categories: (1) 174,647 with 1 injury, (2) 381,137 with 2–10 injuries, and (3) 15,364 with 11–20 injuries. Survival rates in each category were 0.977, 0.953, and 0.831, respectively. The proposed method improved prediction accuracy by 0.04%, 0.36%, and 3.64% (p-value <0.05) in categories 1, 2, and 3, respectively. Hosmer-Lemeshow statistics showed a significant improvement in the calibration of the new model. The uncertainty 2σ intervals were reduced from 0.628 to 0.569 for patients of the second category and from 1.227 to 0.930 for patients of the third category, both with p-value <0.005. The new method shows a statistically significant improvement (p-value <0.05) in accuracy of predicting survival and estimating the uncertainty intervals. The largest improvement has been achieved for patients with 11–20 injuries. The method is available for practitioners as a web calculator at http://www.traumacalc.org.
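    For context, the baseline that the proposed Bayesian method improves upon, TRISS, is a logistic model of survival probability. The sketch below shows only the general form; the coefficients are placeholders, since published TRISS revisions specify separate blunt and penetrating coefficient sets, and the record's own Bayesian calculator is not reproduced here.

```python
import math

def triss_survival(rts, iss, age_index, coeffs=(-1.25, 0.95, -0.08, -1.9)):
    """TRISS-style logistic survival estimate: P = 1 / (1 + exp(-b)).

    b = b0 + b1*RTS + b2*ISS + b3*AgeIndex.  The coefficients here are
    placeholders for illustration only, not an official TRISS revision.
    """
    b0, b1, b2, b3 = coeffs
    b = b0 + b1 * rts + b2 * iss + b3 * age_index
    return 1.0 / (1.0 + math.exp(-b))

# Example: Revised Trauma Score 7.55, Injury Severity Score 25, patient under 55
probability = triss_survival(rts=7.55, iss=25, age_index=0)
```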

  1. BacHbpred: Support Vector Machine Methods for the Prediction of Bacterial Hemoglobin-Like Proteins

    Directory of Open Access Journals (Sweden)

    MuthuKrishnan Selvaraj

    2016-01-01

    Full Text Available The recent upsurge in microbial genome data has revealed that hemoglobin-like (HbL) proteins may be widely distributed among bacteria and that some organisms may carry more than one HbL encoding gene. However, the discovery of HbL proteins has been limited to a small number of bacteria only. This study describes the prediction of HbL proteins and their domain classification using a machine learning approach. Support vector machine (SVM) models were developed for predicting HbL proteins based upon amino acid composition (AC), dipeptide composition (DC), a hybrid method (AC + DC), and position specific scoring matrix (PSSM). In addition, we introduce for the first time a new prediction method based on max to min amino acid residue (MM) profiles. The average accuracy, standard deviation (SD), false positive rate (FPR), confusion matrix, and receiver operating characteristic (ROC) were analyzed. We also compared the performance of our proposed models in homology detection databases. The performance of the different approaches was estimated using fivefold cross-validation techniques. Prediction accuracy was further investigated through confusion matrix and ROC curve analysis. All experimental results indicate that the proposed BacHbpred can be a promising predictor for determination of HbL related proteins. BacHbpred, a web tool, has been developed for HbL prediction.
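    A minimal sketch of the amino acid composition (AC) variant of such an SVM predictor is shown below, assuming scikit-learn and two toy sequences with made-up labels; the kernel and hyperparameters are illustrative choices rather than the settings used for BacHbpred.

```python
from collections import Counter
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(sequence):
    """20-dimensional amino acid composition: fraction of each residue type."""
    counts = Counter(sequence.upper())
    total = max(len(sequence), 1)
    return [counts.get(aa, 0) / total for aa in AMINO_ACIDS]

# Toy sequences and labels (1 = HbL-like, 0 = non-HbL); real training would use curated sets
sequences = ["MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMF", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"]
labels = [1, 0]

X = [aa_composition(s) for s in sequences]
model = SVC(kernel="rbf", C=10, gamma="scale").fit(X, labels)
# With a realistic benchmark set, performance would be estimated by fivefold
# cross-validation, e.g. sklearn.model_selection.cross_val_score(model, X, labels, cv=5).
```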

  2. Soil-pipe interaction modeling for pipe behavior prediction with super learning based methods

    Science.gov (United States)

    Shi, Fang; Peng, Xiang; Liu, Huan; Hu, Yafei; Liu, Zheng; Li, Eric

    2018-03-01

    Underground pipelines are subject to severe distress from the surrounding expansive soil. To investigate the structural response of water mains to varying soil movements, field data, including pipe wall strains, in situ soil water content, soil pressure and temperature, were collected. Research on monitoring data analysis has been reported, but the relationship between soil properties and pipe deformation has not been well interpreted. To characterize the relationship between soil properties and pipe deformation, this paper presents a super learning based approach combined with feature selection algorithms to predict the structural behavior of water mains in different soil environments. Furthermore, an automatic variable selection method, i.e. the recursive feature elimination algorithm, was used to identify the critical predictors contributing to the pipe deformations. To investigate the adaptability of super learning to different predictive models, this research applied super learning based methods to three different datasets. The predictive performance was evaluated by R-squared, root-mean-square error and mean absolute error. Based on the prediction performance evaluation, the superiority of super learning was validated and demonstrated by predicting three types of pipe deformations accurately. In addition, a comprehensive understanding of the working environments of water mains becomes possible.
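    The record does not list the exact library of base learners, so the sketch below approximates the super learning step with scikit-learn's stacked generalization (cross-validated base learners combined by a meta-learner), preceded by recursive feature elimination; the data are randomly generated stand-ins for the soil and pipe-strain measurements.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# Random data standing in for soil covariates (features) and pipe wall strain (target)
X, y = make_regression(n_samples=300, n_features=12, noise=0.3, random_state=0)

# Recursive feature elimination keeps the strongest predictors
selector = RFE(estimator=LinearRegression(), n_features_to_select=6)

# Stacked generalization: base learners combined by a ridge meta-learner via cross-validation
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("svr", SVR(C=10.0))],
    final_estimator=Ridge(),
    cv=5,
)

model = make_pipeline(selector, stack).fit(X, y)
predicted_strain = model.predict(X[:5])
```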

  3. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    International Nuclear Information System (INIS)

    Koch, J.; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions arising from parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs
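    A minimal sketch of the recommended workflow is given below, assuming a toy transfer model, made-up parameter bounds, and plain Spearman rank correlation as a simpler stand-in for the partial rank correlation coefficients used in the paper.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

def transfer_model(concentration_ratio, intake_rate, deposition=1.0):
    """Toy food-chain transfer: predicted dose ~ deposition * CR * intake (illustrative only)."""
    return deposition * concentration_ratio * intake_rate

# Latin Hypercube Sample over two uncertain parameters (bounds are made up)
sampler = qmc.LatinHypercube(d=2, seed=1)
params = qmc.scale(sampler.random(n=1000), l_bounds=[0.01, 0.1], u_bounds=[0.20, 2.0])

outputs = np.array([transfer_model(cr, rate) for cr, rate in params])
lower, upper = np.percentile(outputs, [2.5, 97.5])   # subjective 95% interval on the prediction

# Rank the parameters by their contribution to output uncertainty
sensitivity = [spearmanr(params[:, i], outputs).correlation for i in range(params.shape[1])]
```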

  4. Simplified method to predict mutual interactions of human transcription factors based on their primary structure

    KAUST Repository

    Schmeier, Sebastian

    2011-07-05

    Background: Physical interactions between transcription factors (TFs) are necessary for forming regulatory protein complexes and thus play a crucial role in gene regulation. Currently, knowledge about the mechanisms of these TF interactions is incomplete and the number of known TF interactions is limited. Computational prediction of such interactions can help identify potential new TF interactions as well as contribute to better understanding the complex machinery involved in gene regulation. Methodology: We propose here such a method for the prediction of TF interactions. The method uses only the primary sequence information of the interacting TFs, resulting in a much greater simplicity of the prediction algorithm. Through an advanced feature selection process, we determined a subset of 97 model features that constitute the optimized model in the subset we considered. The model, based on quadratic discriminant analysis, achieves a prediction accuracy of 85.39% on a blind set of interactions. This result is achieved despite the selection for the negative data set of only those TF from the same type of proteins, i.e. TFs that function in the same cellular compartment (nucleus) and in the same type of molecular process (transcription initiation). Such selection poses significant challenges for developing models with high specificity, but at the same time better reflects real-world problems. Conclusions: The performance of our predictor compares well to those of much more complex approaches for predicting TF and general protein-protein interactions, particularly when taking the reduced complexity of model utilisation into account. © 2011 Schmeier et al.

  5. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    Energy Technology Data Exchange (ETDEWEB)

    Koch, J. [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions arising from parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs.

  6. Simplified method to predict mutual interactions of human transcription factors based on their primary structure.

    Directory of Open Access Journals (Sweden)

    Sebastian Schmeier

    Full Text Available BACKGROUND: Physical interactions between transcription factors (TFs) are necessary for forming regulatory protein complexes and thus play a crucial role in gene regulation. Currently, knowledge about the mechanisms of these TF interactions is incomplete and the number of known TF interactions is limited. Computational prediction of such interactions can help identify potential new TF interactions as well as contribute to better understanding the complex machinery involved in gene regulation. METHODOLOGY: We propose here such a method for the prediction of TF interactions. The method uses only the primary sequence information of the interacting TFs, resulting in a much greater simplicity of the prediction algorithm. Through an advanced feature selection process, we determined a subset of 97 model features that constitute the optimized model in the subset we considered. The model, based on quadratic discriminant analysis, achieves a prediction accuracy of 85.39% on a blind set of interactions. This result is achieved despite the selection for the negative data set of only those TF from the same type of proteins, i.e. TFs that function in the same cellular compartment (nucleus) and in the same type of molecular process (transcription initiation). Such selection poses significant challenges for developing models with high specificity, but at the same time better reflects real-world problems. CONCLUSIONS: The performance of our predictor compares well to those of much more complex approaches for predicting TF and general protein-protein interactions, particularly when taking the reduced complexity of model utilisation into account.

  7. New method for probabilistic traffic demand predictions for en route sectors based on uncertain predictions of individual flight events.

    Science.gov (United States)

    2011-06-14

    This paper presents a novel analytical approach to and techniques for translating characteristics of uncertainty in predicting sector entry times and times in sector for individual flights into characteristics of uncertainty in predicting one-minute ...

  8. K-Line Patterns’ Predictive Power Analysis Using the Methods of Similarity Match and Clustering

    Directory of Open Access Journals (Sweden)

    Lv Tao

    2017-01-01

    Full Text Available Stock price prediction based on K-line patterns is the essence of candlestick technical analysis. However, there is some dispute in academia over whether K-line patterns have predictive power. To help resolve the debate, this paper uses the data mining methods of pattern recognition, pattern clustering, and pattern knowledge mining to research the predictive power of K-line patterns. A similarity match model and a nearest neighbor-clustering algorithm are proposed for solving the problems of similarity matching and clustering of K-line series, respectively. The experiment includes testing the predictive power of the Three Inside Up pattern and the Three Inside Down pattern with a testing dataset of K-line series data of Shanghai 180 index component stocks over the latest 10 years. Experimental results show that (1) the predictive power of a pattern varies a great deal for different shapes and (2) each of the existing K-line patterns requires further classification based on shape features to improve prediction performance.
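    One simple way to realize the similarity-match step is to normalize each OHLC window to its price range so that only the candlestick shape remains, then rank historical windows by Euclidean distance. The sketch below is an illustrative choice of representation and distance, not the paper's exact similarity match model.

```python
import numpy as np

def kline_shape(window):
    """Normalise an OHLC window to its price range so only the candlestick shape remains.

    window: array of shape (n_days, 4) holding open, high, low, close prices.
    """
    w = np.asarray(window, dtype=float)
    lo, hi = w.min(), w.max()
    return ((w - lo) / (hi - lo + 1e-12)).ravel()

def nearest_patterns(query_window, history_windows, k=5):
    """Indices of the k historical windows whose shape is closest to the query."""
    q = kline_shape(query_window)
    distances = [np.linalg.norm(q - kline_shape(w)) for w in history_windows]
    return np.argsort(distances)[:k]

# The returns observed after the k matched windows can then be averaged to test
# whether the pattern carries any predictive power.
```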

  9. Predicting community structure in snakes on Eastern Nearctic islands using ecological neutral theory and phylogenetic methods.

    Science.gov (United States)

    Burbrink, Frank T; McKelvy, Alexander D; Pyron, R Alexander; Myers, Edward A

    2015-11-22

    Predicting species presence and richness on islands is important for understanding the origins of communities and how likely it is that species will disperse and resist extinction. The equilibrium theory of island biogeography (ETIB) and, as a simple model of sampling abundances, the unified neutral theory of biodiversity (UNTB), predict that in situations where mainland to island migration is high, species-abundance relationships explain the presence of taxa on islands. Thus, more abundant mainland species should have a higher probability of occurring on adjacent islands. In contrast to UNTB, if certain groups have traits that permit them to disperse to islands better than other taxa, then phylogeny may be more predictive of which taxa will occur on islands. Taking surveys of 54 island snake communities in the Eastern Nearctic along with mainland communities that have abundance data for each species, we use phylogenetic assembly methods and UNTB estimates to predict island communities. Species richness is predicted by island area, whereas turnover from the mainland to island communities is random with respect to phylogeny. Community structure appears to be ecologically neutral and abundance on the mainland is the best predictor of presence on islands. With regard to young and proximate islands, where allopatric or cladogenetic speciation is not a factor, we find that simple neutral models following UNTB and ETIB predict the structure of island communities. © 2015 The Author(s).

  10. MU-LOC: A Machine-Learning Method for Predicting Mitochondrially Localized Proteins in Plants

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2018-05-01

    Full Text Available Targeting and translocation of proteins to the appropriate subcellular compartments are crucial for cell organization and function. Newly synthesized proteins are transported to mitochondria with the assistance of complex targeting sequences containing either an N-terminal pre-sequence or a multitude of internal signals. Compared with experimental approaches, computational predictions provide an efficient way to infer subcellular localization of a protein. However, it is still challenging to predict plant mitochondrially localized proteins accurately due to various limitations. Consequently, the performance of current tools can be improved with new data and new machine-learning methods. We present MU-LOC, a novel computational approach for large-scale prediction of plant mitochondrial proteins. We collected a comprehensive dataset of plant subcellular localization, extracted features including amino acid composition, protein position weight matrix, and gene co-expression information, and trained predictors using deep neural network and support vector machine. Benchmarked on two independent datasets, MU-LOC achieved substantial improvements over six state-of-the-art tools for plant mitochondrial targeting prediction. In addition, MU-LOC has the advantage of predicting plant mitochondrial proteins either possessing or lacking N-terminal pre-sequences. We applied MU-LOC to predict candidate mitochondrial proteins for the whole proteome of Arabidopsis and potato. MU-LOC is publicly available at http://mu-loc.org.

  11. Machine learning and statistical methods for the prediction of maximal oxygen uptake: recent advances.

    Science.gov (United States)

    Abut, Fatih; Akay, Mehmet Fatih

    2015-01-01

    Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume in a state of intense exercise per minute. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating the disease risk of a person. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite a high level of accuracy, practical limitations associated with the direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment or trained staff, have led to the development of various regression models for predicting VO2max. Consequently, a lot of studies have been conducted in the last years to predict VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers, cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview about the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of various VO2max prediction models reported in related literature in terms of two well-known metrics, namely, multiple correlation coefficient (R) and standard error of estimate. The survey results reveal that with respect to regression methods used to develop prediction models, support vector machine, in general, shows better performance than other methods, whereas multiple linear regression exhibits the worst performance.

  12. A Bayesian method and its variational approximation for prediction of genomic breeding values in multiple traits

    Directory of Open Access Journals (Sweden)

    Hayashi Takeshi

    2013-01-01

    Full Text Available Abstract Background Genomic selection is an effective tool for animal and plant breeding, allowing effective individual selection without phenotypic records through the prediction of genomic breeding value (GBV). To date, genomic selection has focused on a single trait. However, actual breeding often targets multiple correlated traits, and, therefore, joint analysis taking into consideration the correlation between traits, which might result in more accurate GBV prediction than analyzing each trait separately, is suitable for multi-trait genomic selection. This would require an extension of the prediction model for single-trait GBV to the multi-trait case. As the computational burden of multi-trait analysis is even higher than that of single-trait analysis, an effective computational method for constructing a multi-trait prediction model is also needed. Results We described a Bayesian regression model incorporating variable selection for jointly predicting GBVs of multiple traits and devised both an MCMC iteration and a variational approximation for Bayesian estimation of parameters in this multi-trait model. The proposed Bayesian procedures with MCMC iteration and variational approximation were referred to as MCBayes and varBayes, respectively. Using simulated datasets of SNP genotypes and phenotypes for three traits with high and low heritabilities, we compared the accuracy in predicting GBVs between multi-trait and single-trait analyses as well as between MCBayes and varBayes. The results showed that, compared to single-trait analysis, multi-trait analysis enabled much more accurate GBV prediction for low-heritability traits correlated with high-heritability traits, by utilizing the correlation structure between traits, while the prediction accuracy of multi-trait analysis for uncorrelated low-heritability traits was comparable to or lower than that of single-trait analysis, depending on the setting for the prior probability that a SNP has zero

  13. A Meta-Path-Based Prediction Method for Human miRNA-Target Association

    Directory of Open Access Journals (Sweden)

    Jiawei Luo

    2016-01-01

    Full Text Available MicroRNAs (miRNAs) are short noncoding RNAs that play important roles in regulating gene expression, and perturbed miRNAs are often associated with development and tumorigenesis as they have effects on their target mRNAs. Predicting potential miRNA-target associations from multiple types of genomic data is a considerable problem in bioinformatics research. However, most of the existing methods do not fully use the experimentally validated miRNA-mRNA interactions. Here, we developed RMLM and RMLMSe to predict the relationship between miRNAs and their targets. RMLM and RMLMSe are global approaches, as they can reconstruct the missing associations for all miRNA-target pairs simultaneously, and RMLMSe demonstrates that the integration of sequence information can improve the performance of RMLM. In RMLM, we use the RM measure to evaluate the relatedness between a miRNA and its target based on different meta-paths; logistic regression and the MLE method are employed to estimate the weights of the different meta-paths. In RMLMSe, sequence information is utilized to improve the performance of RMLM. Here, we carry out fivefold cross validation and pathway enrichment analysis to prove the performance of our methods. The fivefold experiments show that our methods have higher AUC scores compared with other methods and that the integration of sequence information can improve the performance of miRNA-target association prediction.

  14. A Review of Computational Methods to Predict the Risk of Rupture of Abdominal Aortic Aneurysms

    Directory of Open Access Journals (Sweden)

    Tejas Canchi

    2015-01-01

    Full Text Available Computational methods have played an important role in health care in recent years, as determining parameters that affect a certain medical condition is not possible in experimental conditions in many cases. Computational fluid dynamics (CFD) methods have been used to accurately determine the nature of blood flow in the cardiovascular and nervous systems and air flow in the respiratory system, thereby giving the surgeon a diagnostic tool to plan treatment accordingly. Machine learning or data mining (MLD) methods are currently used to develop models that learn from retrospective data to make a prediction regarding factors affecting the progression of a disease. These models have also been successful in incorporating factors such as patient history and occupation. MLD models can be used as a predictive tool to determine rupture potential in patients with abdominal aortic aneurysms (AAA) along with CFD-based prediction of parameters like wall shear stress and pressure distributions. A combination of these computer methods can be pivotal in bridging the gap between translational and outcomes research in medicine. This paper reviews the use of computational methods in the diagnosis and treatment of AAA.

  15. A maintenance time prediction method considering ergonomics through virtual reality simulation.

    Science.gov (United States)

    Zhou, Dong; Zhou, Xin-Xin; Guo, Zi-Yue; Lv, Chuan

    2016-01-01

    Maintenance time is a critical quantitative index in maintainability prediction, and an efficient maintenance time measurement methodology plays an important role in the early stage of maintainability design. However, the traditional way of measuring maintenance time ignores the differences between line production and maintenance actions. This paper therefore proposes a corrective MOD method that considers several important ergonomics factors to predict maintenance time. With the help of the DELMIA analysis tools, the influence coefficients of several factors are discussed to correct the MOD value, and designers can measure maintenance time by summing the corrected MOD times of each maintenance therblig. Finally, a case study is introduced: by maintaining a virtual prototype of an APU motor starter in DELMIA, the designer obtains the actual maintenance time with the proposed method, and the result verifies its effectiveness and accuracy.

  16. Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding

    Science.gov (United States)

    de los Campos, Gustavo; Hickey, John M.; Pong-Wong, Ricardo; Daetwyler, Hans D.; Calus, Mario P. L.

    2013-01-01

    Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p with small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade. PMID:22745228

  17. TEHRAN AIR POLLUTANTS PREDICTION BASED ON RANDOM FOREST FEATURE SELECTION METHOD

    Directory of Open Access Journals (Sweden)

    A. Shamsoddini

    2017-09-01

    Full Text Available Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations are able to improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 concentrations in the air. The results indicated that Artificial Neural Networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
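    A compact sketch of the two-stage scheme is given below, using randomly generated data in place of the Tehran measurements and an arbitrary cutoff of five retained attributes.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Random data standing in for meteorological/temporal predictors and one pollutant (e.g. NO2)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 15))
y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: rank predictors by random forest importance and keep the top five
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
top = np.argsort(rf.feature_importances_)[::-1][:5]

# Step 2: feed only the selected attributes to a multilayer perceptron
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp.fit(X_train[:, top], y_train)
print("held-out R^2:", mlp.score(X_test[:, top], y_test))
```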

  18. Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method

    Science.gov (United States)

    Shamsoddini, A.; Aboodi, M. R.; Karami, J.

    2017-09-01

    Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations are able to improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 concentrations in the air. The results indicated that Artificial Neural Networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.

  19. Efficient operation scheduling for adsorption chillers using predictive optimization-based control methods

    Science.gov (United States)

    Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz

    2017-10-01

    Within this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are shown on a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on optimized scheduling of an ACM considering its internal functioning as well as forecasts for load and driving energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to the results of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving energy occurrence. The benefits of the latter approach are shown and future actions for application of these methods for system control are addressed.

  20. Reliability residual-life prediction method for thermal aging based on performance degradation

    International Nuclear Information System (INIS)

    Ren Shuhong; Xue Fei; Yu Weiwei; Ti Wenxin; Liu Xiaotian

    2013-01-01

    This paper studies the main pipeline of a nuclear power plant. The residual life of a main pipeline that fails due to thermal aging has been studied using performance degradation theory and Bayesian updating methods. Firstly, the thermal-aging-induced degradation of the impact properties of the main pipeline austenitic stainless steel is analyzed using accelerated thermal aging test data. Then, a thermal aging residual-life prediction model based on the impact property degradation data is built with Bayesian updating methods. Finally, these models are applied to practical situations. It is shown that the proposed methods are feasible and that the prediction accuracy meets the needs of the project. The work also provides a foundation for the scientific aging management of the main pipeline. (authors)

  1. A Multifeatures Fusion and Discrete Firefly Optimization Method for Prediction of Protein Tyrosine Sulfation Residues.

    Science.gov (United States)

    Guo, Song; Liu, Chunhua; Zhou, Peng; Li, Yanling

    2016-01-01

    Tyrosine sulfation is one of the ubiquitous protein posttranslational modifications, in which sulfate groups are added to tyrosine residues. It plays significant roles in various physiological processes in eukaryotic cells. To explore the molecular mechanism of tyrosine sulfation, one of the prerequisites is to correctly identify possible protein tyrosine sulfation residues. In this paper, a novel method is presented to predict protein tyrosine sulfation residues from primary sequences. By means of informative feature construction and an elaborate feature selection and parameter optimization scheme, the proposed predictor achieved promising results and outperformed many other state-of-the-art predictors. Using the optimal feature subset, the proposed method achieved a mean MCC of 94.41% on the benchmark dataset and an MCC of 90.09% on the independent dataset. The experimental performance indicates that the proposed method can be effective in identifying important protein posttranslational modifications and that the feature selection scheme should be powerful in protein functional residue prediction research.

  2. Relative proportions of polycyclic aromatic hydrocarbons differ between accumulation bioassays and chemical methods to predict bioavailability

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Eyles, Jose L., E-mail: j.l.gomezeyles@reading.ac.u [University of Reading, School of Human and Environmental Sciences, Department of Soil Science, Reading RG6 6DW, Berkshire (United Kingdom); Collins, Chris D.; Hodson, Mark E. [University of Reading, School of Human and Environmental Sciences, Department of Soil Science, Reading RG6 6DW, Berkshire (United Kingdom)

    2010-01-15

    Chemical methods to predict the bioavailable fraction of organic contaminants are usually validated in the literature by comparison with established bioassays. A soil spiked with polycyclic aromatic hydrocarbons (PAHs) was aged over six months and subjected to butanol, cyclodextrin and tenax extractions as well as an exhaustive extraction to determine total PAH concentrations at several time points. Earthworm (Eisenia fetida) and rye grass root (Lolium multiflorum) accumulation bioassays were conducted in parallel. Butanol extractions gave the best relationship with earthworm accumulation (r² ≤ 0.54, p ≤ 0.01); cyclodextrin, butanol and acetone-hexane extractions all gave good predictions of accumulation in rye grass roots (r² ≤ 0.86, p ≤ 0.01). However, the profile of the PAHs extracted by the different chemical methods was significantly different (p < 0.01) to that accumulated in the organisms. Biota accumulated a higher proportion of the heavier 4-ringed PAHs. It is concluded that bioaccumulation is a complex process that cannot be predicted by measuring the bioavailable fraction alone. - The ability of chemical methods to predict PAH accumulation in Eisenia fetida and Lolium multiflorum was hindered by the varied metabolic fate of the different PAHs within the organisms.

  3. Fast computational methods for predicting protein structure from primary amino acid sequence

    Science.gov (United States)

    Agarwal, Pratul Kumar [Knoxville, TN

    2011-07-19

    The present invention provides a method utilizing primary amino acid sequence of a protein, energy minimization, molecular dynamics and protein vibrational modes to predict three-dimensional structure of a protein. The present invention also determines possible intermediates in the protein folding pathway. The present invention has important applications to the design of novel drugs as well as protein engineering. The present invention predicts the three-dimensional structure of a protein independent of size of the protein, overcoming a significant limitation in the prior art.

  4. Methods to improve genomic prediction and GWAS using combined Holstein populations

    DEFF Research Database (Denmark)

    Li, Xiujin

    The thesis focuses on methods to improve GWAS and genomic prediction using combined Holstein populations and investigates G by E interaction. The conclusions are: 1) Prediction reliabilities for Brazilian Holsteins can be increased by adding Nordic and French genotyped bulls, and a large G by E...... interaction exists between populations. 2) Combining data from Chinese and Danish Holstein populations increases the power of GWAS and detects new QTL regions for milk fatty acid traits. 3) The novel multi-trait Bayesian model efficiently estimates region-specific genomic variances, covariances...

  5. Evaluation of creep-fatigue life prediction methods for low-carbon/nitrogen-added SUS316

    International Nuclear Information System (INIS)

    Takahashi, Yukio

    1998-01-01

    Low-carbon/medium nitrogen 316 stainless steel called 316FR is a principal candidate for the high-temperature structural materials of a demonstration fast reactor plant. Because creep-fatigue damage is a dominant failure mechanism of the high-temperature materials subjected to thermal cycles, it is important to establish a reliable creep-fatigue life prediction method for this steel. Long-term creep tests and strain-controlled creep-fatigue tests have been conducted at various conditions for two different heats of the steel. In the constant load creep tests, both materials showed similar creep rupture strength but different ductility. The material with lower ductility exhibited shorter life under creep-fatigue loading conditions and correlation of creep-fatigue life with rupture ductility, rather than rupture strength, was made clear. Two kinds of creep-fatigue life prediction methods, i.e. time fraction rule and ductility exhaustion method were applied to predict the creep-fatigue life. Accurate description of stress relaxation behavior was achieved by an addition of 'viscous' strain to conventional creep strain and only the latter of which was assumed to contribute to creep damage in the application of ductility exhaustion method. The current version of the ductility exhaustion method was found to have very good accuracy in creep-fatigue life prediction, while the time fraction rule overpredicted creep-fatigue life as large as a factor of 30. To make a reliable estimation of the creep damage in actual components, use of ductility exhaustion method is strongly recommended. (author)

  6. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    DEFF Research Database (Denmark)

    Petersen, Bent; Petersen, Thomas Nordahl; Andersen, Pernille

    2009-01-01

    : The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best public available method, Real-SPINE. Both methods associate a reliability...... comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0.79 and 0.74 are obtained using our and the compared method, respectively. This tendency is true for any selected subset....

  7. A finite element modeling method for predicting long term corrosion rates

    International Nuclear Information System (INIS)

    Fu, J.W.; Chan, S.

    1984-01-01

    For the analyses of galvanic corrosion, pitting and crevice corrosion, which have been identified as possible corrosion processes for nuclear waste isolation, a finite element method has been developed for the prediction of corrosion rates. The method uses a finite element mesh to model the corrosive environment and the polarization curves of metals are assigned as the boundary conditions to calculate the corrosion cell current distribution. A subroutine is used to calculate the chemical change with time in the crevice or the pit environments. In this paper, the finite element method is described along with experimental confirmation

  8. RANDOM FUNCTIONS AND INTERVAL METHOD FOR PREDICTING THE RESIDUAL RESOURCE OF BUILDING STRUCTURES

    Directory of Open Access Journals (Sweden)

    Shmelev Gennadiy Dmitrievich

    2017-11-01

    Full Text Available Subject: possibility of using random functions and interval prediction method for estimating the residual life of building structures in the currently used buildings. Research objectives: coordination of ranges of values to develop predictions and random functions that characterize the processes being predicted. Materials and methods: when performing this research, the method of random functions and the method of interval prediction were used. Results: in the course of this work, the basic properties of random functions, including the properties of families of random functions, are studied. The coordination of time-varying impacts and loads on building structures is considered from the viewpoint of their influence on structures and representation of the structures’ behavior in the form of random functions. Several models of random functions are proposed for predicting individual parameters of structures. For each of the proposed models, its scope of application is defined. The article notes that the considered approach of forecasting has been used many times at various sites. In addition, the available results allowed the authors to develop a methodology for assessing the technical condition and residual life of building structures for the currently used facilities. Conclusions: we studied the possibility of using random functions and processes for the purposes of forecasting the residual service lives of structures in buildings and engineering constructions. We considered the possibility of using an interval forecasting approach to estimate changes in defining parameters of building structures and their technical condition. A comprehensive technique for forecasting the residual life of building structures using the interval approach is proposed.

  9. Prediction of pKa values using the PM6 semiempirical method

    Directory of Open Access Journals (Sweden)

    Jimmy C. Kromann

    2016-08-01

    Full Text Available The PM6 semiempirical method and the dispersion and hydrogen bond-corrected PM6-D3H+ method are used together with the SMD and COSMO continuum solvation models to predict pKa values of pyridines, alcohols, phenols, benzoic acids, carboxylic acids, and amines using isodesmic reactions and compared to published ab initio results. The pKa values of pyridines, alcohols, phenols, and benzoic acids considered in this study can generally be predicted with PM6 and ab initio methods to within the same overall accuracy, with average mean absolute differences (MADs) of 0.6–0.7 pH units. For carboxylic acids, the accuracy (0.7–1.0 pH units) is also comparable to ab initio results if a single outlier is removed. For primary, secondary, and tertiary amines the accuracy is, respectively, similar (0.5–0.6), slightly worse (0.5–1.0), and worse (1.0–2.5), provided that di- and tri-ethylamine are used as reference molecules for secondary and tertiary amines. When applied to a drug-like molecule where an empirical pKa predictor exhibits a large (4.9 pH unit) error, we find that the errors for PM6-based predictions are roughly the same in magnitude but opposite in sign. As a result, most of the PM6-based methods predict the correct protonation state at physiological pH, while the empirical predictor does not. The computational cost is around 2–5 min per conformer per core processor, making PM6-based pKa prediction computationally efficient enough to be used for high-throughput screening using on the order of 100 core processors.
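    The isodesmic (proton-exchange) scheme underlying these predictions reduces to a one-line formula: for AH + Ref⁻ → A⁻ + RefH, the unknown pKa equals the reference pKa plus the computed exchange free energy divided by RT ln 10. The sketch below assumes the free energy is already available (e.g., from PM6/COSMO single points) and uses illustrative numbers only.

```python
import math

R_KCAL = 1.987204e-3   # gas constant, kcal/(mol*K)
T = 298.15             # K

def pka_isodesmic(delta_g_exchange_kcal, pka_reference):
    """pKa from the isodesmic proton-exchange reaction  AH + Ref-  ->  A- + RefH.

    delta_g_exchange_kcal: computed free-energy change of the exchange reaction
    (for instance from PM6/COSMO single points), in kcal/mol.
    pka_reference: experimental pKa of the reference acid RefH.
    """
    return pka_reference + delta_g_exchange_kcal / (R_KCAL * T * math.log(10))

# Illustrative numbers only: a 1.36 kcal/mol exchange free energy shifts the pKa by ~1 unit
print(pka_isodesmic(delta_g_exchange_kcal=1.36, pka_reference=4.76))
```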

  10. Insights from triangulation of two purchase choice elicitation methods to predict social decision making in healthcare.

    Science.gov (United States)

    Whitty, Jennifer A; Rundle-Thiele, Sharyn R; Scuffham, Paul A

    2012-03-01

    Discrete choice experiments (DCEs) and the Juster scale are accepted methods for the prediction of individual purchase probabilities. Nevertheless, these methods have seldom been applied to a social decision-making context. To gain an overview of social decisions for a decision-making population through data triangulation, these two methods were used to understand purchase probability in a social decision-making context. We report an exploratory social decision-making study of pharmaceutical subsidy in Australia. A DCE and selected Juster scale profiles were presented to current and past members of the Australian Pharmaceutical Benefits Advisory Committee and its Economic Subcommittee. Across 66 observations derived from 11 respondents for 6 different pharmaceutical profiles, there was a small overall median difference of 0.024 in the predicted probability of public subsidy (p = 0.003), with the Juster scale predicting the higher likelihood. While consistency was observed at the extremes of the probability scale, the funding probability differed over the mid-range of profiles. There was larger variability in the DCE than Juster predictions within each individual respondent, suggesting the DCE is better able to discriminate between profiles. However, large variation was observed between individuals in the Juster scale but not DCE predictions. It is important to use multiple methods to obtain a complete picture of the probability of purchase or public subsidy in a social decision-making context until further research can elaborate on our findings. This exploratory analysis supports the suggestion that the mixed logit model, which was used for the DCE analysis, may fail to adequately account for preference heterogeneity in some contexts.

  11. Novel Methods for Drug-Target Interaction Prediction using Graph Mining

    KAUST Repository

    Ba Alawi, Wail

    2016-08-31

    The problem of developing drugs that can be used to cure diseases is important and requires a careful approach. Since pursuing the wrong candidate drug for a particular disease could be very costly in terms of time and money, there is a strong interest in minimizing such risks. Drug repositioning has become a hot topic of research, as it helps reduce these risks significantly at the early stages of drug development by reusing an approved drug for the treatment of a different disease. Still, finding new usage for a drug is non-trivial, as it is necessary to find out strong supporting evidence that the proposed new uses of drugs are plausible. Many computational approaches were developed to narrow the list of possible candidate drug-target interactions (DTIs) before any experiments are done. However, many of these approaches suffer from unacceptable levels of false positives. We developed two novel methods based on graph mining networks of drugs and targets. The first method (DASPfind) finds all non-cyclic paths that connect a drug and a target, and using a function that we define, calculates a score from all the paths. This score describes our confidence that DTI is correct. We show that DASPfind significantly outperforms other state-of-the-art methods in predicting the top ranked target for each drug. We demonstrate the utility of DASPfind by predicting 15 novel DTIs over a set of ion channel proteins, and confirming 12 out of these 15 DTIs through experimental evidence reported in literature and online drug databases. The second method (DASPfind+) modifies DASPfind in order to increase the confidence and reliability of the resultant predictions. Based on the structure of the drug-target interaction (DTI) networks, we introduced an optimization scheme that incrementally alters the network structure locally for each drug to achieve more robust top 1 ranked predictions. Moreover, we explored effects of several similarity measures between the targets on the prediction
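    To make the path-based idea concrete, the sketch below scores a candidate drug-target pair by enumerating simple paths in a small toy network and combining their edge weights with a length penalty; this scoring rule is only in the spirit of DASPfind and is not the authors' exact formula.

```python
import networkx as nx

# Toy network: drug-drug and target-target similarity edges plus one known interaction
G = nx.Graph()
G.add_edge("drugA", "drugB", weight=0.8)      # drug-drug similarity
G.add_edge("targetX", "targetY", weight=0.7)  # target-target similarity
G.add_edge("drugB", "targetX", weight=1.0)    # known drug-target interaction

def path_score(graph, drug, target, cutoff=4):
    """Score a candidate pair by summing length-penalised products of edge weights
    over all simple paths (an illustrative rule, not the published DASPfind formula)."""
    score = 0.0
    for path in nx.all_simple_paths(graph, drug, target, cutoff=cutoff):
        weight_product = 1.0
        for u, v in zip(path, path[1:]):
            weight_product *= graph[u][v]["weight"]
        score += weight_product / (len(path) - 1)   # penalise longer paths
    return score

print(path_score(G, "drugA", "targetY"))   # ranks the unobserved drugA-targetY pair
```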

  12. Alternative prediction methods of protein and energy evaluation of pig feeds.

    Science.gov (United States)

    Święch, Ewa

    2017-01-01

    Precise knowledge of the actual nutritional value of individual feedstuffs and complete diets for pigs is important for efficient livestock production. Methods of assessment of protein and energy values in pig feeds have been briefly described. In vivo determination of the protein and energy values of feeds in pigs is time-consuming, expensive and very often requires the use of surgically-modified animals. There is a need for simpler, rapid, inexpensive and reproducible methods for routine feed evaluation. Protein and energy values of pig feeds can be estimated using the following alternative methods: 1) prediction equations based on chemical composition; 2) animal models, such as rats, cockerels and growing pigs, as substitutes for adult animals; 3) rapid methods, such as the mobile nylon bag technique and in vitro methods. Alternative methods developed for predicting the total tract and ileal digestibility of nutrients, including amino acids, in feedstuffs and diets for pigs have been reviewed. This article focuses on two in vitro methods that can be used for the routine evaluation of amino acid ileal digestibility and the energy value of pig feeds, and on factors affecting digestibility determined in vivo in pigs and by alternative methods. Validation of the alternative methods has been carried out by comparing the results obtained using these methods with those acquired in vivo in pigs. In conclusion, the energy and protein values of pig feeds may be estimated with satisfactory precision in rats and by the two- or three-step in vitro methods, which provide equations for the calculation of standardized ileal digestibility of amino acids and metabolizable energy content. The use of alternative methods of feed evaluation is an important way to reduce stressful animal experiments.

  13. A New Hybrid Method for Improving the Performance of Myocardial Infarction Prediction

    Directory of Open Access Journals (Sweden)

    Hojatollah Hamidi

    2016-06-01

    Full Text Available Abstract Introduction: Myocardial Infarction, also known as heart attack, normally occurs due to causes such as smoking, family history, diabetes, and so on. It is recognized as one of the leading causes of death in the world. Therefore, the present study aimed to evaluate the performance of classification models in predicting Myocardial Infarction, using a feature selection method that includes Forward Selection and a Genetic Algorithm. Materials & Methods: The Myocardial Infarction data set used in this study contains the information related to 519 visitors to Shahid Madani Specialized Hospital of Khorramabad, Iran. This data set includes 33 features. The proposed method includes a hybrid feature selection method in order to enhance the performance of classification algorithms. The first step of this method selects the features using Forward Selection. In the second step, the selected features are given to a genetic algorithm in order to select the best features. Classification algorithms, namely AdaBoost, Naïve Bayes, J48 decision tree and SimpleCART, are applied to the data set with the selected features for predicting Myocardial Infarction. Results: The best results were achieved after applying the proposed feature selection method and were obtained with the SimpleCART and J48 algorithms, with accuracies of 96.53% and 96.34%, respectively. Conclusion: Based on the results, the performance of the classification algorithms is improved, so applying the proposed feature selection method together with these classification algorithms can be considered a reliable approach to predicting Myocardial Infarction.
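
    A minimal sketch of the first (forward-selection) stage of such a hybrid pipeline is shown below, evaluated with a CART-style decision tree. The genetic-algorithm refinement stage is omitted, and synthetic data stands in for the hospital records, which are not public; feature counts and selector settings are assumptions for illustration only.

```python
# Sketch of the forward-selection stage of a hybrid feature selection pipeline,
# evaluated with a CART-style decision tree. The GA refinement stage and the
# original hospital data are not reproduced; the data below is synthetic.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=519, n_features=33, n_informative=8,
                           random_state=0)

tree = DecisionTreeClassifier(random_state=0)
forward = SequentialFeatureSelector(tree, n_features_to_select=10,
                                    direction="forward", cv=5)
forward.fit(X, y)
X_selected = forward.transform(X)

# Cross-validated accuracy before and after forward selection.
print("all features   :", cross_val_score(tree, X, y, cv=5).mean())
print("selected subset:", cross_val_score(tree, X_selected, y, cv=5).mean())
```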

  14. Prediction of Student Dropout in E-Learning Program Through the Use of Machine Learning Method

    Directory of Open Access Journals (Sweden)

    Mingjie Tan

    2015-02-01

    Full Text Available The high rate of dropout is a serious problem in E-learning programs; thus it has received considerable attention from education administrators and researchers. Predicting the potential dropout students is a workable solution to prevent dropout. Based on the analysis of related literature, this study selected students' personal characteristics and academic performance as input attributes. Prediction models were developed using Artificial Neural Networks (ANN), Decision Trees (DT) and Bayesian Networks (BNs). A large sample of 62,375 students was utilized in the procedures of model training and testing. The results of each model were presented in a confusion matrix and analyzed by calculating the rates of accuracy, precision, recall, and F-measure. The results suggested that all three machine learning methods were effective for student dropout prediction, and DT presented the best performance. Finally, some suggestions were made for future research.
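
    The evaluation workflow described above (train a classifier, then derive accuracy, precision, recall and F-measure from the confusion matrix) can be sketched as follows. The synthetic, imbalanced data and the tree depth are assumptions standing in for the real student records and model settings.

```python
# Sketch: decision tree dropout predictor evaluated with confusion-matrix metrics.
# Synthetic, imbalanced data stands in for the real student records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

X, y = make_classification(n_samples=5000, n_features=12, weights=[0.8, 0.2],
                           random_state=1)  # ~20% positive (dropout) class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

clf = DecisionTreeClassifier(max_depth=6, random_state=1).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print(confusion_matrix(y_te, pred))
print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("F-measure:", f1_score(y_te, pred))
```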

  15. Traffic Flow Prediction with Rainfall Impact Using a Deep Learning Method

    Directory of Open Access Journals (Sweden)

    Yuhan Jia

    2017-01-01

    Full Text Available Accurate traffic flow prediction is increasingly essential for successful traffic modeling, operation, and management. Traditional data-driven traffic flow prediction approaches have largely assumed restrictive (shallow) model architectures and do not leverage the large amount of environmental data available. Inspired by deep learning methods with more complex model architectures and effective data mining capabilities, this paper introduces the deep belief network (DBN) and long short-term memory (LSTM) to predict urban traffic flow considering the impact of rainfall. The rainfall-integrated DBN and LSTM can learn the features of traffic flow under various rainfall scenarios. Experimental results indicate that, with the consideration of the additional rainfall factor, the deep learning predictors have better accuracy than existing predictors and also yield improvements over the original deep learning models without rainfall input. Furthermore, the LSTM can outperform the DBN in capturing the time series characteristics of traffic flow data.
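
    A minimal rainfall-integrated LSTM can be sketched as below: each input window carries a traffic-flow channel and a rainfall channel. The window length, layer sizes and random training data are assumptions for illustration, not the configuration used in the paper.

```python
# Minimal sketch of a rainfall-integrated LSTM traffic-flow predictor.
# Layer sizes, window length, and the random training data are illustrative only.
import numpy as np
import tensorflow as tf

timesteps, n_features = 12, 2          # [traffic flow, rainfall] per time step
X = np.random.rand(1000, timesteps, n_features).astype("float32")
y = np.random.rand(1000, 1).astype("float32")   # flow at the next time step

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(timesteps, n_features)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0))
```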

  16. Analysis backpropagation methods with neural network for prediction of children's ability in psychomotoric

    Science.gov (United States)

    Izhari, F.; Dhany, H. W.; Zarlis, M.; Sutarman

    2018-03-01

    A good age for optimizing aspects of development is 4-6 years, the period of psychomotor development. Psychomotor ability is broad and difficult to monitor, yet it has meaningful value for a child's life because it directly affects behavior and actions. The problem addressed here is therefore to predict a child's ability level in the psychomotor domain. This analysis uses the backpropagation method with an artificial neural network to predict children's psychomotor ability; at the end of training the network reached a mean squared error (MSE) of 0.001. The results indicate that 30% of children aged 4-6 years have a good level of psychomotor ability, with the remaining children rated excellent, good enough, or less good.

  17. On some descriptive and predictive methods for the dynamics of cancer growth

    Directory of Open Access Journals (Sweden)

    Iulian T. Vlad

    2015-09-01

    Full Text Available Cancer is a widely spread disease that affects a large proportion of the human population, and many research teams are developing algorithms to help clinicians understand this disease. In particular, tumor growth has been studied from different viewpoints and several mathematical models have been proposed. In this paper, we review a set of comprehensive and modern tools that are useful for the prediction of cancer growth in space and time. We comment on three alternative approaches. We first consider spatio-temporal stochastic processes within a Bayesian framework to model spatial heterogeneity, temporal dependence and spatio-temporal interactions amongst the pixels, providing a general modeling framework for such dynamics. We then consider predictions based on geometric properties of plane curves and vectors, and propose two methods of geometric prediction. Finally we focus on functional data analysis to statistically compare tumor contour evolutions. We also analyze real data on a brain tumor.

  18. Predicting metabolic syndrome using decision tree and support vector machine methods

    Directory of Open Access Journals (Sweden)

    Farzaneh Karimi-Alavijeh

    2016-06-01

    Full Text Available BACKGROUND: Metabolic syndrome, which underlies the increased prevalence of cardiovascular disease and Type 2 diabetes, is considered as a group of metabolic abnormalities including central obesity, hypertriglyceridemia, glucose intolerance, hypertension, and dyslipidemia. Recently, artificial intelligence based health-care systems have been highly regarded because of their success in diagnosis, prediction, and choice of treatment. This study employs machine learning techniques to predict metabolic syndrome. METHODS: This study aims to employ the decision tree and support vector machine (SVM) to predict the 7-year incidence of metabolic syndrome. This research is a practical study in which data from 2107 participants of the Isfahan Cohort Study have been utilized. The subjects without metabolic syndrome according to the ATPIII criteria were selected. The features that have been used in this data set include: gender, age, weight, body mass index, waist circumference, waist-to-hip ratio, hip circumference, physical activity, smoking, hypertension, antihypertensive medication use, systolic blood pressure (BP), diastolic BP, fasting blood sugar, 2-hour blood glucose, triglycerides (TGs), total cholesterol, low-density lipoprotein, high density lipoprotein-cholesterol, mean corpuscular volume, and mean corpuscular hemoglobin. Metabolic syndrome was diagnosed based on the ATPIII criteria, and the two methods of decision tree and SVM were selected to predict metabolic syndrome. The criteria of sensitivity, specificity and accuracy were used for validation. RESULTS: The SVM and decision tree methods were examined according to the criteria of sensitivity, specificity and accuracy. Sensitivity, specificity and accuracy were 0.774 (0.758), 0.74 (0.72) and 0.757 (0.739) in the SVM (decision tree) method. CONCLUSION: The results show that the SVM method is more efficient than the decision tree in terms of sensitivity, specificity and accuracy. The results of the decision tree method show that the TG is the most

  19. Prediction of pKa Values for Druglike Molecules Using Semiempirical Quantum Chemical Methods.

    Science.gov (United States)

    Jensen, Jan H; Swain, Christopher J; Olsen, Lars

    2017-01-26

    Rapid yet accurate pKa prediction for druglike molecules is a key challenge in computational chemistry. This study uses PM6-DH+/COSMO, PM6/COSMO, PM7/COSMO, PM3/COSMO, AM1/COSMO, PM3/SMD, AM1/SMD, and DFTB3/SMD to predict the pKa values of 53 amine groups in 48 druglike compounds. The approach uses an isodesmic reaction where the pKa value is computed relative to a chemically related reference compound for which the pKa value has been measured experimentally or estimated using a standard empirical approach. The AM1- and PM3-based methods perform best with RMSE values of 1.4-1.6 pH units that have uncertainties of ±0.2-0.3 pH units, which make them statistically equivalent. However, for all but PM3/SMD and AM1/SMD the RMSEs are dominated by a single outlier, cefadroxil, caused by proton transfer in the zwitterionic protonation state. If this outlier is removed, the RMSE values for PM3/COSMO and AM1/COSMO drop to 1.0 ± 0.2 and 1.1 ± 0.3, whereas PM3/SMD and AM1/SMD remain at 1.5 ± 0.3 and 1.6 ± 0.3/0.4 pH units, making the COSMO-based predictions statistically better than the SMD-based predictions. For pKa calculations where a zwitterionic state is not involved or proton transfer in a zwitterionic state is not observed, PM3/COSMO or AM1/COSMO is the best pKa prediction method; otherwise PM3/SMD or AM1/SMD should be used. Thus, fast and relatively accurate pKa prediction for 100-1000s of druglike amines is feasible with the current setup and relatively modest computational resources.
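
    The isodesmic scheme can be summarized numerically: the pKa of the target amine equals the reference pKa plus the computed free-energy change of the proton-exchange reaction divided by RT ln 10. The snippet below is a generic illustration of that relationship under the stated reaction direction; the energies and pKa values are hypothetical placeholders, not results from the paper.

```python
# Isodesmic pKa estimate: pKa(target) = pKa(ref) + dG_exchange / (RT ln 10),
# where dG_exchange is the computed free-energy change (kcal/mol) of
#   BH+ + B_ref  ->  B + B_ref-H+
# The numbers below are hypothetical placeholders.
import math

R = 1.987204e-3   # kcal/(mol K)
T = 298.15        # K

def isodesmic_pka(pka_ref, dG_exchange_kcal):
    return pka_ref + dG_exchange_kcal / (R * T * math.log(10))

print(isodesmic_pka(pka_ref=9.8, dG_exchange_kcal=-1.2))  # ~8.9
```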

  20. A data-driven prediction method for fast-slow systems

    Science.gov (United States)

    Groth, Andreas; Chekroun, Mickael; Kondrashov, Dmitri; Ghil, Michael

    2016-04-01

    In this work, we present a prediction method for processes that exhibit a mixture of variability on slow and fast time scales. The method relies on combining empirical model reduction (EMR) with singular spectrum analysis (SSA). EMR is a data-driven methodology for constructing stochastic low-dimensional models that account for nonlinearity and serial correlation in the estimated noise, while SSA provides a decomposition of the complex dynamics into low-order components that capture spatio-temporal behavior on different time scales. Our study focuses on the data-driven modeling of partial observations from dynamical systems that exhibit power spectra with broad peaks. The main result in this talk is that the combination of SSA pre-filtering with EMR modeling improves, under certain circumstances, the modeling and prediction skill of such a system, as compared to a standard EMR prediction based on raw data. Specifically, it is the separation into "fast" and "slow" temporal scales by the SSA pre-filtering that achieves the improvement. We show, in particular, that the resulting EMR-SSA emulators help predict intermittent behavior such as rapid transitions between specific regions of the system's phase space. This capability of the EMR-SSA prediction will be demonstrated on two low-dimensional models: the Rössler system and a Lotka-Volterra model for interspecies competition. In either case, the chaotic dynamics is produced through a Shilnikov-type mechanism and we argue that the latter seems to be an important ingredient for the good prediction skills of EMR-SSA emulators. Shilnikov-type behavior has been shown to arise in various complex geophysical fluid models, such as baroclinic quasi-geostrophic flows in the mid-latitude atmosphere and wind-driven double-gyre ocean circulation models. This pervasiveness of the Shilnikov mechanism of fast-slow transition opens interesting perspectives for the extension of the proposed EMR-SSA approach to more realistic situations.
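
    The SSA pre-filtering step amounts to embedding the series in a trajectory matrix and keeping its leading singular components, which separate slow and fast variability. The sketch below illustrates only this embedding/decomposition step; the window length and the synthetic two-scale signal are assumptions, and the full SSA reconstruction (diagonal averaging) and the EMR model are omitted.

```python
# Sketch of the SSA pre-filtering step: embed the series in a trajectory
# matrix and inspect its leading singular components, which separate slow and
# fast variability. Window length and the synthetic signal are illustrative.
import numpy as np

t = np.arange(2000)
series = np.sin(2 * np.pi * t / 400) + 0.3 * np.sin(2 * np.pi * t / 15) \
         + 0.1 * np.random.randn(t.size)          # slow + fast + noise

M = 120                                           # SSA window length
N = series.size - M + 1
X = np.column_stack([series[i:i + M] for i in range(N)])  # trajectory matrix (M x N)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance captured by the leading 4 components:", explained[:4].sum())
# The slow components (typically the leading pairs) would feed the EMR model;
# full SSA reconstruction additionally requires diagonal averaging (omitted here).
```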

  1. Geometry optimization method versus predictive ability in QSPR modeling for ionic liquids

    Science.gov (United States)

    Rybinska, Anna; Sosnowska, Anita; Barycki, Maciej; Puzyn, Tomasz

    2016-02-01

    Computational techniques, such as Quantitative Structure-Property Relationship (QSPR) modeling, are very useful in predicting physicochemical properties of various chemicals. Building QSPR models requires calculating molecular descriptors and a proper choice of the geometry optimization method suited to the specific structure of the tested compounds. Herein, we examine the influence of the ionic liquids' (ILs) geometry optimization methods on the predictive ability of QSPR models by comparing three models. The models were developed based on the same experimental data on density collected for 66 ionic liquids, but with molecular descriptors calculated from molecular geometries optimized at three different levels of theory, namely: (1) semi-empirical (PM7), (2) ab initio (HF/6-311+G*) and (3) density functional theory (B3LYP/6-311+G*). The model in which the descriptors were calculated using the ab initio HF/6-311+G* method showed the best predictive capability (Q2EXT = 0.87). However, the PM7-based model has comparable values of the quality parameters (Q2EXT = 0.84). The obtained results indicate that semi-empirical methods (faster and less expensive in terms of CPU time) can be successfully employed for geometry optimization in QSPR studies of ionic liquids.
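
    The external predictivity statistic quoted above is commonly computed as shown below (this is one of several Q2EXT variants used in the QSPR literature, with deviations taken from the training-set mean); the arrays are placeholders, not the paper's density data.

```python
# One common form of the external validation coefficient used in QSPR studies:
#   Q2_ext = 1 - PRESS_ext / SS_ext
# with deviations taken from the training-set mean. Other variants exist;
# the data below are placeholders.
import numpy as np

y_train = np.array([1.05, 1.10, 1.21, 1.18, 1.30])   # observed property (training set)
y_test  = np.array([1.12, 1.25, 1.08])               # observed property (external set)
y_pred  = np.array([1.10, 1.22, 1.11])               # model predictions (external set)

q2_ext = 1 - np.sum((y_test - y_pred) ** 2) / np.sum((y_test - y_train.mean()) ** 2)
print(round(q2_ext, 3))
```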

  2. Predicting Taxi-Out Time at Congested Airports with Optimization-Based Support Vector Regression Methods

    Directory of Open Access Journals (Sweden)

    Guan Lian

    2018-01-01

    Full Text Available Accurate prediction of taxi-out time is a significant precondition for improving the operationality of the departure process at an airport, as well as for reducing long taxi-out times, congestion, and excessive emission of greenhouse gases. Unfortunately, several of the traditional methods of predicting taxi-out time perform unsatisfactorily at congested airports. This paper describes and tests three of those conventional methods, namely the Generalized Linear Model, the Softmax Regression Model, and the Artificial Neural Network method, and two improved Support Vector Regression (SVR) approaches based on swarm intelligence algorithm optimization, namely Particle Swarm Optimization (PSO) and the Firefly Algorithm. In order to improve the global searching ability of the Firefly Algorithm, an adaptive step factor and Lévy flight are implemented simultaneously when updating the location function. Six factors are analysed, of which delay is identified as one significant factor in congested airports. Through a series of specific dynamic analyses, a case study of Beijing International Airport (PEK) is tested with historical data. The performance measures show that the two proposed SVR approaches, especially the Improved Firefly Algorithm (IFA) optimization-based SVR method, not only provide the best modelling accuracy compared with the representative forecast models, but can also achieve a better predictive performance when dealing with abnormal taxi-out time states.

  3. Interior Noise Prediction of the Automobile Based on Hybrid FE-SEA Method

    Directory of Open Access Journals (Sweden)

    S. M. Chen

    2011-01-01

    created using the hybrid FE-SEA method. The modal density was calculated using the analytical method and the finite element method; the damping loss factors of the structural and acoustic cavity subsystems were also calculated with the analytical method; the coupling loss factors between structure and structure, and between structure and acoustic cavity, were both calculated. Four different kinds of excitations, including road excitations, engine mount excitations, sound radiation excitations of the engine, and wind excitations, are exerted on the body of the automobile when the automobile is running on the road. All the excitations were calculated using virtual prototype technology, computational fluid dynamics (CFD), and experiments realized in the design and development stage. The interior noise of the automobile was predicted and verified at a speed of 120 km/h. The predicted and tested overall SPLs of the interior noise were 73.79 and 74.44 dB(A), respectively. The comparison results also show that the prediction precision is satisfactory, and the effectiveness and reliability of the hybrid FE-SEA model of the automobile are verified.

  4. Critical assessment of methods of protein structure prediction (CASP) - round x

    KAUST Repository

    Moult, John; Fidelis, Krzysztof; Kryshtafovych, Andriy; Schwede, Torsten; Tramontano, Anna

    2013-01-01

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the tenth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. The 10 CASP experiments span almost 20 years of progress in the field of protein structure modeling, and there have been enormous advances in methods and model accuracy in that period. Notable in this round is the first sustained improvement of models with refinement methods, using molecular dynamics. For the first time, we tested the ability of modeling methods to make use of sparse experimental three-dimensional contact information, such as may be obtained from new experimental techniques, with encouraging results. On the other hand, new contact prediction methods, though holding considerable promise, have yet to make an impact in CASP testing. The nature of CASP targets has been changing in recent CASPs, reflecting shifts in experimental structural biology, with more irregular structures, more multi-domain and multi-subunit structures, and less standard versions of known folds. When allowance is made for these factors, we continue to see steady progress in the overall accuracy of models, particularly resulting from improvement of non-template regions.

  6. Network-based ranking methods for prediction of novel disease associated microRNAs.

    Science.gov (United States)

    Le, Duc-Hau

    2015-10-01

    Many studies have shown roles of microRNAs in human disease, and a number of computational methods have been proposed to predict such associations by ranking candidate microRNAs according to their relevance to a disease. Among them, machine learning-based methods usually have a limitation in specifying non-disease microRNAs as negative training samples. Meanwhile, network-based methods are becoming dominant since they well exploit a "disease module" principle in microRNA functional similarity networks. Among these, the random walk with restart (RWR) algorithm-based method is currently state-of-the-art. The use of this algorithm was inspired by its success in predicting disease genes, because the "disease module" principle also exists in protein interaction networks. Besides, many algorithms designed for webpage ranking have been successfully applied in ranking disease candidate genes because web networks share topological properties with protein interaction networks. However, these algorithms have not yet been utilized for disease microRNA prediction. We constructed microRNA functional similarity networks based on shared targets of microRNAs, and then we integrated them with a microRNA functional synergistic network, which was recently identified. After analyzing topological properties of these networks, in addition to RWR, we assessed the performance of (i) PRINCE (PRIoritizatioN and Complex Elucidation), which was proposed for disease gene prediction; (ii) PageRank with Priors (PRP) and K-Step Markov (KSM), which were used for studying web networks; and (iii) a neighborhood-based algorithm. Analyses of topological properties showed that all microRNA functional similarity networks are small-world and scale-free. The performance of each algorithm was assessed based on average AUC values on 35 disease phenotypes and average rankings of newly discovered disease microRNAs. As a result, the performance on the integrated network was better than that on individual ones. In
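
    The RWR ranking referred to above iterates a restart-augmented walk on a (column-normalized) similarity network until convergence and ranks candidates by their steady-state visiting probability. A minimal numpy sketch on a toy network is shown below; the adjacency matrix, restart probability and seed are illustrative assumptions.

```python
# Minimal random walk with restart (RWR) on a toy similarity network.
# Candidates are ranked by their steady-state visiting probability.
import numpy as np

A = np.array([[0, 1, 1, 0],      # toy symmetric similarity/adjacency matrix
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
W = A / A.sum(axis=0)            # column-normalize -> transition matrix

r = 0.7                          # restart probability
p0 = np.array([1.0, 0, 0, 0])    # seed: known disease-associated node(s)
p = p0.copy()
for _ in range(1000):
    p_next = (1 - r) * W @ p + r * p0
    if np.abs(p_next - p).sum() < 1e-10:
        break
    p = p_next

print(np.argsort(-p))            # nodes ranked by relevance to the seed
```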

  7. Quantitative prediction process and evaluation method for seafloor polymetallic sulfide resources

    Directory of Open Access Journals (Sweden)

    Mengyi Ren

    2016-03-01

    Full Text Available Seafloor polymetallic sulfide resources exhibit significant development potential. In 2011, China received the exploration rights for 10,000 km2 of a polymetallic sulfides area in the Southwest Indian Ocean; China will be permitted to retain only 25% of the area in 2021. However, the exploration of seafloor hydrothermal sulfide deposits in China remains at an initial stage. According to quantitative prediction theory and the exploration status of seafloor sulfides, this paper systematically proposes a quantitative prediction and evaluation process for oceanic polymetallic sulfide resources and divides it into three stages: prediction in a large area, prediction in the prospecting region, and the verification and evaluation of targets. The first two stages of the prediction process have been employed in seafloor sulfides prospecting of the Chinese contract area. The results of stage one suggest that the Chinese contract area is located in the high posterior probability area, which indicates a good prospecting potential area in the Indian Ocean. In stage two, the Chinese contract area of 48°–52°E has the highest posterior probability value, which can be selected as the reserved region for additional exploration. In stage three, the method of numerical simulation is employed to reproduce the ore-forming process of the sulfides to verify the accuracy of the reserved targets obtained from the three-stage prediction. By narrowing the exploration area and gradually improving the exploration accuracy, the prediction will provide a basis for the exploration and exploitation of seafloor polymetallic sulfide resources.

  8. Bayesian prediction of future ice sheet volume using local approximation Markov chain Monte Carlo methods

    Science.gov (United States)

    Davis, A. D.; Heimbach, P.; Marzouk, Y.

    2017-12-01

    We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice
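
    The general workflow described above (characterize the posterior with MCMC, then push the samples through the predictive model to obtain a predictive distribution for the quantity of interest) can be illustrated with a bare-bones Metropolis-Hastings sketch. The Gaussian toy posterior and the linear "ice volume" map below are placeholders standing in for the flowline model, not anything from the study.

```python
# Bare-bones Metropolis-Hastings illustration of the Bayesian workflow:
# sample an (un-normalized) posterior over a parameter, then push the samples
# through a predictive model to get a predictive distribution for the QoI.
# The Gaussian toy posterior and the linear "ice volume" map are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):                 # toy posterior: N(2, 0.5^2)
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def predictive_model(theta):         # stand-in for the expensive flowline model
    return 10.0 - 1.5 * theta        # "future ice volume" as a function of theta

samples, theta = [], 0.0
for _ in range(20000):
    prop = theta + rng.normal(scale=0.3)
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                  # accept the proposal
    samples.append(theta)

volumes = predictive_model(np.array(samples[5000:]))   # discard burn-in
print("predictive mean/std of the QoI:", volumes.mean(), volumes.std())
```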

  9. Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods

    Science.gov (United States)

    Davis, A. D.

    2015-12-01

    The marine ice sheet instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI): here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity

  10. Advanced validation of CFD-FDTD combined method using highly applicable solver for reentry blackout prediction

    International Nuclear Information System (INIS)

    Takahashi, Yusuke

    2016-01-01

    An analysis model of plasma flow and electromagnetic waves around a reentry vehicle for radio frequency blackout prediction during aerodynamic heating was developed in this study. The model was validated based on experimental results from the radio attenuation measurement program. The plasma flow properties, such as electron number density, in the shock layer and wake region were obtained using a newly developed unstructured grid solver that incorporated real gas effect models and could treat thermochemically non-equilibrium flow. To predict the electromagnetic waves in plasma, a frequency-dependent finite-difference time-domain method was used. Moreover, the complicated behaviour of electromagnetic waves in the plasma layer during atmospheric reentry was clarified at several altitudes. The prediction performance of the combined model was evaluated with profiles and peak values of the electron number density in the plasma layer. In addition, to validate the models, the signal losses measured during communication with the reentry vehicle were directly compared with the predicted results. Based on the study, it was suggested that the present analysis model accurately predicts the radio frequency blackout and plasma attenuation of electromagnetic waves in plasma in communication. (paper)

  11. Predicting Vascular Plant Diversity in Anthropogenic Peatlands: Comparison of Modeling Methods with Free Satellite Data

    Directory of Open Access Journals (Sweden)

    Ivan Castillo-Riffart

    2017-07-01

    Full Text Available Peatlands are ecosystems of great relevance, because they have a large number of ecological functions that provide many services to mankind. However, studies focusing on plant diversity, addressed from the remote sensing perspective, are still scarce in these environments. In the present study, predictions of vascular plant richness and diversity were performed in three anthropogenic peatlands on Chiloé Island, Chile, using free satellite data from the sensors OLI, ASTER, and MSI. Also, we compared the suitability of these sensors using two modeling methods: random forest (RF) and the generalized linear model (GLM). As predictors for the empirical models, we used the spectral bands, vegetation indices and textural metrics. Variable importance was estimated using recursive feature elimination (RFE). Fourteen out of the 17 predictors chosen by RFE were textural metrics, demonstrating the importance of the spatial context for predicting species richness and diversity. Non-significant differences were found between the algorithms; however, the GLM models often showed slightly better results than the RF. Predictions obtained with the different satellite sensors did not show significant differences; nevertheless, the best models were obtained with ASTER (richness: R2 = 0.62 and %RMSE = 17.2; diversity: R2 = 0.71 and %RMSE = 20.2, obtained with RF and GLM, respectively), followed by OLI and MSI. Diversity obtained higher accuracies than richness; nonetheless, accurate predictions were achieved for both, demonstrating the potential of free satellite data for the prediction of relevant community characteristics in anthropogenic peatland ecosystems.

  12. A link prediction method for heterogeneous networks based on BP neural network

    Science.gov (United States)

    Li, Ji-chao; Zhao, Dan-ling; Ge, Bing-Feng; Yang, Ke-Wei; Chen, Ying-Wu

    2018-04-01

    Most real-world systems, composed of different types of objects connected via many interconnections, can be abstracted as various complex heterogeneous networks. Link prediction for heterogeneous networks is of great significance for mining missing links and reconfiguring networks according to observed information, with considerable applications in, for example, friend and location recommendations and disease-gene candidate detection. In this paper, we put forward a novel integrated framework, called MPBP (Meta-Path feature-based BP neural network model), to predict multiple types of links for heterogeneous networks. More specifically, the concept of the meta-path is introduced, followed by the extraction of meta-path features for heterogeneous networks. Next, based on the extracted meta-path features, a supervised link prediction model is built with a three-layer BP neural network. Then, the solution algorithm of the proposed link prediction model is put forward to obtain predicted results by iteratively training the network. Last, numerical experiments on datasets of a gene-disease network and a combat network are conducted to verify the effectiveness and feasibility of the proposed MPBP. The results show that MPBP performs very well and is superior to the baseline methods.
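
    Meta-path features of the kind used in such frameworks can be obtained by chaining the adjacency matrices of the individual link types along the path; the tiny sketch below counts instances of a hypothetical gene-disease-gene meta-path, and the matrices are made up for illustration.

```python
# Counting meta-path instances by chaining adjacency matrices of the individual
# link types: here a toy gene -> disease -> gene meta-path. Such counts would
# then feed a supervised link prediction model (e.g. a BP neural network).
import numpy as np

GD = np.array([[1, 0, 1],     # gene-disease links (3 genes x 3 diseases)
               [0, 1, 0],
               [1, 1, 0]])

# Number of gene-disease-gene meta-path instances between every gene pair:
GDG = GD @ GD.T
print(GDG)      # entry (i, j): shared diseases linking gene i and gene j
```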

  13. A comparative study on prediction methods for China's medium- and long-term coal demand

    International Nuclear Information System (INIS)

    Li, Bing-Bing; Liang, Qiao-Mei; Wang, Jin-Cheng

    2015-01-01

    Given the dominant position of coal in China's energy structure, and in order to ensure a safe and stable energy supply, it is essential to perform a scientific and effective prediction of China's medium- and long-term coal demand. Based on the historical data of coal consumption and related factors such as GDP (Gross domestic product), coal price, industrial structure, total population, energy structure, energy efficiency, coal production and urbanization rate from 1987 to 2012, this study compared the prediction performance of five types of models. These models include the VAR (vector autoregressive) model, the RBF (radial basis function) neural network model, the GA-DEM (genetic algorithm demand estimation model), the PSO-DEM (particle swarm optimization demand estimation model) and the IO (input–output) model. By comparing the results of the different models with the corresponding actual coal consumption, it is concluded that, for a testing period from 2006 to 2012, the PSO-DEM model gives the best predictions of China's total coal demand, with a MAPE (mean absolute percentage error) close to or below 2%. - Highlights: • The prediction performance of five methods for China's coal demand was compared. • Each model has acceptable prediction results, with MAPE below 5%. • The particle swarm optimization demand estimation model has the best forecast efficacy.

  14. A novel method for prediction of dynamic smiling expressions after orthodontic treatment: a case report.

    Science.gov (United States)

    Dai, Fanfan; Li, Yangjing; Chen, Gui; Chen, Si; Xu, Tianmin

    2016-02-01

    Smile esthetics has become increasingly important for orthodontic patients; thus, prediction of the post-treatment smile is necessary for a perfect treatment plan. In this study, with a combination of three-dimensional craniofacial data from cone beam computed tomography and a color-encoded structured light system, a novel method for smile prediction was proposed based on facial expression transfer, in which dynamic facial expression was interpreted as a matrix of facial depth changes. Data extracted from the pre-treatment smile expression record were applied to the post-treatment static model to realize expression transfer. Therefore, the smile esthetics of the patient after treatment could be evaluated in the pre-treatment planning procedure. The positive and negative mean values of error for prediction accuracy were 0.9 and -1.1 mm respectively, with a standard deviation of ±1.5 mm, which is clinically acceptable. Further studies will be conducted to reduce the prediction error from both the static and dynamic sides as well as to explore automatically combined prediction from the two sides.

  15. Prediction Method for the Complete Characteristic Curves of a Francis Pump-Turbine

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2018-02-01

    Full Text Available Complete characteristic curves of a pump-turbine are essential for simulating the hydraulic transients and designing pumped storage power plants but are often unavailable in the preliminary design stage. To solve this issue, a prediction method for the complete characteristics of a Francis pump-turbine was proposed. First, based on Euler equations and the velocity triangles at the runners, a mathematical model describing the complete characteristics of a Francis pump-turbine was derived. According to multiple sets of measured complete characteristic curves, explicit expressions for the characteristic parameters of characteristic operating point sets (COPs), as functions of a specific speed and guide vane opening, were then developed to determine the undetermined coefficients in the mathematical model. Ultimately, by combining the mathematical model with the regression analysis of COPs, the complete characteristic curves for an arbitrary specific speed were predicted. Moreover, a case study shows that the predicted characteristic curves are in good agreement with the measured data. The results obtained by 1D numerical simulation of the hydraulic transient process using the predicted characteristics deviate little from the measured characteristics. This method is effective and sufficient for a priori simulations before obtaining the measured characteristics and provides important support for the preliminary design of pumped storage power plants.

  16. A prediction method for the wax deposition rate based on a radial basis function neural network

    Directory of Open Access Journals (Sweden)

    Ying Xie

    2017-06-01

    Full Text Available The radial basis function neural network is a popular supervised learning tool based on machine learning technology. Since its high precision has been proven, the radial basis function neural network has been applied in many areas. The accumulation of deposited materials in a pipeline may lead to the need for increased pumping power, a decreased flow rate or even to the total blockage of the line, with losses of production and capital investment, so research on predicting the wax deposition rate is significant for the safe and economical operation of an oil pipeline. This paper adopts the radial basis function neural network to predict the wax deposition rate by considering four main influencing factors identified by the gray correlational analysis method: the pipe wall temperature gradient, the pipe wall wax crystal solubility coefficient, the pipe wall shear stress and the crude oil viscosity. MATLAB software is employed to establish the RBF neural network. Compared with the previous literature, favorable consistency exists between the predicted outcomes and the experimental results, with a relative error of 1.5%. It can be concluded that the prediction method for the wax deposition rate based on the RBF neural network is feasible.
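
    An RBF-network-style regressor can be approximated by Gaussian basis functions centred with k-means followed by a linear output layer, as sketched below. The synthetic inputs stand in for the four influencing factors; the number of centres, kernel width and data are assumptions, and this is not the paper's MATLAB implementation.

```python
# Sketch of an RBF-network-style regressor: k-means centres, Gaussian basis
# functions, and a linear (ridge) output layer. The four synthetic inputs stand
# in for temperature gradient, solubility coefficient, shear stress and viscosity.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.random((200, 4))                       # four influencing factors (scaled)
y = 2*X[:, 0] - X[:, 1] + 0.5*X[:, 2]*X[:, 3] + 0.05*rng.standard_normal(200)

centers = KMeans(n_clusters=15, n_init=10, random_state=0).fit(X).cluster_centers_
width = 0.5

def rbf_features(X):
    # Gaussian activation of each sample with respect to each centre.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width**2))

model = Ridge(alpha=1e-3).fit(rbf_features(X), y)
print("training R^2:", model.score(rbf_features(X), y))
```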

  17. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

    Full Text Available The prediction and risk classification of collapse is an important issue in the process of highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the number of indexes influencing collapse activity and extracted the nine main indexes affecting collapse activity as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employ post-earthquake collapse data collected during construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for the analysis. The results were analyzed using the back-substitution estimation method, showing high accuracy and no errors, and matching the prediction results of the uncertainty measure method. The results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
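
    The distance discriminant step assigns a new case to the risk class whose Mahalanobis distance is smallest. A minimal sketch is shown below; the two classes, the index values and the nine-dimensional toy samples are made up for illustration and do not reproduce the paper's data or entropy weighting.

```python
# Minimal Mahalanobis distance discriminant: assign a sample to the risk class
# with the smallest Mahalanobis distance to that class's mean (toy data).
import numpy as np

rng = np.random.default_rng(1)
low  = rng.normal(0.0, 1.0, size=(40, 9))   # 9 discriminant indexes per sample
high = rng.normal(2.0, 1.0, size=(40, 9))
classes = {"low risk": low, "high risk": high}

def mahalanobis(x, data):
    mean = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    d = x - mean
    return float(d @ cov_inv @ d)

x_new = rng.normal(1.8, 1.0, size=9)        # a new case to classify
print(min(classes, key=lambda c: mahalanobis(x_new, classes[c])))
```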

  18. Evaluation of two methods of predicting MLC leaf positions using EPID measurements

    International Nuclear Information System (INIS)

    Parent, Laure; Seco, Joao; Evans, Phil M.; Dance, David R.; Fielding, Andrew

    2006-01-01

    In intensity modulated radiation treatments (IMRT), the position of the field edges and the modulation within the beam are often achieved with a multileaf collimator (MLC). During the MLC calibration process, due to the finite accuracy of leaf position measurements, a systematic error may be introduced to leaf positions. Thereafter leaf positions of the MLC depend on the systematic error introduced on each leaf during MLC calibration and on the accuracy of the leaf position control system (random errors). This study presents and evaluates two methods to predict the systematic errors on the leaf positions introduced during the MLC calibration. The two presented methods are based on a series of electronic portal imaging device (EPID) measurements. A comparison with film measurements showed that the EPID could be used to measure leaf positions without introducing any bias. The first method, referred to as the 'central leaf method', is based on the method currently used at this center for MLC leaf calibration. It mimics the manner in which leaf calibration parameters are specified in the MLC control system and consequently is also used by other centers. The second method, a new method proposed by the authors and referred to as the 'individual leaf method', involves the measurement of two positions for each leaf (-5 and +15 cm) and the interpolation and extrapolation from these two points to any other given position. The central leaf method and the individual leaf method predicted leaf positions at prescribed positions of -11, 0, 5, and 10 cm within 2.3 and 1.0 mm, respectively, with a standard deviation (SD) of 0.3 and 0.2 mm, respectively. The individual leaf method provided a better prediction of the leaf positions than the central leaf method. Reproducibility tests for leaf positions of -5 and +15 cm were performed. The reproducibility was within 0.4 mm on the same day and 0.4 mm six weeks later (1 SD). Measurements at gantry angles of 0 deg., 90 deg., and 270 deg

  19. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  20. Predicting metabolic syndrome using decision tree and support vector machine methods.

    Science.gov (United States)

    Karimi-Alavijeh, Farzaneh; Jalili, Saeed; Sadeghi, Masoumeh

    2016-05-01

    Metabolic syndrome, which underlies the increased prevalence of cardiovascular disease and Type 2 diabetes, is considered as a group of metabolic abnormalities including central obesity, hypertriglyceridemia, glucose intolerance, hypertension, and dyslipidemia. Recently, artificial intelligence based health-care systems have been highly regarded because of their success in diagnosis, prediction, and choice of treatment. This study employs machine learning techniques to predict metabolic syndrome. This study aims to employ the decision tree and support vector machine (SVM) to predict the 7-year incidence of metabolic syndrome. This research is a practical study in which data from 2107 participants of the Isfahan Cohort Study have been utilized. The subjects without metabolic syndrome according to the ATPIII criteria were selected. The features that have been used in this data set include: gender, age, weight, body mass index, waist circumference, waist-to-hip ratio, hip circumference, physical activity, smoking, hypertension, antihypertensive medication use, systolic blood pressure (BP), diastolic BP, fasting blood sugar, 2-hour blood glucose, triglycerides (TGs), total cholesterol, low-density lipoprotein, high density lipoprotein-cholesterol, mean corpuscular volume, and mean corpuscular hemoglobin. Metabolic syndrome was diagnosed based on the ATPIII criteria, and the two methods of decision tree and SVM were selected to predict metabolic syndrome. The criteria of sensitivity, specificity and accuracy were used for validation. The SVM and decision tree methods were examined according to the criteria of sensitivity, specificity and accuracy. Sensitivity, specificity and accuracy were 0.774 (0.758), 0.74 (0.72) and 0.757 (0.739) in the SVM (decision tree) method. The results show that the SVM method is more efficient than the decision tree in terms of sensitivity, specificity and accuracy. The results of the decision tree method show that the TG is the most important feature in predicting metabolic syndrome. According

  1. A rapid colorimetric method for predicting the storage stability of middle distillate fuels

    Energy Technology Data Exchange (ETDEWEB)

    Marshman, S.J. [Defense Research Agency, Surrey (United Kingdom)

    1995-05-01

    Present methods used to predict the storage stability of distillate fuels such as ASTM D2274, ASTM D4625, DEF STAN 05-50 Method 40 and in-house methods are very time consuming, taking a minimum of 16 hours. In addition, some of these methods under- or over-predict the storage stability of the test fuel. A rapid colorimetric test for identifying cracked, straight run or hydrofined fuels was reported at the previous Conference. Further work has shown that while a visual appraisal is acceptable for refinery-fresh fuels, colour development may be masked by other coloured compounds in older fuels. Use of a spectrometric finish to the method has extended the scope of the method to include older fuels. The test can be correlated with total sediment from ASTM D4625 (13 weeks at 43°C) over a sediment range of 0-60 mg/L. A correlation of 0.94 was obtained for 40 fuels.

  2. Prediction of high pressure vapor-liquid equilibria with mixing rule using ASOG group contribution method

    Energy Technology Data Exchange (ETDEWEB)

    Tochigi, K.; Kojima, K.; Kurihara, K.

    1985-02-01

    To develop a widely applicable method for predicting high-pressure vapor-liquid equilibria by the equation of state, a mixing rule is proposed in which the mixture energy parameter α of the Soave-Redlich-Kwong, Peng-Robinson, and Martin cubic equations of state is expressed by using the ASOG group contribution method. The group pair parameters are then determined for 14 group pairs constituted by six groups, i.e. CH4, CH3, CH2, N2, H2, and CO2 groups. By using the group pair parameters determined, high-pressure vapor-liquid equilibria are predicted with good accuracy for binary and ternary systems constituted by n-paraffins, nitrogen, hydrogen, and carbon dioxide in the temperature range of 100-450 K.

  3. Prediction of surface tension of binary mixtures with the parachor method

    Directory of Open Access Journals (Sweden)

    Němec Tomáš

    2015-01-01

    Full Text Available The parachor method for the estimation of the surface tension of binary mixtures is modified by considering temperature-dependent values of the parachor parameters. The temperature dependence is calculated by a least-squares fit of pure-solvent surface tension data to the binary parachor equation, utilizing the Peng-Robinson equation of state for the calculation of equilibrium densities. Very good agreement between experimental binary surface tension data and the predictions of the modified parachor method is found for the mixtures of carbon dioxide with butane, benzene, and cyclohexane, respectively. The surface tension is also predicted for three refrigerant mixtures, i.e. propane, isobutane, and chlorodifluoromethane, each with carbon dioxide.
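
    The underlying relation is the Macleod-Sugden parachor correlation, in which the mixture surface tension follows from the component parachors and the equilibrium phase compositions and molar densities. The snippet below shows the generic binary form with placeholder numbers; the paper's temperature-dependent parachor fit and Peng-Robinson density calculation are not reproduced.

```python
# Generic Macleod-Sugden parachor estimate for a binary mixture:
#   sigma**(1/4) = sum_i P_i * (x_i * rho_L - y_i * rho_V)
# with molar densities in mol/cm3 and sigma in mN/m. All numbers below are
# placeholders; the temperature-dependent parachor fit of the paper is omitted.
def mixture_surface_tension(parachors, x_liq, y_vap, rho_liq, rho_vap):
    s = sum(P * (x * rho_liq - y * rho_vap)
            for P, x, y in zip(parachors, x_liq, y_vap))
    return max(s, 0.0) ** 4

sigma = mixture_surface_tension(parachors=[49.0, 190.0],   # placeholder parachors
                                x_liq=[0.3, 0.7], y_vap=[0.9, 0.1],
                                rho_liq=0.0105, rho_vap=0.0007)
print(round(sigma, 2), "mN/m")
```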

  4. Yeast prions and human prion-like proteins: sequence features and prediction methods.

    Science.gov (United States)

    Cascarina, Sean M; Ross, Eric D

    2014-06-01

    Prions are self-propagating infectious protein isoforms. A growing number of prions have been identified in yeast, each resulting from the conversion of soluble proteins into an insoluble amyloid form. These yeast prions have served as a powerful model system for studying the causes and consequences of prion aggregation. Remarkably, a number of human proteins containing prion-like domains, defined as domains with compositional similarity to yeast prion domains, have recently been linked to various human degenerative diseases, including amyotrophic lateral sclerosis. This suggests that the lessons learned from yeast prions may help in understanding these human diseases. In this review, we examine what has been learned about the amino acid sequence basis for prion aggregation in yeast, and how this information has been used to develop methods to predict aggregation propensity. We then discuss how this information is being applied to understand human disease, and the challenges involved in applying yeast prediction methods to higher organisms.

  5. Electronic structure prediction via data-mining the empirical pseudopotential method

    Energy Technology Data Exchange (ETDEWEB)

    Zenasni, H; Aourag, H [LEPM, URMER, Departement of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Broderick, S R; Rajan, K [Department of Materials Science and Engineering, Iowa State University, Ames, Iowa 50011-2230 (United States)

    2010-01-15

    We introduce a new approach for accelerating the calculation of the electronic structure of new materials by utilizing the empirical pseudopotential method combined with data mining tools. Combining data mining with the empirical pseudopotential method allows us to convert an empirical approach into a predictive approach. Here we consider tetrahedrally bonded III-V Bi semiconductors, and through the prediction of form factors based on basic elemental properties we can model the band structure and charge density for these semiconductors, for which limited results exist. This work represents a unique approach to modeling the electronic structure of a material, which may be used to identify new promising semiconductors, and is one of the few efforts utilizing data mining at an electronic level. (Abstract Copyright [2010], Wiley Periodicals, Inc.)

  6. Analysis Of The Method Of Predictive Control Applicable To Active Magnetic Suspension Systems Of Aircraft Engines

    Directory of Open Access Journals (Sweden)

    Kurnyta-Mazurek Paulina

    2015-12-01

    Full Text Available Conventional controllers are usually synthesized on the basis of already known parameters associated with the model developed for the object to be controlled. However, it sometimes proves extremely difficult or even infeasible to find these parameters, in particular when they are subject to change during the exploitation lifetime. If so, much more sophisticated control methods have to be applied, e.g. the method of predictive control. Thus, the paper deals with the application of the predictive control approach to follow-up tracking of an active magnetic suspension; the mathematical and simulation models for such a control system are presented, together with preliminary results from simulation investigations of the control system in question.

  7. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation
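
    The splitting idea can be illustrated with the generic ADMM iteration for a box-constrained quadratic program, which alternates an equality-constrained quadratic solve with a projection onto the input limits. The sketch below uses a dense Cholesky solve in place of the structure-exploiting Riccati step of the paper, and the problem data are random placeholders.

```python
# Generic ADMM iteration for a box-constrained QP,
#   min 0.5 x'Px + q'x   s.t.  lo <= x <= hi,
# illustrating the splitting used for input-constrained MPC. A dense linear
# solve stands in for the structure-exploiting Riccati step of the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)              # positive definite Hessian
q = rng.standard_normal(n)
lo, hi = -0.5, 0.5                       # input limits

rho = 1.0
x = z = u = np.zeros(n)
K = np.linalg.cholesky(P + rho * np.eye(n))
for _ in range(200):
    rhs = -q + rho * (z - u)             # x-update: equality-constrained QP
    x = np.linalg.solve(K.T, np.linalg.solve(K, rhs))
    z = np.clip(x + u, lo, hi)           # z-update: projection onto the box
    u = u + x - z                        # dual update

print(np.round(z, 3))                    # constrained solution
```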

  8. A hybrid method of prediction of the void fraction during depressurization of diabatic systems

    International Nuclear Information System (INIS)

    Inayatullah, G.; Nicoll, W.B.; Hancox, W.T.

    1977-01-01

    The variation in vapour volumetric fraction during transient pressure, flow and power is of considerable importance in water-cooled nuclear power-reactor safety analysis. The commonly adopted procedure to predict the transient void is to solve the conservation equations using finite differences. This present method is intermediate between numerical and analytic, hence 'hybrid'. Space and time are divided into discrete intervals. Their size, however, is dictated by the imposed heat flux and pressure variations, and not by truncation error, stability or convergence, because within an interval, the solutions applied are analytic. The relatively simple hybrid method presented here can predict the void distribution in a variety of transient, diabatic, two-phase flows with simplicity, accuracy and speed. (Auth.)

  9. Evaluation of the constant pressure panel method (CPM) for unsteady air loads prediction

    Science.gov (United States)

    Appa, Kari; Smith, Michael J. C.

    1988-01-01

    This paper evaluates the capability of the constant pressure panel method (CPM) code to predict unsteady aerodynamic pressures, lift and moment distributions, and generalized forces for general wing-body configurations in supersonic flow. Stability derivatives are computed and correlated for the X-29 and an Oblique Wing Research Aircraft, and a flutter analysis is carried out for a wing wind tunnel test example. Most results are shown to correlate well with test or published data. Although the emphasis of this paper is on evaluation, an improvement in the CPM code's handling of intersecting lifting surfaces is briefly discussed. An attractive feature of the CPM code is that it shares the basic data requirements and computational arrangements of the doublet lattice method. A unified code to predict unsteady subsonic or supersonic airloads is therefore possible.

  10. A Method of Calculating Functional Independence Measure at Discharge from Functional Independence Measure Effectiveness Predicted by Multiple Regression Analysis Has a High Degree of Predictive Accuracy.

    Science.gov (United States)

    Tokunaga, Makoto; Watanabe, Susumu; Sonoda, Shigeru

    2017-09-01

    Multiple linear regression analysis is often used to predict the outcome of stroke rehabilitation. However, the predictive accuracy may not be satisfactory. The objective of this study was to elucidate the predictive accuracy of a method of calculating motor Functional Independence Measure (mFIM) at discharge from mFIM effectiveness predicted by multiple regression analysis. The subjects were 505 patients with stroke who were hospitalized in a convalescent rehabilitation hospital. The formula "mFIM at discharge = mFIM effectiveness × (91 points - mFIM at admission) + mFIM at admission" was used. By including the predicted mFIM effectiveness obtained through multiple regression analysis in this formula, we obtained the predicted mFIM at discharge (A). We also used multiple regression analysis to directly predict mFIM at discharge (B). The correlation between the predicted and the measured values of mFIM at discharge was compared between A and B. The correlation coefficients were .916 for A and .878 for B. Calculating mFIM at discharge from mFIM effectiveness predicted by multiple regression analysis gave a higher degree of predictive accuracy than predicting mFIM at discharge directly. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
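    The reported formula is simple enough to show as a worked example; the admission score and predicted effectiveness below are illustrative values, not data from the study.

```python
# Worked example of the formula reported in the study: mFIM at discharge is
# recovered from a (regression-)predicted mFIM effectiveness.
def predicted_mfim_at_discharge(mfim_admission, predicted_effectiveness, max_mfim=91.0):
    # mFIM effectiveness = (mFIM gain) / (maximum possible gain)
    return predicted_effectiveness * (max_mfim - mfim_admission) + mfim_admission

# e.g. admission motor FIM of 40 points and a predicted effectiveness of 0.6
print(predicted_mfim_at_discharge(40.0, 0.6))  # 0.6 * (91 - 40) + 40 = 70.6
```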

  11. SOFTWARE EFFORT PREDICTION: AN EMPIRICAL EVALUATION OF METHODS TO TREAT MISSING VALUES WITH RAPIDMINER ®

    OpenAIRE

    OLGA FEDOTOVA; GLADYS CASTILLO; LEONOR TEIXEIRA; HELENA ALVELOS

    2011-01-01

    Missing values are a common problem in data analysis in all areas, and software engineering is no exception. In particular, missing data is a widespread phenomenon observed during the elaboration of effort prediction models (EPMs) required for budget, time and functionalities planning. The current work presents the results of a study carried out in a Portuguese medium-sized software development organization in order to obtain a formal method for EPMs elicitation in development processes. Thi...

  12. An assessment of prediction methods of CHF in tubes with a large experimental data bank

    International Nuclear Information System (INIS)

    Leung, L.K.H.; Groeneveld, D.C.

    1993-01-01

    An assessment of prediction methods of CHF in tubes has been carried out using an expanded CHF data bank at Chalk River Laboratories (CRL). It includes eight different CHF look-up tables (two AECL versions and six USSR (or Russian) versions) and three empirical correlations. These prediction methods were developed from relatively large data bases and therefore have a wide range of application. Some limitations, however, were imposed in this study, to avoid any invalid predictions due to extrapolation of these methods. Therefore, these comparisons are limited to the specific data base that is tailored to suit the range of an individual method. This has resulted in a different number of data used in each case. The comparison of predictions against the experimental data is based on the constant inlet-condition approach (i.e., the pressure, mass flux, inlet fluid temperature and tube geometry are the primary parameters). Overall, the AECL tables have the widest range of application. They are assessed with 21 771 data points and the root-mean-square error is only 8.3%. About 60% of these data were used in the development of the AECL tables. The best version of the USSR/Russian CHF table is valid for 13 300 data points with a root-mean-square error of 8.8%. The USSR/Russian table that has the widest range of application covers a total of 18 800 data points, but the error increases to 9.3%. The range of application for empirical correlations, however, is generally much narrower than that covered by the CHF tables. The number of data used to assess these correlations is therefore further limited. Among the tested correlations, the Becker and Persson correlation covers the least amount of data (only 7 499 data points), but has the best accuracy (with a root-mean-square error of 9.71%). 33 refs., 2 figs., 3 tabs

  13. Performance prediction of high Tc superconducting small antennas using a two-fluid-moment method model

    Science.gov (United States)

    Cook, G. G.; Khamas, S. K.; Kingsley, S. P.; Woods, R. C.

    1992-01-01

    The radar cross section and Q factors of electrically small dipole and loop antennas made with a YBCO high Tc superconductor are predicted using a two-fluid-moment method model, in order to determine the effects of finite conductivity on the performances of such antennas. The results compare the useful operating bandwidths of YBCO antennas exhibiting varying degrees of impurity with their copper counterparts at 77 K, showing a linear relationship between bandwidth and impurity level.

  14. Unbiased and non-supervised learning methods for disruption prediction at JET

    International Nuclear Information System (INIS)

    Murari, A.; Vega, J.; Ratta, G.A.; Vagliasindi, G.; Johnson, M.F.; Hong, S.H.

    2009-01-01

    The importance of predicting the occurrence of disruptions is going to increase significantly in the next generation of tokamak devices. The expected energy content of ITER plasmas, for example, is such that disruptions could have a significant detrimental impact on various parts of the device, ranging from erosion of plasma facing components to structural damage. Early detection of disruptions is therefore needed with ever-increasing urgency. In this paper, the results of a series of methods to predict disruptions at JET are reported. The main objective of the investigation is to determine how early before a disruption acceptable predictions can be made on the basis of the raw data, keeping the number of 'ad hoc' hypotheses to a minimum. Therefore, the chosen learning techniques have the common characteristic of requiring a minimum number of assumptions. Classification and Regression Trees (CART) is a supervised yet completely unbiased and nonlinear method, since it simply constructs the best classification tree by working directly on the input data. A series of unsupervised techniques, mainly K-means and hierarchical, have also been tested, to investigate to what extent they can autonomously distinguish between disruptive and non-disruptive groups of discharges. All these independent methods indicate that, in general, prediction with a success rate above 80% can be achieved not earlier than 180 ms before the disruption. The agreement between various completely independent methods increases the confidence in the results, which are also confirmed by a visual inspection of the data performed with pseudo Grand Tour algorithms.
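    As a rough illustration of the two families of learners mentioned (supervised CART and unsupervised k-means), the sketch below applies scikit-learn implementations to a synthetic feature matrix; the features, labels, and thresholds are invented stand-ins for the JET diagnostic signals.

```python
# Supervised CART classifier vs. unsupervised k-means on toy per-shot features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # e.g. locked-mode amplitude, density, ...
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)  # 1 = disruptive, 0 = safe (toy rule)

cart = DecisionTreeClassifier(max_depth=4).fit(X, y)
print("CART training accuracy:", cart.score(X, y))

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Check how well the unsupervised clusters line up with the disruptive label.
agreement = max(np.mean(km.labels_ == y), np.mean(km.labels_ != y))
print("k-means / label agreement:", agreement)
```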

  15. Detailed disc assembly temperature prediction: comparison between CFD and simplified engineering methods

    CSIR Research Space (South Africa)

    Snedden, Glen C

    2003-09-01

    Full Text Available Detailed disc assembly temperature prediction: comparison between CFD and simplified engineering methods (ISABE-2005-1130). Glen Snedden, Thomas Roos and Kavendra Naidoo, CSIR Defencetek. In order to calculate the life degradation of gas turbine disc assemblies, it is necessary to model the transient thermal and mechanical...

  16. Predictive Methods for Dense Polymer Networks: Combating Bias with Bio-Based Structures

    Science.gov (United States)

    2016-03-16

    Predictive Methods for Dense Polymer Networks: Combating Bias with Bio-Based Structures. Author: Andrew J. Guenthner. Distribution unlimited (PA Clearance 16152). Topics covered include architectural bias, a comparison of petroleum-based and bio-based chemical architectures, and continuing research on structure-property relationships...

  17. Long short-term memory neural network for air pollutant concentration predictions: Method development and evaluation

    International Nuclear Information System (INIS)

    Li, Xiang; Peng, Ling; Yao, Xiaojing; Cui, Shaolong; Hu, Yuan; You, Chengzeng; Chi, Tianhe

    2017-01-01

    Air pollutant concentration forecasting is an effective method of protecting public health by providing an early warning against harmful air pollutants. However, existing methods of air pollutant concentration prediction fail to effectively model long-term dependencies, and most neglect spatial correlations. In this paper, a novel long short-term memory neural network extended (LSTME) model that inherently considers spatiotemporal correlations is proposed for air pollutant concentration prediction. Long short-term memory (LSTM) layers were used to automatically extract inherent useful features from historical air pollutant data, and auxiliary data, including meteorological data and time stamp data, were merged into the proposed model to enhance the performance. Hourly PM 2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm) concentration data collected at 12 air quality monitoring stations in Beijing City from Jan/01/2014 to May/28/2016 were used to validate the effectiveness of the proposed LSTME model. Experiments were performed using the spatiotemporal deep learning (STDL) model, the time delay neural network (TDNN) model, the autoregressive moving average (ARMA) model, the support vector regression (SVR) model, and the traditional LSTM NN model, and a comparison of the results demonstrated that the LSTME model is superior to the other statistics-based models. Additionally, the use of auxiliary data improved model performance. For the one-hour prediction tasks, the proposed model performed well and exhibited a mean absolute percentage error (MAPE) of 11.93%. In addition, we conducted multiscale predictions over different time spans and achieved satisfactory performance, even for 13–24 h prediction tasks (MAPE = 31.47%). - Highlights: • Regional air pollutant concentration shows an obvious spatiotemporal correlation. • Our prediction model presents superior performance. • Climate data and metadata can significantly
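    A minimal sketch of an LSTM regressor of the kind described, written in PyTorch with invented shapes and random data; it is not the authors' LSTME architecture, and the auxiliary meteorological and time-stamp inputs are simply assumed to be concatenated with the pollutant features at each time step.

```python
# Toy LSTM regressor for next-hour pollutant concentration from 24-hour windows.
import torch
import torch.nn as nn

class PollutionLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # next-hour concentration

    def forward(self, x):                     # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # use the last hidden state

model = PollutionLSTM(n_features=6)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy batch: 32 windows of 24 hourly steps with 6 features each.
x = torch.randn(32, 24, 6)
y = torch.randn(32, 1)
for _ in range(5):                            # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(float(loss))
```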

  18. Prediction method for flow boiling heat transfer in a herringbone microfin tube

    Energy Technology Data Exchange (ETDEWEB)

    Wellsandt, S; Vamling, L [Chalmers University of Technology, Gothenburg (Sweden). Department of Chemical Engineering and Environmental Science, Heat and Power Technology

    2005-09-01

    Based on experimental data for R134a, the present work deals with the development of a prediction method for heat transfer in herringbone microfin tubes. As is shown in earlier works, heat transfer coefficients for the investigated herringbone microfin tube tend to peak at lower vapour qualities than in helical microfin tubes. Correlations developed for other tube types fail to describe this behaviour. A hypothesis that the position of the peak is related to the point where the average film thickness becomes smaller than the fin height is tested and found to be consistent with observed behaviour. The proposed method accounts for this hypothesis and incorporates the well-known Steiner and Taborek correlation for the calculation of flow boiling heat transfer coefficients. The correlation is modified by introducing a surface enhancement factor and adjusting the two-phase multiplier. Experimental data for R134a are predicted with an average residual of 1.5% and a standard deviation of 21%. Tested against experimental data for mixtures R410A and R407C, the proposed method overpredicts experimental data by around 60%. An alternative adjustment of the two-phase multiplier, in order to better predict mixture data, is discussed. (author)

  19. Systems-based biological concordance and predictive reproducibility of gene set discovery methods in cardiovascular disease.

    Science.gov (United States)

    Azuaje, Francisco; Zheng, Huiru; Camargo, Anyela; Wang, Haiying

    2011-08-01

    The discovery of novel disease biomarkers is a crucial challenge for translational bioinformatics. Demonstration of both their classification power and reproducibility across independent datasets are essential requirements to assess their potential clinical relevance. Small datasets and multiplicity of putative biomarker sets may explain lack of predictive reproducibility. Studies based on pathway-driven discovery approaches have suggested that, despite such discrepancies, the resulting putative biomarkers tend to be implicated in common biological processes. Investigations of this problem have been mainly focused on datasets derived from cancer research. We investigated the predictive and functional concordance of five methods for discovering putative biomarkers in four independently-generated datasets from the cardiovascular disease domain. A diversity of biosignatures was identified by the different methods. However, we found strong biological process concordance between them, especially in the case of methods based on gene set analysis. With a few exceptions, we observed lack of classification reproducibility using independent datasets. Partial overlaps between our putative sets of biomarkers and the primary studies exist. Despite the observed limitations, pathway-driven or gene set analysis can predict potentially novel biomarkers and can jointly point to biomedically-relevant underlying molecular mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. Prediction system of hydroponic plant growth and development using algorithm Fuzzy Mamdani method

    Science.gov (United States)

    Sudana, I. Made; Purnawirawan, Okta; Arief, Ulfa Mediaty

    2017-03-01

    Hydroponics is a method of farming without soil. One of the hydroponic plants is watercress (Nasturtium officinale). The development and growth of hydroponic watercress are influenced by nutrient levels, acidity and temperature. These independent variables can be used as the system inputs to predict the level of plant growth and development. The prediction system uses the Mamdani fuzzy inference method. The system was built with the Fuzzy Inference System (FIS) functions of the Fuzzy Logic Toolbox (FLT) in MATLAB R2007b. An FIS is a computing system that works on the principle of fuzzy reasoning, which is similar to human reasoning. Basically, an FIS consists of four units: a fuzzification unit, a fuzzy reasoning unit, a knowledge base unit and a defuzzification unit. The effect of the independent variables on plant growth and development is visualized with the three-dimensional FIS output surface, and the prediction system is evaluated statistically using the multiple linear regression method, including multiple linear regression analysis, t tests, F tests, the coefficient of determination and predictor contributions, calculated with the SPSS (Statistical Product and Service Solutions) software.
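    A bare-bones numerical sketch of Mamdani inference (fuzzification, min-based rule firing, max aggregation, centroid defuzzification); the membership functions, rules, and variable ranges are illustrative assumptions, not the FIS the authors built in MATLAB.

```python
# Minimal Mamdani fuzzy inference sketch with two inputs and one output.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet a and c and peak b (a < b < c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Output universe: a 0-100 "growth index" with two fuzzy sets.
growth = np.linspace(0.0, 100.0, 1001)
growth_poor = tri(growth, 0.0, 25.0, 50.0)
growth_good = tri(growth, 50.0, 75.0, 100.0)

def predict_growth(nutrient_ppm, ph):
    # Fuzzification of the crisp inputs (ranges are assumptions).
    nutrient_low  = tri(nutrient_ppm, 0.0, 200.0, 700.0)
    nutrient_high = tri(nutrient_ppm, 500.0, 1000.0, 1500.0)
    ph_ok  = tri(ph, 5.5, 6.5, 7.5)
    ph_bad = 1.0 - ph_ok

    # Mamdani rules: firing strength = min of antecedents, clip the consequent.
    r1 = min(nutrient_high, ph_ok)   # IF nutrients high AND pH ok  THEN growth good
    r2 = max(nutrient_low, ph_bad)   # IF nutrients low  OR pH bad  THEN growth poor
    aggregated = np.maximum(np.minimum(r1, growth_good),
                            np.minimum(r2, growth_poor))

    # Centroid defuzzification (epsilon guards against an empty aggregation).
    return float(np.sum(growth * aggregated) / (np.sum(aggregated) + 1e-12))

print(predict_growth(nutrient_ppm=900.0, ph=6.4))
```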

  1. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    Science.gov (United States)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists either of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotical error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
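    A small sketch of the prediction-correction idea for a time-varying quadratic objective, using a backward finite difference of the gradient for the prediction step (in the spirit of the approximate-tracking variants) followed by a few gradient correction steps; the objective and parameters are illustrative.

```python
# Prediction-correction tracking of f(x; t) = 0.5 * ||x - r(t)||^2, whose
# optimizer is the moving target r(t).
import numpy as np

def r(t):                                      # moving optimum (illustrative)
    return np.array([np.cos(t), np.sin(t)])

def grad(x, t):                                # gradient of f(x; t)
    return x - r(t)

h = 0.1                                        # sampling interval
steps, alpha, n_corr = 100, 0.5, 3
x = np.zeros(2)
g_prev = grad(x, 0.0)

for k in range(1, steps + 1):
    t = k * h
    # Prediction: Hessian is the identity here, and the time derivative of the
    # gradient is approximated by a backward finite difference.
    dt_grad = (grad(x, t) - g_prev) / h
    x = x - h * dt_grad                        # x_pred = x - H^{-1} * h * d/dt grad
    # Correction: a few gradient steps on the newly sampled objective.
    for _ in range(n_corr):
        x = x - alpha * grad(x, t)
    g_prev = grad(x, t)

print("tracking error:", np.linalg.norm(x - r(steps * h)))
```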

  2. An improved method for predicting the evolution of the characteristic parameters of an information system

    Science.gov (United States)

    Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.

    2018-03-01

    The article proposes a forecasting method that, given values of entropy and of the type I and type II error levels, determines the allowable horizon for forecasting the development of the characteristic parameters of a complex information system. The main feature of the method is that changes in the characteristic parameters of the information system are expressed as increments of its entropy ratios. When a predetermined value of the prediction error ratio, that is, of the system entropy, is reached, the characteristic parameters of the system and the prediction depth in time are estimated. The resulting values of the characteristics are optimal, since at that moment the system possesses the best entropy ratio as a measure of the degree of organization and orderliness of its structure. To construct a method for estimating the prediction depth, it is expedient to use the principle of maximum entropy.

  3. Recursive prediction error methods for online estimation in nonlinear state-space models

    Directory of Open Access Journals (Sweden)

    Dag Ljungquist

    1994-04-01

    Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.

  4. A Predictive-Control-Based Over-Modulation Method for Conventional Matrix Converters

    DEFF Research Database (Denmark)

    Zhang, Guanguan; Yang, Jian; Sun, Yao

    2018-01-01

    To increase the voltage transfer ratio of the matrix converter and improve the input/output current performance simultaneously, an over-modulation method based on predictive control is proposed in this paper, where the weighting factor is selected by an automatic adjusting mechanism, which is able...... to further enhance the system performance promptly. This method has several advantages: the maximum voltage transfer ratio reaches 0.987 in the experiments, the total harmonic distortion of the input and output currents is reduced, and the losses in the matrix converter are decreased. Moreover, the specific...

  5. Review of the status of reactor physics predictive methods for burnable poisons in CAGRs

    International Nuclear Information System (INIS)

    Edens, D.J.; McEllin, M.

    1983-01-01

    An essential component of the design of Commercial Advanced Gas Cooled Reactor fuel necessary to achieve higher discharge irradiations is the incorporation of burnable poisons. The poisons enable the more highly enriched fuel required to reach higher irradiation to be loaded without increasing the peak channel power. The optimum choice of fuel enrichment and poison loading will be made using reactor physics predictive methods developed by Berkeley Nuclear Laboratories. These methods and the evidence available to support them from theoretical comparisons, zero energy experiments, WAGR irradiations, and measurements on operating CAGRs are described. (author)

  6. Review of the status of reactor physics predictive methods for burnable poisons in CAGRs

    International Nuclear Information System (INIS)

    Edens, D.J.; McEllin, M.

    1983-01-01

    An essential component of the design of Commercial Advanced Gas Cooled Reactor fuel necessary to achieve higher discharge irradiations is the incorporation of burnable poisons. The poisons enable the more highly enriched fuel required to reach higher irradiation to be loaded without increasing the peak channel power. The optimum choice of fuel enrichment and poison loading will be made using reactor physics predictive methods developed by Berkeley Nuclear Laboratories. The paper describes these methods and the evidence available to support them from theoretical comparisons, zero energy experiments, WAGR irradiations, and measurements on operating CAGRs. (author)

  7. Long short-term memory neural network for air pollutant concentration predictions: Method development and evaluation.

    Science.gov (United States)

    Li, Xiang; Peng, Ling; Yao, Xiaojing; Cui, Shaolong; Hu, Yuan; You, Chengzeng; Chi, Tianhe

    2017-12-01

    Air pollutant concentration forecasting is an effective method of protecting public health by providing an early warning against harmful air pollutants. However, existing methods of air pollutant concentration prediction fail to effectively model long-term dependencies, and most neglect spatial correlations. In this paper, a novel long short-term memory neural network extended (LSTME) model that inherently considers spatiotemporal correlations is proposed for air pollutant concentration prediction. Long short-term memory (LSTM) layers were used to automatically extract inherent useful features from historical air pollutant data, and auxiliary data, including meteorological data and time stamp data, were merged into the proposed model to enhance the performance. Hourly PM 2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm) concentration data collected at 12 air quality monitoring stations in Beijing City from Jan/01/2014 to May/28/2016 were used to validate the effectiveness of the proposed LSTME model. Experiments were performed using the spatiotemporal deep learning (STDL) model, the time delay neural network (TDNN) model, the autoregressive moving average (ARMA) model, the support vector regression (SVR) model, and the traditional LSTM NN model, and a comparison of the results demonstrated that the LSTME model is superior to the other statistics-based models. Additionally, the use of auxiliary data improved model performance. For the one-hour prediction tasks, the proposed model performed well and exhibited a mean absolute percentage error (MAPE) of 11.93%. In addition, we conducted multiscale predictions over different time spans and achieved satisfactory performance, even for 13-24 h prediction tasks (MAPE = 31.47%). Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Predicting residue contacts using pragmatic correlated mutations method: reducing the false positives

    Directory of Open Access Journals (Sweden)

    Alexov Emil G

    2006-11-01

    Full Text Available Abstract Background Predicting residue contacts using the primary amino acid sequence alone is an important task that can guide 3D structure modeling and can verify the quality of the predicted 3D structures. The correlated mutations (CM) method serves as the most promising approach, and it has been used to predict amino acid pairs that are distant in the primary sequence but form contacts in the native 3D structure of homologous proteins. Results Here we report a new implementation of the CM method with an added set of selection rules (filters). The parameters of the algorithm were optimized against fifteen high resolution crystal structures with an optimization criterion that maximized the confidence of the predictions. The optimization resulted in a true positive ratio (TPR) of 0.08 for the CM without filters and a TPR of 0.14 for the CM with filters. The protocol was further benchmarked against 65 high resolution structures that were not included in the optimization test. The benchmarking resulted in a TPR of 0.07 for the CM without filters and a TPR of 0.09 for the CM with filters. Conclusion Thus, the inclusion of selection rules resulted in an overall improvement of 30%. In addition, the pair-wise comparison of TPR for each protein without and with filters resulted in an average improvement of 1.7. The methodology was implemented in a web server http://www.ces.clemson.edu/compbio/recon that is freely available to the public. The purpose of this implementation is to provide 3D structure predictors with a tool that can help with ranking alternative models by satisfying the largest number of predicted contacts, and that can provide a confidence score for contacts in cases where the structure is known.
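    A toy sketch of a correlated-mutations style score, here computed as the mutual information between alignment columns together with a single sequence-separation filter; the alignment, the filter threshold, and the scoring choice are illustrative and do not reproduce the authors' optimized protocol.

```python
# Mutual information between MSA columns as a simple correlated-mutations score.
import numpy as np
from collections import Counter

msa = ["ACDEFGH",
       "ACDDFGH",
       "TCEDWGH",
       "TCEEWGH",
       "ACDDFGH"]           # rows = homologous sequences, columns = positions

def column(i):
    return [seq[i] for seq in msa]

def mutual_information(i, j):
    n = len(msa)
    pi = Counter(column(i))
    pj = Counter(column(j))
    pij = Counter(zip(column(i), column(j)))
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * np.log(p_ab * n * n / (pi[a] * pj[b]))
    return mi

min_separation = 3           # filter: skip residue pairs close in sequence
L = len(msa[0])
scores = [((i, j), mutual_information(i, j))
          for i in range(L) for j in range(i + min_separation, L)]
for pair, s in sorted(scores, key=lambda t: -t[1])[:5]:
    print(pair, round(s, 3))
```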

  9. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method.

    Science.gov (United States)

    Fogel, Allison R; Rosenberg, Jason C; Lehman, Frank M; Kuperberg, Gina R; Patel, Aniruddh D

    2015-01-01

    Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such 'authentic cadence' melody was matched to a 'non-cadential' (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in

  10. A Method to Predict the Structure and Stability of RNA/RNA Complexes.

    Science.gov (United States)

    Xu, Xiaojun; Chen, Shi-Jie

    2016-01-01

    RNA/RNA interactions are essential for genomic RNA dimerization and regulation of gene expression. Intermolecular loop-loop base pairing is a widespread and functionally important tertiary structure motif in RNA machinery. However, computational prediction of intermolecular loop-loop base pairing is challenged by the entropy and free energy calculation due to the conformational constraint and the intermolecular interactions. In this chapter, we describe a recently developed statistical mechanics-based method for the prediction of RNA/RNA complex structures and stabilities. The method is based on the virtual bond RNA folding model (Vfold). The main emphasis in the method is placed on the evaluation of the entropy and free energy for the loops, especially tertiary kissing loops. The method also uses recursive partition function calculations and two-step screening algorithm for large, complicated structures of RNA/RNA complexes. As case studies, we use the HIV-1 Mal dimer and the siRNA/HIV-1 mutant (T4) to illustrate the method.

  11. Feature selection for splice site prediction: A new method using EDA-based feature ranking

    Directory of Open Access Journals (Sweden)

    Rouzé Pierre

    2004-05-01

    Full Text Available Abstract Background The identification of relevant biological features in large and complex datasets is an important step towards gaining insight into the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and faster classification. Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data. Results In this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework of genetic algorithms. From the estimated distribution of the algorithm, a feature ranking is derived. Afterwards this ranking is used to iteratively discard features. We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do), this method provides a dynamical view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features.
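    The following sketch shows the core of a univariate EDA (UMDA) used for feature-subset selection, with features ranked afterwards by their estimated marginal selection probabilities; the fitness function is a toy stand-in for a cross-validated splice-site classifier, so it should be read as an illustration of the mechanism rather than of the paper's experiments.

```python
# UMDA-style EDA for feature subset selection, followed by a feature ranking
# derived from the estimated marginal probabilities.
import numpy as np

rng = np.random.default_rng(0)
n_features, relevant = 20, {0, 3, 7, 12}     # toy ground truth

def fitness(mask):
    # Toy fitness: reward relevant features, penalize subset size (assumption).
    hits = sum(mask[i] for i in relevant)
    return hits - 0.05 * mask.sum()

pop_size, n_select, generations = 100, 30, 40
p = np.full(n_features, 0.5)                 # marginal selection probabilities

for _ in range(generations):
    pop = (rng.random((pop_size, n_features)) < p).astype(int)
    scores = np.array([fitness(ind) for ind in pop])
    best = pop[np.argsort(scores)[-n_select:]]   # truncation selection
    p = 0.5 * p + 0.5 * best.mean(axis=0)        # smoothed UMDA update
    p = np.clip(p, 0.05, 0.95)                   # keep some diversity

ranking = np.argsort(-p)                     # rank features by estimated probability
print("top-ranked features:", ranking[:6])
```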

  12. The steady performance prediction of propeller-rudder-bulb system based on potential iterative method

    International Nuclear Information System (INIS)

    Liu, Y B; Su, Y M; Ju, L; Huang, S L

    2012-01-01

    A new numerical method was developed for predicting the steady hydrodynamic performance of a propeller-rudder-bulb system. In the calculation, the rudder and bulb were treated as a whole, and the potential-based surface panel method was applied to both the propeller and the rudder-bulb system. The interaction between the propeller and the rudder-bulb was taken into account by velocity potential iteration, in which the influence of propeller rotation was considered through the average influence coefficient. In the influence coefficient computation, the singular value should be found and deleted. Numerical results showed that the presented method is effective for predicting the steady hydrodynamic performance of propeller-rudder systems and propeller-rudder-bulb systems. Compared with the induced-velocity iterative method, the presented method saves programming and calculation time. By changing its dimensions, the principal parameter affecting the energy-saving effect, the bulb size, was studied; the results show that the bulb on the rudder has an optimal size at the design advance coefficient.

  13. An Improved Method of Predicting Extinction Coefficients for the Determination of Protein Concentration.

    Science.gov (United States)

    Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W

    2017-01-01

    Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically the extinction coefficient of proteins is experimentally determined by measuring a solution absorbance then experimentally determining the concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four amino acid chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins for a wide range of proteins with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. We propose the use of a
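    A worked sketch of the calculation chain described: a calculated extinction coefficient from chromophore counts, the reported 1.05 scaling for the native protein, and Beer-Lambert conversion to concentration. The per-chromophore coefficients used below are commonly cited literature values and the protein numbers are invented, so treat this as an assumption-laden illustration rather than the study's exact values.

```python
# Calculated extinction coefficient near 280 nm, scaled by 1.05 for the native
# protein, then used with Beer-Lambert to convert absorbance to concentration.
def calculated_extinction(n_trp, n_tyr, n_cystine,
                          eps_trp=5500.0, eps_tyr=1490.0, eps_ss=125.0):
    """Molar extinction coefficient of the unfolded protein (M^-1 cm^-1)."""
    return n_trp * eps_trp + n_tyr * eps_tyr + n_cystine * eps_ss

def concentration_mg_per_ml(a280, mw_da, eps_native, path_cm=1.0):
    """Beer-Lambert: c = A / (eps * l), converted from molar to mg/mL."""
    return a280 / (eps_native * path_cm) * mw_da

eps_unfolded = calculated_extinction(n_trp=6, n_tyr=20, n_cystine=8)
eps_native = 1.05 * eps_unfolded      # predicted coefficient for the folded protein
print(concentration_mg_per_ml(a280=0.85, mw_da=48000.0, eps_native=eps_native))
```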

  14. COMPARISON OF TREND PROJECTION METHODS AND BACKPROPAGATION PROJECTIONS METHODS TREND IN PREDICTING THE NUMBER OF VICTIMS DIED IN TRAFFIC ACCIDENT IN TIMOR TENGAH REGENCY, NUSA TENGGARA

    Directory of Open Access Journals (Sweden)

    Aleksius Madu

    2016-10-01

    Full Text Available The purpose of this study is to predict the number of traffic accident victims who died in Timor Tengah Regency with the Trend Projection method and the Backpropagation method, to compare the two methods on the basis of their error rates, and to predict the number of traffic accident victims in Timor Tengah Regency for the coming years. This research was conducted in Timor Tengah Regency, and the data used in this study were obtained from the Police Unit in Timor Tengah Regency. The data cover the number of traffic accidents in Timor Tengah Regency from 2000 to 2013 and were analysed quantitatively with the Trend Projection and Backpropagation methods. For the Trend Projection method, the best model obtained is the quadratic trend model Yk = 39.786 + 3.297X + 0.13X². For the Backpropagation method, the optimum network obtained consists of 2 inputs, 3 hidden units, and 1 output. Based on the error rates obtained, the Backpropagation method is better than the Trend Projection method, which means that Backpropagation is the best method to predict the number of traffic accident victims in Timor Tengah Regency. The predicted numbers of traffic accident victims for the next 5 years (2014-2018) are 106, 115, 115, 119 and 120 persons, respectively.   Keywords: Trend Projection, Backpropagation, Predicting.
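    The reported quadratic trend model can be evaluated directly; how the study coded the time index X is not stated in this record, so counting X from 1 in the year 2000 is an assumption, and the study's own 2014-2018 forecasts were produced with the better-performing backpropagation network rather than this trend.

```python
# Evaluating the reported quadratic trend model Yk = 39.786 + 3.297*X + 0.13*X^2.
def quadratic_trend(x):
    return 39.786 + 3.297 * x + 0.13 * x ** 2

# Assumed time coding: 2000 -> X = 1, so 2014 -> X = 15 (an assumption).
for year, x in zip(range(2014, 2019), range(15, 20)):
    print(year, round(quadratic_trend(x)))
```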

  15. Variable importance and prediction methods for longitudinal problems with missing variables.

    Directory of Open Access Journals (Sweden)

    Iván Díaz

    Full Text Available We present prediction and variable importance (VIM) methods for longitudinal data sets containing continuous and binary exposures subject to missingness. We demonstrate the use of these methods for prognosis of medical outcomes of severe trauma patients, a field in which current medical practice involves rules of thumb and scoring methods that only use a few variables and ignore the dynamic and high-dimensional nature of trauma recovery. Well-principled prediction and VIM methods can provide a tool to make care decisions informed by the patient's high-dimensional physiological and clinical history. Our VIM parameters are analogous to slope coefficients in adjusted regressions, but are not dependent on a specific statistical model, nor do they require a certain functional form of the prediction regression to be estimated. In addition, they can be causally interpreted under causal and statistical assumptions as the expected outcome under time-specific clinical interventions, related to changes in the mean of the outcome if each individual experiences a specified change in the variable (keeping the other variables in the model fixed). Better yet, the targeted MLE used is doubly robust and locally efficient. Because the proposed VIM does not constrain the prediction model fit, we use a very flexible ensemble learner (the SuperLearner), which returns a linear combination of a list of user-given algorithms. Not only is such a prediction algorithm intuitively appealing, it has theoretical justification as being asymptotically equivalent to the oracle selector. The results of the analysis show effects whose size and significance would not have been found using a parametric approach (such as stepwise regression or LASSO). In addition, the procedure is even more compelling as the predictor on which it is based showed significant improvements in cross-validated fit, for instance in the area under the curve (AUC) for a receiver operating characteristic (ROC) curve. Thus, given that 1 our VIM

  16. A control method for agricultural greenhouses heating based on computational fluid dynamics and energy prediction model

    International Nuclear Information System (INIS)

    Chen, Jiaoliao; Xu, Fang; Tan, Dapeng; Shen, Zheng; Zhang, Libin; Ai, Qinglin

    2015-01-01

    Highlights: • A novel control method for the heating of a greenhouse with SWSHPS is proposed. • CFD is employed to predict the priorities of FCU loops for thermal performance. • EPM acts as an on-line tool to predict the total energy demand of the greenhouse. • The CFD–EPM-based method can save energy and improve control accuracy. • The energy savings potential is between 8.7% and 15.1%. - Abstract: As energy for heating is one of the main production costs, many efforts have been made to reduce the energy consumption of agricultural greenhouses. Herein, a novel control method for greenhouse heating using computational fluid dynamics (CFD) and an energy prediction model (EPM) is proposed for energy savings and system performance. Based on the low-Reynolds-number k–ε turbulence principle, a CFD model of the heated greenhouse is developed, applying the discrete ordinates model for the radiative heat transfer and a porous medium approach for the plants, considering their sensible and latent heat exchanges. The CFD simulations have been validated and used to analyze the greenhouse thermal performance and the priority of fan coil unit (FCU) loops under various heating conditions. According to the heating efficiency and temperature uniformity, the priorities of each FCU loop can be predicted to generate a database of priorities for the control system. The EPM is built on the thermal balance and is used to predict and optimize the energy demand of the greenhouse online. Combining the priorities of FCU loops from offline CFD simulations, we have developed the CFD–EPM-based heating control system for a greenhouse with a surface water source heat pump system (SWSHPS). Compared with the conventional multi-zone independent control (CMIC) method, the energy savings potential is between 8.7% and 15.1%, and the control temperature deviation is decreased to between 0.1 °C and 0.6 °C in the investigated greenhouse. These results show the CFD–EPM-based method can improve system

  17. Predicting high risk births with contraceptive prevalence and contraceptive method-mix in an ecologic analysis

    Directory of Open Access Journals (Sweden)

    Jamie Perin

    2017-11-01

    Full Text Available Abstract Background Increased contraceptive use has been associated with a decrease in high parity births, births that occur close together in time, and births to very young or to older women. These types of births are also associated with high risk of under-five mortality. Previous studies have looked at the change in the level of contraception use and the average change in these types of high-risk births. We aim to predict the distribution of births in a specific country when there is a change in the level and method of modern contraception. Methods We used data from full birth histories and modern contraceptive use from 207 nationally representative Demographic and Health Surveys covering 71 countries to describe the distribution of births in each survey based on birth order, preceding birth space, and mother’s age at birth. We estimated the ecologic associations between the prevalence and method-mix of modern contraceptives and the proportion of births in each category. Hierarchical modelling was applied to these aggregated cross sectional proportions, so that random effects were estimated for countries with multiple surveys. We use these results to predict the change in type of births associated with scaling up modern contraception in three different scenarios. Results We observed marked differences between regions, in the absolute rates of contraception, the types of contraceptives in use, and in the distribution of type of birth. Contraceptive method-mix was a significant determinant of proportion of high-risk births, especially for birth spacing, but also for mother’s age and parity. Increased use of modern contraceptives is especially predictive of reduced parity and more births with longer preceding space. However, increased contraception alone is not associated with fewer births to women younger than 18 years or a decrease in short-spaced births. Conclusions Both the level and the type of contraception are important factors in

  18. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    Science.gov (United States)

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is

  19. A machine learning method for the prediction of receptor activation in the simulation of synapses.

    Directory of Open Access Journals (Sweden)

    Jesus Montes

    Full Text Available Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of

  20. A new wind power prediction method based on chaotic theory and Bernstein Neural Network

    International Nuclear Information System (INIS)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Fan, Xiaochao

    2016-01-01

    The accuracy of wind power prediction is important for assessing the security and economy of system operation when wind power is connected to the grid. However, multiple factors cause long delays and large errors in wind power prediction. Hence, efficient wind power forecasting approaches are still required for practical applications. In this paper, a new wind power forecasting method based on chaos theory and a Bernstein Neural Network (BNN) is proposed. Firstly, the largest Lyapunov exponent is computed to judge the chaotic behavior of the wind power series. Secondly, Phase Space Reconstruction (PSR) is used to reconstruct the phase space of the wind power series. Thirdly, the prediction model is constructed using Bernstein polynomials and a neural network. Finally, the weights and thresholds of the model are optimized by the Primal Dual State Transition Algorithm (PDSTA). Practical hourly wind power generation data from Xinjiang are used to test this forecaster. The proposed forecaster is compared with several current prominent research findings. Analytical results indicate that the forecasting error of PDSTA + BNN is 3.893% for 24 look-ahead hours, lower than the errors obtained with the other forecast methods discussed in this paper. The results of all case studies confirm the validity of the new forecast method. - Highlights: • The largest Lyapunov exponent is used to verify the chaotic behavior of the wind power series. • Phase Space Reconstruction is used to reconstruct the chaotic wind power series. • A new Bernstein Neural Network to predict wind power series is proposed. • The primal dual state transition algorithm is chosen as the training strategy of the BNN.
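    A short sketch of the phase space reconstruction step: delay embedding turns the scalar wind power series into state vectors that a downstream model (here unspecified) can map to one-step-ahead targets; the embedding dimension, delay, and series are illustrative choices.

```python
# Delay embedding (phase space reconstruction) of a scalar time series.
import numpy as np

def delay_embed(series, m=4, tau=3):
    """Return the matrix of delay vectors [x(t-(m-1)tau), ..., x(t-tau), x(t)]."""
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(m)])

t = np.arange(2000)
power = np.sin(0.05 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

X = delay_embed(power, m=4, tau=3)        # reconstructed state vectors
current_idx = (4 - 1) * 3                 # index of the newest sample in row 0
y = power[current_idx + 1:]               # one-step-ahead targets
X = X[:len(y)]                            # drop the final state with no target
print(X.shape, y.shape)                   # inputs/targets for a forecasting model
```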

  1. Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

    Science.gov (United States)

    Walsh, Colin G; Sharman, Kavya; Hripcsak, George

    2017-12-01

    Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration
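    A minimal sketch of one of the calibration methods compared, Platt scaling, implemented as a one-dimensional logistic regression fitted on held-out scores; the data are synthetic, standing in for the risk scores of the readmission model.

```python
# Platt scaling: fit sigmoid(a * score + b) on validation scores and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic "raw" risk scores that are discriminative but miscalibrated.
y_val = rng.binomial(1, 0.2, size=2000)
raw_val = 2.0 * y_val + rng.normal(size=2000)    # higher scores for positives

platt = LogisticRegression()
platt.fit(raw_val.reshape(-1, 1), y_val)

raw_new = np.array([[-1.0], [0.5], [2.0], [3.5]])
calibrated = platt.predict_proba(raw_new)[:, 1]
print(np.round(calibrated, 3))                   # calibrated readmission risks
```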

  2. Power Transformer Operating State Prediction Method Based on an LSTM Network

    Directory of Open Access Journals (Sweden)

    Hui Song

    2018-04-01

    Full Text Available The state of transformer equipment is usually manifested through a variety of information. The characteristic information changes with the type of equipment defect/fault, its location, its severity, and other factors. For transformer operating state prediction and fault warning, the key influencing factors of the transformer panorama information are analyzed. The degree of relative deterioration is used to characterize the deterioration of the transformer state. The membership relationship between the relative deterioration degree of each indicator and the transformer state is obtained through fuzzy processing. Through the long short-term memory (LSTM) network, the evolution of the transformer status is extracted, and a data-driven state prediction model is constructed to provide preliminary warning of a potential equipment fault. Through the LSTM network, the quantitative and qualitative indices are organically combined in order to capture the relationship between the characteristic parameters and the operating state of the transformer. The results of prediction cases on different time scales show that the proposed method can effectively predict the operating status of power transformers and accurately reflect their status.

  3. Prediction method of seismic residual deformation of caisson quay wall in liquefied foundation

    Science.gov (United States)

    Wang, Li-Yan; Liu, Han-Long; Jiang, Peng-Ming; Chen, Xiang-Xiang

    2011-03-01

    The multi-spring shear mechanism plastic model used in this paper is defined in strain space to simulate pore pressure generation and development in sands under cyclic loading and undrained conditions; the model can also simulate the rotation of principal stresses associated with the cyclic behavior of anisotropically consolidated sands. Seismic residual deformations of typical caisson quay walls under different engineering situations are analyzed in detail with the plastic model, and an index of liquefaction extent is then applied to describe the regularity of the seismic residual deformation of the caisson quay wall top under different engineering situations. Correlated prediction formulas are derived from regression analysis between the seismic residual deformation of the quay wall top and the extent of liquefaction in the relatively safe backfill sand site. Finally, the rationality and reliability of the prediction methods are validated against test results from a 120 g centrifuge shaking table, and the comparisons show that reliable seismic residual deformations of caisson quays can be predicted with appropriate prediction formulas and an appropriate index of liquefaction extent.

  4. Vibration Prediction Method of Electric Machines by using Experimental Transfer Function and Magnetostatic Finite Element Analysis

    International Nuclear Information System (INIS)

    Saito, A; Kuroishi, M; Nakai, H

    2016-01-01

    This paper concerns the noise and structural vibration caused by rotating electric machines. Special attention is given to the magnetic-force-induced vibration response of interior permanent magnet machines. In general, to accurately predict and control the vibration response caused by electric machines, it is essential to model not only the magnetic force induced by the fluctuation of the magnetic fields, but also the structural dynamic characteristics of the electric machines and the surrounding structural components. However, due to the complicated boundary conditions and material properties of the components, such as laminated magnetic cores and varnished windings, it has been a challenge to compute an accurate vibration response even after physical models of the machines are available. In this paper, we propose a highly accurate vibration prediction method that couples experimentally obtained discrete structural transfer functions and numerically obtained distributed magnetic forces. The proposed vibration synthesis methodology has been applied to predict the vibration responses of an interior permanent magnet machine. The results show that the predicted vibration response of the electric machine agrees very well with the measured vibration response for several load conditions over wide frequency ranges. (paper)
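    The coupling idea can be summarized as a frequency-domain synthesis: measured transfer functions from each force location to each response point are multiplied by the computed magnetic force spectra and summed. The sketch below uses random complex arrays purely to show the bookkeeping, not machine data.

```python
# Frequency-domain vibration synthesis: X_i(f) = sum_j H_ij(f) * F_j(f),
# with H from measured FRFs and F from magnetostatic FEA (both toy data here).
import numpy as np

n_freq, n_resp, n_force = 400, 3, 12
freqs = np.linspace(10.0, 4000.0, n_freq)

rng = np.random.default_rng(0)
phase = rng.uniform(0, 2 * np.pi, (n_freq, n_resp, n_force))
H = rng.normal(size=(n_freq, n_resp, n_force)) * np.exp(1j * phase)   # measured FRFs
F = rng.normal(size=(n_freq, n_force)) + 1j * rng.normal(size=(n_freq, n_force))

# Frequency-by-frequency matrix-vector product: X[f] = H[f] @ F[f].
X = np.einsum('fij,fj->fi', H, F)
vibration_level_db = 20 * np.log10(np.abs(X) + 1e-12)
print(vibration_level_db.shape)       # (n_freq, n_resp) synthesized response spectra
```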

  5. A novel prediction method of vibration and acoustic radiation for rectangular plate with particle dampers

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dongqiang; Wu, Chengjun [Xi'an Jiaotong University, Xi'an (China)

    2016-03-15

    Particle damping technology is widely used in mechanical, structural and civil engineering systems to reduce vibration and suppress noise because of its high efficiency, simplicity, easy implementation, low cost, and energy-saving character, without the need for any auxiliary power equipment. Research on particle damping theory has focused on the vibration response of particle-damped structures, but their acoustic radiation is rarely investigated. A feasible modeling method to predict both the vibration response and the acoustic radiation of particle-damped structures is therefore desirable for industrial practice. In this paper, a novel simulation method based on gas-particle multiphase flow theory, implemented in COMSOL Multiphysics, is developed to study the vibration and acoustic radiation characteristics of a cantilever rectangular plate with particle dampers (PDs). The frequency response functions and scattered far-field sound pressure level of the plate with and without PDs under forced vibration are predicted, and the predictions agree well with the experimental results. The results demonstrate that the added PDs have a significant effect on vibration damping and noise reduction for the primary structure. The work shows that the theoretical approach is valid and can provide important guidance for the low-noise optimization design of particle-damped structures; the model also serves as a useful reference for noise control of this kind of structure.

  6. Improved prediction of meat and bone meal metabolizable energy content for ducks through in vitro methods.

    Science.gov (United States)

    Garcia, R A; Phillips, J G; Adeola, O

    2012-08-01

    Apparent metabolizable energy (AME) of meat and bone meal (MBM) for poultry is highly variable, but impractical to measure routinely. Previous efforts at developing an in vitro method for predicting AME have had limited success. The present study uses data from a previous publication on the AME of 12 MBM samples, determined using 288 White Pekin ducks, as well as composition data on these samples. Here, we investigate the hypothesis that 2 noncompositional attributes of MBM, particle size and protease resistance, will have utility in improving predictions of AME based on in vitro measurements. Using the same MBM samples as the previous study, 2 measurements of particle size were recorded and protease resistance was determined using a modified pepsin digestibility assay. Analysis of the results using a stepwise construction of multiple linear regression models revealed that the measurements of particle size were useful in building models for AME, but the measure of protease resistance was not. Relatively simple (4-term) and complex (7-term) models for both AME and nitrogen-corrected AME were constructed, with R-squared values ranging from 0.959 to 0.996. The rather minor analytical effort required to conduct the measurements involved is discussed. Although the generality of the results is limited by the number of samples involved and the species used, they suggest that AME for poultry can be accurately predicted through simple and inexpensive in vitro methods.

  7. Reducing NIR prediction errors with nonlinear methods and large populations of intact compound feedstuffs

    International Nuclear Information System (INIS)

    Fernández-Ahumada, E; Gómez, A; Vallesquino, P; Guerrero, J E; Pérez-Marín, D; Garrido-Varo, A; Fearn, T

    2008-01-01

    According to the current demands of the authorities, the manufacturers and the consumers, controls and assessments of the feed compound manufacturing process have become a key concern. Among others, it must be assured that a given compound feed is well manufactured and labelled in terms of the ingredient composition. When near-infrared spectroscopy (NIRS) together with linear models were used for the prediction of the ingredient composition, the results were not always acceptable. Therefore, the performance of nonlinear methods has been investigated. Artificial neural networks and least squares support vector machines (LS-SVM) have been applied to a large (N = 20 320) and heterogeneous population of non-milled feed compounds for the NIR prediction of the inclusion percentage of wheat and sunflower meal, as representative of two different classes of ingredients. Compared to partial least squares regression, results showed considerable reductions of standard error of prediction values for both methods and ingredients: reductions of 45% with ANN and 49% with LS-SVM for wheat and reductions of 44% with ANN and 46% with LS-SVM for sunflower meal. These improvements together with the facility of NIRS technology to be implemented in the process make it ideal for meeting the requirements of the animal feed industry
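
As a rough illustration of the kind of comparison reported above, the sketch below fits PLS, a small neural network and a support vector regressor to synthetic spectra-like data and reports a standard error of prediction for each. scikit-learn's epsilon-SVR is used here as a stand-in for LS-SVM, which scikit-learn does not provide, and the data are not the authors' feed compound spectra.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for NIR spectra (rows) predicting an ingredient percentage
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 100))          # 100 "wavelengths"
y = X[:, :10].sum(axis=1) + 0.5 * np.sin(X[:, 10]) + rng.normal(0, 0.3, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "PLS": PLSRegression(n_components=10),
    "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVR": SVR(kernel="rbf", C=10.0, epsilon=0.1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    sep = mean_squared_error(y_te, np.ravel(model.predict(X_te))) ** 0.5
    print(f"{name}: SEP = {sep:.3f}")
```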

  8. Mixed price and load forecasting of electricity markets by a new iterative prediction method

    International Nuclear Information System (INIS)

    Amjady, Nima; Daraeepour, Ali

    2009-01-01

    Load and price forecasting are the two key issues for the participants of current electricity markets. However, load and price of electricity markets have complex characteristics such as nonlinearity, non-stationarity and multiple seasonality, to name a few (usually, more volatility is seen in the behavior of electricity price signal). For these reasons, much research has been devoted to load and price forecast, especially in the recent years. However, previous research works in the area separately predict load and price signals. In this paper, a mixed model for load and price forecasting is presented, which can consider interactions of these two forecast processes. The mixed model is based on an iterative neural network based prediction technique. It is shown that the proposed model can present lower forecast errors for both load and price compared with the previous separate frameworks. Another advantage of the mixed model is that all required forecast features (from load or price) are predicted within the model without assuming known values for these features. So, the proposed model can better be adapted to real conditions of an electricity market. The forecast accuracy of the proposed mixed method is evaluated by means of real data from the New York and Spanish electricity markets. The method is also compared with some of the most recent load and price forecast techniques. (author)

  9. Geostatistical methods for rock mass quality prediction using borehole and geophysical survey data

    Science.gov (United States)

    Chen, J.; Rubin, Y.; Sege, J. E.; Li, X.; Hehua, Z.

    2015-12-01

    For long, deep tunnels, the number of geotechnical borehole investigations during the preconstruction stage is generally limited. Yet tunnels are often constructed in geological structures with complex geometries, and in which the rock mass is fragmented from past structural deformations. Tunnel Geology Prediction (TGP) is a geophysical technique widely used during tunnel construction in China to ensure safety during construction and to prevent geological disasters. In this paper, geostatistical techniques were applied in order to integrate seismic velocity from TGP and borehole information into spatial predictions of RMR (Rock Mass Rating) in unexcavated areas. This approach is intended to apply conditional probability methods to transform seismic velocities to directly observed RMR values. The initial spatial distribution of RMR, inferred from the boreholes, was updated by including geophysical survey data in a co-kriging approach. The method applied to a real tunnel project shows significant improvements in rock mass quality predictions after including geophysical survey data, leading to better decision-making for construction safety design.

  10. Handling imbalance data in churn prediction using combined SMOTE and RUS with bagging method

    Science.gov (United States)

    Pura Hartati, Eka; Adiwijaya; Arif Bijaksana, Moch

    2018-03-01

    Customer churn has become a significant problem and a challenge for telecommunication companies such as PT. Telkom Indonesia. It is necessary to evaluate the extent of the churn problem so that the company's management can devise appropriate strategies to minimize churn and retain customers. The churn data in this company, categorized as churn Atas Permintaan Sendiri (APS, churn at the customer's own request), form an imbalanced dataset, and class imbalance is one of the challenging issues in machine learning. This study investigates how to handle class imbalance in churn prediction by combining the Synthetic Minority Over-sampling Technique (SMOTE) and Random Under-Sampling (RUS) with a Bagging method to obtain better churn prediction performance. The dataset used is broadband Internet data collected from Telkom Regional 6 Kalimantan. The research first applies data preprocessing to balance the imbalanced dataset and to select features using the SMOTE and RUS sampling techniques, and then builds the churn prediction model using Bagging and C4.5.
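
A hedged sketch of the resampling-plus-ensemble pipeline described above, assuming the imbalanced-learn and scikit-learn libraries: SMOTE oversamples the churn class, random under-sampling trims the majority class, and a bagged decision tree (a CART tree standing in for C4.5) is trained on the rebalanced data. The sampling ratios and the synthetic dataset are illustrative, not the Telkom data.

```python
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_validate
from sklearn.datasets import make_classification

# Synthetic stand-in for an imbalanced churn dataset (about 5% churners)
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)

model = Pipeline(steps=[
    ("smote", SMOTE(sampling_strategy=0.5, random_state=0)),            # oversample minority
    ("rus", RandomUnderSampler(sampling_strategy=0.8, random_state=0)),  # trim majority
    # On scikit-learn < 1.2 the keyword is base_estimator instead of estimator
    ("bagging", BaggingClassifier(estimator=DecisionTreeClassifier(),
                                  n_estimators=50, random_state=0)),
])

scores = cross_validate(model, X, y, cv=5, scoring=["f1", "roc_auc"])
print(scores["test_f1"].mean(), scores["test_roc_auc"].mean())
```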

  11. A prediction method for long-term behavior of prestressed concrete containment vessels

    International Nuclear Information System (INIS)

    Ozaki, M.; Abe, T.; Watanabe, Y.; Kato, A.; Yamaguchi, T.; Yamamoto, M.

    1995-01-01

    This paper presents the results of studies on the long-term behavior of PCCVs at the Tsuruga Unit 2 and Ohi Unit 3/4 power stations. The objective of this study is to evaluate the measured strains in the concrete and the reduction of force in the tendons, and to establish prediction methods for long-term PCCV behavior. The measured strains were compared with the strains calculated from creep and shrinkage of the concrete, and the differences were investigated. Furthermore, the reduced tendon forces were calculated considering elastic losses, relaxation, creep and shrinkage, and the measured reduction in tendon forces was compared with the calculated values. When changes in temperature and humidity are taken into account, the measured strains and tendon forces are in good agreement with the calculated ones. From these results, it was confirmed that the residual prestresses in the PCCVs maintain the values predicted at the design stage, and that the prediction method for long-term behavior is sufficiently reliable. (author). 10 refs., 8 figs., 3 tabs

  12. Method of fission product beta spectra measurements for predicting reactor anti-neutrino emission

    Energy Technology Data Exchange (ETDEWEB)

    Asner, D.M.; Burns, K.; Campbell, L.W.; Greenfield, B.; Kos, M.S., E-mail: markskos@gmail.com; Orrell, J.L.; Schram, M.; VanDevender, B.; Wood, L.S.; Wootan, D.W.

    2015-03-11

    The nuclear fission process that occurs in the core of nuclear reactors results in unstable, neutron-rich fission products that subsequently beta decay and emit electron antineutrinos. These reactor neutrinos have served neutrino physics research from the initial discovery of the neutrino to today's precision measurements of neutrino mixing angles. The prediction of the absolute flux and energy spectrum of the emitted reactor neutrinos hinges upon a series of seminal papers based on measurements performed in the 1970s and 1980s. The steadily improving reactor neutrino measurement techniques and recent reconsiderations of the agreement between the predicted and observed reactor neutrino flux motivate revisiting the underlying beta spectra measurements. A method is proposed to use an accelerator proton beam delivered to an engineered target to yield a neutron field tailored to reproduce the neutron energy spectrum present in the core of an operating nuclear reactor. Foils of the primary reactor fissionable isotopes placed in this tailored neutron flux will ultimately emit beta particles from the resultant fission products. Measurement of these beta particles in a time projection chamber with a perpendicular magnetic field provides a distinctive set of systematic considerations for comparison to the original seminal beta spectra measurements. Ancillary measurements such as gamma-ray emission and post-irradiation radiochemical analysis will further constrain the absolute normalization of beta emissions per fission. The requirements for unfolding the beta spectra measured with this method into a predicted reactor neutrino spectrum are explored.

  13. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations of the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.

  14. A method for gear fatigue life prediction considering the internal flow field of the gear pump

    Science.gov (United States)

    Shen, Haidong; Li, Zhiqiang; Qi, Lele; Qiao, Liang

    2018-01-01

    The gear pump is the most widely used positive-displacement hydraulic pump and the main power source of a hydraulic system. Its performance is influenced by many factors, such as the working environment, maintenance, and fluid pressure. Unlike a gear transmission system, the internal flow field of a gear pump has a strong influence on gear life, so the internal hydraulic conditions need to be considered when predicting gear fatigue life. Taking an aircraft gear pump as the research object and focusing on its typical failure mode, gear contact fatigue, this paper proposes a prediction method based on virtual simulation. The method uses CFD (computational fluid dynamics) software to analyze the pressure distribution of the internal flow field of the gear pump, and a one-way fluid-structure coupling model of the gear is constructed in Ansys Workbench to obtain the contact stress on the tooth surface. Finally, the nominal stress method and Miner's cumulative damage rule are employed to calculate the gear contact fatigue life based on a modified material P-S-N curve. Engineering practice shows that the method is feasible and efficient.

  15. SAAS: Short Amino Acid Sequence - A Promising Protein Secondary Structure Prediction Method of Single Sequence

    Directory of Open Access Journals (Sweden)

    Zhou Yuan Wu

    2013-07-01

    Full Text Available In statistical methods for predicting protein secondary structure, many researchers focus on the frequencies of single amino acids in α-helices, β-sheets, and so on, or on the influence of neighboring amino acids on whether a given amino acid forms a secondary structure. This paper instead treats a short sequence of amino acids (3, 4, 5 or 6 residues) as a single unit and compiles statistics on the probability of each short sequence forming a secondary structure. In addition, while many researchers use sets of low-homology sequences as their statistical database, this paper uses the whole PDB database. We propose a strategy to predict protein secondary structure using this simple statistical method. Numerical computation shows that treating short amino acid sequences as units makes it easy to see the tendency of a short sequence to form a secondary structure, that using a large statistical database (the whole PDB, without filtering for homology) works well, and that the Q3 accuracy of the proposed simple statistical method is about 74%, whereas the accuracy of other statistical methods is less than 70%.
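
The core idea above, counting how often each short amino-acid word is observed in helix, strand or coil and then predicting by simple majority, can be sketched as follows. The window handling, voting scheme and toy training pair are assumptions for illustration; the paper's statistics are compiled over the whole PDB.

```python
from collections import defaultdict, Counter

def train_kmer_model(sequences, structures, k=4):
    """Count how often each length-k amino-acid word is observed with helix (H),
    strand (E) or coil (C) at the central residue of the window."""
    counts = defaultdict(Counter)
    for seq, ss in zip(sequences, structures):
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]][ss[i + k // 2]] += 1
    return counts

def predict(seq, counts, k=4, default="C"):
    """Assign each residue the most frequent structure among the k-mers covering it."""
    votes = [Counter() for _ in seq]
    for i in range(len(seq) - k + 1):
        if seq[i:i + k] in counts:
            for j in range(i, i + k):
                votes[j] += counts[seq[i:i + k]]
    return "".join(v.most_common(1)[0][0] if v else default for v in votes)

# Toy training pair (real use would scan the whole PDB, as in the paper)
seqs = ["ALKEAHEELLKK", "VTIRVGDEVEVK"]
sss  = ["CHHHHHHHHHCC", "CEEEECCCEEEC"]
model = train_kmer_model(seqs, sss, k=4)
print(predict("ALKEVHEK", model, k=4))
```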

  16. Prediction of solubilities for ginger bioactive compounds in hot water by the COSMO-RS method

    Science.gov (United States)

    Zaimah Syed Jaapar, Syaripah; Azian Morad, Noor; Iwai, Yoshio

    2013-04-01

    The solubilities in water of four main ginger bioactives, 6-gingerol, 6-shogaol, 8-gingerol and 10-gingerol, were predicted using a conductor-like screening model for real solvent (COSMO-RS) calculations. This study was conducted since no experimental data are available for ginger bioactive solubilities in hot water. The σ-profiles of these selected molecules were calculated using Gaussian software and the solubilities were calculated using the COSMO-RS method. The solubilities of these ginger bioactives were calculated at 50 to 200 °C. In order to validate the accuracy of the COSMO-RS method, the solubilities of five hydrocarbon molecules were calculated using the COSMO-RS method and compared with the experimental data in the literature. The selected hydrocarbon molecules were 3-pentanone, 1-hexanol, benzene, 3-methylphenol and 2-hydroxy-5-methylbenzaldehyde. The calculated results of the hydrocarbon molecules are in good agreement with the data in the literature. These results confirm that the solubilities of ginger bioactives can be predicted using the COSMO-RS method. The solubilities of the ginger bioactives are lower than 0.0001 mole fraction at temperatures below 130 °C. At 130 to 200 °C, the solubilities increase dramatically, with the highest being 6-shogaol, at 0.00037 mole fraction, and the lowest being 10-gingerol, at 0.000039 mole fraction, at 200 °C.

  17. Prediction of solubilities for ginger bioactive compounds in hot water by the COSMO-RS method

    International Nuclear Information System (INIS)

    Jaapar, Syaripah Zaimah Syed; Iwai, Yoshio; Morad, Noor Azian

    2013-01-01

    The solubilities in water of four main ginger bioactives, 6-gingerol, 6-shogaol, 8-gingerol and 10-gingerol, were predicted using a conductor-like screening model for real solvent (COSMO-RS) calculations. This study was conducted since no experimental data are available for ginger bioactive solubilities in hot water. The σ-profiles of these selected molecules were calculated using Gaussian software and the solubilities were calculated using the COSMO-RS method. The solubilities of these ginger bioactives were calculated at 50 to 200 °C. In order to validate the accuracy of the COSMO-RS method, the solubilities of five hydrocarbon molecules were calculated using the COSMO-RS method and compared with the experimental data in the literature. The selected hydrocarbon molecules were 3-pentanone, 1-hexanol, benzene, 3-methylphenol and 2-hydroxy-5-methylbenzaldehyde. The calculated results of the hydrocarbon molecules are in good agreement with the data in the literature. These results confirm that the solubilities of ginger bioactives can be predicted using the COSMO-RS method. The solubilities of the ginger bioactives are lower than 0.0001 mole fraction at temperatures below 130 °C. At 130 to 200 °C, the solubilities increase dramatically, with the highest being 6-shogaol, at 0.00037 mole fraction, and the lowest being 10-gingerol, at 0.000039 mole fraction, at 200 °C.

  18. Prediction of allosteric sites on protein surfaces with an elastic-network-model-based thermodynamic method.

    Science.gov (United States)

    Su, Ji Guo; Qi, Li Sheng; Li, Chun Hua; Zhu, Yan Ying; Du, Hui Jing; Hou, Yan Xue; Hao, Rui; Wang, Ji Hua

    2014-08-01

    Allostery is a rapid and efficient way in many biological processes to regulate protein functions, where binding of an effector at the allosteric site alters the activity and function at a distant active site. Allosteric regulation of protein biological functions provides a promising strategy for novel drug design. However, how to effectively identify the allosteric sites remains one of the major challenges for allosteric drug design. In the present work, a thermodynamic method based on the elastic network model was proposed to predict the allosteric sites on the protein surface. In our method, the thermodynamic coupling between the allosteric and active sites was considered, and then the allosteric sites were identified as those where the binding of an effector molecule induces a large change in the binding free energy of the protein with its ligand. Using the proposed method, two proteins, i.e., the 70 kD heat shock protein (Hsp70) and GluA2 alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA) receptor, were studied and the allosteric sites on the protein surface were successfully identified. The predicted results are consistent with the available experimental data, which indicates that our method is a simple yet effective approach for the identification of allosteric sites on proteins.

  19. Development of formulation Q1As method for quadrupole noise prediction around a submerged cylinder

    Directory of Open Access Journals (Sweden)

    Yo-Seb Choi

    2017-09-01

    Full Text Available Recent research has shown that quadrupole noise has a significant influence on the overall characteristics of flow-induced noise and on the performance of underwater appendages such as sonar domes. However, advanced research generally uses the Ffowcs Williams–Hawkings analogy without considering the quadrupole source to reduce computational cost. In this study, flow-induced noise is predicted by using an LES turbulence model and a developed formulation, called the formulation Q1As method to properly take into account the quadrupole source. The noise around a circular cylinder in an underwater environment is examined for two cases with different velocities. The results from the method are compared to those obtained from the experiments and the permeable FW–H method. The results are in good agreement with the experimental data, with a difference of less than 1 dB, which indicates that the formulation Q1As method is suitable for use in predicting quadrupole noise around underwater appendages.

  20. Automatic Power Control for Daily Load-following Operation using Model Predictive Control Method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Keuk Jong; Kim, Han Gon [KH, Daejeon (Korea, Republic of)

    2009-10-15

    Under circumstances in which nuclear power accounts for more than 50% of the electricity supply, nuclear power plants are required to operate in load-following mode in order to allow effective management of the electric grid and greater responsiveness to rapid changes in power demand. Conventional reactors such as the OPR1000 and APR1400 have a regulating system that controls the average temperature of the reactor core relative to a reference temperature. This conventional method has the advantages of proven technology and ease of implementation. However, it is unsuitable for controlling the axial power shape, particularly during load-following operation. Accordingly, this paper reports on the development of a model predictive control method that is able to control both the reactor power and the axial shape index. The purpose of this study is to analyze the behavior of the reactor power and the axial power shape when a model predictive control method is used to increase and decrease power during a daily load-following operation. The study confirms that deviations in the axial shape index (ASI) remain within the operating limit.
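
For readers unfamiliar with model predictive control, the sketch below shows a minimal receding-horizon controller for a made-up first-order power model: at each step it minimises a quadratic tracking-plus-effort cost over the horizon and applies only the first control move. The real controller described above also regulates the axial shape index and respects operating constraints, which are omitted here.

```python
import numpy as np

def mpc_step(x0, ref, a=0.95, b=0.05, N=20, r=0.1):
    """One receding-horizon step of an unconstrained linear MPC.

    Scalar model x[k+1] = a*x[k] + b*u[k] (a crude stand-in for the power
    response).  Minimises sum (x - ref)^2 + r*u^2 over the horizon N and
    returns only the first control move.
    """
    # Prediction over the horizon: x = F*x0 + G*u
    F = np.array([a ** (i + 1) for i in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # Regularised least squares: (G'G + r*I) u = G'(ref - F*x0)
    u = np.linalg.solve(G.T @ G + r * np.eye(N), G.T @ (ref - F * x0))
    return u[0]

# Daily load-following toy run: ramp the power reference from 100% down to 50%
x, traj = 1.00, []
for k in range(200):
    ref = np.full(20, 1.0 if k < 50 else 0.5)
    u = mpc_step(x, ref)
    x = 0.95 * x + 0.05 * u          # plant update with the same toy model
    traj.append(x)
print(round(traj[-1], 3))
```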

  1. Predicting high risk births with contraceptive prevalence and contraceptive method-mix in an ecologic analysis.

    Science.gov (United States)

    Perin, Jamie; Amouzou, Agbessi; Walker, Neff

    2017-11-07

    Increased contraceptive use has been associated with a decrease in high parity births, births that occur close together in time, and births to very young or to older women. These types of births are also associated with high risk of under-five mortality. Previous studies have looked at the change in the level of contraception use and the average change in these types of high-risk births. We aim to predict the distribution of births in a specific country when there is a change in the level and method of modern contraception. We used data from full birth histories and modern contraceptive use from 207 nationally representative Demographic and Health Surveys covering 71 countries to describe the distribution of births in each survey based on birth order, preceding birth space, and mother's age at birth. We estimated the ecologic associations between the prevalence and method-mix of modern contraceptives and the proportion of births in each category. Hierarchical modelling was applied to these aggregated cross sectional proportions, so that random effects were estimated for countries with multiple surveys. We use these results to predict the change in type of births associated with scaling up modern contraception in three different scenarios. We observed marked differences between regions, in the absolute rates of contraception, the types of contraceptives in use, and in the distribution of type of birth. Contraceptive method-mix was a significant determinant of proportion of high-risk births, especially for birth spacing, but also for mother's age and parity. Increased use of modern contraceptives is especially predictive of reduced parity and more births with longer preceding space. However, increased contraception alone is not associated with fewer births to women younger than 18 years or a decrease in short-spaced births. Both the level and the type of contraception are important factors in determining the effects of family planning on changes in distribution of

  2. An efficient ray tracing method for propagation prediction along a mobile route in urban environments

    Science.gov (United States)

    Hussain, S.; Brennan, C.

    2017-07-01

    This paper presents an efficient ray tracing algorithm for propagation prediction in urban environments. The work presented in this paper builds upon previous work in which the maximum coverage area where rays can propagate after interaction with a wall or vertical edge is described by a lit polygon. The shadow regions formed by buildings within the lit polygon are described by shadow polygons. In this paper, the lit polygons of images are mapped to a coarse grid superimposed over the coverage area. This mapping reduces the active image tree significantly for a given receiver point to accelerate the ray finding process. The algorithm also presents an efficient method of quickly determining the valid ray segments for a mobile receiver moving along a linear trajectory. The validation results show considerable computation time reduction with good agreement between the simulated and measured data for propagation prediction in large urban environments.

  3. Shelf Life Prediction for Canned Gudeg using Accelerated Shelf Life Testing (ASLT) Based on Arrhenius Method

    Science.gov (United States)

    Nurhayati, R.; Rahayu NH, E.; Susanto, A.; Khasanah, Y.

    2017-04-01

    Gudeg is a traditional food from Yogyakarta. It consists of jackfruit, chicken, egg and coconut milk. Gudeg generally has a short shelf life, and canning (commercial sterilization) is one way to extend it. The aim of this research is to predict the shelf life of Andrawinaloka canned gudeg with the Accelerated Shelf Life Testing (ASLT) method using the Arrhenius model. Lethality values at 121°C for 15 and 20 minutes and total plate count methods were used to determine the time and temperature of the sterilization process. The storage temperatures for the ASLT Arrhenius method were 35, 45 and 55°C for 35 days. Rancidity is one of the quality-deterioration attributes of canned gudeg, so the thiobarbituric acid (TBA) value was used as the critical parameter and measured periodically once a week. The Arrhenius model was applied with zero-order and first-order equations, and the analysis showed that the zero-order equation can be used to estimate the shelf life of canned gudeg. The shelf life of Andrawinaloka canned gudeg is predicted to be 21 months when stored at 30°C and 24 months at 25°C.
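
The ASLT/Arrhenius extrapolation referred to above follows a standard recipe: fit a zero-order degradation rate at each accelerated temperature, regress ln k against 1/T, and extrapolate the rate to the storage temperature. The sketch below uses hypothetical TBA rates, not the paper's data, so the resulting shelf lives are illustrative only.

```python
import numpy as np

# Hypothetical zero-order TBA increase rates fitted at the accelerated
# storage temperatures (TBA units per day) -- illustrative numbers only.
T_C   = np.array([35.0, 45.0, 55.0])
k_obs = np.array([0.010, 0.022, 0.045])

# Arrhenius: ln k = ln A - Ea/(R*T), i.e. linear in 1/T
T_K = T_C + 273.15
slope, intercept = np.polyfit(1.0 / T_K, np.log(k_obs), 1)
Ea = -slope * 8.314  # activation energy, J/mol

def shelf_life_days(T_celsius, tba0=0.2, tba_critical=1.0):
    """Zero-order model: time for TBA to rise from tba0 to the critical value."""
    k = np.exp(intercept + slope / (T_celsius + 273.15))
    return (tba_critical - tba0) / k

print(f"Ea ~ {Ea / 1000:.1f} kJ/mol")
for T in (25.0, 30.0):
    print(f"shelf life at {T:.0f} C ~ {shelf_life_days(T) / 30:.1f} months")
```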

  4. Critical assessment of methods of protein structure prediction (CASP)-round IX

    KAUST Repository

    Moult, John; Fidelis, Krzysztof; Kryshtafovych, Andriy; Tramontano, Anna

    2011-01-01

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the ninth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. Methods for modeling protein structure continue to advance, although at a more modest pace than in the early CASP experiments. CASP developments of note are indications of improvement in model accuracy for some classes of target, an improved ability to choose the most accurate of a set of generated models, and evidence of improvement in accuracy for short "new fold" models. In addition, a new analysis of regions of models not derivable from the most obvious template structure has revealed better performance than expected.

  5. Validation of engineering methods for predicting hypersonic vehicle controls forces and moments

    Science.gov (United States)

    Maughmer, M.; Straussfogel, D.; Long, L.; Ozoroski, L.

    1991-01-01

    This work examines the ability of the aerodynamic analysis methods contained in an industry standard conceptual design code, the Aerodynamic Preliminary Analysis System (APAS II), to estimate the forces and moments generated through control surface deflections from low subsonic to high hypersonic speeds. Predicted control forces and moments generated by various control effectors are compared with previously published wind-tunnel and flight-test data for three vehicles: the North American X-15, a hypersonic research airplane concept, and the Space Shuttle Orbiter. Qualitative summaries of the results are given for each force and moment coefficient and each control derivative in the various speed ranges. Results show that all predictions of longitudinal stability and control derivatives are acceptable for use at the conceptual design stage.

  6. An improved method for predicting brittleness of rocks via well logs in tight oil reservoirs

    Science.gov (United States)

    Wang, Zhenlin; Sun, Ting; Feng, Cheng; Wang, Wei; Han, Chuang

    2018-06-01

    There can be no industrial oil production in tight oil reservoirs until fracturing is undertaken. Under such conditions, the brittleness of the rocks is a very important factor. However, it has so far been difficult to predict. In this paper, the selected study area is the tight oil reservoirs in Lucaogou formation, Permian, Jimusaer sag, Junggar basin. According to the transformation of dynamic and static rock mechanics parameters and the correction of confining pressure, an improved method is proposed for quantitatively predicting the brittleness of rocks via well logs in tight oil reservoirs. First, 19 typical tight oil core samples are selected in the study area. Their static Young’s modulus, static Poisson’s ratio and petrophysical parameters are measured. In addition, the static brittleness indices of four other tight oil cores are measured under different confining pressure conditions. Second, the dynamic Young’s modulus, Poisson’s ratio and brittleness index are calculated using the compressional and shear wave velocity. With combination of the measured and calculated results, the transformation model of dynamic and static brittleness index is built based on the influence of porosity and clay content. The comparison of the predicted brittleness indices and measured results shows that the model has high accuracy. Third, on the basis of the experimental data under different confining pressure conditions, the amplifying factor of brittleness index is proposed to correct for the influence of confining pressure on the brittleness index. Finally, the above improved models are applied to formation evaluation via well logs. Compared with the results before correction, the results of the improved models agree better with the experimental data, which indicates that the improved models have better application effects. The brittleness index prediction method of tight oil reservoirs is improved in this research. It is of great importance in the optimization of

  7. Use of predictive models and rapid methods to nowcast bacteria levels at coastal beaches

    Science.gov (United States)

    Francy, Donna S.

    2009-01-01

    The need for rapid assessments of recreational water quality to better protect public health is well accepted throughout the research and regulatory communities. Rapid analytical methods, such as quantitative polymerase chain reaction (qPCR) and immunomagnetic separation/adenosine triphosphate (ATP) analysis, are being tested but are not yet ready for widespread use.Another solution is the use of predictive models, wherein variable(s) that are easily and quickly measured are surrogates for concentrations of fecal-indicator bacteria. Rainfall-based alerts, the simplest type of model, have been used by several communities for a number of years. Deterministic models use mathematical representations of the processes that affect bacteria concentrations; this type of model is being used for beach-closure decisions at one location in the USA. Multivariable statistical models are being developed and tested in many areas of the USA; however, they are only used in three areas of the Great Lakes to aid in notifications of beach advisories or closings. These “operational” statistical models can result in more accurate assessments of recreational water quality than use of the previous day's Escherichia coli (E. coli)concentration as determined by traditional culture methods. The Ohio Nowcast, at Huntington Beach, Bay Village, Ohio, is described in this paper as an example of an operational statistical model. Because predictive modeling is a dynamic process, water-resource managers continue to collect additional data to improve the predictive ability of the nowcast and expand the nowcast to other Ohio beaches and a recreational river. Although predictive models have been shown to work well at some beaches and are becoming more widely accepted, implementation in many areas is limited by funding, lack of coordinated technical leadership, and lack of supporting epidemiological data.

  8. Prediction of human core body temperature using non-invasive measurement methods.

    Science.gov (United States)

    Niedermann, Reto; Wyss, Eva; Annaheim, Simon; Psikuta, Agnes; Davey, Sarah; Rossi, René Michel

    2014-01-01

    The measurement of core body temperature is an efficient method for monitoring heat stress amongst workers in hot conditions. However, invasive measurement of core body temperature (e.g. rectal, intestinal, oesophageal temperature) is impractical for such applications. Therefore, the aim of this study was to define relevant non-invasive measures to predict core body temperature under various conditions. We conducted two human subject studies with different experimental protocols, different environmental temperatures (10 °C, 30 °C) and different subjects. In both studies the same non-invasive measurement methods (skin temperature, skin heat flux, heart rate) were applied. A principal component analysis was conducted to extract independent factors, which were then used in a linear regression model. We identified six parameters (three skin temperatures, two skin heat fluxes and heart rate), which were included in the calculation of two factors. The predictive value of these factors for core body temperature was evaluated by a multiple regression analysis. The calculated root mean square deviation (rmsd) was in the range from 0.28 °C to 0.34 °C for all environmental conditions. These errors are similar to previous models using non-invasive measures to predict core body temperature. The results from this study illustrate that multiple physiological parameters (e.g. skin temperature and skin heat fluxes) are needed to predict core body temperature. In addition, the physiological measurements chosen in this study and the algorithm defined in this work are potentially applicable for real-time core body temperature monitoring to assess health risk in a broad range of working conditions.
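
A minimal sketch of the modelling chain described above (standardise the non-invasive measurements, extract principal-component factors, regress core temperature on them, and report an RMSD), using synthetic data in place of the two subject studies.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 3 skin temperatures, 2 skin heat fluxes, heart rate -> core temp
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))
y = 37.0 + 0.15 * X[:, 0] + 0.10 * X[:, 3] + 0.05 * X[:, 5] + rng.normal(0, 0.2, 300)

# Two principal-component factors feeding a multiple linear regression
model = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
y_hat = cross_val_predict(model, X, y, cv=5)
rmsd = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"RMSD = {rmsd:.2f} degC")
```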

  9. Prediction of interactions between viral and host proteins using supervised machine learning methods.

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Barman

    Full Text Available BACKGROUND: Viral-host protein-protein interaction plays a vital role in pathogenesis, since it defines viral infection of the host and regulation of the host proteins. Identification of key viral-host protein-protein interactions (PPIs) has great implications for therapeutics. METHODS: In this study, a systematic attempt has been made to predict viral-host PPIs by integrating different features, including domain-domain association, network topology and sequence information, using viral-host PPIs from VirusMINT. Three well-known supervised machine learning methods, SVM, Naïve Bayes and Random Forest, which are commonly used in the prediction of PPIs, were employed, and their performance was evaluated using five-fold cross-validation. RESULTS: Out of 44 descriptors, the best features were found to be domain-domain association and the methionine, serine and valine amino acid composition of viral proteins. In this study, the SVM-based method achieved a better sensitivity of 67% than Naïve Bayes (37.49%) and Random Forest (55.66%). However, the specificity of Naïve Bayes was the highest (99.52%) compared with SVM (74%) and Random Forest (89.08%). Overall, SVM and Random Forest achieved accuracies of 71% and 72.41%, respectively. The proposed SVM-based method was evaluated on a blind dataset and attained a sensitivity of 64%, specificity of 83%, and accuracy of 74%. In addition, unknown potential targets of hepatitis B virus-human and hepatitis E virus-human PPIs have been predicted with the proposed SVM model and validated by gene ontology enrichment analysis. The proposed model shows that hepatitis B virus "C protein" binds to a membrane docking protein, while "X protein" and "P protein" interact with cell-killing and metabolic process proteins, respectively. CONCLUSION: The proposed method can predict large-scale interspecies viral-human PPIs. The nature and function of unknown viral proteins (HBV and HEV), interacting partners of host
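
A sketch of the five-fold cross-validated SVM evaluation described above, with sensitivity and specificity computed as the recall of each class. The 44-dimensional synthetic feature matrix stands in for the real descriptors (domain-domain association, network topology and sequence composition).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Synthetic stand-in for viral-host PPI feature vectors (44 descriptors)
X, y = make_classification(n_samples=1000, n_features=44, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
sens, spec = [], []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf.fit(X[tr], y[tr])
    pred = clf.predict(X[te])
    sens.append(recall_score(y[te], pred, pos_label=1))  # sensitivity
    spec.append(recall_score(y[te], pred, pos_label=0))  # specificity
print(f"sensitivity = {np.mean(sens):.2f}, specificity = {np.mean(spec):.2f}")
```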

  10. Predicting the diversity of internal temperatures from the English residential sector using panel methods

    International Nuclear Information System (INIS)

    Kelly, Scott; Shipworth, Michelle; Shipworth, David; Gentry, Michael; Wright, Andrew; Pollitt, Michael; Crawford-Brown, Doug; Lomas, Kevin

    2013-01-01

    Highlights: ► A new method is proposed incorporating behavioural, environmental and building efficiency variables to explain internal dwelling temperatures. ► It is the first time panel methods have been used to predict internal dwelling temperatures over time. ► The proposed method is able to explain 45% of the variance of internal temperature between heterogeneous dwellings. ► Results support qualitative research on the importance of social, cultural and psychological behaviour in determining internal dwelling temperatures. ► This method presents new opportunities to quantify the size of the direct rebound effect between heterogeneous dwellings. -- Abstract: In this paper, panel methods are applied in new and innovative ways to predict daily mean internal temperature demand across a heterogeneous domestic building stock over time. This research not only exploits a rich new dataset but presents new methodological insights and offers important linkages for connecting bottom-up building stock models to human behaviour. It represents the first time a panel model has been used to estimate the dynamics of internal temperature demand from the natural daily fluctuations of external temperature combined with important behavioural, socio-demographic and building efficiency variables. The model is able to predict internal temperatures across a heterogeneous building stock to within ∼0.71 °C at 95% confidence and explain 45% of the variance of internal temperature between dwellings. The model confirms hypotheses from sociology and psychology that habitual behaviours are important drivers of home energy consumption. In addition, the model offers the possibility to quantify take-back (direct rebound effect) owing to increased internal temperatures from the installation of energy efficiency measures. The presence of thermostats or thermostatic radiator valves (TRVs) is shown to reduce average internal temperatures; however, the use of an automatic timer
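
One way to fit the kind of random-intercept panel model described above is with a mixed-effects regression, sketched below on synthetic data using statsmodels' MixedLM; the variable names and effect sizes are assumptions, and the published model includes many more behavioural and socio-demographic covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in: daily mean internal temperature for 50 dwellings over 60 days
rng = np.random.default_rng(3)
rows = []
for dwelling in range(50):
    offset = rng.normal(0, 1.0)           # dwelling-level random intercept
    has_trv = int(rng.integers(0, 2))     # presence of thermostatic radiator valves
    for day in range(60):
        t_ext = rng.normal(8, 5)
        t_int = 19 + 0.12 * t_ext - 0.5 * has_trv + offset + rng.normal(0, 0.5)
        rows.append((dwelling, t_ext, has_trv, t_int))
df = pd.DataFrame(rows, columns=["dwelling", "t_ext", "trv", "t_int"])

# Random-intercept panel model: repeated daily observations grouped by dwelling
model = smf.mixedlm("t_int ~ t_ext + trv", df, groups=df["dwelling"]).fit()
print(model.summary())
```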

  11. Prediction of MHC class II binding affinity using SMM-align, a novel stabilization matrix alignment method

    DEFF Research Database (Denmark)

    Nielsen, Morten; Lundegaard, Claus; Lund, Ole

    2007-01-01

    the correct alignment of a peptide in the binding groove a crucial part of identifying the core of an MHC class II binding motif. Here, we present a novel stabilization matrix alignment method, SMM-align, that allows for direct prediction of peptide:MHC binding affinities. The predictive performance...... of the method is validated on a large MHC class II benchmark data set covering 14 HLA-DR (human MHC) and three mouse H2-IA alleles. RESULTS: The predictive performance of the SMM-align method was demonstrated to be superior to that of the Gibbs sampler, TEPITOPE, SVRMHC, and MHCpred methods. Cross validation...... between peptide data set obtained from different sources demonstrated that direct incorporation of peptide length potentially results in over-fitting of the binding prediction method. Focusing on amino terminal peptide flanking residues (PFR), we demonstrate a consistent gain in predictive performance...

  12. A hybrid measure-correlate-predict method for long-term wind condition assessment

    International Nuclear Information System (INIS)

    Zhang, Jie; Chowdhury, Souma; Messac, Achille; Hodge, Bri-Mathias

    2014-01-01

    Highlights: • A hybrid measure-correlate-predict (MCP) methodology with greater accuracy is developed. • Three sets of performance metrics are proposed to evaluate the hybrid MCP method. • Both wind speed and direction are considered in the hybrid MCP method. • The best combination of MCP algorithms is determined. • The developed hybrid MCP method is uniquely helpful for long-term wind resource assessment. - Abstract: This paper develops a hybrid measure-correlate-predict (MCP) strategy to assess long-term wind resource variations at a farm site. The hybrid MCP method uses recorded data from multiple reference stations to estimate long-term wind conditions at a target wind plant site with greater accuracy than is possible with data from a single reference station. The weight of each reference station in the hybrid strategy is determined by the (i) distance and (ii) elevation differences between the target farm site and each reference station. In this case, the wind data is divided into sectors according to the wind direction, and the MCP strategy is implemented for each wind direction sector separately. The applicability of the proposed hybrid strategy is investigated using five MCP methods: (i) the linear regression; (ii) the variance ratio; (iii) the Weibull scale; (iv) the artificial neural networks; and (v) the support vector regression. To implement the hybrid MCP methodology, we use hourly averaged wind data recorded at five stations in the state of Minnesota between 07-01-1996 and 06-30-2004. Three sets of performance metrics are used to evaluate the hybrid MCP method. The first set of metrics analyze the statistical performance, including the mean wind speed, wind speed variance, root mean square error, and mean absolute error. The second set of metrics evaluate the distribution of long-term wind speed; to this end, the Weibull distribution and the Multivariate and Multimodal Wind Distribution models are adopted. The third set of metrics analyze
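
A sketch of the hybrid idea described above: apply a classic variance-ratio MCP to each reference station and blend the resulting long-term estimates with weights that decay with distance and elevation difference. The particular weighting formula and the synthetic wind records are assumptions for illustration; the paper also sectors the data by wind direction and tests several other MCP algorithms.

```python
import numpy as np

def variance_ratio_mcp(short_target, short_ref, long_ref):
    """Variance-ratio MCP: rescale the long-term reference record so its
    concurrent-period mean and standard deviation match the target site."""
    mu_t, sd_t = short_target.mean(), short_target.std()
    mu_r, sd_r = short_ref.mean(), short_ref.std()
    return mu_t + (sd_t / sd_r) * (long_ref - mu_r)

def hybrid_weights(distances_km, elevation_diffs_m, eps=1e-9):
    """Station weights that decay with distance and elevation difference
    (one simple choice; the exact weighting scheme is an assumption here)."""
    w = 1.0 / (distances_km + eps) / (np.abs(elevation_diffs_m) + 1.0)
    return w / w.sum()

# Toy data: 1 year of concurrent hourly speeds, 8 years of reference records
rng = np.random.default_rng(4)
target_short = rng.weibull(2.0, 8760) * 7.5
refs_short   = [target_short * s + rng.normal(0, 0.8, 8760) for s in (0.9, 1.1, 1.0)]
refs_long    = [rng.weibull(2.0, 8760 * 8) * 7.5 * s for s in (0.9, 1.1, 1.0)]

w = hybrid_weights(np.array([12.0, 30.0, 55.0]), np.array([15.0, 40.0, 5.0]))
estimates = [variance_ratio_mcp(target_short, s, l)
             for s, l in zip(refs_short, refs_long)]
hybrid = sum(wi * est for wi, est in zip(w, estimates))
print(round(hybrid.mean(), 2))   # blended long-term mean wind speed
```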

  13. Creep/fatigue damage prediction of fast reactor components using shakedown methods

    International Nuclear Information System (INIS)

    Buckthorpe, D.E.

    1997-01-01

    The present status of the shakedown method is reviewed, the application of the shakedown based principles to complex hardening and creep behaviour is described and justified and the prediction of damage against design criteria outlined. Comparisons are made with full inelastic analysis solutions where these are available and against damage assessments using elastic and inelastic design code methods. Current and future developments of the method are described including a summary of the advances made in the development of the post process ADAPT, which has enabled the method to be applied to complex geometry features and loading cases. The paper includes a review of applications of the method to typical Fast Reactor structural example cases within the primary and secondary circuits. For the primary circuit this includes structures such as the large diameter internal shells which are surrounded by hot sodium and subject to slow and rapid thermal transient loadings. One specific case is the damage assessment associated with thermal stratifications within sodium and the effects of moving sodium surfaces arising from reactor trip conditions. Other structures covered are geometric features within components such as the Above Core structure and Intermediate Heat Exchanger. For the secondary circuit the method has been applied to alternative and more complex forms of geometry namely thick section tubeplates of the Steam Generator and a typical secondary circuit piping run. Both of these applications are in an early stage of development but are expected to show significant advantages with respect to creep and fatigue damage estimation compared with existing code methods. The principle application of the method to design has so far been focused on Austenitic Stainless steel components however current work shows some significant benefits may be possible from the application of the method to structures made from Ferritic steels such as Modified 9Cr 1Mo. This aspect is briefly

  14. Comparison of Four Weighting Methods in Fuzzy-based Land Suitability to Predict Wheat Yield

    Directory of Open Access Journals (Sweden)

    Fatemeh Rahmati

    2017-06-01

    Full Text Available Introduction: Land suitability evaluation is a process for examining the degree to which land is fit for a specific use, and it also makes it possible to estimate land productivity potential. In 1976, FAO provided a general framework for land suitability classification, but no specific method for performing the classification was proposed within that framework. In later years, a collection of methods was presented based on the FAO framework. In the parametric method, different land suitability aspects are defined as completely discrete groups separated from each other by distinct, fixed ranges. As a result, land units of moderate suitability can only be assigned to one of the predefined land suitability classes. Fuzzy logic, introduced by Lotfi Zadeh in 1965, is an extension of Boolean logic based on the mathematical theory of fuzzy sets, a generalization of classical set theory. By introducing the notion of degree in the verification of a condition, the fuzzy method enables a condition to be in a state other than true or false, and provides very valuable flexibility for reasoning that makes it possible to take inaccuracies and uncertainties into account. One advantage of fuzzy logic for formalizing human reasoning is that the rules are set in natural language. In fuzzy-logic-based evaluation, weights are assigned to the land characteristics. The objective of this study was to compare four methods of weight calculation in fuzzy-based land suitability evaluation for predicting wheat yield in a study area covering 1500 ha in Kian town in Shahrekord (Chaharmahal and Bakhtiari province, Iran). Materials and Methods: In such investigations, climatic factors and soil physical and chemical characteristics are studied. This investigation involved several studies, including a laboratory study and qualitative and quantitative land suitability evaluation with fuzzy logic for wheat. Factors affecting the wheat production consist of

  15. Using Bayesian methods to predict climate impacts on groundwater availability and agricultural production in Punjab, India

    Science.gov (United States)

    Russo, T. A.; Devineni, N.; Lall, U.

    2015-12-01

    Lasting success of the Green Revolution in Punjab, India relies on continued availability of local water resources. Supplying primarily rice and wheat for the rest of India, Punjab supports crop irrigation with a canal system and groundwater, which is vastly over-exploited. The detailed data required to physically model future impacts on water supplies and agricultural production are not readily available for this region; therefore we use Bayesian methods to estimate hydrologic properties and irrigation requirements for an under-constrained mass balance model. Using measured values of historical precipitation, total canal water delivery, crop yield, and water table elevation, we present a method using a Markov chain Monte Carlo (MCMC) algorithm to solve for a distribution of values for each unknown parameter in a conceptual mass balance model. Due to heterogeneity across the state, and the resolution of the input data, we estimate model parameters at the district scale using spatial pooling. The resulting model is used to predict the impact of precipitation change scenarios on groundwater availability under multiple cropping options. Predicted groundwater declines vary across the state, suggesting that crop selection and water management strategies should be determined at a local scale. This computational method can be applied in data-scarce regions across the world, where water resource management is required to resolve competition between food security and available resources in a changing climate.
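
A minimal sketch of the Bayesian approach described above: a toy groundwater mass balance with two unknown parameters (a recharge fraction and a specific yield) sampled with a random-walk Metropolis algorithm. The model structure, priors and synthetic observations are assumptions; the study's actual model also includes canal deliveries, crop water demand and district-level pooling.

```python
import numpy as np

# Toy annual mass balance: dh = (alpha*P - q) / (Sy * 1000), with unknown
# recharge fraction alpha and specific yield Sy; h_obs is the water table (m).
rng = np.random.default_rng(5)
P = rng.gamma(5, 120, 20)                    # annual precipitation (mm)
q = np.full(20, 450.0)                       # net abstraction (mm)
alpha_true, Sy_true = 0.25, 0.12
h_obs = np.cumsum((alpha_true * P - q) / (Sy_true * 1000.0)) + rng.normal(0, 0.3, 20)

def log_post(theta):
    """Log posterior: flat priors inside bounds, Gaussian observation error."""
    alpha, Sy = theta
    if not (0.0 < alpha < 1.0 and 0.01 < Sy < 0.4):
        return -np.inf
    h_sim = np.cumsum((alpha * P - q) / (Sy * 1000.0))
    return -0.5 * np.sum((h_obs - h_sim) ** 2) / 0.3 ** 2

# Random-walk Metropolis sampler
theta, lp, samples = np.array([0.5, 0.2]), None, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.02, 0.01])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
post = np.array(samples[5000:])              # discard burn-in
print("posterior mean:", post.mean(axis=0), "posterior sd:", post.std(axis=0))
```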

  16. In Silico Prediction of Chemicals Binding to Aromatase with Machine Learning Methods.

    Science.gov (United States)

    Du, Hanwen; Cai, Yingchun; Yang, Hongbin; Zhang, Hongxiao; Xue, Yuhan; Liu, Guixia; Tang, Yun; Li, Weihua

    2017-05-15

    Environmental chemicals may affect endocrine systems through multiple mechanisms, one of which is via effects on aromatase (also known as CYP19A1), an enzyme critical for maintaining the normal balance of estrogens and androgens in the body. Therefore, rapid and efficient identification of aromatase-related endocrine disrupting chemicals (EDCs) is important for toxicology and environment risk assessment. In this study, on the basis of the Tox21 10K compound library, in silico classification models for predicting aromatase binders/nonbinders were constructed by machine learning methods. To improve the prediction ability of the models, a combined classifier (CC) strategy that combines different independent machine learning methods was adopted. Performances of the models were measured by test and external validation sets containing 1336 and 216 chemicals, respectively. The best model was obtained with the MACCS (Molecular Access System) fingerprint and CC method, which exhibited an accuracy of 0.84 for the test set and 0.91 for the external validation set. Additionally, several representative substructures for characterizing aromatase binders, such as ketone, lactone, and nitrogen-containing derivatives, were identified using information gain and substructure frequency analysis. Our study provided a systematic assessment of chemicals binding to aromatase. The built models can be helpful to rapidly identify potential EDCs targeting aromatase.
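
A sketch of the fingerprint-plus-combined-classifier recipe described above, assuming RDKit for the MACCS keys and a scikit-learn VotingClassifier as the combined classifier. The SMILES strings and binder/nonbinder labels below are made up; they only show the mechanics, not the Tox21 data.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import MACCSkeys
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import SVC

def maccs_fingerprint(smiles):
    """167-bit MACCS keys as a numpy vector (None if the SMILES fails to parse)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    fp = MACCSkeys.GenMACCSKeys(mol)
    arr = np.zeros((167,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Made-up SMILES and binder/nonbinder labels, used only to show the mechanics
smiles = ["CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1O", "CCN(CC)CC",
          "O=C1CCCCC1", "CC(C)Cc1ccc(cc1)C(C)C(=O)O", "Oc1ccc2ccccc2c1"]
labels = [1, 0, 0, 1, 0, 1]

X = np.array([maccs_fingerprint(s) for s in smiles])
y = np.array(labels)

# Combined classifier (CC): majority vote over heterogeneous base learners
cc = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", SVC(random_state=0)),
                ("nb", BernoulliNB())],
    voting="hard")
cc.fit(X, y)
print(cc.predict(X))
```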

  17. Shelf life prediction of canned fried-rice using accelerated shelf life testing (ASLT) Arrhenius method

    Science.gov (United States)

    Kurniadi, M.; Bintang, R.; Kusumaningrum, A.; Nursiwi, A.; Nurhikmat, A.; Susanto, A.; Angwar, M.; Triwiyono; Frediansyah, A.

    2017-12-01

    Research on the shelf-life prediction of canned fried rice using the Accelerated Shelf Life Testing (ASLT) Arrhenius model has been conducted. The aim of this research is to predict the shelf life of canned fried rice products. Lethality values at 121°C for 15 and 20 minutes and total plate count methods were used to determine the time and temperature of the sterilization process. The storage temperatures for the ASLT Arrhenius method were 35, 45 and 55°C over 35 days. Rancidity is one of the quality-deterioration attributes of canned fried rice, so in this research the samples were tested using the thiobarbituric acid (TBA) value, which was measured periodically once a week as the critical parameter. The use of cans for fried rice without any chemical preservative is one of the advantages of the product; in addition, controlling physical parameters such as temperature and pressure during processing can extend the shelf life and reduce microbial contamination. Such research has not previously been done for fried rice as a ready-to-eat meal. The results showed that the optimum sterilization conditions were 121°C for 15 minutes, with a total plate count of 9.3 × 10¹ CFU/ml. The lethality value of canned fried rice at 121°C for 15 minutes was 3.63 minutes. The shelf life of canned fried rice calculated using the ASLT Arrhenius method was 10.3 months.
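
The lethality value quoted above is conventionally computed as the F0 integral of the lethal rate 10^((T - 121.1)/z) over the process time. The sketch below applies that standard formula to a made-up cold-spot temperature profile; the retort profile of the actual product is not given in the abstract.

```python
import numpy as np

def f0_value(time_min, temp_C, T_ref=121.1, z=10.0):
    """Process lethality F0 (equivalent minutes at T_ref) by trapezoidal
    integration of the lethal rate L = 10**((T - T_ref)/z)."""
    t = np.asarray(time_min, dtype=float)
    L = 10.0 ** ((np.asarray(temp_C, dtype=float) - T_ref) / z)
    return float(np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(t)))

# Hypothetical come-up / hold / cooling profile at the can centre (cold spot)
t = [0, 5, 10, 15, 20, 25, 30, 35, 40]           # minutes
T = [60, 90, 110, 118, 120, 120, 119, 100, 70]   # degrees C
print(f"F0 ~ {f0_value(t, T):.2f} min")
```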

  18. Improved methods for predicting peptide binding affinity to MHC class II molecules.

    Science.gov (United States)

    Jensen, Kamilla Kjaergaard; Andreatta, Massimo; Marcatili, Paolo; Buus, Søren; Greenbaum, Jason A; Yan, Zhen; Sette, Alessandro; Peters, Bjoern; Nielsen, Morten

    2018-01-06

    Major histocompatibility complex class II (MHC-II) molecules are expressed on the surface of professional antigen-presenting cells where they display peptides to T helper cells, which orchestrate the onset and outcome of many host immune responses. Understanding which peptides will be presented by the MHC-II molecule is therefore important for understanding the activation of T helper cells and can be used to identify T-cell epitopes. We here present updated versions of two MHC-II-peptide binding affinity prediction methods, NetMHCII and NetMHCIIpan. These were constructed using an extended data set of quantitative MHC-peptide binding affinity data obtained from the Immune Epitope Database covering HLA-DR, HLA-DQ, HLA-DP and H-2 mouse molecules. We show that training with this extended data set improved the performance for peptide binding predictions for both methods. Both methods are publicly available at www.cbs.dtu.dk/services/NetMHCII-2.3 and www.cbs.dtu.dk/services/NetMHCIIpan-3.2. © 2018 John Wiley & Sons Ltd.

  19. Bearing Procurement Analysis Method by Total Cost of Ownership Analysis and Reliability Prediction

    Science.gov (United States)

    Trusaji, Wildan; Akbar, Muhammad; Sukoyo; Irianto, Dradjad

    2018-03-01

    In bearing procurement analysis, both price and reliability must be considered as decision criteria, since price determines the direct cost (the acquisition cost) while bearing reliability determines indirect costs such as maintenance cost. Although the indirect cost is hard to identify and measure, it contributes substantially to the overall cost that will be incurred, so the indirect cost of reliability must be considered when making a bearing procurement analysis. This paper describes a bearing evaluation method based on total cost of ownership analysis that takes price and maintenance cost as decision criteria. Furthermore, since failure data are usually lacking at the bearing evaluation phase, a reliability prediction method is used to predict bearing reliability from the dynamic load rating parameter. With this method, a bearing with a higher price but higher reliability is preferable for long-term planning, whereas for short-term planning the cheaper bearing with lower reliability is preferable. This context dependence can give rise to conflict between stakeholders; thus, the planning horizon needs to be agreed by all stakeholders before a procurement decision is made.
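
A sketch of the reasoning described above: predict bearing life from the dynamic load rating with the basic rating-life formula, convert expected replacements into an indirect cost, and compare total cost of ownership over different planning horizons. Prices, loads and the replacement cost are made-up numbers chosen only to reproduce the short-term versus long-term trade-off mentioned in the abstract.

```python
def l10_hours(C_dyn_kN, P_eq_kN, speed_rpm, exponent=3.0):
    """Basic rating life: L10 = (C/P)^p million revolutions, converted to
    operating hours. p = 3 for ball bearings, 10/3 for roller bearings."""
    l10_mrev = (C_dyn_kN / P_eq_kN) ** exponent
    return l10_mrev * 1.0e6 / (speed_rpm * 60.0)

def total_cost_of_ownership(price, C_dyn_kN, P_eq_kN, speed_rpm,
                            planning_hours, replacement_labour_cost):
    """Acquisition cost plus expected replacement cost over the planning
    horizon, with the failure count approximated from the L10 life."""
    life = l10_hours(C_dyn_kN, P_eq_kN, speed_rpm)
    expected_replacements = planning_hours / life
    return price + expected_replacements * (price + replacement_labour_cost)

# Two hypothetical candidate bearings under the same 2 kN load at 1500 rpm
for name, price, C in [("cheap", 40.0, 20.0), ("premium", 200.0, 28.0)]:
    for horizon in (8760.0, 5 * 8760.0):          # 1 year vs 5 years of operation
        tco = total_cost_of_ownership(price, C, 2.0, 1500.0, horizon, 150.0)
        print(f"{name:8s} horizon {horizon / 8760:.0f}y  TCO ~ {tco:7.1f}")
```

With these assumed numbers the cheaper bearing has the lower total cost over a one-year horizon, while the more reliable one wins over five years, which is exactly the context dependence the abstract points out.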

  20. A novel method to predict the highest hardness of plasma sprayed coating without micro-defects

    Science.gov (United States)

    Zhuo, Yukun; Ye, Fuxing; Wang, Feng

    2018-04-01

    Plasma sprayed coatings are built up from splats, which are generally regarded as the elementary units of the coating. Many researchers have focused on the morphology and formation mechanism of splats. In this paper, a novel method is proposed to predict the highest hardness of a plasma sprayed coating without micro-defects from the nanohardness of individual splats, and its effectiveness is examined experimentally. Firstly, the microstructure of the splats and of the coating, as well as the 3D topography of the splats, was observed by SEM (SU1510) and video microscope (VHX-2000). Secondly, the nanohardness of the splats was measured by nanoindentation (NHT) and compared with the microhardness of the coating measured by a microhardness tester (HV-1000A). The results show that the nanohardness of splats with diameters of 70 μm, 100 μm and 140 μm was in the range of 11∼12 GPa, while the microhardness of the coating was in the range of 8∼9 GPa. Because the splats contained no micro-defects such as pores and cracks in the nano-zones probed by nanoindentation, the nanohardness of the splats can be used to predict the highest hardness of a coating without micro-defects. The method indicates the maximum attainable hardness of the sprayed coating and reduces the number of tests needed to obtain a high-hardness coating with better wear resistance.

  1. ESLpred2: improved method for predicting subcellular localization of eukaryotic proteins

    Directory of Open Access Journals (Sweden)

    Raghava Gajendra PS

    2008-11-01

    Full Text Available Abstract Background The expansion of raw protein sequence databases in the post-genomic era and the availability of freshly annotated sequences for the major localizations motivated us to introduce a new, improved version of our previously developed eukaryotic subcellular localization prediction method, "ESLpred". Since the subcellular localization of a protein offers essential clues about its function, the availability of a localization predictor aids and expedites protein characterization studies. However, the robustness of a predictor depends strongly on the quality of the dataset and the extracted protein attributes; hence, it becomes imperative to improve the performance of the presently available method using the latest dataset and crucial input features. Results Here, we describe the improvement in prediction performance obtained for our ESLpred method using new input features for the Support Vector Machine (SVM). In addition, a recently available, highly non-redundant dataset covering three kingdom-specific protein sequence sets (1198 fungal, 2597 animal and 491 plant sequences) was included in the present study. First, using evolutionary information in the form of profile composition along with whole and N-terminal sequence composition as an input feature vector of 440 dimensions, overall accuracies of 72.7, 75.8 and 74.5% were achieved for the three sets, respectively, after five-fold cross-validation. Performance improved further when similarity-search-based results were coupled with whole and N-terminal sequence composition along with profile composition, yielding overall accuracies of 75.9, 80.8 and 76.6%, respectively; these are the best accuracies reported to date on the same datasets. Conclusion These results provide confidence about the reliability and accurate prediction of the SVM modules generated in the present study using sequence and profile compositions along with similarity search

  2. Slat Noise Predictions Using Higher-Order Finite-Difference Methods on Overset Grids

    Science.gov (United States)

    Housman, Jeffrey A.; Kiris, Cetin

    2016-01-01

    Computational aeroacoustic simulations using the structured overset grid approach and higher-order finite difference methods within the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for slat noise predictions. The simulations are part of a collaborative study comparing noise generation mechanisms between a conventional slat and a Krueger leading edge flap. Simulation results are compared with experimental data acquired during an aeroacoustic test in the NASA Langley Quiet Flow Facility. Details of the structured overset grid, numerical discretization, and turbulence model are provided.

  3. An alternative method to predict the S-shaped curve for logistic characteristics of phonon transport in silicon thin film

    International Nuclear Information System (INIS)

    Awad, M.M.

    2014-01-01

    The S-shaped curve was observed by Yilbas and Bin Mansoor (2013). In this study, an alternative method to predict the S-shaped curve for the logistic characteristics of phonon transport in silicon thin films is presented, based on the analytical prediction method introduced by Bejan and Lorente in 2011 and 2012. The Bejan and Lorente method is based on a two-mechanism flow of fast “invasion” by convection and slow “consolidation” by diffusion.

  4. Robust Navier-Stokes method for predicting unsteady flowfield and aerodynamic characteristics of helicopter rotor

    Directory of Open Access Journals (Sweden)

    Qijun ZHAO

    2018-02-01

    Full Text Available A robust unsteady rotor flowfield solver, the CLORNS code, is established to predict the complex unsteady aerodynamic characteristics of rotor flowfields. To handle the difficult problem of grid generation around rotors with complex aerodynamic shapes, a parameterized grid generation method is established in this CFD code, and moving-embedded grids are constructed by several proposed universal methods. The unsteady Reynolds-Averaged Navier-Stokes (RANS) equations with the Spalart-Allmaras turbulence model are selected as the governing equations to predict the unsteady flowfield of the helicopter rotor. The discretization of convective fluxes is accomplished by the second-order central difference scheme, the third-order MUSCL-Roe scheme, and the fifth-order WENO-Roe scheme. To simulate the unsteady aerodynamic characteristics of the rotor, a dual-time scheme with an implicit LU-SGS scheme is employed for the temporal discretization. To improve the computational efficiency of the hole-cell and donor-element searches in the moving-embedded grid technology, a “disturbance diffraction method” and a “minimum distance scheme of donor elements method” are established in this work. To improve computational efficiency further, a Message Passing Interface (MPI) parallel method based on grid subdivision, a local preconditioning method and the Full Approximation Storage (FAS) multi-grid method are combined in this code. Comparison of the numerical results of the CLORNS code with test data shows that the present code simulates the aerodynamic loads and aerodynamic noise characteristics of helicopter rotors accurately. Keywords: Aerodynamic characteristics, Helicopter rotor, Moving-embedded grid, Navier-Stokes equations, Upwind schemes

  5. Method and apparatus to predict the remaining service life of an operating system

    Science.gov (United States)

    Greitzer, Frank L.; Kangas, Lars J.; Terrones, Kristine M.; Maynard, Melody A.; Pawlowski, Ronald A.; Ferryman, Thomas A.; Skorpik, James R.; Wilson, Bary W.

    2008-11-25

    A method and computer-based apparatus for monitoring the degradation of, predicting the remaining service life of, and/or planning maintenance for, an operating system are disclosed. Diagnostic information on degradation of the operating system is obtained through measurement of one or more performance characteristics by one or more sensors onboard and/or proximate the operating system. Though not required, it is preferred that the sensor data are validated to improve the accuracy and reliability of the service life predictions. The condition or degree of degradation of the operating system is presented to a user by way of one or more calculated, numeric degradation figures of merit that are trended against one or more independent variables using one or more mathematical techniques. Furthermore, more than one trendline and uncertainty interval may be generated for a given degradation figure of merit/independent variable data set. The trendline(s) and uncertainty interval(s) are subsequently compared to one or more degradation figure of merit thresholds to predict the remaining service life of the operating system. The present invention enables multiple mathematical approaches in determining which trendline(s) to use to provide the best estimate of the remaining service life.
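
    A simplified sketch of the trending idea described above, with one degradation figure of merit, one linear trendline and a single threshold (the history, threshold and uncertainty treatment below are invented stand-ins, not the patented procedure):

```python
import numpy as np

# Hypothetical degradation figure-of-merit history (larger = more degraded).
hours = np.array([0, 500, 1000, 1500, 2000], dtype=float)
fom   = np.array([0.05, 0.09, 0.14, 0.20, 0.24])
threshold = 0.60                       # assumed end-of-life limit

# Trend the figure of merit against operating hours (a least-squares line;
# the patent allows several trending techniques to be compared).
slope, intercept = np.polyfit(hours, fom, 1)
hours_at_threshold = (threshold - intercept) / slope
remaining_life = hours_at_threshold - hours[-1]

# A crude uncertainty band from the residual scatter of the fit.
resid = fom - (slope * hours + intercept)
sigma = resid.std(ddof=2)
band = sigma / slope                   # hours of slack per sigma of FOM noise
print(f"Estimated remaining service life: {remaining_life:.0f} ± {band:.0f} h")
```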

  6. e-Bitter: Bitterant Prediction by the Consensus Voting From the Machine-Learning Methods.

    Science.gov (United States)

    Zheng, Suqing; Jiang, Mengying; Zhao, Chengwei; Zhu, Rui; Hu, Zhicheng; Xu, Yong; Lin, Fu

    2018-01-01

    In silico bitterant prediction has received considerable attention because experimental screening of bitterants is expensive and laborious. In this work, we collected a fully experimental dataset containing 707 bitterants and 592 non-bitterants, which is distinct from the fully or partially hypothetical non-bitterant datasets used in previous works. Based on this experimental dataset, we harnessed consensus votes from multiple machine-learning methods (e.g., deep learning) combined with molecular fingerprints to build bitter/bitterless classification models with five-fold cross-validation, which were further inspected by the Y-randomization test and applicability domain analysis. One of the best consensus models affords accuracy, precision, specificity, sensitivity, F1-score, and Matthews correlation coefficient (MCC) of 0.929, 0.918, 0.898, 0.954, 0.936, and 0.856, respectively, on our test set. For automatic bitterant prediction, a graphical program "e-Bitter" was developed so that users can obtain predictions with a simple mouse click. To the best of our knowledge, this is the first time a consensus model has been adopted for bitterant prediction, and e-Bitter is the first free stand-alone software of this kind for experimental food scientists.

  7. e-Bitter: Bitterant Prediction by the Consensus Voting From the Machine-Learning Methods

    Directory of Open Access Journals (Sweden)

    Suqing Zheng

    2018-03-01

    Full Text Available In silico bitterant prediction has received considerable attention because experimental screening of bitterants is expensive and laborious. In this work, we collected a fully experimental dataset containing 707 bitterants and 592 non-bitterants, which is distinct from the fully or partially hypothetical non-bitterant datasets used in previous works. Based on this experimental dataset, we harnessed consensus votes from multiple machine-learning methods (e.g., deep learning) combined with molecular fingerprints to build bitter/bitterless classification models with five-fold cross-validation, which were further inspected by the Y-randomization test and applicability domain analysis. One of the best consensus models affords accuracy, precision, specificity, sensitivity, F1-score, and Matthews correlation coefficient (MCC) of 0.929, 0.918, 0.898, 0.954, 0.936, and 0.856, respectively, on our test set. For automatic bitterant prediction, a graphical program “e-Bitter” was developed so that users can obtain predictions with a simple mouse click. To the best of our knowledge, this is the first time a consensus model has been adopted for bitterant prediction, and e-Bitter is the first free stand-alone software of this kind for experimental food scientists.

  8. EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.

    Science.gov (United States)

    Lian, Yao; Ge, Meng; Pan, Xian-Ming

    2014-12-19

    B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of the model. To alleviate the problem caused by noise in the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, precision of 64.1% and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server, EPMLR, has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/.
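
    A toy sketch of scoring sequence windows with multiple linear regression; the feature encoding here is plain amino-acid composition rather than EPMLR's actual feature set, and the training windows and threshold are made up:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(peptide: str) -> np.ndarray:
    """20-dim amino-acid composition of a fixed-length sequence window."""
    v = np.zeros(len(AMINO_ACIDS))
    for aa in peptide:
        v[AMINO_ACIDS.index(aa)] += 1.0
    return v / len(peptide)

def train_mlr(windows, labels):
    """Least-squares fit of a linear scoring model y ~ X.w + b."""
    X = np.array([composition(w) for w in windows])
    X = np.hstack([X, np.ones((len(X), 1))])          # bias term
    coef, *_ = np.linalg.lstsq(X, np.array(labels, float), rcond=None)
    return coef

def score(coef, window):
    x = np.append(composition(window), 1.0)
    return float(x @ coef)

# Toy data: 1 = epitope window, 0 = non-epitope window (illustrative only).
train_windows = ["ACDEFGHIKLMNPQ", "KLMNPQRSTVWYAC", "GGGGSSSSAAAALL"]
train_labels  = [1, 1, 0]
w = train_mlr(train_windows, train_labels)
print(score(w, "ACDEFGHIKLMNPR") > 0.5)   # classify by thresholding the score
```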

  9. Prediction of Backbreak in Open-Pit Blasting Operations Using the Machine Learning Method

    Science.gov (United States)

    Khandelwal, Manoj; Monjezi, M.

    2013-03-01

    Backbreak is an undesirable phenomenon in blasting operations. It can cause instability of mine walls, falling down of machinery, improper fragmentation, reduced efficiency of drilling, etc. The existence of various effective parameters and their unknown relationships are the main reasons for inaccuracy of the empirical models. Presently, the application of new approaches such as artificial intelligence is highly recommended. In this paper, an attempt has been made to predict backbreak in blasting operations of Soungun iron mine, Iran, incorporating rock properties and blast design parameters using the support vector machine (SVM) method. To investigate the suitability of this approach, the predictions by SVM have been compared with multivariate regression analysis (MVRA). The coefficient of determination (CoD) and the mean absolute error (MAE) were taken as performance measures. It was found that the CoD between measured and predicted backbreak was 0.987 and 0.89 by SVM and MVRA, respectively, whereas the MAE was 0.29 and 1.07 by SVM and MVRA, respectively.
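
    A hedged sketch of the SVM-versus-multivariate-regression comparison on synthetic stand-in data (the features and backbreak response below are simulated, not the mine data set), using scikit-learn and the same performance measures (CoD and MAE):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for blast-design / rock-property inputs
# (e.g. burden, spacing, stemming, powder factor, density) and backbreak.
X = rng.uniform(0.5, 5.0, size=(200, 5))
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 3]
     + 0.2 * X[:, 0] * X[:, 1] + rng.normal(0, 0.2, 200))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_tr, y_tr)
mvra = LinearRegression().fit(X_tr, y_tr)

for name, model in [("SVM", svm), ("MVRA", mvra)]:
    pred = model.predict(X_te)
    print(name, "CoD =", round(r2_score(y_te, pred), 3),
          "MAE =", round(mean_absolute_error(y_te, pred), 3))
```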

  10. e-Bitter: Bitterant Prediction by the Consensus Voting From the Machine-learning Methods

    Science.gov (United States)

    Zheng, Suqing; Jiang, Mengying; Zhao, Chengwei; Zhu, Rui; Hu, Zhicheng; Xu, Yong; Lin, Fu

    2018-03-01

    In silico bitterant prediction has received considerable attention because experimental screening of bitterants is expensive and laborious. In this work, we collected a fully experimental dataset containing 707 bitterants and 592 non-bitterants, which is distinct from the fully or partially hypothetical non-bitterant datasets used in previous works. Based on this experimental dataset, we harnessed consensus votes from multiple machine-learning methods (e.g., deep learning) combined with molecular fingerprints to build bitter/bitterless classification models with five-fold cross-validation, which were further inspected by the Y-randomization test and applicability domain analysis. One of the best consensus models affords accuracy, precision, specificity, sensitivity, F1-score, and Matthews correlation coefficient (MCC) of 0.929, 0.918, 0.898, 0.954, 0.936, and 0.856, respectively, on our test set. For automatic bitterant prediction, a graphical program “e-Bitter” was developed so that users can obtain predictions with a simple mouse click. To the best of our knowledge, this is the first time a consensus model has been adopted for bitterant prediction, and e-Bitter is the first free stand-alone software of this kind for experimental food scientists.

  11. Prediction of springback in V-die air bending process by using finite element method

    Directory of Open Access Journals (Sweden)

    Trzepiecinski Tomasz

    2017-01-01

    Full Text Available The springback phenomenon affects the dimensional and geometrical accuracy of bent parts, and its prediction is a key problem in sheet metal forming. The aim of this paper is the numerical analysis of the possibility of predicting the springback of anisotropic steel sheets. The experiments are conducted on 40 x 100 mm steel sheets. The mechanical properties of the sheet metals were determined through uniaxial tensile tests on samples cut along three directions with respect to the rolling direction. The numerical model of air V-bending is built in the finite element method (FEM) based ABAQUS/Standard 2016.HF2 (Dassault Systemes Simulia Corp., USA) program, and the FEM results were verified by experimental investigations. The simulation model takes into consideration material anisotropy and the strain hardening phenomenon. The results of the FEM simulations confirmed the ability of the numerical model to predict the amount of springback. It was also found that the directional microstructure of the sheet metal resulting from the rolling process affects the elastic-plastic deformation of the sheets across the sample width.

  12. Esophageal cancer prediction based on qualitative features using adaptive fuzzy reasoning method

    Directory of Open Access Journals (Sweden)

    Raed I. Hamed

    2015-04-01

    Full Text Available Esophageal cancer is one of the most common cancers worldwide and also one of the most common causes of cancer death. In this paper, we present an adaptive fuzzy reasoning algorithm for rule-based systems using fuzzy Petri nets (FPNs), where the fuzzy production rules are represented by FPN. We developed an adaptive fuzzy Petri net (AFPN) reasoning algorithm as a prognostic system to predict the outcome for esophageal cancer based on the serum concentrations of C-reactive protein and albumin as a set of input variables. The system can perform fuzzy reasoning automatically to evaluate the degree of truth of the proposition representing the risk degree value, with a weight value to be optimally tuned based on the observed data. In addition, the implementation process for esophageal cancer prediction is fuzzily deduced by the AFPN algorithm. Performance of the composite model is evaluated through a set of experiments. Simulations and experimental results demonstrate the effectiveness and performance of the proposed algorithms. A comparison of the predictive performance of AFPN models with other methods, and the analysis of the corresponding curves, showed consistent results with an intuitive behavior of the AFPN models.

  13. CaFE: a tool for binding affinity prediction using end-point free energy methods.

    Science.gov (United States)

    Liu, Hui; Hou, Tingjun

    2016-07-15

    Accurate prediction of binding free energy is of particular importance to computational biology and structure-based drug design. Among the methods for binding affinity prediction, end-point approaches such as MM/PBSA and LIE have been widely used because they achieve a good balance between prediction accuracy and computational cost. Here we present an easy-to-use pipeline tool named Calculation of Free Energy (CaFE) to conduct MM/PBSA and LIE calculations. Powered by the VMD and NAMD programs, CaFE is able to handle numerous static coordinate and molecular dynamics trajectory file formats generated by different molecular simulation packages and supports various force field parameters. CaFE source code and documentation are freely available under the GNU General Public License via GitHub at https://github.com/huiliucode/cafe_plugin. It is a VMD plugin written in Tcl and its usage is platform-independent. Contact: tingjunhou@zju.edu.cn. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
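
    For reference, the end-point estimate that MM/PBSA-style tools compute has the generic form below (conventional notation for the single-trajectory decomposition; not necessarily CaFE's exact energy terms):

```latex
% End-point (MM/PBSA-style) binding free energy from ensemble averages:
\Delta G_{\text{bind}} \approx
  \langle G_{\text{complex}} \rangle
  - \langle G_{\text{receptor}} \rangle
  - \langle G_{\text{ligand}} \rangle,
\qquad
G = E_{\text{MM}} + G_{\text{polar}}^{\text{PB}} + G_{\text{nonpolar}} - T S .
```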

  14. Assessment of NASA and RAE viscous-inviscid interaction methods for predicting transonic flow over nozzle afterbodies

    Science.gov (United States)

    Putnam, L. E.; Hodges, J.

    1983-01-01

    The Langley Research Center of the National Aeronautics and Space Administration and the Royal Aircraft Establishment have undertaken a cooperative program to conduct an assessment of their patched viscous-inviscid interaction methods for predicting the transonic flow over nozzle afterbodies. The assessment was made by comparing the predictions of the two methods with experimental pressure distributions and boattail pressure drag for several convergent circular-arc nozzle configurations. Comparisons of the predictions of the two methods with the experimental data showed that both methods provided good predictions of the flow characteristics of nozzles with attached boundary layer flow. The RAE method also provided reasonable predictions of the pressure distributions and drag for the nozzles investigated that had separated boundary layers. The NASA method provided good predictions of the pressure distribution on separated flow nozzles that had relatively thin boundary layers. However, the NASA method was in poor agreement with experiment for separated nozzles with thick boundary layers due primarily to deficiencies in the method used to predict the separation location.

  15. Prediction of methylmercury accumulation in rice grains by chemical extraction methods

    International Nuclear Information System (INIS)

    Zhu, Dai-Wen; Zhong, Huan; Zeng, Qi-Long; Yin, Ying

    2015-01-01

    To explore the possibility of using chemical extraction methods to predict the phytoavailability/bioaccumulation of soil-bound MeHg, MeHg extractions by three widely used extractants (CaCl2, DTPA, and (NH4)2S2O3) were compared with MeHg accumulation in rice grains. Despite variations in the characteristics of the different soils, MeHg extracted by (NH4)2S2O3 (highly affinitive to MeHg) correlated well with grain MeHg levels. Thus (NH4)2S2O3 extraction, solubilizing not only weakly-bound but also strongly-bound MeHg, may provide a measure of the ‘phytoavailable MeHg pool’ for rice plants. A better prediction of grain MeHg levels was obtained when the growing conditions of the rice plants were also considered. However, MeHg extracted by CaCl2 or DTPA, which possibly quantifies the ‘exchangeable MeHg pool’ or ‘weakly-complexed MeHg pool’ in soils, may not indicate phytoavailable MeHg or predict grain MeHg levels. Our results demonstrate the possibility of predicting MeHg phytoavailability/bioaccumulation by (NH4)2S2O3 extraction, which could be useful in screening soils for rice cultivation in contaminated areas. - Highlights: • MeHg extraction by (NH4)2S2O3 correlates well with its accumulation in rice grains. • MeHg extraction by (NH4)2S2O3 provides a measure of phytoavailable MeHg in soils. • Some strongly-bound MeHg can be desorbed from soils and become available to rice plants. • MeHg extraction by CaCl2 or DTPA could not predict grain MeHg levels. - Methylmercury extraction from soils by (NH4)2S2O3 could possibly be used to predict methylmercury phytoavailability and its bioaccumulation in rice grains.

  16. Decision support system in Predicting the Best Teacher with Multi Attribute Decision Making Weighted Product (MADMWP) Method

    Directory of Open Access Journals (Sweden)

    Solikhun Solikhun

    2017-06-01

    Full Text Available Prediction of the best teacher in Indonesia aims to spur growth and improve the quality of education. In this paper, prediction of the best teacher is implemented based on predefined criteria, and a decision support system is needed to help the prediction process. This paper employs the Multi Attribute Decision Making Weighted Product (MADMWP) method. The method is tested on teachers of the Al-Barokah Islamic boarding junior high school, Simalungun, North Sumatera, Indonesia. This system can be used to help solve the problem of predicting the best teacher.
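
    The Weighted Product scoring itself is compact: each alternative gets a score S_i = Π_j x_ij^{w_j}, with cost criteria raised to negative powers, and the normalized scores V_i rank the alternatives. A sketch with invented criteria, weights and values (not those used for the Al-Barokah teachers):

```python
import numpy as np

def weighted_product(scores, weights, benefit):
    """Rank alternatives with the MADM Weighted Product method.

    scores  : (n_alternatives, n_criteria) decision matrix
    weights : criterion weights (normalised to sum to 1)
    benefit : True for benefit criteria, False for cost criteria
    """
    w = np.asarray(weights, float)
    w = w / w.sum()
    signed_w = np.where(benefit, w, -w)      # cost criteria get negative powers
    s = np.prod(np.asarray(scores, float) ** signed_w, axis=1)
    return s / s.sum()                       # relative preference V_i

# Illustrative teacher-evaluation matrix: 3 candidates x 4 criteria.
scores  = [[80, 75, 3, 90],
           [70, 85, 2, 85],
           [90, 70, 4, 80]]
weights = [0.35, 0.25, 0.15, 0.25]
benefit = [True, True, False, True]          # third criterion treated as a cost
print(weighted_product(scores, weights, benefit))
```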

  17. Toxicity of ionic liquids: Database and prediction via quantitative structure–activity relationship method

    International Nuclear Information System (INIS)

    Zhao, Yongsheng; Zhao, Jihong; Huang, Ying; Zhou, Qing; Zhang, Xiangping; Zhang, Suojiang

    2014-01-01

    Highlights: • A comprehensive database on the toxicity of ionic liquids (ILs) was established. • The relationship between IL structure and toxicity has been analyzed qualitatively. • Two new QSAR models were developed for predicting the toxicity of ILs to IPC-81. • The accuracy of the proposed nonlinear SVM model is much higher than that of the linear MLR model. • The established models can be explored in designing novel green agents. - Abstract: A comprehensive database on the toxicity of ionic liquids (ILs) is established, comprising over 4000 data points. Based on the database, the relationship between an IL's structure and its toxicity has been analyzed qualitatively. Furthermore, quantitative structure–activity relationship (QSAR) models are developed to predict the toxicities (EC50 values) of various ILs toward the leukemia rat cell line IPC-81. Four parameters selected by the heuristic method (HM) are used to perform multiple linear regression (MLR) and support vector machine (SVM) studies. The squared correlation coefficients (R²) and root mean square errors (RMSE) of the training sets for the two QSAR models are 0.918 and 0.959, and 0.258 and 0.179, respectively. For the test sets, the prediction R² and RMSE are 0.892 and 0.329 for the MLR model, and 0.958 and 0.234 for the SVM model, respectively. The nonlinear model developed with the SVM algorithm clearly outperforms the MLR model, which indicates that the SVM model is more reliable for predicting the toxicity of ILs. This study also shows that increasing the relative number of O atoms in a molecule decreases the toxicity of the IL.

  18. Eyeball Position in Facial Approximation: Accuracy of Methods for Predicting Globe Positioning in Lateral View.

    Science.gov (United States)

    Zednikova Mala, Pavla; Veleminska, Jana

    2018-01-01

    This study measured the accuracy of traditional and newly proposed, validated methods for globe positioning in lateral view. Eighty lateral head cephalograms of adult subjects from Central Europe were taken, and the actual and predicted dimensions were compared. The anteroposterior eyeball position was estimated most accurately by the method based on the proportion of the orbital height (SEE = 1.9 mm), followed by the "tangent to the iris" method with SEE = 2.4 mm. The traditional "tangent to the cornea" method underestimated the eyeball projection by SEE = 5.8 mm. Concerning the superoinferior eyeball position, the results showed a deviation from a central to a slightly more superior position by 0.3 mm on average, and the traditional method of central positioning of the globe could not be rejected as inaccurate (SEE = 0.3 mm). Based on regression analyses or proportionality of the orbital height, the SEE = 2.1 mm. © 2017 American Academy of Forensic Sciences.

  19. Generalized method for calculation and prediction of vapour-liquid equilibria at high pressures

    Energy Technology Data Exchange (ETDEWEB)

    Drahos, J; Wichterle, I; Hala, E

    1978-02-01

    Following the approaches of K.C. Chao and J.D. Seader (see Gas Abstr. 18,24 (1962) Jan.) and B.I. Lee, J.H. Erbar, and W.C. Edmister (see Gas Abst. 29, 73-0331), the Czechoslovak Academy of Sciences developed a generalized method for prediction of vapor-liquid equilibria in hydrocarbon mixtures containing some nonhydrocarbon gases at high pressures. The method proposed is based on three equations: (1) a generalized equation of state for vapor-phase calculations; (2) a generalized expression for the pure-liquid fugacity coefficient; and (3) an activity coefficient expression based on a surface modification of the regular solution model. The equations used contain only one partially generalized binary parameter, which was evaluated from experimental K-value data. Researchers tested the proposed method by computing K-values and pressures in binary and multicomponent systems consisting of 13 hydrocarbons and 3 nonhydrocarbon gases. The results show that the method is applicable over a wide range of conditions with a degree of accuracy comparable with that of more complicated methods.
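
    The three-equation structure described above corresponds to the usual Chao–Seader-type K-value expression (conventional symbols; the original report's own notation and correlations may differ):

```latex
% Generic K-value form behind Chao--Seader-type methods: vapour-phase fugacity
% coefficient from the equation of state, pure-liquid fugacity coefficient from
% a generalized correlation, activity coefficient from the (surface-modified)
% regular-solution model.
K_i \;=\; \frac{y_i}{x_i} \;=\; \frac{\gamma_i\,\nu_i^{\,L}}{\varphi_i^{\,V}},
\qquad
\varphi_i^{V} \text{ from the equation of state},\quad
\nu_i^{L} \text{ from the pure-liquid correlation},\quad
\gamma_i \text{ from the solution model}.
```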

  20. Ground-State Gas-Phase Structures of Inorganic Molecules Predicted by Density Functional Theory Methods

    KAUST Repository

    Minenkov, Yury

    2017-11-29

    We tested a battery of density functional theory (DFT) methods ranging from generalized gradient approximation (GGA) via meta-GGA to hybrid meta-GGA schemes as well as Møller–Plesset perturbation theory of the second order and a single and double excitation coupled-cluster (CCSD) theory for their ability to reproduce accurate gas-phase structures of di- and triatomic molecules derived from microwave spectroscopy. We obtained the most accurate molecular structures using the hybrid and hybrid meta-GGA approximations with B3PW91, APF, TPSSh, mPW1PW91, PBE0, mPW1PBE, B972, and B98 functionals, resulting in lowest errors. We recommend using these methods to predict accurate three-dimensional structures of inorganic molecules when intramolecular dispersion interactions play an insignificant role. The structures that the CCSD method predicts are of similar quality although at considerably larger computational cost. The structures that GGA and meta-GGA schemes predict are less accurate with the largest absolute errors detected with BLYP and M11-L, suggesting that these methods should not be used if accurate three-dimensional molecular structures are required. Because of numerical problems related to the integration of the exchange–correlation part of the functional and large scattering of errors, most of the Minnesota models tested, particularly MN12-L, M11, M06-L, SOGGA11, and VSXC, are also not recommended for geometry optimization. When maintaining a low computational budget is essential, the nonseparable gradient functional N12 might work within an acceptable range of error. As expected, the DFT-D3 dispersion correction had a negligible effect on the internuclear distances when combined with the functionals tested on nonweakly bonded di- and triatomic inorganic molecules. By contrast, the dispersion correction for the APF-D functional has been found to shorten the bonds significantly, up to 0.064 Å (AgI), in Ag halides, BaO, BaS, BaF, BaCl, Cu halides, and Li and

  1. Real-data comparison of data mining methods in prediction of coronary artery disease in Iran

    Directory of Open Access Journals (Sweden)

    Azam Dekamin

    2017-07-01

    Full Text Available Introduction: Cardiovascular diseases are currently broadly prevalent and constitute one of the major causes of mortality in different societies. Angiography is one of the most accurate methods to diagnose heart disease, but it incurs high expenses and comes with side effects. Data mining is intended to enable timely prognosis of diseases with the least expense possible, making use of patients' information. The present study aims to answer the question of whether it is possible to predict coronary artery disease with higher efficiency and fewer errors, and to identify the factors affecting the disease, using data mining techniques. Method: In this study, the data under investigation were collected from 303 persons referred to the heart unit of Shahid Rajaie hospital (an Iranian hospital) from 2011 to 2013, comprising 54 features; attempts were made to take advantage of a larger number of characteristics helpful for the diagnosis of the disease. In addition, Information Gain, Gini, and SVM methods were applied to select influential features, and variables with higher weights were chosen for modeling purposes. In the modeling phase, a combination of classification algorithms and ensemble methods was applied to develop a prediction with fewer errors. RapidMiner software was used to conduct this study. Results: The findings of this research indicated that the suggested model, when weighted by the SVM index, had the highest efficiency, i.e. 95.83%. This model, moreover, was able to accurately predict all patients with coronary artery disease in Iran. According to the proposed model and the obtained accuracies, weighting with SVM was found to be the most effective filtering method, and age as well as typical and atypical chest pain were identified as the most effective features of coronary artery disease (Graph 3). Conclusion: This study can contribute to the diagnosis of influential factors which lead to cardiovascular disease in Iran.

  2. Prediction of Physicochemical Properties of Organic Molecules Using Semi-Empirical Methods

    International Nuclear Information System (INIS)

    Kim, Chan Kyung; Kim, Chang Kon; Kim, Miri; Lee, Hai Whang; Cho, Soo Gyeong

    2013-01-01

    Prediction of the physicochemical properties of organic molecules is an important process in chemistry and chemical engineering. The MSEP approach developed in our lab calculates the molecular surface electrostatic potential (ESP) on the van der Waals (vdW) surfaces of molecules. This approach includes geometry optimization and frequency calculation using the hybrid density functional B3LYP with the 6-31G(d) basis set to find minima on the potential energy surface, and is known to give satisfactory QSPR results for various properties of organic molecules. However, this MSEP method is not suitable for screening large databases because geometry optimization and frequency calculation require considerable computing time. To develop a fast yet reliable approach, we have re-examined our previous work on organic molecules using two semi-empirical methods, AM1 and PM3. This new approach can be an efficient protocol for designing new molecules with improved properties.

  3. Development of Test Method for Simple Shear and Prediction of Hardening Behavior Considering the Bauschinger Effect

    International Nuclear Information System (INIS)

    Kim, Dongwook; Bang, Sungsik; Kim, Minsoo; Lee, Hyungyil; Kim, Naksoo

    2013-01-01

    In this study, we establish a process to predict hardening behavior considering the Bauschinger effect for zircaloy-4 sheets. When a metal is compressed after tension during forming, its yield strength decreases; for this reason, the Bauschinger effect should be considered in FE simulations of spring-back. We suggest a suitable specimen size and a method for determining the optimum tightening torque for simple shear tests. Shear stress-strain curves are obtained for five materials, and a method is developed to convert the shear load-displacement curve to the effective stress-strain curve with FEA. We simulated the simple shear forward/reverse test using a combined isotropic/kinematic hardening model and investigated the change of the load-displacement curve when varying the hardening coefficients. We determined the hardening coefficients so that they follow the hardening behavior of zircaloy-4 observed in experiments

  4. Development of Test Method for Simple Shear and Prediction of Hardening Behavior Considering the Bauschinger Effect

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dongwook; Bang, Sungsik; Kim, Minsoo; Lee, Hyungyil; Kim, Naksoo [Sogang Univ., Seoul (Korea, Republic of)

    2013-10-15

    In this study, we establish a process to predict hardening behavior considering the Bauschinger effect for zircaloy-4 sheets. When a metal is compressed after tension during forming, its yield strength decreases; for this reason, the Bauschinger effect should be considered in FE simulations of spring-back. We suggest a suitable specimen size and a method for determining the optimum tightening torque for simple shear tests. Shear stress-strain curves are obtained for five materials, and a method is developed to convert the shear load-displacement curve to the effective stress-strain curve with FEA. We simulated the simple shear forward/reverse test using a combined isotropic/kinematic hardening model and investigated the change of the load-displacement curve when varying the hardening coefficients. We determined the hardening coefficients so that they follow the hardening behavior of zircaloy-4 observed in experiments.

  5. Multi-model predictive control method for nuclear steam generator water level

    International Nuclear Information System (INIS)

    Hu Ke; Yuan Jingqi

    2008-01-01

    The dynamics of a nuclear steam generator (SG) differ markedly across power levels and change over time, so improving the SG water level control system is an intractable as well as challenging task. In this paper, a robust model predictive control (RMPC) method is developed for the level control problem. Based on a multi-model framework, a combination of a local nominal model with a polytopic uncertain linear parameter varying (LPV) model is built to approximate the system's non-linear behavior. The optimization problem solved here is based on a receding horizon scheme involving the linear matrix inequality (LMI) technique. Closed-loop stability and constraint satisfaction over the entire operating range are guaranteed by the feasibility of the optimization problem. Finally, simulation results show the effectiveness and good performance of the proposed method.

  6. Application of a simple parameter estimation method to predict effluent transport in the Savannah River

    International Nuclear Information System (INIS)

    Hensel, S.J.; Hayes, D.W.

    1993-01-01

    A simple parameter estimation method has been developed to determine the dispersion and velocity parameters associated with stream/river transport. The unsteady one dimensional Burgers' equation was chosen as the model equation, and the method has been applied to recent Savannah River dye tracer studies. The computed Savannah River transport coefficients compare favorably with documented values, and the time/concentration curves calculated from these coefficients compare well with the actual tracer data. The coefficients were used as a predictive capability and applied to Savannah River tritium concentration data obtained during the December 1991 accidental tritium discharge from the Savannah River Site. The peak tritium concentration at the intersection of Highway 301 and the Savannah River was underpredicted by only 5% using the coefficients computed from the dye data
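
    For reference, the model equation has the standard form below; linearising the advective term about a constant stream velocity U gives the advection–dispersion form whose velocity and dispersion coefficient are the two parameters estimated from the tracer data (this reduction is the usual reading of such models, not a detail quoted from the report):

```latex
% Unsteady one-dimensional Burgers' equation and its linearised
% (advection--dispersion) form for a transported concentration C:
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  = D\,\frac{\partial^{2} u}{\partial x^{2}},
\qquad
\frac{\partial C}{\partial t} + U\,\frac{\partial C}{\partial x}
  = D\,\frac{\partial^{2} C}{\partial x^{2}} .
```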

  7. Combining Pathway Identification and Breast Cancer Survival Prediction via Screening-Network Methods

    Directory of Open Access Journals (Sweden)

    Antonella Iuliano

    2018-06-01

    Full Text Available Breast cancer is one of the most common invasive tumors causing high mortality among women. It is characterized by high heterogeneity regarding its biological and clinical characteristics. Several high-throughput assays have been used to collect genome-wide information for many patients in large collaborative studies. This knowledge has improved our understanding of its biology and led to new methods of diagnosing and treating the disease. In particular, systems biology has become a valid approach to obtain better insights into breast cancer biological mechanisms. A crucial component of current research lies in identifying novel biomarkers that can be predictive for breast cancer patient prognosis on the basis of the molecular signature of the tumor sample. However, the high dimension and low sample size of the data greatly increase the difficulty of cancer survival analysis, demanding the development of ad hoc statistical methods. In this work, we propose novel screening-network methods that predict patient survival outcome by screening key survival-related genes, and we assess the capability of the proposed approaches using the METABRIC dataset. In particular, we first identify a subset of genes by using variable screening techniques on gene expression data. Then, we perform Cox regression analysis by incorporating network information associated with the selected subset of genes. The novelty of this work consists in the improved prediction of survival responses due to the different types of screenings (i.e., biomedical-driven, data-driven and a combination of the two) before building the network-penalized model. Indeed, the combination of the two screening approaches allows us to use the available biological knowledge on breast cancer and complement it with additional information emerging from the data used for the analysis. Moreover, we also illustrate how to extend the proposed approaches to integrate an additional omic layer, such as copy number
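
    A compressed sketch of the screen-then-model idea on synthetic data; a simple correlation screen stands in for the paper's biomedical- and data-driven screenings, and a ridge-penalised Cox model stands in for the network penalty (the lifelines package is assumed to be available, and all data below are simulated):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)

# Synthetic expression matrix: 200 patients x 500 genes, plus survival data.
n, p = 200, 500
expr = pd.DataFrame(rng.normal(size=(n, p)),
                    columns=[f"g{i}" for i in range(p)])
time = rng.exponential(60, n) * np.exp(-0.4 * expr["g0"] - 0.3 * expr["g1"])
event = rng.integers(0, 2, n)

# Step 1 (screening): keep the genes most correlated with log survival time
# among observed events -- a crude data-driven stand-in for the screenings.
corr = expr[event == 1].corrwith(np.log(time[event == 1])).abs()
selected = corr.nlargest(10).index.tolist()

# Step 2 (penalised survival model): ridge-penalised Cox regression on the
# screened genes, as a simplified surrogate for the network-penalised model.
df = expr[selected].copy()
df["time"], df["event"] = time, event
cph = CoxPHFitter(penalizer=0.5)
cph.fit(df, duration_col="time", event_col="event")
print(cph.concordance_index_)
```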

  8. Long-term response to recombinant human growth hormone treatment: a new predictive mathematical method.

    Science.gov (United States)

    Migliaretti, G; Ditaranto, S; Guiot, C; Vannelli, S; Matarazzo, P; Cappello, N; Stura, I; Cavallo, F

    2018-07-01

    Recombinant GH has been offered to GH-deficient (GHD) subjects for more than 30 years in order to improve height and growth velocity in children and to enhance metabolic effects in adults. The aim of our work is to describe the long-term effect of rhGH treatment in GHD pediatric patients and to propose a growth prediction model. A database that is homogeneous with respect to diagnosis and treatment modalities was assembled from GHD patients referred to the Regina Margherita Hospital in Turin (Italy). In this study, 232 GHD patients were selected (204 idiopathic GHD and 28 organic GHD). Each measure is reported as a mean with its standard deviation (SD) and 95% confidence interval (95% CI). To estimate the final height of each patient on the basis of a few measurements, a mathematical growth prediction model [based on a Gompertzian function and a mixed method combining radial basis functions (RBFs) with particle swarm optimization (PSO)] was developed. The results seem to highlight the benefits of an early start of treatment, further confirming what is suggested by the literature. Generally, the RBF-PSO method shows good reliability in predicting the final height: the RMSE is always lower than 4, i.e., on average the forecast differs from the real value by at most 4 cm. In conclusion, the large and accurate database of Italian GHD patients allowed us to assess rhGH treatment efficacy and compare the results with those obtained in other countries. Moreover, we proposed a new mathematical model forecasting the expected final height after therapy, which was validated on our cohort.
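
    A minimal sketch of fitting a Gompertzian curve to a handful of visits and reading off the asymptote as the predicted final height (the measurements and starting values are invented; the RBF–PSO component of the published model is not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, c):
    """Gompertzian growth: height approaches the asymptote A as age t increases."""
    return A * np.exp(-b * np.exp(-c * t))

# Illustrative height-for-age measurements (cm) from a single patient.
age_years = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
height_cm = np.array([98.0, 110.0, 122.0, 133.0, 144.0])

# Fit the three Gompertz parameters from the few available visits.
p0 = (180.0, 1.5, 0.1)                      # rough initial guess
params, _ = curve_fit(gompertz, age_years, height_cm, p0=p0, maxfev=10000)

predicted_final_height = params[0]          # asymptote A ~ adult height
print(f"Predicted final height: {predicted_final_height:.1f} cm")
```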

  9. A review on fatigue life prediction methods for anti-vibration rubber materials

    Directory of Open Access Journals (Sweden)

    Xiaoli WANG

    2016-08-01

    Full Text Available Anti-vibration rubber, because of its superior elasticity, plasticity, waterproofing and damping characteristics, is widely used in the automotive industry, national defense, construction and other fields. The theory and technology of fatigue life prediction are of great significance for improving the durability design and manufacturing of anti-vibration rubber products. Starting from the in-service characteristics of anti-vibration rubber products, the technical difficulties in analyzing the fatigue properties of anti-vibration rubber materials are pointed out. The research progress on the fatigue properties of rubber materials is reviewed from three angles: fatigue crack initiation, fatigue crack propagation and fatigue damage accumulation. It is proposed that some nonlinear characteristics of rubber under fatigue loading, including the Mullins effect, permanent deformation and cyclic stress softening, should be considered in further studies of rubber materials. It is also indicated that the fatigue damage accumulation method based on continuum damage mechanics may be more appropriate for solving fatigue damage and life prediction problems for complex rubber materials and structures under fatigue loading.

  10. Precision Radiology: Predicting longevity using feature engineering and deep learning methods in a radiomics framework.

    Science.gov (United States)

    Oakden-Rayner, Luke; Carneiro, Gustavo; Bessen, Taryn; Nascimento, Jacinto C; Bradley, Andrew P; Palmer, Lyle J

    2017-05-10

    Precision medicine approaches rely on obtaining precise knowledge of the true state of health of an individual patient, which results from a combination of their genetic risks and environmental exposures. This approach is currently limited by the lack of effective and efficient non-invasive medical tests to define the full range of phenotypic variation associated with individual health. Such knowledge is critical for improved early intervention, for better treatment decisions, and for ameliorating the steadily worsening epidemic of chronic disease. We present proof-of-concept experiments to demonstrate how routinely acquired cross-sectional CT imaging may be used to predict patient longevity as a proxy for overall individual health and disease status using computer image analysis techniques. Despite the limitations of a modest dataset and the use of off-the-shelf machine learning methods, our results are comparable to previous 'manual' clinical methods for longevity prediction. This work demonstrates that radiomics techniques can be used to extract biomarkers relevant to one of the most widely used outcomes in epidemiological and clinical research - mortality, and that deep learning with convolutional neural networks can