WorldWideScience

Sample records for area prediction method

  1. Method for predicting future developments of traffic noise in urban areas in Europe

    NARCIS (Netherlands)

    Salomons, E.; Hout, D. van den; Janssen, S.; Kugler, U.; MacA, V.

    2010-01-01

    Traffic noise in urban areas in Europe is a major environmental stressor. In this study we present a method for predicting how environmental noise can be expected to develop in the future. In the project HEIMTSA scenarios were developed for all relevant environmental stressors to health, for all

  2. TEMPERATURE PREDICTION IN 3013 CONTAINERS IN K AREA MATERIAL STORAGE (KAMS) FACILITY USING REGRESSION METHODS

    International Nuclear Information System (INIS)

    Gupta, N

    2008-01-01

    3013 containers are designed in accordance with DOE-STD-3013-2004. These containers are qualified to store plutonium (Pu) bearing materials such as PuO2 for 50 years. DOT shipping packages such as the 9975 are used to store the 3013 containers in the K-Area Material Storage (KAMS) facility at Savannah River Site (SRS). DOE-STD-3013-2004 requires that a comprehensive surveillance program be set up to ensure that the 3013 container design parameters are not violated during long-term storage. To ensure structural integrity of the 3013 containers, thermal analyses using finite element models were performed to predict the contents and component temperatures for different but well defined parameters such as storage ambient temperature, PuO2 density, fill heights, weights, and thermal loading. Interpolation is normally used to calculate temperatures when the actual parameter values differ from the analyzed values. A statistical analysis technique using regression methods is proposed to develop simple polynomial relations that predict temperatures for the actual parameter values found in the containers. The analysis shows that regression analysis is a powerful tool for developing simple relations to assess component temperatures.
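    The regression approach described above can be sketched in a few lines. This is an illustrative example only: the temperatures below are made up, not taken from the KAMS analyses, and a single predictor (storage ambient temperature) stands in for the full parameter set.

```python
import numpy as np

# Hypothetical finite-element results: component temperature (deg C) computed
# at a handful of storage ambient temperatures (deg C). Values are invented.
ambient = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
component = np.array([31.0, 36.4, 41.9, 47.5, 53.2])

# Fit a simple second-order polynomial relation by least squares:
#   T_component = c2*T_ambient**2 + c1*T_ambient + c0
coeffs = np.polyfit(ambient, component, deg=2)
predict = np.poly1d(coeffs)

# Evaluate the fitted relation at an ambient temperature between the
# analyzed values, in place of plain interpolation
t_hat = float(predict(28.0))
```

    In practice one such relation would be fitted per component and per combination of the analyzed storage parameters.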

  3. Leaf Area Prediction Using Three Alternative Sampling Methods for Seven Sierra Nevada Conifer Species

    Directory of Open Access Journals (Sweden)

    Dryw A. Jones

    2015-07-01

    Prediction of projected tree leaf area from allometric relationships with sapwood cross-sectional area is common in tree- and stand-level production studies, but measuring sapwood is difficult and often requires destructive sampling. This study tested multiple leaf area prediction models across seven diverse conifer species in the Sierra Nevada of California. Judged on overall simplicity, accuracy, and utility for all seven species, the best-fit whole-tree leaf area prediction model was a nonlinear model with basal area as the primary covariate. A new non-destructive procedure was introduced to extend the branch summation approach to leaf area data collection on trees that cannot be destructively sampled. There were no significant differences between fixed effects assigned to sampling procedures, indicating that data from the tested sampling procedures can be combined for whole-tree leaf area modeling purposes. These results indicate that, for the species sampled, accurate leaf area estimates can be obtained through partially destructive sampling and the use of common forest inventory data.
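    A minimal sketch of fitting a nonlinear (power-law) leaf area model with basal area as the covariate, in the spirit of the best-fit model reported above. The basal area and leaf area values are invented for illustration; the study's actual model form and coefficients are not reproduced here.

```python
import numpy as np

# Hypothetical paired measurements: basal area (cm^2), projected leaf area (m^2)
basal_area = np.array([80.0, 150.0, 300.0, 520.0, 900.0])
leaf_area = np.array([12.0, 21.0, 38.0, 60.0, 95.0])

# Fit the power model LA = a * BA**b by linear least squares in log-log space
b, log_a = np.polyfit(np.log(basal_area), np.log(leaf_area), deg=1)
a = float(np.exp(log_a))

def predict_leaf_area(ba):
    """Predict projected leaf area (m^2) from basal area (cm^2)."""
    return a * ba ** b
```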

  4. EVALUATION AND PREDICTIVE METHODS OF EPIDEMICAL SITUATION IN THE AREA OF ACUTE ENTERIC INFECTIONS

    Directory of Open Access Journals (Sweden)

    Malysh N.G.

    2017-06-01

    Introduction. Although the incidence of acute intestinal infections (AII) is currently decreasing, aggravation of the epidemic situation remains an ever-present possibility. Increased attention to AII is warranted by unpredictable epidemic rises in AII incidence, which cannot be prevented without assessing the epidemic situation for these infections and forecasting incidence levels. However, the mathematical forecasting methods developed so far mostly do not take risk factors into account and are time-consuming and difficult to compute, while the specialized computer programs that predict infectious disease incidence are often lacking in the institutions of the sanitary-epidemiological service. An urgent problem today is to establish the social and environmental factors that contribute most to the spread of AII. The aim of this work was to improve the method of assessing and predicting the epidemic situation of AII by identifying the influence of climatic and demographic factors. Materials and methods. To determine the influence of meteorological and demographic factors on the epidemic process of acute intestinal infections, the official reports of the State Sanitary and Epidemiological Service of Ukraine in Sumy region, the Department of Statistics, and the Sumy Regional Center for Hydrometeorology and Environmental Monitoring were studied. Results and discussion. Evaluation of the epidemiological situation of AII begins with the collection of incidence data. The main source of this information is the logbook of infectious diseases, in which all cases found in the area are recorded. It is necessary to gather the initial information, calculate the incidence rate and the monthly distribution of AII cases in the investigated area, and evaluate the trend.
    In parallel with the recording of AII cases in the investigated territory, monitoring of air

  5. An Adaptive Model Predictive Load Frequency Control Method for Multi-Area Interconnected Power Systems with Photovoltaic Generations

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2017-11-01

    As the penetration level of renewable distributed generation such as wind turbines and photovoltaic stations increases, load frequency control of a multi-area interconnected power system becomes more challenging. This paper presents an adaptive model predictive load frequency control method for a multi-area interconnected power system with photovoltaic generation, taking into account nonlinear features such as the governor dead band and the generation rate constraint of the steam turbine. The dynamic characteristics of the system are first formulated as a discrete-time state-space model. The predictive dynamic model is then obtained by introducing an expanded state vector, and rolling optimization of the control signal is implemented based on a cost function that minimizes the weighted sum of squared predicted errors and squared future control values. Simulation results on a typical two-area power system consisting of photovoltaic and thermal generation demonstrate the superiority of the proposed model predictive control method over state-of-the-art control techniques such as firefly algorithm, genetic algorithm, and population extremal optimization-based proportional-integral control methods under normal conditions, load disturbances, and parameter uncertainty.
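    The rolling optimization described above (minimize the weighted sum of squared predicted errors and squared future control values, then apply only the first control move) can be illustrated on a toy scalar system. The model, horizon, and weights below are arbitrary stand-ins, not the paper's two-area power system model with governor dead band and generation rate constraints.

```python
import numpy as np

# Toy receding-horizon controller for a scalar model x[k+1] = A*x[k] + B*u[k]
A, B = 0.9, 0.5
N = 10            # prediction horizon
q, r = 1.0, 0.1   # weights: squared predicted error vs. squared control value

def mpc_step(x0):
    # Lifted prediction over the horizon: x_pred = F*x0 + G*u
    F = np.array([A ** (i + 1) for i in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = A ** (i - j) * B
    # Minimize q*||F*x0 + G*u||^2 + r*||u||^2 (drive the deviation to zero)
    H = q * G.T @ G + r * np.eye(N)
    u = np.linalg.solve(H, -q * G.T @ F * x0)
    return float(u[0])   # rolling optimization: apply only the first move

# Closed-loop simulation from an initial frequency deviation of 1.0
x = 1.0
for _ in range(20):
    x = A * x + B * mpc_step(x)
```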

  6. A Mathematical Method to Calculate Tumor Contact Surface Area: An Effective Parameter to Predict Renal Function after Partial Nephrectomy.

    Science.gov (United States)

    Hsieh, Po-Fan; Wang, Yu-De; Huang, Chi-Ping; Wu, Hsi-Chin; Yang, Che-Rei; Chen, Guang-Heng; Chang, Chao-Hsiang

    2016-07-01

    We proposed a mathematical formula to calculate the contact surface area between a tumor and the renal parenchyma, and examined whether contact surface area predicts renal function after partial nephrectomy. We performed this retrospective study in patients who underwent partial nephrectomy between January 2012 and December 2014. Based on abdominopelvic computerized tomography or magnetic resonance imaging, we calculated the contact surface area using the formula (2*π*radius*depth) developed by integral calculus. We then evaluated the correlation between contact surface area and perioperative parameters, and compared contact surface area and R.E.N.A.L. (Radius/Exophytic/endophytic/Nearness to collecting system/Anterior/Location) score in predicting a reduction in renal function. Overall 35, 26 and 45 patients underwent partial nephrectomy with open, laparoscopic and robotic approaches, respectively. Mean ± SD contact surface area was 30.7±26.1 cm² and median (IQR) R.E.N.A.L. score was 7 (2.25). Spearman correlation analysis showed that contact surface area was significantly associated with estimated blood loss (p=0.04), operative time (p=0.04) and percent change in estimated glomerular filtration rate. Contact surface area and R.E.N.A.L. score independently affected percent change in estimated glomerular filtration rate, and contact surface area was a better independent predictor of a greater than 10% change in estimated glomerular filtration rate than R.E.N.A.L. score (AUC 0.86 vs 0.69). Using this simple mathematical method, contact surface area was associated with surgical outcomes. Compared to R.E.N.A.L. score, contact surface area was a better predictor of functional change after partial nephrectomy.
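    The paper's contact surface area formula is simple enough to state directly in code. The interpretation of the two arguments (tumor radius and depth of penetration into the parenchyma, both in cm) is an assumption made for this sketch.

```python
import math

def contact_surface_area(radius_cm, depth_cm):
    """Contact surface area (cm^2) from the paper's formula 2*pi*radius*depth,
    i.e. the lateral area of a spherical cap of the given radius and depth."""
    return 2.0 * math.pi * radius_cm * depth_cm

# Illustrative values: a tumor of 3 cm radius embedded 1.5 cm into parenchyma
csa = contact_surface_area(3.0, 1.5)  # about 28.3 cm^2
```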

  7. Prediction Method for Rain Rate and Rain Propagation Attenuation for K-Band Satellite Communications Links in Tropical Areas

    Directory of Open Access Journals (Sweden)

    Baso Maruddani

    2015-01-01

    This paper deals with a prediction method using a hidden Markov model (HMM) for rain rate and rain propagation attenuation on K-band satellite communication links in tropical areas. As is well known, the K-band is susceptible to atmospheric conditions, especially rain. Because the K-band wavelength approaches the size of a rain droplet, the signal is easily attenuated and absorbed by rain droplets. To maintain the quality of system performance on K-band satellite communication links, special attention must therefore be paid to rain rate and rain propagation attenuation. A prediction method for rain rate and rain propagation attenuation based on an HMM was thus developed to process the measurement data. The measured and predicted data were then compared with the ITU-R recommendations. The results show that the measured and predicted data agree with the ITU-R P.837-5 model for rain rate and with the ITU-R P.618-10 model for rain propagation attenuation, while fade duration and interfade duration statistics for the measured and predicted data show insignificant discrepancy with the ITU-R P.1623-1 model.
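    A hidden Markov model of the kind described above can be sketched with a two-state (clear/rain) chain and a forward filter over discretized attenuation observations. All probabilities below are illustrative placeholders, not values estimated from the paper's K-band measurements.

```python
import numpy as np

# Two hidden states (0 = clear, 1 = rain), two observed attenuation levels
# (0 = low, 1 = high). All probabilities are invented for illustration.
trans = np.array([[0.95, 0.05],   # P(next state | clear)
                  [0.30, 0.70]])  # P(next state | rain)
emit = np.array([[0.9, 0.1],      # P(obs | clear)
                 [0.2, 0.8]])     # P(obs | rain)
start = np.array([0.8, 0.2])

def forward_filter(obs):
    """Forward algorithm: filtered state probabilities given observations."""
    alpha = start * emit[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (trans.T @ alpha) * emit[:, o]
        alpha /= alpha.sum()
    return alpha  # P(state | all observations so far)

# After a run of high-attenuation samples, the rain state dominates
p_clear, p_rain = forward_filter([1, 1, 1, 1])
```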

  8. Landslide prediction using combined deterministic and probabilistic methods in hilly area of Mt. Medvednica in Zagreb City, Croatia

    Science.gov (United States)

    Wang, Chunxiang; Watanabe, Naoki; Marui, Hideaki

    2013-04-01

    The hilly slopes of Mt. Medvednica stretch across the northwestern part of Zagreb City, Croatia, and extend to approximately 180 km². In this area, landslides such as the Kostanjek and Črešnjevec landslides have damaged many houses, roads, farmlands and grasslands. It is therefore necessary to predict potential landslides and to enhance the landslide inventory for hazard mitigation and the security management of local society in this area. We combined a deterministic method and a probabilistic method to assess potential landslides, including their locations, sizes and sliding surfaces. First, the study area is divided into slope units with similar topographic and geological characteristics using the hydrology analysis tool in ArcGIS. Second, a GIS-based modified three-dimensional Hovland's method for slope stability analysis is developed to identify the sliding surface and the corresponding three-dimensional safety factor for each slope unit. Each sliding surface is assumed to be the lower part of an ellipsoid; the direction of inclination of the ellipsoid is taken to be the same as the main dip direction of the slope unit, and the center point of the ellipsoid is randomly set to the center point of a grid cell in the slope unit. The minimum three-dimensional safety factor and the corresponding critical sliding surface are obtained for each slope unit. Third, since a single value of the safety factor is insufficient to evaluate the slope stability of a slope unit, the ratio of the number of calculation cases in which the three-dimensional safety factor is less than 1.0 to the total number of trial calculations is defined as the failure probability of the slope unit. If the failure probability is more than 80%, the slope unit is classified as 'unstable', and the landslide hazard can be mapped for the whole study area.
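    The failure-probability criterion in the last step is easy to illustrate: the probability is the fraction of trial safety-factor calculations below 1.0, and a slope unit is flagged 'unstable' above 80%. The safety factors below are randomly generated stand-ins for the 3D Hovland calculations over candidate ellipsoidal sliding surfaces.

```python
import random

random.seed(0)  # reproducible stand-in safety factors

def failure_probability(safety_factors):
    # Fraction of trial calculations with a 3D safety factor below 1.0
    return sum(1 for fs in safety_factors if fs < 1.0) / len(safety_factors)

def classify(safety_factors, threshold=0.80):
    # A slope unit is 'unstable' when its failure probability exceeds 80%
    return "unstable" if failure_probability(safety_factors) > threshold else "stable"

# Stand-in trial results for two hypothetical slope units
unstable_unit = [random.uniform(0.70, 1.05) for _ in range(1000)]
stable_unit = [random.uniform(0.95, 1.60) for _ in range(1000)]
```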

  9. Earthquake prediction by Kina Method

    International Nuclear Information System (INIS)

    Kianoosh, H.; Keypour, H.; Naderzadeh, A.; Motlagh, H.F.

    2005-01-01

    Earthquake prediction has been one of humankind's earliest desires, and scientists have long worked to predict earthquakes. These efforts fall broadly into two approaches: 1) statistical methods, in which earthquakes are predicted using statistics and probabilities, and 2) empirical methods, which utilize a variety of precursors; the latter are time-consuming and more costly. Neither approach has so far been fully satisfactory. In this paper a new method, the 'Kiana Method', is introduced for earthquake prediction. This method offers more accurate results at lower cost than conventional methods. In the Kiana method, electrical and magnetic precursors are measured in an area, and the time and magnitude of a future earthquake are then calculated using electrical formulas, in particular those for electrical capacitors. Daily measurement of electrical resistance in an area establishes whether the area is capable of producing an earthquake in the future; if so, the occurrence time and magnitude can be estimated from the measured quantities. This paper explains the procedure and details of this prediction method. (authors)

  10. Epitope prediction methods

    DEFF Research Database (Denmark)

    Karosiene, Edita

    Analysis. The chapter provides detailed explanations on how to use different methods for T cell epitope discovery research, explaining how input should be given as well as how to interpret the output. In the last chapter, I present the results of a bioinformatics analysis of epitopes from the yellow fever...... peptide-MHC interactions. Furthermore, using yellow fever virus epitopes, we demonstrated the power of the %Rank score when compared with the binding affinity score of MHC prediction methods, suggesting that this score should be considered to be used for selecting potential T cell epitopes. In summary...... immune responses. Therefore, it is of great importance to be able to identify peptides that bind to MHC molecules, in order to understand the nature of immune responses and discover T cell epitopes useful for designing new vaccines and immunotherapies. MHC molecules in humans, referred to as human...

  11. Motor degradation prediction methods

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, J.R.; Kelly, J.F.; Delzingaro, M.J.

    1996-12-01

    Motor Operated Valve (MOV) squirrel cage AC motor rotors are susceptible to degradation under certain conditions. Premature failure can result due to high humidity/temperature environments, high running load conditions, extended periods at locked rotor conditions (i.e. > 15 seconds) or exceeding the motor's duty cycle by frequent starts or multiple valve stroking. Exposure to high heat and moisture due to packing leaks, pressure seal ring leakage or other causes can significantly accelerate the degradation. ComEd and Liberty Technologies have worked together to provide and validate a non-intrusive method using motor power diagnostics to evaluate MOV rotor condition and predict failure. These techniques have provided a quick, low radiation dose method to evaluate inaccessible motors, identify degradation and allow scheduled replacement of motors prior to catastrophic failures.

  12. Motor degradation prediction methods

    International Nuclear Information System (INIS)

    Arnold, J.R.; Kelly, J.F.; Delzingaro, M.J.

    1996-01-01

    Motor Operated Valve (MOV) squirrel cage AC motor rotors are susceptible to degradation under certain conditions. Premature failure can result due to high humidity/temperature environments, high running load conditions, extended periods at locked rotor conditions (i.e. > 15 seconds) or exceeding the motor's duty cycle by frequent starts or multiple valve stroking. Exposure to high heat and moisture due to packing leaks, pressure seal ring leakage or other causes can significantly accelerate the degradation. ComEd and Liberty Technologies have worked together to provide and validate a non-intrusive method using motor power diagnostics to evaluate MOV rotor condition and predict failure. These techniques have provided a quick, low radiation dose method to evaluate inaccessible motors, identify degradation and allow scheduled replacement of motors prior to catastrophic failures

  13. Ensemble methods for seasonal limited area forecasts

    DEFF Research Database (Denmark)

    Arritt, Raymond W.; Anderson, Christopher J.; Takle, Eugene S.

    2004-01-01

    The ensemble prediction methods used for seasonal limited area forecasts were examined by comparing methods for generating ensemble simulations of seasonal precipitation. The summer 1993 model over the north-central US was used as a test case. The four methods examined included the lagged-average...

  14. Empirical Flutter Prediction Method.

    Science.gov (United States)

    1988-03-05

    been used in this way to discover species or subspecies of animals, and to discover different types of voter or consumer requiring different persuasions...respect to behavior or performance or response variables. Once this were done, corresponding clusters might be sought among descriptive or predictive or...jump in a response. The first sort of usage does not apply to the flutter prediction problem. Here the types of behavior are the different kinds of

  15. Prediction method abstracts

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-31

    This conference was held December 4-8, 1994 in Asilomar, California. The purpose of this meeting was to provide a forum for exchange of state-of-the-art information concerning the prediction of protein structure. Attention is focused on the following: comparative modeling; sequence to fold assignment; and ab initio folding.

  16. Predictive Methods of Pople

    Indian Academy of Sciences (India)

    Chemistry for their pioneering contributions to the development of computational methods in quantum chemistry and density functional theory .... program of Pople for ab-initio electronic structure calculation of molecules. This ab-initio MO ...

  17. Robust small area prediction for counts.

    Science.gov (United States)

    Tzavidis, Nikos; Ranalli, M Giovanna; Salvati, Nicola; Dreassi, Emanuela; Chambers, Ray

    2015-06-01

    A new semiparametric approach to model-based small area prediction for counts is proposed and used for estimating the average number of visits to physicians for Health Districts in Central Italy. The proposed small area predictor can be viewed as an outlier robust alternative to the more commonly used empirical plug-in predictor that is based on a Poisson generalized linear mixed model with Gaussian random effects. Results from the real data application and from a simulation experiment confirm that the proposed small area predictor has good robustness properties and in some cases can be more efficient than alternative small area approaches.

  18. Predicting artificially drained areas by means of selective model ensemble

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Beucher, Amélie; Iversen, Bo Vangsø

    . The approaches employed include decision trees, discriminant analysis, regression models, neural networks and support vector machines amongst others. Several models are trained with each method, using variously the original soil covariates and principal components of the covariates. With a large ensemble...... out since the mid-19th century, and it has been estimated that half of the cultivated area is artificially drained (Olesen, 2009). A number of machine learning approaches can be used to predict artificially drained areas in geographic space. However, instead of choosing the most accurate model....... The study aims firstly to train a large number of models to predict the extent of artificially drained areas using various machine learning approaches. Secondly, the study will develop a method for selecting the models, which give a good prediction of artificially drained areas, when used in conjunction...

  19. Rainfall prediction with backpropagation method

    Science.gov (United States)

    Wahyuni, E. G.; Fauzan, L. M. F.; Abriyani, F.; Muchlis, N. F.; Ulfa, M.

    2018-03-01

    Rainfall is an important factor in many fields, such as aviation and agriculture. Although prediction is assisted by technology, its accuracy cannot reach 100% and errors remain possible, yet up-to-date rainfall prediction information is needed in these fields. In agriculture, farmers depend heavily on weather conditions, especially rainfall, to obtain abundant, high-quality yields; rainfall is also one of the factors affecting aircraft safety. To address these problems, a system that can accurately predict rainfall is required. In this research, artificial neural network modeling is applied to rainfall prediction, using the backpropagation method. Backpropagation achieves better performance through repeated training, meaning that the weights of the ANN interconnections can approach the values they should have. Further advantages of the method are its adaptive learning process and its multilayer structure, in which weight changes act to minimize error (fault tolerance). The method can therefore provide good system resilience and consistently good performance. The network is designed with 4 input variables, namely air temperature, air humidity, wind speed, and sunshine duration, and 3 output variables, namely low, medium, and high rainfall. Based on the research that has been done, the network works properly, as evidenced by the system's rainfall predictions matching the results of manual calculations.
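    A from-scratch sketch of the backpropagation setup described above, with the paper's 4 input variables and 3 rainfall classes but synthetic training data and an arbitrary hidden-layer size; it is not the authors' trained network.

```python
import numpy as np

# Synthetic data: 4 normalized inputs (air temperature, humidity, wind speed,
# sunshine duration) and 3 one-hot outputs (low/medium/high rainfall), with
# humidity arbitrarily chosen to drive the rainfall class.
rng = np.random.default_rng(0)
X = rng.random((30, 4))
labels = (X[:, 1] * 3).astype(int).clip(0, 2)
Y = np.eye(3)[labels]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units (an arbitrary choice for this sketch)
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 3)); b2 = np.zeros(3)

def forward(X):
    H = sigmoid(X @ W1 + b1)
    return H, sigmoid(H @ W2 + b2)

def loss(Y_hat):
    return float(np.mean((Y_hat - Y) ** 2))

lr = 0.5
_, Y0 = forward(X)
initial_loss = loss(Y0)
for _ in range(2000):
    H, Y_hat = forward(X)
    # Backpropagate mean-squared-error gradients through both layers
    dOut = (Y_hat - Y) * Y_hat * (1 - Y_hat)
    dH = (dOut @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dOut / len(X); b2 -= lr * dOut.mean(0)
    W1 -= lr * X.T @ dH / len(X);  b1 -= lr * dH.mean(0)
_, Yf = forward(X)
final_loss = loss(Yf)
```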

  20. Ensemble method for dengue prediction.

    Science.gov (United States)

    Buczak, Anna L; Baugher, Benjamin; Moniz, Linda J; Bagley, Thomas; Babin, Steven M; Guven, Erhan

    2018-01-01

    In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for transmission season and peak height for Iquitos, Peru.
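    The ensemble idea, combining disparate component forecasts and then reading the three Challenge targets off the combined series, can be sketched as follows. The component models here (a seasonal mean and a persistence forecast) are simplified stand-ins for the Method of Analogues and Holt-Winters components, and the weekly case counts are invented.

```python
import numpy as np

# Invented weekly case counts for two past 12-week "seasons"
past_seasons = np.array([
    [4, 9, 15, 28, 50, 75, 60, 38, 20, 10, 5, 3],
    [6, 7, 12, 33, 58, 85, 65, 44, 25, 12, 8, 5],
], dtype=float)

def historical_component(history):
    # "Simple historical" component model: mean of past seasons, week by week
    return history.mean(axis=0)

def persistence_component(history):
    # Stand-in second component: last season carried forward unchanged
    return history[-1]

components = [historical_component(past_seasons),
              persistence_component(past_seasons)]
ensemble = np.mean(components, axis=0)  # equal-weight ensemble forecast

# Derive the three Dengue Challenge targets from the ensemble forecast
peak_height = float(ensemble.max())
peak_week = int(ensemble.argmax())
total_cases = float(ensemble.sum())
```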

  1. Ensemble method for dengue prediction.

    Directory of Open Access Journals (Sweden)

    Anna L Buczak

    In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for transmission season and peak height for Iquitos, Peru.

  2. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  3. Prediction methods environmental-effect reporting

    International Nuclear Information System (INIS)

    Jonker, R.J.; Koester, H.W.

    1987-12-01

    This report provides a survey of prediction methods that can be applied to the calculation of emissions in nuclear-reactor accidents, in the framework of environmental-effect reports (Dutch m.e.r.) or risk analyses. Emissions during normal operation are also important for the m.e.r.; these can be derived from the measured emissions of power plants in operation, and data concerning the latter are reported. The report consists of an introduction to reactor technology, including a description of some reactor types, the corresponding fuel cycle and dismantling scenarios; a discussion of risk analyses for nuclear power plants and the physical processes which can play a role during accidents; a discussion of the prediction methods to be employed and the expected developments in this area; and some background information. (author). 145 refs.; 21 figs.; 20 tabs

  4. Can foot anthropometric measurements predict dynamic plantar surface contact area?

    Directory of Open Access Journals (Sweden)

    Collins Natalie

    2009-10-01

    Abstract Background Previous studies have suggested that increased plantar surface area, associated with pes planus, is a risk factor for the development of lower extremity overuse injuries. The intent of this study was to determine whether a single foot anthropometric measure, or a combination of measures, could be used to predict plantar surface area. Methods Six foot measurements were collected on 155 subjects (97 females, 58 males; mean age 24.5 ± 3.5 years). The measurements, as well as one ratio, were entered into a stepwise regression analysis to determine the optimal set of measurements associated with total plantar contact area, either including or excluding the toe region. The predicted values were used to calculate plantar surface area and were compared to the actual values obtained dynamically using a pressure sensor platform. Results A three-variable model was found to describe the relationship between the foot measures/ratio and total plantar contact area (R2 = 0.77), with a comparable model describing the contact area excluding the toe region (R2 = 0.76). Conclusion The results of this study indicate that the clinician can use a combination of simple, reliable, and time efficient foot anthropometric measurements to explain over 75% of the plantar surface contact area, either including or excluding the toe region.
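    A sketch of the kind of multi-variable linear model used above: three synthetic foot measures predict a simulated contact area, and the fit's R2 is computed. The measure names, coefficients, and noise level are invented; only the modeling pattern follows the study.

```python
import numpy as np

# Synthetic foot measures for 155 subjects (cm); names and distributions
# are hypothetical, not the study's data.
rng = np.random.default_rng(1)
n = 155
foot_length = rng.normal(25.0, 1.5, n)
forefoot_width = rng.normal(9.5, 0.7, n)
navicular_height = rng.normal(4.5, 0.6, n)

# Simulated "true" plantar contact area with noise (cm^2)
area = (3.5 * foot_length + 6.0 * forefoot_width
        - 4.0 * navicular_height + rng.normal(0, 4.0, n))

# Ordinary least squares fit of a three-variable model with intercept
X = np.column_stack([np.ones(n), foot_length, forefoot_width, navicular_height])
coef, *_ = np.linalg.lstsq(X, area, rcond=None)

# Coefficient of determination of the fitted model
pred = X @ coef
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
```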

  5. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    OpenAIRE

    Jerzy Balicki; Piotr Dryja; Waldemar Korłub; Piotr Przybyłek; Maciej Tyszka; Marcin Zadroga; Marcin Zakidalski

    2016-01-01

    Artificial neural networks can be used to predict share investments on the stock market, assess the reliability of credit clients, or predict banking crises. Moreover, this paper discusses the principles of cooperation between neural network algorithms and evolutionary methods, as well as support vector machines. In addition, reference is made to other methods of artificial intelligence which are used in financial prediction.

  6. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2016-06-01

    Artificial neural networks can be used to predict share investments on the stock market, assess the reliability of credit clients, or predict banking crises. Moreover, this paper discusses the principles of cooperation between neural network algorithms and evolutionary methods, as well as support vector machines. In addition, reference is made to other methods of artificial intelligence which are used in financial prediction.

  7. Leaf area prediction models for Tsuga canadensis in Maine

    Science.gov (United States)

    Laura S. Kenefic; R.S. Seymour

    1999-01-01

    Tsuga canadensis (L.) Carr. (eastern hemlock) is a common species throughout the Acadian forest. Studies of leaf area and growth efficiency in this forest type have been limited by the lack of equations to predict leaf area of this species. We found that sapwood area was an effective leaf area surrogate in T. canadensis, though...

  8. Prediction methods and databases within chemoinformatics

    DEFF Research Database (Denmark)

    Jónsdóttir, Svava Osk; Jørgensen, Flemming Steen; Brunak, Søren

    2005-01-01

    MOTIVATION: To gather information about available databases and chemoinformatics methods for prediction of properties relevant to the drug discovery and optimization process. RESULTS: We present an overview of the most important databases with 2-dimensional and 3-dimensional structural information...... about drugs and drug candidates, and of databases with relevant properties. Access to experimental data and numerical methods for selecting and utilizing these data is crucial for developing accurate predictive in silico models. Many interesting predictive methods for classifying the suitability...

  9. Machine learning methods for metabolic pathway prediction

    Directory of Open Access Journals (Sweden)

    Karp Peter D

    2010-01-01

    Full Text Available Abstract Background A key challenge in systems biology is the reconstruction of an organism's metabolic network from its genome sequence. One strategy for addressing this problem is to predict which metabolic pathways, from a reference database of known pathways, are present in the organism, based on the annotated genome of the organism. Results To quantitatively validate methods for pathway prediction, we developed a large "gold standard" dataset of 5,610 pathway instances known to be present or absent in curated metabolic pathway databases for six organisms. We defined a collection of 123 pathway features, whose information content we evaluated with respect to the gold standard. Feature data were used as input to an extensive collection of machine learning (ML) methods, including naïve Bayes, decision trees, and logistic regression, together with feature selection and ensemble methods. We compared the ML methods to the previous PathoLogic algorithm for pathway prediction using the gold standard dataset. We found that ML-based prediction methods can match the performance of the PathoLogic algorithm. PathoLogic achieved an accuracy of 91% and an F-measure of 0.786. The ML-based prediction methods achieved accuracy as high as 91.2% and F-measure as high as 0.787. The ML-based methods output a probability for each predicted pathway, whereas PathoLogic does not, which provides more information to the user and facilitates filtering of predicted pathways. Conclusions ML methods for pathway prediction perform as well as existing methods, and have qualitative advantages in terms of extensibility, tunability, and explainability. More advanced prediction methods and/or more sophisticated input features may improve the performance of ML methods. However, pathway prediction performance appears to be limited largely by the ability to correctly match enzymes to the reactions they catalyze based on genome annotations.
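
    The accuracy and F-measure quoted in this record are derived from a confusion matrix over the gold-standard pathway instances. A minimal sketch of the two metrics; the counts below are invented (only the 5,610 total echoes the dataset size stated above):

    ```python
    # accuracy: fraction of all decisions that were correct
    def accuracy(tp, fp, fn, tn):
        return (tp + tn) / (tp + fp + fn + tn)

    # F-measure: harmonic mean of precision and recall
    def f_measure(tp, fp, fn):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    tp, fp, fn, tn = 700, 180, 200, 4530   # hypothetical counts, sum = 5,610
    acc = accuracy(tp, fp, fn, tn)
    f1 = f_measure(tp, fp, fn)
    ```

    Note that with a class-imbalanced gold standard (most pathways absent), accuracy alone can look high while the F-measure stays modest, which is why the record reports both.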

  10. Machine learning methods for metabolic pathway prediction

    Science.gov (United States)

    2010-01-01

    Background A key challenge in systems biology is the reconstruction of an organism's metabolic network from its genome sequence. One strategy for addressing this problem is to predict which metabolic pathways, from a reference database of known pathways, are present in the organism, based on the annotated genome of the organism. Results To quantitatively validate methods for pathway prediction, we developed a large "gold standard" dataset of 5,610 pathway instances known to be present or absent in curated metabolic pathway databases for six organisms. We defined a collection of 123 pathway features, whose information content we evaluated with respect to the gold standard. Feature data were used as input to an extensive collection of machine learning (ML) methods, including naïve Bayes, decision trees, and logistic regression, together with feature selection and ensemble methods. We compared the ML methods to the previous PathoLogic algorithm for pathway prediction using the gold standard dataset. We found that ML-based prediction methods can match the performance of the PathoLogic algorithm. PathoLogic achieved an accuracy of 91% and an F-measure of 0.786. The ML-based prediction methods achieved accuracy as high as 91.2% and F-measure as high as 0.787. The ML-based methods output a probability for each predicted pathway, whereas PathoLogic does not, which provides more information to the user and facilitates filtering of predicted pathways. Conclusions ML methods for pathway prediction perform as well as existing methods, and have qualitative advantages in terms of extensibility, tunability, and explainability. More advanced prediction methods and/or more sophisticated input features may improve the performance of ML methods. However, pathway prediction performance appears to be limited largely by the ability to correctly match enzymes to the reactions they catalyze based on genome annotations. PMID:20064214

  11. An Automated Processing Method for Agglomeration Areas

    Directory of Open Access Journals (Sweden)

    Chengming Li

    2018-05-01

    Full Text Available Agglomeration operations are a core component of the automated generalization of aggregated area groups. However, because geographical elements that possess agglomeration features are relatively scarce, the current literature has not given sufficient attention to agglomeration operations. Furthermore, most reports on the subject are limited to the general conceptual level. Consequently, current agglomeration methods are highly reliant on subjective determinations and cannot support intelligent computer processing. This paper proposes an automated processing method for agglomeration areas. Firstly, the proposed method automatically identifies agglomeration areas based on the width of the striped bridging area, the distribution pattern index (DPI), the shape similarity index (SSI), and the overlap index (OI). Next, the progressive agglomeration operation is carried out, including the computation of the external boundary outlines and the extraction of agglomeration lines. The effectiveness and rationality of the proposed method have been validated using actual census data of Chinese geographical conditions in Jiangsu Province.

  12. Seminal quality prediction using data mining methods.

    Science.gov (United States)

    Sahoo, Anoop J; Kumar, Yugal

    2014-01-01

    Nowadays, some new classes of diseases, known as lifestyle diseases, have come into existence. The main reasons behind these diseases are changes in people's lifestyles, such as alcohol consumption, smoking, and food habits. A review of the various lifestyle diseases shows that fertility rates (sperm quantity) in men have decreased considerably in the last two decades. Lifestyle factors as well as environmental factors are mainly responsible for the change in semen quality. The objective of this paper is to identify the lifestyle and environmental features that affect seminal quality, and hence the fertility rate in men, using data mining methods. Five artificial intelligence techniques, namely multilayer perceptron (MLP), decision tree (DT), naïve Bayes (kernel), support vector machine with particle swarm optimization (SVM+PSO), and support vector machine (SVM), have been applied to a fertility dataset to evaluate seminal quality and to predict whether a person is normal or has an altered fertility rate. Eight feature selection techniques, namely support vector machine (SVM), neural network (NN), evolutionary logistic regression (LR), SVM+PSO, principal component analysis (PCA), the chi-square test, correlation, and the T-test, have been used to identify the more relevant features that affect seminal quality. These techniques are applied to a fertility dataset which contains 100 instances with nine attributes and two classes. The experimental results show that SVM+PSO provides higher accuracy and area under the curve (AUC) (94% and 0.932) than multilayer perceptron (MLP) (92% and 0.728), SVM (91% and 0.758), naïve Bayes (kernel) (89% and 0.850), and decision tree (89% and 0.735) for some of the seminal parameters. This paper also focuses on the feature selection process, i.e. how to select the features which are more important for prediction of...
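
    The AUC figures compared in this record can be computed directly from ranked classifier scores. A small stdlib-only sketch; the labels and scores are made up:

    ```python
    def auc(labels, scores):
        """Probability that a random positive outranks a random negative
        (ties count 0.5), i.e. the area under the ROC curve."""
        pos = [s for l, s in zip(labels, scores) if l == 1]
        neg = [s for l, s in zip(labels, scores) if l == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    labels = [1, 1, 1, 0, 0, 0, 0]                 # 1 = altered fertility
    scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]   # classifier outputs
    # one positive (0.4) is outranked by one negative (0.5): AUC = 11/12
    ```

    This rank-based formulation is why AUC and accuracy can disagree, as in the SVM comparison above: accuracy depends on a single decision threshold, AUC on the whole ranking.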

  13. Predicting future forestland area: a comparison of econometric approaches.

    Science.gov (United States)

    SoEun Ahn; Andrew J. Plantinga; Ralph J. Alig

    2000-01-01

    Predictions of future forestland area are an important component of forest policy analyses. In this article, we test the ability of econometric land use models to accurately forecast forest area. We construct a panel data set for Alabama consisting of county and time-series observations for the period 1964 to 1992. We estimate models using restricted data sets-namely,...

  14. An Efficient Vital Area Identification Method

    International Nuclear Information System (INIS)

    Jung, Woo Sik

    2017-01-01

    A new Vital Area Identification (VAI) method was developed in this study to minimize the burden of the VAI procedure. This was accomplished by simplifying the sabotage event trees or Probabilistic Safety Assessment (PSA) event trees at the very first stage of the VAI procedure. Target sets and prevention sets are calculated from the sabotage fault tree; the rooms in the shortest (most economical) prevention set are selected and protected as vital areas, and physical protection is focused on them. All rooms in the protected area whose sabotage could lead to core damage should be incorporated into the sabotage fault tree, so sabotage fault tree development is a difficult task that requires high engineering costs. IAEA published INFCIRC/225/Rev.5 in 2011, which includes the principal international guidelines for the physical protection of nuclear material and nuclear installations. Since the new method drastically reduces the size of the VAI problem, it provides a very quick and economical VAI procedure. A consistent and integrated VAI procedure had previously been developed by taking advantage of PSA results, and a more efficient VAI method was further developed in this study by inserting PSA event tree simplification at the initial stage of the VAI procedure.
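
    The target-set/prevention-set step described in this record can be sketched with a toy fault tree: target sets are the minimal cut sets of the sabotage fault tree, and the shortest prevention set is the smallest set of rooms intersecting every target set, so protecting it blocks all sabotage paths. The rooms and tree below are invented for illustration:

    ```python
    from itertools import combinations

    # toy sabotage logic (invented):
    # TOP = (pump_room AND valve_room) OR control_room OR (pump_room AND cable_room)
    target_sets = [{"pump_room", "valve_room"},
                   {"control_room"},
                   {"pump_room", "cable_room"}]
    rooms = set().union(*target_sets)

    def shortest_prevention_set(target_sets, rooms):
        """Smallest set of rooms hitting every target set (brute force)."""
        for size in range(1, len(rooms) + 1):
            for cand in combinations(sorted(rooms), size):
                if all(ts & set(cand) for ts in target_sets):
                    return set(cand)
        return set(rooms)

    vital = shortest_prevention_set(target_sets, rooms)
    # protecting control_room blocks path 2; pump_room blocks paths 1 and 3
    ```

    Brute force is shown for clarity; the record's point is precisely that simplifying the event trees first keeps this combinatorial problem small.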

  15. Method for Predicting Thermal Buckling in Rails

    Science.gov (United States)

    2018-01-01

    A method is proposed herein for predicting the onset of thermal buckling in rails in such a way as to provide a means of avoiding this type of potentially devastating failure. The method consists of the development of a thermomechanical model of rail...

  16. Prediction Methods for Blood Glucose Concentration

    DEFF Research Database (Denmark)

    “Recent Results on Glucose–Insulin Predictions by Means of a State Observer for Time-Delay Systems” by Pasquale Palumbo et al. introduces a prediction model which in real time predicts the insulin concentration in blood which in turn is used in a control system. The method is tested in simulation...... EEG signals to predict upcoming hypoglycemic situations in real-time by employing artificial neural networks. The results of a 30-day long clinical study with the implanted device and the developed algorithm are presented. The chapter “Meta-Learning Based Blood Glucose Predictor for Diabetic......, but the insulin amount is chosen using factors that account for this expectation. The increasing availability of more accurate continuous blood glucose measurement (CGM) systems is attracting much interest to the possibilities of explicit prediction of future BG values. Against this background, in 2014 a two...
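
    Explicit prediction of future BG values, mentioned in this record, is commonly benchmarked against simple trend extrapolation of recent CGM samples. A hedged sketch with synthetic readings and an assumed 5-minute sampling interval:

    ```python
    def predict_ahead(samples, step_min=5.0, horizon_min=30.0):
        """Least-squares line through the samples, extrapolated forward."""
        n = len(samples)
        xs = [i * step_min for i in range(n)]
        mean_x = sum(xs) / n
        mean_y = sum(samples) / n
        slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
            / sum((x - mean_x) ** 2 for x in xs)
        intercept = mean_y - slope * mean_x
        return intercept + slope * (xs[-1] + horizon_min)

    cgm = [110.0, 108.0, 106.0, 104.0, 102.0]   # mg/dL, falling trend
    pred = predict_ahead(cgm)                   # 30-minute-ahead estimate
    ```

    A falling 30-minute-ahead estimate like this is what a hypoglycemia-alert layer would act on; the chapters summarized above replace the straight line with observers, neural networks, or meta-learned predictors.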

  17. A method for predicting monthly rainfall patterns

    International Nuclear Information System (INIS)

    Njau, E.C.

    1987-11-01

    A brief survey is made of previous methods that have been used to predict rainfall trends or drought spells in different parts of the earth. The basic methodologies or theoretical strategies used in these methods are compared with contents of a recent theory of Sun-Weather/Climate links (Njau, 1985a; 1985b; 1986; 1987a; 1987b; 1987c) which point towards the possibility of practical climatic predictions. It is shown that not only is the theoretical basis of each of these methodologies or strategies fully incorporated into the above-named theory, but also this theory may be used to develop a technique by which future monthly rainfall patterns can be predicted in further and finer details. We describe the latter technique and then illustrate its workability by means of predictions made on monthly rainfall patterns in some East African meteorological stations. (author). 43 refs, 11 figs, 2 tabs

  18. Deep learning methods for protein torsion angle prediction.

    Science.gov (United States)

    Li, Haiou; Hou, Jie; Adhikari, Badri; Lyu, Qiang; Cheng, Jianlin

    2017-09-18

    Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. We designed four different deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN), and a deep recurrent restricted Boltzmann machine (DReRBM), since protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30° on an independent dataset. The MAE of the phi angle is comparable to the existing methods, but the MAE of the psi angle is 29°, 2° lower than the existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Our experiments demonstrate that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
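
    The MAE reported in this record must respect the circular nature of torsion angles: phi = −179° and phi = 179° are 2° apart, not 358°. A small sketch of that computation with made-up angle lists:

    ```python
    def angular_error(a, b):
        """Smallest absolute difference between two angles in degrees."""
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def mae(predicted, actual):
        """Mean absolute error over paired angle predictions."""
        return sum(angular_error(p, a) for p, a in zip(predicted, actual)) \
            / len(predicted)

    pred_phi = [-179.0, -60.0, 45.0]   # degrees, invented
    true_phi = [179.0, -65.0, 40.0]
    ```

    Without the wraparound, the first pair alone would contribute 358° and swamp the average, which is why torsion-angle benchmarks define MAE this way.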

  19. Development of motion image prediction method using principal component analysis

    International Nuclear Information System (INIS)

    Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma

    2012-01-01

    Respiratory motion limits the accuracy of the area irradiated during lung cancer radiation therapy. Many methods have been introduced to minimize the irradiation of healthy tissue caused by lung tumor motion. The purpose of this research is to develop an algorithm that improves image guided radiation therapy by predicting motion images. We predict the motion images by using principal component analysis (PCA) and the multi-channel singular spectral analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. The implementation of this method in real time is believed to be significant for a higher level of tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)
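
    The PCA half of the approach described here can be sketched as: project the frames onto leading principal components, extrapolate the component coefficients forward in time, and reconstruct the next frame. A toy illustration with a synthetic "breathing" sequence; the MSSA part of the record's method is omitted:

    ```python
    import numpy as np

    def predict_next_frame(frames, n_components=1):
        """Project frames onto leading principal components and linearly
        extrapolate each component's coefficient one step ahead."""
        X = np.asarray(frames, dtype=float)            # shape (time, pixels)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        coeffs = (X - mean) @ Vt[:n_components].T      # shape (time, n_comp)
        next_coeffs = 2 * coeffs[-1] - coeffs[-2]      # linear extrapolation
        return mean + next_coeffs @ Vt[:n_components]

    # synthetic "breathing" sequence: one spatial pattern scaled over time
    pattern = np.array([1.0, 2.0, 3.0, 4.0])
    frames = [a * pattern for a in (0.0, 1.0, 2.0, 3.0)]
    next_frame = predict_next_frame(frames)            # continues the trend
    ```

    Real respiratory traces are periodic rather than linear, so the extrapolation step would use a richer temporal model, which is what MSSA supplies in the record's method.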

  20. Investigation into Methods for Predicting Connection Temperatures

    Directory of Open Access Journals (Sweden)

    K. Anderson

    2009-01-01

    Full Text Available The mechanical response of connections in fire is largely based on material strength degradation and the interactions between the various components of the connection. In order to predict connection performance in fire, temperature profiles must initially be established in order to evaluate the material strength degradation over time. This paper examines two current methods for predicting connection temperatures: the percentage method, where connection temperatures are calculated as a percentage of the adjacent beam lower-flange, mid-span temperatures; and the lumped capacitance method, based on the lumped mass of the connection. Results from the percentage method do not correlate well with experimental results, whereas the lumped capacitance method shows much better agreement with average connection temperatures. A 3D finite element heat transfer model was also created in Abaqus, and showed good correlation with experimental results.
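
    The lumped capacitance method discussed in this record models the connection as a single mass at uniform temperature obeying dT/dt = hA(T_gas − T)/(mc). A sketch with invented steel-like property values and a constant gas temperature, rather than a real fire curve:

    ```python
    def lumped_temperature(t_end, dt=1.0, T0=20.0, T_gas=800.0,
                           h=25.0, area=0.05, mass=10.0, c=600.0):
        """Forward-Euler integration of the lumped heat balance.
        h [W/m^2.K], area [m^2], mass [kg], c [J/kg.K], T in deg C."""
        T = T0
        for _ in range(int(t_end / dt)):
            T += dt * h * area * (T_gas - T) / (mass * c)
        return T

    T_5min = lumped_temperature(300.0)
    T_10min = lumped_temperature(600.0)   # approaches T_gas exponentially
    ```

    The time constant here is mc/(hA) = 4800 s, so the connection lags the gas temperature substantially, which is the behavior the percentage method fails to capture.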

  1. Soft Computing Methods for Disulfide Connectivity Prediction.

    Science.gov (United States)

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.
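
    Disulfide connectivity prediction, as framed in this record, amounts to choosing the highest-scoring perfect matching of the cysteines given pairwise bond scores from a predictor. For the small cysteine counts typical of this problem, brute-force enumeration is feasible; the residue positions and pair scores below are invented stand-ins for a predictor's outputs:

    ```python
    def best_pairing(cysteines, score):
        """Enumerate all perfect matchings, keep the highest-scoring one."""
        if not cysteines:
            return 0.0, []
        first, rest = cysteines[0], cysteines[1:]
        best = (float("-inf"), [])
        for i, partner in enumerate(rest):
            remaining = rest[:i] + rest[i + 1:]
            sub_score, sub_pairs = best_pairing(remaining, score)
            total = score[frozenset((first, partner))] + sub_score
            if total > best[0]:
                best = (total, [(first, partner)] + sub_pairs)
        return best

    cys = [5, 14, 30, 51]                      # residue positions, invented
    score = {frozenset(p): s for p, s in [
        ((5, 14), 0.2), ((5, 30), 0.9), ((5, 51), 0.3),
        ((14, 30), 0.4), ((14, 51), 0.8), ((30, 51), 0.1)]}
    total, pairs = best_pairing(cys, score)
    ```

    The soft computing approaches surveyed in the record (neural networks, SVMs) supply the pair scores; the matching step itself is this combinatorial search, done with graph matching algorithms when the cysteine count grows.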

  2. New prediction methods for collaborative filtering

    Directory of Open Access Journals (Sweden)

    Hasan BULUT

    2016-05-01

    Full Text Available Companies, in particular e-commerce companies, aim to increase customer satisfaction, and hence their profits, using recommender systems. Recommender systems are widely used nowadays and provide strategic advantages to the companies that use them. These systems consist of several stages. In the first stage, the similarities between the active user and other users are computed using the user-product ratings matrix. These similarities are then used to find the neighbors of the active user. In the prediction calculation stage, the similarities computed in the first stage are used to generate the weight vector of the closest neighbors, and each neighbor affects the prediction value according to the corresponding value of the weight vector. In this study, we developed two new methods for the prediction calculation stage, which is the last stage of collaborative filtering. The performance of these methods is measured with evaluation metrics used in the literature and compared with other studies in this field.
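
    The prediction calculation stage described in this record is typically a similarity-weighted average of neighbor deviations from their mean ratings (one common baseline formulation, not necessarily the paper's new methods). A sketch with invented users and similarities:

    ```python
    def predict_rating(active_mean, neighbors):
        """Mean-centered weighted prediction.
        neighbors: list of (similarity, neighbor_rating, neighbor_mean)."""
        num = sum(sim * (r - mean) for sim, r, mean in neighbors)
        den = sum(abs(sim) for sim, _, _ in neighbors)
        return active_mean if den == 0 else active_mean + num / den

    neighbors = [(0.9, 5.0, 4.0),   # very similar user rated above their mean
                 (0.5, 3.0, 3.5),   # moderately similar, slightly below
                 (0.1, 1.0, 2.0)]   # barely similar, contributes little
    pred = predict_rating(active_mean=3.8, neighbors=neighbors)
    ```

    Mean-centering compensates for users who rate systematically high or low; new prediction methods of the kind the record proposes change how the weight vector is built or combined.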

  3. Novel hyperspectral prediction method and apparatus

    Science.gov (United States)

    Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf

    2009-05-01

    Both the power and the challenge of hyperspectral technologies is the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and by statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-CalTM software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA based High-Speed Prediction EngineTM module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing and remote sensing.

  4. Radiation area monitor device and method

    Science.gov (United States)

    Vencelj, Matjaz; Stowe, Ashley C.; Petrovic, Toni; Morrell, Jonathan S.; Kosicek, Andrej

    2018-01-30

    A radiation area monitor device/method, utilizing: a radiation sensor; a rotating radiation shield disposed about the radiation sensor, wherein the rotating radiation shield defines one or more ports that are transparent to radiation; and a processor operable for analyzing and storing a radiation fingerprint acquired by the radiation sensor as the rotating radiation shield is rotated about the radiation sensor. Optionally, the radiation sensor includes a gamma and/or neutron radiation sensor. The device/method selectively operates in: a first supervised mode during which a baseline radiation fingerprint is acquired by the radiation sensor as the rotating radiation shield is rotated about the radiation sensor; and a second unsupervised mode during which a subsequent radiation fingerprint is acquired by the radiation sensor as the rotating radiation shield is rotated about the radiation sensor, wherein the subsequent radiation fingerprint is compared to the baseline radiation fingerprint and, if a predetermined difference threshold is exceeded, an alert is issued.
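
    The unsupervised mode described in this record reduces to comparing a freshly acquired fingerprint against the stored baseline and alerting when a difference threshold is exceeded. A sketch using per-port relative deviation as an assumed difference metric; the patent does not specify the metric, and all counts are invented:

    ```python
    def exceeds_threshold(baseline, current, threshold=0.2):
        """Alert if any port's count rate deviates relatively by more
        than the threshold from the baseline fingerprint."""
        return any(abs(c - b) / b > threshold
                   for b, c in zip(baseline, current))

    baseline = [120.0, 95.0, 110.0, 130.0]        # counts per port, invented
    quiet = [118.0, 97.0, 108.0, 133.0]           # normal fluctuation
    source_moved_in = [118.0, 97.0, 160.0, 133.0]  # anomaly at one port
    ```

    Because the rotating shield gives each port a direction, a single-port deviation like the one above also localizes the change, which a bare (unshielded) detector could not do.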

  5. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the forecasting system developed with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, based on the raw data and the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer, and an output layer. The training method is the error backpropagation algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, which makes it flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.
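
    The kind of network this record describes (one hidden layer, trained by error backpropagation) can be sketched in plain Python. Inputs, targets, and sizes are toy values, not the paper's indicators; an XOR-like pattern is used so the hidden layer actually matters:

    ```python
    import math, random

    def init_net(n_in, n_hidden, seed=1):
        rnd = random.Random(seed)
        w1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
              for _ in range(n_hidden)]          # +1 weight for bias input
        w2 = [rnd.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
        return w1, w2

    def forward(w1, w2, x):
        xb = x + [1.0]                           # append bias input
        h = [math.tanh(sum(w * v for w, v in zip(row, xb))) for row in w1]
        y = sum(w * v for w, v in zip(w2, h + [1.0]))   # linear output
        return h, y

    def train(w1, w2, data, lr=0.2, epochs=3000):
        for _ in range(epochs):
            for x, t in data:
                h, y = forward(w1, w2, x)
                err = y - t
                old_w2 = list(w2)                # use pre-update weights below
                for j, hj in enumerate(h + [1.0]):
                    w2[j] -= lr * err * hj
                for j, row in enumerate(w1):
                    grad = err * old_w2[j] * (1.0 - h[j] ** 2)
                    for i, xi in enumerate(x + [1.0]):
                        row[i] -= lr * grad * xi

    def mse(w1, w2, data):
        return sum((forward(w1, w2, x)[1] - t) ** 2 for x, t in data) / len(data)

    # toy "price up/down" targets from two indicators (XOR-like pattern)
    data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
    w1, w2 = init_net(2, 4)
    before = mse(w1, w2, data)
    train(w1, w2, data)
    after = mse(w1, w2, data)
    ```

    The record's system additionally searches for the optimal topology; here the hidden-layer size is simply fixed at four units.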

  6. Machine Learning Methods to Predict Diabetes Complications.

    Science.gov (United States)

    Dagliati, Arianna; Marini, Simone; Sacchi, Lucia; Cogni, Giulia; Teliti, Marsida; Tibollo, Valentina; De Cata, Pasquale; Chiovato, Luca; Bellazzi, Riccardo

    2018-03-01

    One of the areas where Artificial Intelligence is having more impact is machine learning, which develops algorithms able to learn patterns and decision rules from data. Machine learning algorithms have been embedded into data mining pipelines, which can combine them with classical statistical strategies, to extract knowledge from data. Within the EU-funded MOSAIC project, a data mining pipeline has been used to derive a set of predictive models of type 2 diabetes mellitus (T2DM) complications based on electronic health record data of nearly one thousand patients. Such pipeline comprises clinical center profiling, predictive model targeting, predictive model construction and model validation. After having dealt with missing data by means of random forest (RF) and having applied suitable strategies to handle class imbalance, we have used Logistic Regression with stepwise feature selection to predict the onset of retinopathy, neuropathy, or nephropathy, at different time scenarios, at 3, 5, and 7 years from the first visit at the Hospital Center for Diabetes (not from the diagnosis). Considered variables are gender, age, time from diagnosis, body mass index (BMI), glycated hemoglobin (HbA1c), hypertension, and smoking habit. Final models, tailored in accordance with the complications, provided an accuracy up to 0.838. Different variables were selected for each complication and time scenario, leading to specialized models easy to translate to the clinical practice.

  7. An assessment on epitope prediction methods for protozoa genomes

    Directory of Open Access Journals (Sweden)

    Resende Daniela M

    2012-11-01

    Full Text Available Abstract Background Epitope prediction using computational methods represents one of the most promising approaches to vaccine development. Reduction of time and cost, and the availability of completely sequenced genomes, are key points and highly motivating regarding the use of reverse vaccinology. Parasites of the genus Leishmania are widely spread and are the etiologic agents of leishmaniasis. Currently, there is no efficient vaccine against this pathogen and the drug treatment is highly toxic. The lack of sufficiently large datasets of experimentally validated parasite epitopes represents a serious limitation, especially for trypanosomatid genomes. In this work we highlight the predictive performance of several algorithms that were evaluated through the development of a MySQL database built with the purpose of: (a) evaluating individual algorithms' prediction performance, and their combination, for CD8+ T cell epitopes, B-cell epitopes, and subcellular localization, by means of AUC (Area Under Curve) performance and a threshold-dependent method that employs a confusion matrix; (b) integrating data from experimentally validated and in silico predicted epitopes; and (c) integrating the subcellular localization predictions and experimental data. The NetCTL, NetMHC, BepiPred, BCPred12, and AAP12 algorithms were used for in silico epitope prediction, and WoLF PSORT, Sigcleave, and TargetP for in silico subcellular localization prediction, against trypanosomatid genomes. Results A database-driven epitope prediction method was developed with built-in functions capable of: (a) removing experimental data redundancy; (b) parsing algorithm predictions and storing experimentally validated and predicted data; and (c) evaluating algorithm performance. Results show that better performance is achieved when the combined prediction is considered. This is particularly true for B-cell epitope predictors, where the combined prediction of AAP12 and BCPred12 reached an AUC value...

  8. A comparison of methods for cascade prediction

    OpenAIRE

    Guo, Ruocheng; Shakarian, Paulo

    2016-01-01

    Information cascades exist in a wide variety of platforms on the Internet. A very important real-world problem is to identify which information cascades can go viral. A system addressing this problem can be used in a variety of applications including public health, marketing and counter-terrorism. A cascade can be considered as a compound of the social network and the time series. However, in the related literature where methods for solving the cascade prediction problem were proposed, the experimen...

  9. Controlling factors of uranium mineralization and prospect prediction in Qimantage area

    International Nuclear Information System (INIS)

    Yao Chunling; Zhu Pengfei; Cai Yuqi; Zhang Wenming; Zhao Yong'an; Song Jiye; Zhang Xiaojin

    2011-01-01

    Based on the analysis of the regional geology of the Qimantage area, the conditions for uranium mineralization are summarized in terms of the regional geological setting, volcanic rocks, granites, and faults. This study shows that the area has favorable prospects for uranium mineralization. A metallogenic model is built according to the factors controlling uranium mineralization. Under this model, six potential areas are predicted using the MRAS software with the mineralization-factor method of synthetic geological information. (authors)

  10. Radiation sensitive area detection device and method

    Science.gov (United States)

    Carter, Daniel C. (Inventor); Hecht, Diana L. (Inventor); Witherow, William K. (Inventor)

    1991-01-01

    A radiation sensitive area detection device for use in conjunction with an X-ray, ultraviolet or other radiation source is provided which comprises: a phosphor-containing film which releases a stored diffraction pattern image in response to incoming light or another electromagnetic wave; a light source such as a helium-neon laser; and an optical fiber capable of directing light from the laser source onto the phosphor film and of channelling the fluoresced light from the phosphor film to an integrating sphere, which directs the light to a signal processing means including a light receiving means such as a photomultiplier tube. The signal processing means allows translation of the fluoresced light in order to detect the original pattern caused by the diffraction of the radiation by the original sample. The optical fiber is retained directly in front of the phosphor screen by a thin metal holder which moves up and down across the phosphor screen and which features a replaceable pinhole allowing easy adjustment of the resolution of the light projected onto the phosphor film. The device produces near real-time images with high spatial resolution and without the distortion that accompanies prior-art devices employing photomultiplier tubes. A method is also provided for carrying out radiation area detection using the device of the invention.

  11. Hybrid methods for airframe noise numerical prediction

    Energy Technology Data Exchange (ETDEWEB)

    Terracol, M.; Manoha, E.; Herrero, C.; Labourasse, E.; Redonnet, S. [ONERA, Department of CFD and Aeroacoustics, BP 72, Chatillon (France); Sagaut, P. [Laboratoire de Modelisation en Mecanique - UPMC/CNRS, Paris (France)

    2005-07-01

    This paper describes some significant steps made towards the numerical simulation of the noise radiated by the high-lift devices of a plane. Since the full numerical simulation of such configuration is still out of reach for present supercomputers, some hybrid strategies have been developed to reduce the overall cost of such simulations. The proposed strategy relies on the coupling of an unsteady nearfield CFD with an acoustic propagation solver based on the resolution of the Euler equations for midfield propagation in an inhomogeneous field, and the use of an integral solver for farfield acoustic predictions. In the first part of this paper, this CFD/CAA coupling strategy is presented. In particular, the numerical method used in the propagation solver is detailed, and two applications of this coupling method to the numerical prediction of the aerodynamic noise of an airfoil are presented. Then, a hybrid RANS/LES method is proposed in order to perform some unsteady simulations of complex noise sources. This method allows for significant reduction of the cost of such a simulation by considerably reducing the extent of the LES zone. This method is described and some results of the numerical simulation of the three-dimensional unsteady flow in the slat cove of a high-lift profile are presented. While these results remain very difficult to validate with experiments on similar configurations, they represent up to now the first 3D computations of this kind of flow. (orig.)

  12. Sub-kilometer Numerical Weather Prediction in complex urban areas

    Science.gov (United States)

    Leroyer, S.; Bélair, S.; Husain, S.; Vionnet, V.

    2013-12-01

    A sub-kilometer atmospheric modeling system with grid spacings of 2.5 km, 1 km and 250 m, including urban processes, is currently being developed at the Meteorological Service of Canada (MSC) in order to provide more accurate weather forecasts at the city scale. Atmospheric lateral boundary conditions are provided by the 15-km Canadian Regional Deterministic Prediction System (RDPS). Surface physical processes are represented with the Town Energy Balance (TEB) model for built-up covers and with the Interactions between the Surface, Biosphere, and Atmosphere (ISBA) land surface model for natural covers. In this study, several research experiments over large metropolitan areas using observational networks at the urban scale are presented, with a special emphasis on the representation of local atmospheric circulations and their impact on extreme weather forecasting. First, numerical simulations are performed over the Vancouver metropolitan area during a summertime Intense Observing Period (IOP of 14-15 August 2008) of the Environmental Prediction in Canadian Cities (EPiCC) observational network. The influence of the horizontal resolution on the fine-scale representation of the sea-breeze development over the city is highlighted (Leroyer et al., 2013). Severe summertime storm cases within the Greater Toronto Area (GTA) are then simulated. In view of supporting the 2015 Pan American and Parapan American Games to be held in the GTA, a dense observational network has recently been deployed over this region to support model evaluations at the urban and meso scales. In particular, simulations are conducted for the case of 8 July 2013, when exceptional rainfall was recorded. Leroyer, S., S. Bélair, J. Mailhot, S.Z. Husain, 2013: Sub-kilometer Numerical Weather Prediction in an Urban Coastal Area: A case study over the Vancouver Metropolitan Area, submitted to Journal of Applied Meteorology and Climatology.

  13. Mechatronics technology in predictive maintenance method

    Science.gov (United States)

    Majid, Nurul Afiqah A.; Muthalif, Asan G. A.

    2017-11-01

    This paper presents recent mechatronics technology that can help to implement predictive maintenance by combining intelligent instrumentation with predictive maintenance methods. The Vibration Fault Simulation System (VFSS) is an example of such a mechatronics system. The focus of this study is the use of vibration measurement on critical machines, since vibration is often the key indicator of the state of a machine. The paper shows how to choose an appropriate strategy for the vibration diagnostics of mechanical systems, especially rotating machines, so as to recognize failures during the working process. Vibration signature analysis is implemented to detect faults in rotating machinery, including imbalance, mechanical looseness, bent shaft, misalignment, missing blade, bearing fault, balancing mass and critical speed. In order to perform vibration signature analysis for rotating machinery faults, studies have been made of how mechatronics technology can be used in predictive maintenance methods. A Vibration Faults Simulation Rig (VFSR) is designed to simulate and understand fault signatures. These techniques are based on the processing of vibration data in the frequency domain. LabVIEW-based spectrum analyzer software is developed to acquire and extract the frequency content of fault signals. The system was successfully tested against the unique vibration fault signatures that typically occur in rotating machinery.

  14. Development of a regional ensemble prediction method for probabilistic weather prediction

    International Nuclear Information System (INIS)

    Nohara, Daisuke; Tamura, Hidetoshi; Hirakuchi, Hiromaru

    2015-01-01

    A regional ensemble prediction method has been developed to provide probabilistic weather prediction using a numerical weather prediction model. To obtain perturbations consistent with the synoptic weather pattern, both initial and lateral boundary perturbations were given by differences between the control and ensemble members of the Japan Meteorological Agency (JMA)'s operational one-week ensemble forecast. The method provides multiple ensemble members with a horizontal resolution of 15 km for 48 hours, based on a downscaling of the JMA's operational global forecast accompanied by the perturbations. The ensemble prediction was examined for the heavy snowfall event in the Kanto area on January 14, 2013. The results showed that the predictions represent different features of the high-resolution spatiotemporal distribution of precipitation, reflecting the intensity and location of the extra-tropical cyclone in each ensemble member. Although the ensemble prediction has model biases in the means and variances of some variables such as wind speed and solar radiation, it has the potential to append probabilistic information to a deterministic prediction. (author)

  15. Prediction of forest fires occurrences with area-level Poisson mixed models.

    Science.gov (United States)

    Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo

    2015-05-01

    The number of fires in forest areas of Galicia (north-west Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest area. This work employs area-level Poisson mixed models for treating real data on fires in forest areas. A parametric bootstrap method is applied for estimating the mean squared errors of the fire predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia. Copyright © 2015 Elsevier Ltd. All rights reserved.
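
    The modelling-and-bootstrap idea described above can be sketched in a few lines. This is a simplified illustration on synthetic data: the random-effect term of the mixed model is omitted, and the covariate, sample sizes and coefficients are hypothetical, not the Galician fire data.

```python
import numpy as np

def fit_poisson_glm(X, y, iters=25):
    """Poisson regression (log link) fitted by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)                      # E[y] under the current fit
        hessian = X.T @ (mu[:, None] * X)          # Fisher information
        beta += np.linalg.solve(hessian, X.T @ (y - mu))
    return beta

rng = np.random.default_rng(1)
n = 400
covariate = rng.uniform(0.0, 1.0, n)               # hypothetical area-level covariate
X = np.column_stack([np.ones(n), covariate])
true_beta = np.array([0.5, 1.2])
y = rng.poisson(np.exp(X @ true_beta))             # synthetic fire counts

beta_hat = fit_poisson_glm(X, y)

# parametric bootstrap: resample counts from the fitted model, refit,
# and measure the spread of the predictor at a new covariate value
x_new = np.array([1.0, 0.7])
point_pred = np.exp(x_new @ beta_hat)
boot_preds = []
for _ in range(200):
    y_boot = rng.poisson(np.exp(X @ beta_hat))
    boot_preds.append(np.exp(x_new @ fit_poisson_glm(X, y_boot)))
mse = np.mean((np.array(boot_preds) - point_pred) ** 2)
print(beta_hat, point_pred, mse)
```

    The bootstrap resamples counts from the fitted model and measures the spread of the refitted predictor, which is the same device the paper uses to estimate the mean squared error of its fire predictors.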

  16. Prediction Methods for Blood Glucose Concentration

    DEFF Research Database (Denmark)

    -day workshop on the design, use and evaluation of prediction methods for blood glucose concentration was held at the Johannes Kepler University Linz, Austria. One intention of the workshop was to bring together experts working in various fields on the same topic, in order to shed light from different angles...... discussions which allowed to receive direct feedback from the point of view of different disciplines. This book is based on the contributions of that workshop and is intended to convey an overview of the different aspects involved in the prediction. The individual chapters are based on the presentations given...... in the process of writing this book: All authors for their individual contributions, all reviewers of the book chapters, Daniela Hummer for the entire organization of the workshop, Boris Tasevski for helping with the typesetting, Florian Reiterer for his help editing the book, as well as Oliver Jackson and Karin...

  17. Using Tree Detection Algorithms to Predict Stand Sapwood Area, Basal Area and Stocking Density in Eucalyptus regnans Forest

    Directory of Open Access Journals (Sweden)

    Dominik Jaskierniak

    2015-06-01

    Managers of forested water supply catchments require efficient and accurate methods to quantify changes in forest water use due to changes in forest structure and density after disturbance. Using Light Detection and Ranging (LiDAR) data with as few as 0.9 pulses m−2, we applied a local maximum filtering (LMF) method and a normalised cut (NCut) algorithm to predict stocking density (SDen) of a 69-year-old Eucalyptus regnans forest comprising 251 plots at a resolution of the order of 0.04 ha. Using the NCut method we predicted basal area per hectare (BAHa) and sapwood area per hectare (SAHa), a well-established proxy for transpiration. Sapwood area was also indirectly estimated with allometric relationships dependent on LiDAR-derived SDen and BAHa using a computationally efficient procedure. The individual tree detection (ITD) rates for the LMF and NCut methods respectively had 72% and 68% of stems correctly identified, 25% and 20% of stems missed, and 2% and 12% of stems over-segmented. The significantly higher computational requirement of the NCut algorithm makes the LMF method more suitable for predicting SDen across large forested areas. Using NCut-derived ITD segments, observed versus predicted stand BAHa had R2 ranging from 0.70 to 0.98 across six catchments, whereas a generalised parsimonious model applied to all sites used the portion of hits greater than 37 m in height (PH37) to explain 68% of BAHa. For extrapolating one-ha resolution SAHa estimates across large forested catchments, we found that directly relating SAHa to NCut-derived LiDAR indices (R2 = 0.56) was slightly more accurate but computationally more demanding than indirect estimates of SAHa using allometric relationships consisting of BAHa (R2 = 0.50) or a sapwood perimeter index, defined as (BAHa·SDen)½ (R2 = 0.48).

  18. Computational predictive methods for fracture and fatigue

    Science.gov (United States)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-09-01

    The damage-tolerant design philosophy used by the aircraft industry enables aircraft components and structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F16 showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of the analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended to the analysis of multi-site damage, creep-fatigue, and corrosion-fatigue problems.

  19. Does area V3A predict positions of moving objects?

    Directory of Open Access Journals (Sweden)

    Gerrit W Maus

    2010-11-01

    A gradually fading moving object is perceived to disappear at positions beyond its luminance detection threshold, whereas abrupt offsets are usually localised accurately. What role does retinotopic activity in visual cortex play in this motion-induced mislocalisation of the endpoint of fading objects? Using functional magnetic resonance imaging (fMRI), we localised regions of interest (ROIs) in retinotopic maps abutting the trajectory endpoint of a bar moving either towards or away from this position while gradually decreasing or increasing in luminance. Area V3A showed predictive activity, with stronger fMRI responses for motion towards versus away from the ROI. This effect was independent of the change in luminance. In area V1 we found higher activity for high-contrast onsets and offsets near the ROI, but no significant differences between motion directions. We suggest that perceived final positions of moving objects are based on an interplay between predictive position representations in higher motion-sensitive retinotopic areas and offset transients in primary visual cortex.

  20. Predicting Traffic Flow in Local Area Networks by the Largest Lyapunov Exponent

    Directory of Open Access Journals (Sweden)

    Yan Liu

    2016-01-01

    The dynamics of network traffic are complex and nonlinear. Chaotic behaviours in local area networks (LANs), and their prediction, are studied in detail using the largest Lyapunov exponent. With the introduction of phase-space reconstruction based on the time sequence, the high-dimensional traffic is projected onto a low-dimensional reconstructed phase space, and a reduced dynamic system is obtained from the dynamical-systems viewpoint. A numerical method for computing the largest Lyapunov exponent of this low-dimensional dynamic system is then presented. Further, the longest predictable time, which is related to the chaotic behaviour of the system, is studied using the largest Lyapunov exponent, and the Wolf method is used to predict the evolution of the traffic in a local area network by both Dot and Interval predictions; a reliable result is obtained with the presented method. In conclusion, the results show that the largest Lyapunov exponent can be used to describe the sensitivity of the trajectory in the reconstructed phase space to the initial values. Moreover, Dot prediction can effectively predict flow bursts. The numerical simulation also shows that the presented method is feasible and efficient for predicting complex dynamic behaviours in LAN traffic, especially congestion and attacks, which are the two main complex phenomena behaving as chaos in networks.
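
    The chain described above (delay embedding, a largest-Lyapunov-exponent estimate, and a predictability horizon of roughly 1/λ) can be sketched as follows. Note this is a simplified Rosenstein-style nearest-neighbour estimator rather than the Wolf algorithm the authors use, and the logistic map stands in for measured traffic; all parameter choices are illustrative.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Reconstruct an m-dimensional phase space from a scalar time series."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def largest_lyapunov(x, m=2, tau=1, dt=1.0, horizon=6):
    """Crude estimate: slope of the mean log nearest-neighbour divergence."""
    Y = delay_embed(np.asarray(x, dtype=float), m, tau)
    n = len(Y)
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    for i in range(n):  # exclude temporally close points (Theiler window)
        d[i, max(0, i - m * tau) : min(n, i + m * tau + 1)] = np.inf
    nb = d.argmin(axis=1)                 # nearest neighbour of each point
    mean_logs = []
    for k in range(1, horizon):
        idx = np.arange(n)
        keep = (idx + k < n) & (nb + k < n)
        sep = np.linalg.norm(Y[idx[keep] + k] - Y[nb[keep] + k], axis=1)
        sep = sep[sep > 0]
        mean_logs.append(np.mean(np.log(sep)))
    t = dt * np.arange(1, horizon)
    return np.polyfit(t, mean_logs, 1)[0]  # slope ~ largest Lyapunov exponent

# illustrative chaotic series: the logistic map (true lambda = ln 2 ~ 0.69)
x = np.empty(900)
x[0] = 0.3
for i in range(899):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
lam = largest_lyapunov(x[100:], m=2, tau=1)
print("estimated largest Lyapunov exponent:", lam)
print("longest predictable time ~", 1.0 / lam)
```

    A positive slope indicates exponential divergence of nearby trajectories, and its reciprocal gives the order of magnitude of the longest predictable time discussed in the abstract.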

  1. Usefulness of radionuclide angiocardiography in predicting stenotic mitral orifice area

    International Nuclear Information System (INIS)

    Burns, R.J.; Armitage, D.L.; Fountas, P.N.; Tremblay, P.C.; Druck, M.N.

    1986-01-01

    Fifteen patients with pure mitral stenosis (MS) underwent high-temporal-resolution radionuclide angiocardiography for calculation of the ratio of the peak left ventricular (LV) filling rate to the mean LV filling rate (filling ratio). Whereas LV filling normally occurs in 3 phases, in MS it is more uniform. Thus, in 13 patients the filling ratio was below the normal range of 2.21 to 2.88 (p less than 0.001). In 11 patients in atrial fibrillation, the filling ratio divided by the mean cardiac cycle length and by the LV ejection fraction correlated well (r = 0.85) with the mitral area derived from the modified Gorlin formula, and correlated excellently with the echocardiographic mitral area (r = 0.95). Significant MS can thus be detected using radionuclide angiocardiography to calculate the filling ratio. In the absence of the confounding influence of atrial systole, calculation of 0.14 (filling ratio divided by cardiac cycle length divided by LV ejection fraction) + 0.40 cm2 enables accurate prediction of mitral area (+/- 4%). Our data support the contention that the modified Gorlin formula, based on steady-state hemodynamics, provides less certain estimates of mitral area for patients with MS and atrial fibrillation, in whom echocardiography and radionuclide angiocardiography may be more accurate.
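
    The closing regression can be transcribed directly. The constants 0.14 and 0.40 cm2 are the authors'; the units assumed for cycle length (seconds) and the example inputs are my guesses, since the abstract does not state them.

```python
def mitral_valve_area(filling_ratio, cycle_length, lv_ejection_fraction):
    """Mitral valve area (cm^2) = 0.14 * (filling ratio / cycle length / LVEF) + 0.40,
    per the regression reported in the abstract (cycle length assumed in seconds)."""
    return 0.14 * (filling_ratio / cycle_length / lv_ejection_fraction) + 0.40

# hypothetical patient: filling ratio 2.0, cycle length 0.8 s, LVEF 0.5
print(mitral_valve_area(2.0, 0.8, 0.5))  # ~ 1.1 cm^2
```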

  2. A Versatile Nonlinear Method for Predictive Modeling

    Science.gov (United States)

    Liou, Meng-Sing; Yao, Weigang

    2015-01-01

    As computational fluid dynamics techniques and tools become widely accepted for real-world practice today, it is intriguing to ask: in what areas can they be utilized to their full potential in the future? Some promising areas include design optimization and the exploration of fluid dynamics phenomena (the concept of a numerical wind tunnel), both of which share the common feature that some parameters are varied repeatedly and the computation can be costly. We are especially interested in the need for an accurate and efficient approach for handling these applications: (1) capturing the complex nonlinear dynamics inherent in a system under consideration and (2) versatility (robustness) to encompass a range of parametric variations. In our previous paper, we proposed to use first-order Taylor expansions collected at numerous sampling points along a trajectory and assembled together via nonlinear weighting functions. The validity and performance of this approach was demonstrated for a number of problems with vastly different input functions. In this study, we are especially interested in enhancing the method's accuracy; we extend it to include the second-order Taylor expansion, which, however, requires a complicated evaluation of Hessian matrices for a system of equations, as in fluid dynamics. We propose a method to avoid these Hessian matrices while maintaining the accuracy. Results based on the method are presented to confirm its validity.
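
    A minimal one-dimensional sketch of the first-order variant (local Taylor expansions blended through nonlinear weighting functions) is shown below. The Gaussian weighting, the sampling of sin(x), and all parameter choices are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def taylor_surrogate(x_samples, f_samples, df_samples, sigma=0.25):
    """Blend first-order Taylor expansions at the samples with Gaussian weights."""
    def predict(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        diff = x[:, None] - x_samples[None, :]          # query-to-sample offsets
        w = np.exp(-0.5 * (diff / sigma) ** 2)          # nonlinear weighting functions
        w /= w.sum(axis=1, keepdims=True)
        tangents = f_samples[None, :] + df_samples[None, :] * diff
        return (w * tangents).sum(axis=1)
    return predict

# sample f = sin together with its exact derivative, every 0.5 over [0, 2*pi)
xs = np.arange(0.0, 2.0 * np.pi, 0.5)
model = taylor_surrogate(xs, np.sin(xs), np.cos(xs))

xq = np.linspace(0.2, 5.8, 200)
err = float(np.max(np.abs(model(xq) - np.sin(xq))))
print("max abs error:", err)
```

    Between samples, each tangent line is accurate only locally; the nonlinear weights hand the prediction over smoothly from one expansion point to the next, which is the feature the paper exploits.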

  3. Simple, spatial and predictive approach for cereal yield prediction in the semi-arid areas

    Science.gov (United States)

    Toumi, Jihad; Khabba, Said; Er-Raki, Salah; Le page, Michel; Chahbi Bellakanji, Aicha; Lili Chabaane, Zohra; Ezzahar, Jamal; Zribi, Mehrez; Jarlan, Lionel

    2016-04-01

    The objective is to develop a simple, spatial and predictive approach for estimating the dry matter (DM) and grain yield (GY) of cereals in semi-arid areas. The proposed method is based on the three-efficiencies model of Monteith (1972). This approach summarizes the transformation of solar radiation into dry matter (DM) through the climate (ɛc), interception (ɛi) and conversion (ɛconv) efficiencies. The method combines the maxima of ɛi and ɛconv (noted ɛimax and ɛconvmax) into a single parameter denoted ɛmax, calculated as a function of cumulative growing degree days (CGDD). The stress coefficient ks, which reduces the conversion of solar radiation into biomass, was calculated from the surface temperature or from the water balance of the root zone. In addition, the expression of ks was refined in light of deficit-irrigation results (from the AquaCrop and STICS models) showing that values of ks from 0.7 to 1 do not significantly affect cereal production. For partitioning the dry matter between straw and grain, the proposed method calculates a variable harvest index (HI), deduced from CGDD and HI0max (the maximal final harvest index in the study region). Finally, the approach calculates DM from satellite information (NDVI and surface temperature Ts) and climatic data (solar radiation and air temperature). When Ts is not available, the amount of irrigation is required to calculate ks. To date, the model has been calibrated and validated on the irrigated area R3, located 40 km east of Marrakech. The evolutions of DM and GY were reproduced satisfactorily, with R2 and RMSE of 0.98 and 0.35 t/ha, and 0.98 and 0.19 t/ha, respectively. Additional tests are currently in progress on data from the Kairouan plain of Tunisia.
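
    The efficiency chain can be sketched as a daily accumulation loop. Every functional form and parameter value below is a hypothetical stand-in: the paper derives ɛmax and the harvest index from CGDD, but the abstract does not give the actual expressions.

```python
import numpy as np

def eps_max(cgdd):
    """Hypothetical ramp-up of the combined efficiency eps_i * eps_conv with thermal time."""
    return 0.045 / (1.0 + np.exp(-(cgdd - 400.0) / 100.0))

def harvest_index(cgdd, hi0_max=0.48):
    """Hypothetical harvest-index build-up after flowering, capped at HI0max."""
    return np.minimum(0.0008 * np.maximum(cgdd - 900.0, 0.0), hi0_max)

def simulate_yield(rg, tair, t_base=0.0, eps_c=0.48, ks=1.0):
    """Monteith chain: daily dry-matter increment = Rg * eps_c * eps_max(CGDD) * ks."""
    cgdd = np.cumsum(np.maximum(tair - t_base, 0.0))   # cumulative growing degree days
    dm = np.cumsum(rg * eps_c * eps_max(cgdd) * ks)    # dry matter (units follow Rg)
    gy = harvest_index(cgdd) * dm                      # grain yield via harvest index
    return dm, gy

# 180 hypothetical days of solar radiation (MJ m-2 day-1) and air temperature (degC)
days = np.arange(180)
rg = 18.0 + 6.0 * np.sin(np.pi * days / 180.0)
tair = 12.0 + 10.0 * np.sin(np.pi * days / 180.0)
dm, gy = simulate_yield(rg, tair)
```

    In the paper, ks would additionally be driven by surface temperature or the root-zone water balance; here it is held at 1 for brevity.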

  4. Watershed area ratio accurately predicts daily streamflow in nested catchments in the Catskills, New York

    Directory of Open Access Journals (Sweden)

    Chris C. Gianfagna

    2015-09-01

    New hydrological insights for the region: Watershed area ratio was the most important basin parameter for estimating flow at upstream sites based on downstream flow. The area ratio alone explained 93% of the variance in the slopes of the relationships between upstream and downstream flows. Regression analysis indicated that flow at any upstream point can be estimated by multiplying the flow at a downstream reference gage by the watershed area ratio. This method accurately predicted upstream flows at area ratios as low as 0.005. We also observed a very strong relationship (R2 = 0.79) between area ratio and flow–flow slopes in non-nested catchments. Our results indicate that a simple flow estimation method based on watershed area ratios is justifiable, and indeed preferred, for the estimation of daily streamflow in ungaged watersheds in the Catskills region.
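
    The estimation rule itself is a one-liner; a sketch with hypothetical numbers (the function and variable names are mine, not the paper's):

```python
def upstream_flow(downstream_flow, upstream_area, downstream_area):
    """Estimate daily streamflow at an ungaged upstream point by scaling a
    downstream reference gage's flow by the watershed area ratio."""
    return downstream_flow * (upstream_area / downstream_area)

# hypothetical example: 100 m^3/s at a gage draining 250 km^2;
# an upstream point draining 25 km^2 (area ratio 0.1)
print(upstream_flow(100.0, 25.0, 250.0))  # -> 10.0
```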

  5. ECOLOGICAL REGIONALIZATION METHODS OF OIL PRODUCING AREAS

    Directory of Open Access Journals (Sweden)

    Inna Ivanovna Pivovarova

    2017-01-01

    The paper analyses methods for zoning territories with varying degrees of anthropogenic pollution risk. Results of a spatial analysis of oil pollution of surface water in the most developed oil-producing region of Russia are summarized. An example of GIS zoning according to the degree of environmental hazard is presented. All possible algorithms of cluster analysis are considered for isolating homogeneous data structures. The conclusion is drawn that combined methods of analysis are beneficial for assessing the homogeneity of specific environmental characteristics in selected territories.
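
    As an illustration of the cluster-analysis step used in such zoning, here is a minimal k-means in plain NumPy. The paper considers several clustering algorithms; this shows only the most common one, with made-up two-dimensional data standing in for pollution indicators.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign points to the nearest centre, then recompute centres."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1).argmin(axis=1)
        centres = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centres

# two made-up groups of territories with distinct pollution signatures
rng = np.random.default_rng(1)
low = rng.normal([0.0, 0.0], 0.3, size=(30, 2))
high = rng.normal([5.0, 5.0], 0.3, size=(30, 2))
labels, centres = kmeans(np.vstack([low, high]), k=2)
```

    In a real zoning study each row would be a territory's vector of environmental indicators, and the resulting labels would define the homogeneous zones.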

  6. New methods for fall risk prediction.

    Science.gov (United States)

    Ejupi, Andreas; Lord, Stephen R; Delbaere, Kim

    2014-09-01

    Accidental falls are the leading cause of injury-related death and hospitalization in old age, with over one-third of older adults experiencing at least one fall each year. Because of limited healthcare resources, regular objective fall risk assessments are not possible in the community on a large scale. New methods for fall prediction are necessary to identify and monitor those older people at high risk of falling who would benefit from participating in falls prevention programmes. Technological advances have enabled less expensive ways to quantify physical fall risk in clinical practice and in the homes of older people. Recently, several studies have demonstrated that sensor-based fall risk assessments of postural sway, functional mobility, stepping and walking can discriminate between fallers and non-fallers. Recent research has used low-cost, portable and objective measuring instruments to assess fall risk in older people. Future use of these technologies holds promise for assessing fall risk accurately in an unobtrusive manner in clinical and daily life settings.

  7. A prediction method of natural gas hydrate formation in deepwater gas well and its application

    Directory of Open Access Journals (Sweden)

    Yanli Guo

    2016-09-01

    To prevent the deposition of natural gas hydrate in deepwater gas wells, the hydrate formation area in the wellbore must be predicted. Herein, by comparing four methods for predicting the temperature in the pipe against field data, and five methods for predicting hydrate formation against experimental data, a method based on OLGA & PVTsim for predicting the hydrate formation area in the wellbore is proposed. Hydrate formation under conditions of steady production, throttling and shut-in was then predicted using this method, based on data from a well in the South China Sea. The results indicate that the hydrate formation area decreases with increasing gas production, inhibitor concentration and insulation thickness, and increases with increasing thermal conductivity of the insulation material and with shutdown time. The throttling effect causes a plunge in temperature and pressure in the wellbore, leading to an increase in the hydrate formation area.

  8. Fingerprint image reconstruction for swipe sensor using Predictive Overlap Method

    Directory of Open Access Journals (Sweden)

    Mardiansyah Ahmad Zafrullah

    2018-01-01

    The swipe sensor is one of many types of biometric authentication sensors widely applied in embedded devices. The sensor produces an overlap in every pixel block of the image, so the image requires a reconstruction process before the feature extraction process. Conventional reconstruction methods require extensive computation, making them difficult to apply to embedded devices with limited computing resources. In this paper, image reconstruction using a predictive overlap method is proposed, which determines the image block shift from the previous set of shift data. The experiments were performed using 36 images generated by a swipe sensor with an area of 128 x 8 pixels, where each image has an overlap in each block. The results reveal that computational efficiency can increase by up to 86.44% compared with conventional methods, with accuracy decreasing by only 0.008% on average.
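
    The predictive idea can be illustrated with a toy reconstruction: measure the row shift of the first block pair by full search, then for later pairs search only a ±1 window around the previously measured shift. The block size, shift values and matching criterion below are made-up stand-ins, not the paper's actual algorithm.

```python
import numpy as np

def measure_shift(prev_block, next_block, candidates):
    """Pick the candidate row shift whose implied overlap region matches best."""
    h = len(prev_block)
    errors = []
    for s in candidates:
        overlap = h - s
        if s < 1 or overlap <= 0:
            errors.append(np.inf)
            continue
        errors.append(np.abs(prev_block[s:] - next_block[:overlap]).mean())
    return candidates[int(np.argmin(errors))]

def reconstruct(blocks, full_search):
    """Stitch overlapping swipe-sensor blocks; after the first pair, search
    only +/-1 row around the previously measured (predicted) shift."""
    out = [blocks[0]]
    predicted = None
    for prev, nxt in zip(blocks, blocks[1:]):
        if predicted is None:
            cands = list(full_search)                          # full search once
        else:
            cands = [predicted - 1, predicted, predicted + 1]  # predictive window
        s = measure_shift(prev, nxt, cands)
        out.append(nxt[len(prev) - s:])                        # keep only the new rows
        predicted = s
    return np.vstack(out)

# synthetic "fingerprint": random rows make every shift unambiguous
rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(40, 8)).astype(float)
shifts = [3, 4, 3, 4, 3, 4, 3, 4]                  # rows advanced per swipe step
positions = np.cumsum([0] + shifts)
blocks = [image[p:p + 8] for p in positions]       # overlapping 8-row blocks
restored = reconstruct(blocks, full_search=range(1, 8))
```

    Shrinking the search from all possible shifts to a small predicted window is what cuts the computation, at the cost of occasional mismatches when the swipe speed changes abruptly.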

  9. Different protein-protein interface patterns predicted by different machine learning methods.

    Science.gov (United States)

    Wang, Wei; Yang, Yongxiao; Yin, Jianxin; Gong, Xinqi

    2017-11-22

    Different types of protein-protein interactions produce different protein-protein interface patterns, and different machine learning methods are suited to different types of data. Is it then the case that different machine learning methods prefer different interface patterns for prediction? Here, four different machine learning methods were employed to predict protein-protein interface residue pairs on different interface patterns. The performances of the methods differ for different types of proteins, which suggests that different machine learning methods tend to predict different protein-protein interface patterns. We used ANOVA and variable selection to support this result. Our proposed methods, which take advantage of different single methods, also achieved good prediction results compared to the single methods. Beyond the prediction of protein-protein interactions, this idea can be extended to other research areas such as protein structure prediction and design.
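
    The comparison protocol (train different learners on the same residue-pair features and compare their test accuracy) can be sketched with two toy classifiers on synthetic data. The features, classes and both classifiers are illustrative stand-ins for the four methods and the real interface data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic residue-pair features: "interface" vs "non-interface" classes
X0 = rng.normal(0.0, 1.0, size=(100, 5))
X1 = rng.normal(1.5, 1.0, size=(100, 5))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
order = rng.permutation(200)
X, y = X[order], y[order]
X_train, y_train, X_test, y_test = X[:150], y[:150], X[150:], y[150:]

def one_nn(X_train, y_train, X_test):
    """1-nearest-neighbour: a local, non-parametric learner."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=-1)
    return y_train[d.argmin(axis=1)]

def nearest_centroid(X_train, y_train, X_test):
    """Nearest-centroid: a global, linear learner."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    closer_to_c1 = (np.linalg.norm(X_test - c1, axis=1)
                    < np.linalg.norm(X_test - c0, axis=1))
    return closer_to_c1.astype(int)

for name, clf in [("1-NN", one_nn), ("centroid", nearest_centroid)]:
    accuracy = float((clf(X_train, y_train, X_test) == y_test).mean())
    print(name, accuracy)
```

    On real interface data the ranking of such learners varies with the interface pattern, which is exactly the effect the ANOVA in the paper quantifies.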

  10. Development of nondestructive method for prediction of crack instability

    International Nuclear Information System (INIS)

    Schroeder, J.L.; Eylon, D.; Shell, E.B.; Matikas, T.E.

    2000-01-01

    A method to characterize the deformation zone at a crack tip and predict upcoming fracture under load using white light interference microscopy was developed and studied. Cracks were initiated in notched Ti-6Al-4V specimens through fatigue loading. Following crack initiation, specimens were subjected to static loading during in-situ observation of the deformation area ahead of the crack. Nondestructive in-situ observations were performed using white light interference microscopy. Profilometer measurements quantified the area, volume, and shape of the deformation ahead of the crack front. Results showed an exponential relationship between the area and volume of deformation and the stress intensity factor of the cracked alloy. These findings also indicate that it is possible to determine a critical rate of change of deformation versus the stress intensity factor that can predict oncoming catastrophic failure. In addition, crack-front deformation zones were measured as a function of time under sustained load, and enlargement of the crack tip deformation zone over time was observed.

  11. Analytical methods for predicting contaminant transport

    International Nuclear Information System (INIS)

    Pigford, T.H.

    1989-09-01

    This paper summarizes some of the previous and recent work at the University of California on analytical solutions for predicting contaminant transport in porous and fractured geologic media. Emphasis is given here to the theories for predicting near-field transport, needed to derive the time-dependent source term for predicting far-field transport and overall repository performance. New theories summarized include solubility-limited release rate with flow through backfill in rock, near-field transport of radioactive decay chains, interactive transport of colloid and solute, transport of carbon-14 as carbon dioxide in unsaturated rock, and flow of gases out of a waste container through cracks and penetrations. 28 refs., 4 figs

  12. Spatial prediction of malaria prevalence in an endemic area of Bangladesh

    Directory of Open Access Journals (Sweden)

    Islam Akramul

    2010-05-01

    Abstract. Background: Malaria is a major public health burden in Southeastern Bangladesh, particularly in the Chittagong Hill Tracts region. Malaria is endemic in 13 districts of Bangladesh, and the highest prevalence occurs in Khagrachari (15.47%). Methods: A risk map was developed and geographic risk factors identified using a Bayesian approach. The Bayesian geostatistical model was developed from previously identified individual and environmental covariates. Results: Predicted high-prevalence areas were located along the north-eastern and central parts of the study area. Low to moderate prevalence areas were predicted in the southwestern, southeastern and central regions. Individual age and nearness to fragmented forest were associated with malaria prevalence after adjusting for spatial autocorrelation. Conclusion: A Bayesian analytical approach using multiple enabling technologies (geographic information systems, global positioning systems, and remote sensing) provides a strategy to characterize spatial heterogeneity in malaria risk at a fine scale. Even in the most hyperendemic region of Bangladesh there is substantial spatial heterogeneity in risk. Areas predicted to be at high risk based on the environment, but not yet reached by surveys, are identified.

  13. Different Methods of Predicting Permeability in Shale

    DEFF Research Database (Denmark)

    Mbia, Ernest Ncha; Fabricius, Ida Lykke; Krogsbøll, Anette

    by two to five orders of magnitude at lower vertical effective stress, below 40 MPa, as the content of clay minerals increases, causing heterogeneity in the shale material. Indirect permeability from consolidation can give maximum and minimum values of shale permeability needed in simulating fluid flow......Permeability is often very difficult to measure or predict in shale lithology. In this work we determine shale permeability from consolidation test data using the Wissa et al. (1971) approach and compare the results with permeability predicted from Kozeny's model. Core and cuttings materials...... effective stress to 9 μD at a high vertical effective stress of 100 MPa. The indirect permeability calculated from consolidation tests falls within the same order of magnitude at higher vertical effective stress, above 40 MPa, as that of the Kozeny model for shale samples with high non-clay content ≥ 70%, but is higher

  14. Snow multivariable data assimilation for hydrological predictions in mountain areas

    Science.gov (United States)

    Piazzi, Gaia; Campo, Lorenzo; Gabellani, Simone; Rudari, Roberto; Castelli, Fabio; Cremonese, Edoardo; Morra di Cella, Umberto; Stevenin, Hervé; Ratto, Sara Maria

    2016-04-01

    -based and remotely sensed data of different snow-related variables (snow albedo and surface temperature, Snow Water Equivalent from passive microwave sensors, and Snow Cover Area). SMASH performance was evaluated over the period June 2012 - December 2013 at the meteorological station of Torgnon (Tellinod, 2160 m a.s.l.), located in the Aosta Valley, a mountain region in northwestern Italy. The EnKF algorithm was first tested by assimilating several ground-based measurements: snow depth, land surface temperature, snow density and albedo. The assimilation of observed snow data revealed an overall considerable enhancement of model predictions with respect to the open-loop experiments. A first attempt to integrate remotely sensed information was also performed by assimilating the Land Surface Temperature (LST) from METEOSAT Second Generation (MSG), with good results. The analysis identified the snow depth and the snowpack surface temperature as the most impactful variables in the assimilation process. In order to pinpoint an optimal number of ensemble instances, SMASH performance was also quantitatively evaluated by varying the number of instances. Furthermore, the impact of the data assimilation frequency was analyzed by varying the assimilation time step (3 h, 6 h, 12 h, 24 h).

  15. Connecting clinical and actuarial prediction with rule-based methods

    NARCIS (Netherlands)

    Fokkema, M.; Smits, N.; Kelderman, H.; Penninx, B.W.J.H.

    2015-01-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice.

  16. Can Morphing Methods Predict Intermediate Structures?

    Science.gov (United States)

    Weiss, Dahlia R.; Levitt, Michael

    2009-01-01

    Movement is crucial to the biological function of many proteins, yet crystallographic structures of proteins can give us only a static snapshot. The protein dynamics that are important to biological function often happen on a timescale that is unattainable through detailed simulation methods such as molecular dynamics as they often involve crossing high-energy barriers. To address this coarse-grained motion, several methods have been implemented as web servers in which a set of coordinates is usually linearly interpolated from an initial crystallographic structure to a final crystallographic structure. We present a new morphing method that does not extrapolate linearly and can therefore go around high-energy barriers and which can produce different trajectories between the same two starting points. In this work, we evaluate our method and other established coarse-grained methods according to an objective measure: how close a coarse-grained dynamics method comes to a crystallographically determined intermediate structure when calculating a trajectory between the initial and final crystal protein structure. We test this with a set of five proteins with at least three crystallographically determined on-pathway high-resolution intermediate structures from the Protein Data Bank. For simple hinging motions involving a small conformational change, segmentation of the protein into two rigid sections outperforms other more computationally involved methods. However, large-scale conformational change is best addressed using a nonlinear approach and we suggest that there is merit in further developing such methods. PMID:18996395
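
    The linear interpolation baseline that the paper's nonlinear method is contrasted with is straightforward to state in code. A minimal sketch, with coordinates as an N×3 array; real morphing servers additionally energy-minimize each frame.

```python
import numpy as np

def linear_morph(start_coords, end_coords, n_frames):
    """Linearly interpolate Cartesian coordinates between two conformations;
    returns a list of n_frames coordinate arrays. Every frame lies on the
    straight line between the endpoints, which is why such paths can cut
    through high-energy regions."""
    start = np.asarray(start_coords, dtype=float)
    end = np.asarray(end_coords, dtype=float)
    return [(1 - t) * start + t * end for t in np.linspace(0.0, 1.0, n_frames)]
```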

  17. Prediction Methods in Science and Technology

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    Presents the H-principle, the Heisenberg modelling principle. General properties of the Heisenberg modelling procedure are developed. The theory is applied to principal component analysis and linear regression analysis. It is shown that the H-principle leads to PLS regression in case the task is linear regression analysis. The book contains different methods to find the dimensions of linear models, to carry out sensitivity analysis in latent structure models, variable selection methods, and presentation of results from analysis.

  18. Selection of industrial robots using the Polygons area method

    Directory of Open Access Journals (Sweden)

    Mortaza Honarmande Azimi

    2014-08-01

    Selection of robots from several proposed alternatives is a very important and tedious task. Decision makers are not limited to one method, and several methods have been proposed for solving this problem. This study presents the Polygons Area Method (PAM) as a multi-attribute decision-making method for the robot selection problem. In this method, the maximum polygon area obtained from the attributes of an alternative robot on a radar chart is introduced as the decision-making criterion. The results of this method are compared with other typical multiple-attribute decision-making methods (SAW, WPM, TOPSIS, and VIKOR) in two examples. To find the similarity in the rankings given by different methods, Spearman's rank correlation coefficients are obtained for different pairs of MADM methods. It was observed that the introduced method is in good agreement with other well-known MADM methods in the robot selection problem.
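
    The PAM criterion itself is easy to reproduce: place the m normalized attribute scores on equally spaced spokes of a radar chart and take the area of the resulting polygon. A sketch under the usual radar-chart convention; note that the area depends on the ordering of the axes, which is why the method works with the maximum area over orderings.

```python
import math

def radar_polygon_area(scores):
    """Area of the polygon whose i-th vertex sits at radius scores[i] on
    m equally spaced spokes: 0.5 * sin(2*pi/m) * sum(r_i * r_{i+1})."""
    m = len(scores)
    angle = 2.0 * math.pi / m
    return 0.5 * math.sin(angle) * sum(
        scores[i] * scores[(i + 1) % m] for i in range(m)
    )
```

    Ranking candidate robots then reduces to sorting them by this area, larger being better when all attributes are scaled so that larger is preferable.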

  19. Generic methods for aero-engine exhaust emission prediction

    NARCIS (Netherlands)

    Shakariyants, S.A.

    2008-01-01

    In the thesis, generic methods have been developed for aero-engine combustor performance, combustion chemistry, as well as airplane aerodynamics, airplane and engine performance. These methods specifically aim to support diverse emission prediction studies coupled with airplane and engine

  20. Force prediction in cold rolling mills by polynomial methods

    Directory of Open Access Journals (Sweden)

    Nicu ROMAN

    2007-12-01

    A method for steel and aluminium strip thickness control is presented, including a new predictive rolling-force estimation technique based on a polynomial statistical model.

  1. An Approximate Method for Pitch-Damping Prediction

    National Research Council Canada - National Science Library

    Danberg, James

    2003-01-01

    ...) method for predicting the pitch-damping coefficients has been employed. The CFD method provides important details necessary to derive the correlation functions that are unavailable from the current experimental database...

  2. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    Science.gov (United States)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed within an integrated GIS modeling environment a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
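
    The traditional SCS-CN relation that the distributed CN-VSA method redistributes is, in US customary units (P and Q in inches):

```python
def scs_runoff(P, CN, ia_ratio=0.2):
    """SCS-CN runoff depth Q (inches) for a storm depth P (inches).
    S = 1000/CN - 10 is the potential maximum retention and
    Ia = ia_ratio * S is the initial abstraction (0.2 is the
    conventional default)."""
    S = 1000.0 / CN - 10.0
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0                      # all rainfall abstracted
    return (P - Ia) ** 2 / (P - Ia + S)
```

    In the distributed variant, the curve number effectively varies over the watershed via a topographic index, so that saturated (variable source) areas generate most of the runoff.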

  3. Large-area dry bean yield prediction modeling in Mexico

    Science.gov (United States)

    Given the importance of dry bean in Mexico, crop yield predictions before harvest are valuable for authorities of the agricultural sector, in order to define support for producers. The aim of this study was to develop an empirical model to estimate the yield of dry bean at the regional level prior to harvest.

  4. Seasonal rainfall predictability over the Lake Kariba catchment area ...

    African Journals Online (AJOL)

    Retroactive forecasts are produced for lead times of up to 5 months and probabilistic forecast performances evaluated for extreme rainfall thresholds of the 25th and 75th percentile values of the climatological record. The verification of the retroactive forecasts shows that rainfall over the catchment is predictable at extended ...

  5. Walking path-planning method for multiple radiation areas

    International Nuclear Information System (INIS)

    Liu, Yong-kuo; Li, Meng-kun; Peng, Min-jun; Xie, Chun-li; Yuan, Cheng-qian; Wang, Shuang-yu; Chao, Nan

    2016-01-01

    Highlights: • A radiation environment modeling method is designed. • A path-evaluating method and a segmented path-planning method are proposed. • A path-planning simulation platform for radiation environments is built. • The method avoids being misled by the minimum-dose path of a single area. - Abstract: Building on the minimum-dose path-searching method, a walking path-planning method for multiple radiation areas was designed to overcome the limitations of the single-area minimum-dose path and to find the minimum-dose path across the whole space. A path-planning simulation platform was built using the C# programming language and the DirectX engine. The simulation platform was used in simulations dealing with virtual nuclear facilities. Simulation results indicated that the walking path-planning method is effective in providing safety for people walking in nuclear facilities.

  6. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-03-16

    Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To-date, many computational methods have been proposed for this purpose, but they suffer the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind

  7. Computational methods in sequence and structure prediction

    Science.gov (United States)

    Lang, Caiyi

    This dissertation is organized into two parts. In the first part, we discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our results show that there exists a putative cis-regulatory element "AC(C/G)TAC(C)" upstream of these enzyme genes. We propose that this cis-regulatory element is responsible for the genetic regulation of these three enzymes and that it might also be the binding site for the MYB-class transcription factor PAP1. (b) We investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1-deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we collected in antiAtGLR1.1 lines. By (a) scanning for the occurrence of several groups of known abscisic acid (ABA)-related cis-regulatory elements in the upstream regions of the 876 Arabidopsis genes, and (b) exhaustively scanning for all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We conclude that one specific cis-regulatory element group, the "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to its occurrence within the genome. (c) We introduce a new general-purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we discuss our newly devised protein design framework, with which we have developed ...

  8. Predicting human height by Victorian and genomic methods.

    Science.gov (United States)

    Aulchenko, Yurii S; Struchalin, Maksim V; Belonogova, Nadezhda M; Axenovich, Tatiana I; Weedon, Michael N; Hofman, Albert; Uitterlinden, Andre G; Kayser, Manfred; Oostra, Ben A; van Duijn, Cornelia M; Janssens, A Cecile J W; Borodin, Pavel M

    2009-08-01

    In the Victorian era, Sir Francis Galton showed that 'when dealing with the transmission of stature from parents to children, the average height of the two parents, ... is all we need care to know about them' (1886). One hundred and twenty-two years after Galton's work was published, 54 loci showing strong statistical evidence for association to human height were described, providing us with potential genomic means of human height prediction. In a population-based study of 5748 people, we find that a 54-loci genomic profile explained 4-6% of the sex- and age-adjusted height variance, and had limited ability to discriminate tall/short people, as characterized by the area under the receiver-operating characteristic curve (AUC). In a family-based study of 550 people, with both parents having height measurements, we find that the Galtonian mid-parental prediction method explained 40% of the sex- and age-adjusted height variance, and showed high discriminative accuracy. We have also explored how much variance a genomic profile should explain to reach certain AUC values. For highly heritable traits such as height, we conclude that in applications in which parental phenotypic information is available (eg, medicine), the Victorian Galton's method will long stay unsurpassed, in terms of both discriminative accuracy and costs. For less heritable traits, and in situations in which parental information is not available (eg, forensics), genomic methods may provide an alternative, given that the variants determining an essential proportion of the trait's variation can be identified.
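
    As a concrete baseline, the mid-parental (Galtonian) prediction can be written down in two lines. The ±6.5 cm sex adjustment below is the common clinical (Tanner) variant of the idea, not the exact regression fitted in the paper:

```python
def midparental_height_cm(father_cm, mother_cm, sex):
    """Mid-parental target height: average the parents' heights, then
    shift by ~6.5 cm up for sons and down for daughters (a common
    clinical variant of Galton's method)."""
    mid = (father_cm + mother_cm) / 2.0
    return mid + 6.5 if sex == "M" else mid - 6.5
```

    The point of the comparison in the paper is that this near-trivial calculation explained 40% of the adjusted height variance, versus 4-6% for the 54-locus genomic profile.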

  9. Seasonal rainfall predictability over the Lake Kariba catchment area

    CSIR Research Space (South Africa)

    Muchuru, S

    2014-07-01

    The Lake Kariba catchment area in southern Africa has one of the most variable climates of any major river basin, with an extreme range of conditions across the catchment and through time. Marked seasonal and interannual fluctuations in rainfall...

  10. Neural network and area method interpretation of pulsed experiments

    Energy Technology Data Exchange (ETDEWEB)

    Dulla, S.; Picca, P.; Ravetto, P. [Politecnico di Torino, Dipartimento di Energetica, Corso Duca degli Abruzzi, 24 - 10129 Torino (Italy); Canepa, S. [Lab of Reactor Physics and Systems Behaviour LRS, Paul Scherrer Inst., 5232 Villigen (Switzerland)

    2012-07-01

    The determination of the subcriticality level is an important issue in accelerator-driven system technology. The area method, originally introduced by N. G. Sjoestrand, is a classical technique to interpret flux measurement for pulsed experiments in order to reconstruct the reactivity value. In recent times other methods have also been developed, to account for spatial and spectral effects, which were not included in the area method, since it is based on the point kinetic model. The artificial neural network approach can be an efficient technique to infer reactivities from pulsed experiments. In the present work, some comparisons between the two methods are carried out and discussed. (authors)
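
    For reference, the area method estimates the reactivity in dollars from the pulse response as ρ($) = -A_p/A_d, the ratio of the prompt area to the delayed area. A point-kinetics sketch that approximates the delayed contribution by the flat tail of the response (a simplification; real analyses fit the delayed component):

```python
import numpy as np

def sjoestrand_reactivity(t, counts):
    """Area-ratio (Sjoestrand) method on a uniformly sampled pulse
    response: rho in dollars = -A_prompt / A_delayed, with the delayed
    level estimated from the plateau in the last 20% of the window."""
    dt = t[1] - t[0]                               # uniform time step
    tail = counts[int(0.8 * len(counts)):].mean()  # delayed-neutron plateau
    A_total = counts.sum() * dt
    A_delayed = tail * (t[-1] - t[0] + dt)         # flat delayed area
    A_prompt = A_total - A_delayed
    return -A_prompt / A_delayed
```

    On a synthetic response with a prompt decay superposed on a constant delayed background, the recovered ρ matches the ratio of the two injected areas; spatial and spectral corrections, which motivate the neural network alternative, are exactly what this point-kinetic picture leaves out.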

  11. Assessment Methods of Groundwater Overdraft Area and Its Application

    Science.gov (United States)

    Dong, Yanan; Xing, Liting; Zhang, Xinhui; Cao, Qianqian; Lan, Xiaoxun

    2018-05-01

    Groundwater is an important source of water, and long-term high demand has made it over-exploited. Over-exploitation causes many environmental and geological problems. This paper explores the concept of the over-exploitation area, summarizes the natural and social attributes of over-exploitation areas, and expounds their evaluation methods, including single-factor evaluation, multi-factor system analysis, and numerical methods. The different methods are compared and analyzed. Taking Northern Weifang as an example, the paper then demonstrates the practicality of the appraisal methods.

  12. [Comparison of Different Methods of Area Measurement in Irregular Scars].

    Science.gov (United States)

    Ran, D; Li, W J; Sun, Q G; Li, J Q; Xia, Q

    2016-10-01

    To determine a measurement standard for irregular scar area by comparing the advantages and disadvantages of different methods of measuring the same irregular scar area. The irregular scar area was digitally scanned and measured by the coordinate reading method, the AutoCAD pixel method, the Photoshop lasso pixel method, the Photoshop magic-wand filled-pixel method, and the Foxit PDF reading software; aspects of these methods such as measurement time, repeatability, whether results could be recorded, and whether they could be traced were compared and analyzed. There was no significant difference among the scar areas given by the measurement methods above. However, there were statistical differences in measurement time and in repeatability between single and multiple performers, and only the Foxit PDF reading software could be traced back. The methods above can all be used for measuring scar area, but each has its advantages and disadvantages. It is necessary to develop new measurement software for forensic identification. Copyright© by the Editorial Department of Journal of Forensic Medicine
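
    The pixel-based methods compared above all reduce to the same calculation: count the pixels belonging to the scar and multiply by the physical area of one pixel. A minimal sketch for a binary mask obtained from a flatbed scan of known resolution:

```python
import numpy as np

def area_from_mask(mask, dpi):
    """Area in cm^2 covered by True pixels in a boolean mask of a scan
    acquired at `dpi` dots per inch (1 inch = 2.54 cm)."""
    pixel_side_cm = 2.54 / dpi          # physical side length of a pixel
    return int(mask.sum()) * pixel_side_cm ** 2
```

    The methods differ mainly in how the mask is produced (manual lasso, magic-wand thresholding, CAD tracing), which is what drives the differences in time and repeatability reported above.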

  13. Selection method of terrain matching area for TERCOM algorithm

    Science.gov (United States)

    Zhang, Qieqie; Zhao, Long

    2017-10-01

    The performance of terrain-aided navigation is closely related to the selection of the terrain matching area, and different matching algorithms have different adaptability to terrain. This paper studies the adaptability to terrain of the TERCOM algorithm, analyzes the relation between terrain features and terrain characteristic parameters by qualitative and quantitative methods, and then investigates the relation between matching probability and terrain characteristic parameters by the Monte Carlo method. We then propose a selection method of the terrain matching area for the TERCOM algorithm and verify its correctness with real terrain data in a simulation experiment. Experimental results show that the matching area obtained by the proposed method offers good navigation performance, with a TERCOM matching probability greater than 90%.

  14. Body surface area prediction in normal, hypermuscular, and obese mice.

    Science.gov (United States)

    Cheung, Michael C; Spalding, Paul B; Gutierrez, Juan C; Balkan, Wayne; Namias, Nicholas; Koniaris, Leonidas G; Zimmers, Teresa A

    2009-05-15

    Accurate determination of body surface area (BSA) in experimental animals is essential for modeling effects of burn injury or drug metabolism. Two-dimensional surface area is related to three-dimensional body volume, which in turn can be estimated from body mass. The Meeh equation relates body surface area to the two-thirds power of body mass through a constant, k, which must be determined empirically by species and size. We found that older values of k overestimated BSA in certain mice; we therefore empirically determined k for various strains of normal, obese, and hypermuscular mice. BSA was computed from digitally scanned pelts, and nonlinear regression analysis was used to determine the best-fit k. The empirically determined k of 9.82 for C57BL/6J mice was not significantly different from that of other inbred and outbred mouse strains of normal body composition. However, the mean k of the nearly spheroid, obese lepr(db/db) mice (k = 8.29) was significantly lower than for normals, as were the values for dumbbell-shaped, hypermuscular mice with either targeted deletion of the myostatin gene (Mstn) (k = 8.48) or with skeletal-muscle-specific expression of a dominant negative myostatin receptor (Acvr2b) (k = 8.80). Hypermuscular and obese mice differ substantially from normals in shape and density, resulting in considerably altered k values. This suggests Meeh constants should be determined empirically for animals of altered body composition. Use of these new, improved Meeh constants will allow greater accuracy in experimental models of burn injury and pharmacokinetics.
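
    The Meeh relation and its best-fit constant are simple to reproduce. Because BSA = k·m^(2/3) is linear in k, the least-squares estimate has a closed form; the data in the test below are synthetic, while the default k = 9.82 is the paper's value for normal C57BL/6J mice (with BSA in cm² and mass in g).

```python
def fit_meeh_constant(mass_g, bsa_cm2):
    """Closed-form least-squares fit of k in BSA = k * m**(2/3),
    a model linear in the single parameter k."""
    num = sum(b * m ** (2 / 3) for m, b in zip(mass_g, bsa_cm2))
    den = sum(m ** (4 / 3) for m in mass_g)
    return num / den

def meeh_bsa(mass_g, k=9.82):
    """Predicted BSA (cm^2) from body mass (g) via the Meeh equation."""
    return k * mass_g ** (2 / 3)
```

    For obese or hypermuscular animals the paper's lower constants (8.29-8.80) would be substituted for the default k.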

  15. Life prediction methods for the combined creep-fatigue endurance

    International Nuclear Information System (INIS)

    Wareing, J.; Lloyd, G.J.

    1980-09-01

    The basis and current status of development of the various approaches to the prediction of the combined creep-fatigue endurance are reviewed. It is concluded that an inadequate materials data base makes it difficult to draw sensible conclusions about the prediction capabilities of each of the available methods. Correlation with data for stainless steel 304 and 316 is presented. (U.K.)

  16. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  17. What Predicts Use of Learning-Centered, Interactive Engagement Methods?

    Science.gov (United States)

    Madson, Laura; Trafimow, David; Gray, Tara; Gutowitz, Michael

    2014-01-01

    What makes some faculty members more likely to use interactive engagement methods than others? We use the theory of reasoned action to predict faculty members' use of interactive engagement methods. Results indicate that faculty members' beliefs about the personal positive consequences of using these methods (e.g., "Using interactive…

  18. Method for Predicting Solubilities of Solids in Mixed Solvents

    DEFF Research Database (Denmark)

    Ellegaard, Martin Dela; Abildskov, Jens; O'Connell, J. P.

    2009-01-01

    A method is presented for predicting solubilities of solid solutes in mixed solvents, based on excess Henry's law constants. The basis is statistical mechanical fluctuation solution theory for composition derivatives of solute/solvent infinite dilution activity coefficients. Suitable approximations ...

  19. Fast Prediction Method for Steady-State Heat Convection

    KAUST Repository

    Wáng, Yì; Yu, Bo; Sun, Shuyu

    2012-01-01

    ... the nonuniform POD-Galerkin projection method exhibits high accuracy, good suitability, and fast computation. It has universal significance for accurate and fast prediction. The methodology can also be applied to more complex modeling in chemical engineering.

  20. Conductive sapwood area prediction from stem and canopy areas - allometric equations of Kalahari trees, Botswana

    NARCIS (Netherlands)

    Lubczynski, M.W.; Chavarro-Rincon, D.C.; Rossiter, David

    2017-01-01

    Conductive sapwood (xylem) area (Ax) of all trees in a given forested area is the main factor contributing to spatial tree transpiration. One hundred ninety-five trees of 9 species in the Kalahari region of Botswana were felled, stained, cut into discs, and measured to develop allometric equations

  1. Simple area-based measurement for multidetector computed tomography to predict left ventricular size

    International Nuclear Information System (INIS)

    Schlett, Christopher L.; Kwait, Dylan C.; Mahabadi, Amir A.; Hoffmann, Udo; Bamberg, Fabian; O'Donnell, Christopher J.; Fox, Caroline S.

    2010-01-01

    Measures of left ventricular (LV) mass and dimensions are independent predictors of morbidity and mortality. We determined whether an axial area-based method by computed tomography (CT) provides an accurate estimate of LV mass and volume. A total of 45 subjects (49% female, 56.0 ± 12 years) with a wide range of LV geometry underwent contrast-enhanced 64-slice CT. LV mass and volume were derived from 3D data. 2D images were analysed to determine LV area, the direct transverse cardiac diameter (dTCD) and the cardiothoracic ratio (CTR). Furthermore, feasibility was confirmed in 100 Framingham Offspring Cohort subjects. 2D measures of LV area, dTCD and CTR were 47.3 ± 8 cm², 14.7 ± 1.5 cm and 0.54 ± 0.05, respectively. 3D-derived LV volume (end-diastolic) and mass were 148.9 ± 45 cm³ and 124.2 ± 34 g, respectively. Excellent inter- and intra-observer agreement was shown for 2D LV area measurements (both intraclass correlation coefficients (ICC) = 0.99, p 0.27). Compared with the traditionally used CTR, LV size can be accurately predicted from a simple and highly reproducible axial LV area-based measurement. (orig.)

  2. Assessment of a method for the prediction of mandibular rotation.

    Science.gov (United States)

    Lee, R S; Daniel, F J; Swartz, M; Baumrind, S; Korn, E L

    1987-05-01

    A new method to predict mandibular rotation developed by Skieller and co-workers on a sample of 21 implant subjects with extreme growth patterns has been tested against an alternative sample of 25 implant patients with generally similar mean values, but with less extreme facial patterns. The method, which had been highly successful in retrospectively predicting changes in the sample of extreme subjects, was much less successful in predicting individual patterns of mandibular rotation in the new, less extreme sample. The observation of a large difference in the strength of the predictions for these two samples, even though their mean values were quite similar, should serve to increase our awareness of the complexity of the problem of predicting growth patterns in individual cases.

  3. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
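
    The split-selection mechanics at the heart of decision-tree regression can be illustrated with a depth-1 tree (a "stump"). The paper's Bagging models average many full trees grown on molecular descriptors plus predicted Kt:p values; this sketch shows only the core step of choosing the best split.

```python
import numpy as np

def fit_stump(X, y):
    """Depth-1 regression tree: pick the (feature, threshold) split that
    minimizes the total squared error; each leaf predicts the mean of
    the training targets on its side of the split."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j])[:-1]:     # thresholds keep both sides non-empty
            left = X[:, j] <= thr
            sse = (((y[left] - y[left].mean()) ** 2).sum()
                   + ((y[~left] - y[~left].mean()) ** 2).sum())
            if best is None or sse < best[0]:
                best = (sse, j, thr, y[left].mean(), y[~left].mean())
    return best[1:]                              # (feature, threshold, left mean, right mean)

def predict_stump(model, x):
    j, thr, left_mean, right_mean = model
    return left_mean if x[j] <= thr else right_mean
```

    Bagging then amounts to fitting many such trees (usually deeper) on bootstrap resamples and averaging their predictions, which is what yields the reported fold-error improvements.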

  4. Performance prediction method for a multi-stage Knudsen pump

    Science.gov (United States)

    Kugimoto, K.; Hirota, Y.; Kizaki, Y.; Yamaguchi, H.; Niimi, T.

    2017-12-01

    In this study, the novel method to predict the performance of a multi-stage Knudsen pump is proposed. The performance prediction method is carried out in two steps numerically with the assistance of a simple experimental result. In the first step, the performance of a single-stage Knudsen pump was measured experimentally under various pressure conditions, and the relationship of the mass flow rate was obtained with respect to the average pressure between the inlet and outlet of the pump and the pressure difference between them. In the second step, the performance of a multi-stage pump was analyzed by a one-dimensional model derived from the mass conservation law. The performances predicted by the 1D-model of 1-stage, 2-stage, 3-stage, and 4-stage pumps were validated by the experimental results for the corresponding number of stages. It was concluded that the proposed prediction method works properly.
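
    The second step — a one-dimensional mass-conservation model of the cascade — can be sketched as follows. The linear single-stage flow map below is a hypothetical stand-in for the experimentally measured relationship between mass flow, average pressure, and pressure difference; steady state requires the same mass flow through every stage, which the sketch enforces by bisection.

```python
def stage_outlet(p_in, m_dot, c0=1.0, c1=0.002, c2=0.0):
    """Invert a hypothetical linear single-stage map
    m_dot = c0 - c1*(p_out - p_in) - c2*(p_in + p_out)/2
    for the outlet pressure of one stage."""
    return (c0 + (c1 - c2 / 2.0) * p_in - m_dot) / (c1 + c2 / 2.0)

def cascade_flow(p_in, p_out_target, n_stages):
    """Steady mass flow through an n-stage cascade: bisect on m_dot until
    stage-by-stage pressure propagation reaches the target outlet."""
    def outlet(m_dot):
        p = p_in
        for _ in range(n_stages):
            p = stage_outlet(p, m_dot)
        return p
    lo, hi = 0.0, 1.0                    # outlet() is decreasing in m_dot
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if outlet(mid) > p_out_target:
            lo = mid                     # pressure rise too large -> flow is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    With the default (pressure-independent) map the per-stage pressure rise is (c0 - m_dot)/c1, so the bisection result can be checked in closed form; the paper's model uses the measured map instead, so the stage rises are not equal in general.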

  5. Predicting and explaining inflammation in Crohn's disease patients using predictive analytics methods and electronic medical record data.

    Science.gov (United States)

    Reddy, Bhargava K; Delen, Dursun; Agrawal, Rupesh K

    2018-01-01

    Crohn's disease is among the chronic inflammatory bowel diseases that impact the gastrointestinal tract. Understanding and predicting the severity of inflammation in real-time settings is critical to disease management. Extant literature has primarily focused on studies that are conducted in clinical trial settings to investigate the impact of a drug treatment on the remission status of the disease. This research proposes an analytics methodology where three different types of prediction models are developed to predict and to explain the severity of inflammation in patients diagnosed with Crohn's disease. The results show that machine-learning-based analytic methods such as gradient boosting machines can predict the inflammation severity with a very high accuracy (area under the curve = 92.82%), followed by regularized regression and logistic regression. According to the findings, a combination of baseline laboratory parameters, patient demographic characteristics, and disease location are among the strongest predictors of inflammation severity in Crohn's disease patients.

  6. Connecting clinical and actuarial prediction with rule-based methods.

    Science.gov (United States)

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved).
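
    A fast-and-frugal representation of such a 2-rule model is tiny in code. The cue names and thresholds below are hypothetical placeholders, not the actual rules derived from the Penninx et al. dataset:

```python
def frugal_tree_predict(cues, rules, default=0):
    """Fast-and-frugal tree: evaluate cues sequentially; the first rule
    whose threshold is met exits with its prediction, so later cues
    need not be measured at all."""
    for cue_name, threshold, prediction in rules:
        if cues[cue_name] >= threshold:
            return prediction
    return default

# Hypothetical 2-rule model in the style of the paper (1 = poor course):
RULES = [("baseline_severity", 20, 1),
         ("symptom_duration_months", 24, 1)]
```

    The sequential exit is what makes such trees cheap in practice: a patient who fires the first rule requires only one cue, versus the 20 predictors of the logistic model it matched in accuracy.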

  7. The trajectory prediction of spacecraft by grey method

    International Nuclear Information System (INIS)

    Wang, Qiyue; Wang, Zhongyu; Zhang, Zili; Wang, Yanqing; Zhou, Weihu

    2016-01-01

The real-time, high-precision trajectory prediction of a moving object is a core technology in aerospace engineering, and real-time monitoring and tracking are likewise essential for safeguarding aerospace equipment. A dynamic trajectory prediction method called the grey dynamic filter (GDF), which combines dynamic measurement theory and grey system theory, is proposed. GDF uses the coordinates of the current period to extrapolate the coordinates of the following period, while its metabolism model preserves the instantaneity of the measured coordinates. In this paper the optimal model length of GDF is first selected to improve the prediction accuracy. Simulations of uniformly accelerated and variably accelerated motion are then conducted; the results indicate that the mean composite position error of the GDF prediction is one-fifth that of the Kalman filter (KF). Using a spacecraft landing experiment, the prediction accuracy of GDF is compared with the KF method and the primitive grey method (GM). The results show that the motion trajectory of the spacecraft predicted by GDF is much closer to the actual trajectory than those of the other two methods, with a mean composite position error one-eighth that of KF and one-fifth that of GM. (paper)
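
The grey-modelling core underlying methods like GDF is typically the textbook GM(1,1) model: accumulate the series, fit the whitened first-order equation by least squares, and forecast from its time-response function. A compact pure-Python sketch of that core follows (the paper's GDF adds dynamic filtering and the metabolism, i.e. rolling-window, mechanism on top of it):

```python
from math import exp

def gm11_forecast(x0, steps=1):
    """Step-ahead forecasts with the textbook GM(1,1) grey model."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    # background values: means of consecutive accumulated points
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    m = n - 1
    # least-squares estimates of a, b in  x0[k] + a*z[k] = b
    sz, szz = sum(z), sum(v * v for v in z)
    sy = sum(y)
    szy = sum(v * w for v, w in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    # time-response function of the whitened equation
    def x1_hat(k):
        return (x0[0] - b / a) * exp(-a * k) + b / a

    return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(steps)]

# near-geometric series (ratio 1.1); true continuation is 133.1 * 1.1 = 146.41
print(gm11_forecast([100.0, 110.0, 121.0, 133.1], steps=1))  # close to 146.4
```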

  8. Specific surface area evaluation method by using scanning electron microscopy

    International Nuclear Information System (INIS)

    Petrescu, Camelia; Petrescu, Cristian; Axinte, Adrian

    2000-01-01

Ceramics are among the most interesting materials for a large category of applications in both industry and health. Among the characteristics of ceramic materials, the specific surface area is often difficult to evaluate. The paper presents a method for evaluating the specific surface area of two ceramic powders by means of scanning electron microscopy measurements, together with an original method of computing the specific surface area. Cumulative curves are used to calculate the specific surface area under the assumption that the particle diameters follow a log-normal distribution. For the two powder types, X7R and NPO, the results are: density ρ (g/cm³), 5.5 and 6.0, respectively; average diameter D̄ (μm), 0.51 and 0.53, respectively; σ, 1.465 and 1.385, respectively; specific surface area (m²/g), 1.248 and 1.330, respectively. The obtained results are in good agreement with the values measured by conventional methods. (authors)
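
For spherical particles with log-normally distributed diameters, a common textbook route to the specific surface area is via the Sauter mean diameter, d₃₂ = d·exp(2.5·ln²σ), and S = 6/(ρ·d₃₂). The sketch below uses that route with the X7R parameters from the record; it is one standard formulation, not necessarily the paper's exact cumulative-curve scheme, so the result only agrees with the reported value in order of magnitude:

```python
from math import exp, log

def specific_surface_area(rho_g_cm3, d_um, sigma_g):
    """Specific surface area (m^2/g) of spherical particles with a
    log-normal size distribution, via the Sauter mean diameter
    d32 = d * exp(2.5 * ln(sigma_g)**2) and S = 6 / (rho * d32).
    A common textbook route, not necessarily the paper's exact scheme."""
    d32_m = d_um * 1e-6 * exp(2.5 * log(sigma_g) ** 2)
    rho_kg_m3 = rho_g_cm3 * 1000.0
    return 6.0 / (rho_kg_m3 * d32_m) / 1000.0   # m^2/kg -> m^2/g

# X7R powder parameters from the record
print(specific_surface_area(5.5, 0.51, 1.465))  # ~1.5 m^2/g, same order as the reported 1.248
```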

  9. Predicting chaos in memristive oscillator via harmonic balance method.

    Science.gov (United States)

    Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai

    2012-12-01

This paper studies the possible chaotic behaviors of a memristive oscillator with cubic nonlinearities via the harmonic balance method, also known as the describing function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system into a Lur'e model and present the prediction of the existence of chaotic behaviors. To ensure that the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.

  10. Evaluation and comparison of mammalian subcellular localization prediction methods

    Directory of Open Access Journals (Sweden)

    Fink J Lynn

    2006-12-01

Background: Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole-genome sequencing projects have been completed and many resulting protein sequences still lack detailed functional information. To address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently depending on the sequences presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results: In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE with a bias towards the subcellular localizations underrepresented in SwissProt was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion: No individual method had a sufficient level of sensitivity across both evaluation sets that would enable reliable application to hypothetical proteins. All methods showed lower performance on the LOCATE
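
The evaluation above scores each predictor by per-location sensitivity and specificity. As a minimal reminder of how those are derived from confusion counts (the counts below are made up for illustration):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
    Computed per subcellular location, then compared with a
    random-chance baseline as in the survey above."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts for one subcellular location
sens, spec = sensitivity_specificity(tp=120, fn=30, tn=800, fp=50)
print(round(sens, 3), round(spec, 3))  # 0.8 0.941
```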

  11. Apparatus and method for mapping an area of interest

    Science.gov (United States)

Staab, Torsten A.; Cohen, Daniel L.; Feller, Samuel [Fairfax, VA]

    2009-12-01

An apparatus and method are provided for mapping an area of interest using polar coordinates or Cartesian coordinates. The apparatus includes a range finder, an azimuth angle measuring device to provide a heading, and an inclinometer to provide the angle of inclination of the range finder as it relates to primary reference points and points of interest. A computer is provided to receive signals from the range finder, inclinometer and azimuth angle measuring device, to record location data, and to calculate relative locations between one or more points of interest and one or more primary reference points. The method includes mapping an area of interest to locate points of interest relative to one or more primary reference points and to store the information in the desired manner. The device may optionally also include an illuminator which can be utilized to paint the area of interest to indicate both points of interest and primary points of reference during and/or after data acquisition.
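
Turning a range-finder reading (range, azimuth, inclination) into relative Cartesian offsets is ordinary spherical-to-Cartesian conversion. The sketch below uses one plausible surveying convention (azimuth measured clockwise from north, inclination above horizontal); the patent does not fix a specific convention, so treat this as illustrative:

```python
from math import sin, cos, radians

def polar_to_cartesian(range_m, azimuth_deg, inclination_deg):
    """Convert (range, azimuth from north, inclination above horizontal)
    to local (east, north, up) offsets in metres.
    Convention chosen for illustration; the patent does not specify one."""
    az = radians(azimuth_deg)
    inc = radians(inclination_deg)
    horizontal = range_m * cos(inc)
    east = horizontal * sin(az)
    north = horizontal * cos(az)
    up = range_m * sin(inc)
    return east, north, up

print(polar_to_cartesian(100.0, 90.0, 0.0))  # due east at ground level (north, up ~ 0)
```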

  12. Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method

    Energy Technology Data Exchange (ETDEWEB)

    Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng; Pota, Hemanshu; Gadh, Rajit

    2016-05-02

This paper proposes a new hybrid method for super-short-term solar power prediction. Solar output power usually has complex, nonstationary, and nonlinear characteristics due to the intermittent and time-varying behavior of solar radiance. In addition, solar power dynamics are fast and essentially inertia-free. Accurate super-short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one-step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates the discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), where the RNN architecture is based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of better-behaved component series for prediction. The ARMA model is employed as a linear predictor, while NARX is used as a nonlinear pattern recognition tool to estimate and compensate for the error of the wavelet-ARMA prediction. The proposed method is applied to data captured from UCLA solar PV panels, and the results are compared with some of the most common and recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in the prediction precision.
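
The decompose-forecast-reconstruct pattern described above can be sketched with a single-level Haar DWT and a trivial per-subband predictor. Here naive persistence stands in for the paper's ARMA/NARX predictors, and Haar stands in for whatever wavelet family the authors used, so this is only a structural illustration:

```python
def haar_analysis(x):
    """Single-level orthonormal Haar DWT (stand-in for the paper's DWT)."""
    r = 2 ** 0.5
    approx = [(x[2 * i] + x[2 * i + 1]) / r for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / r for i in range(len(x) // 2)]
    return approx, detail

def haar_synthesis(approx, detail):
    """Inverse of haar_analysis: perfect reconstruction."""
    r = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / r, (a - d) / r])
    return out

def hybrid_forecast(series):
    """Decompose, forecast each subband (persistence stands in for the
    ARMA/NARX predictors), and reconstruct the next pair of samples."""
    approx, detail = haar_analysis(series)
    approx.append(approx[-1])   # per-subband persistence forecast
    detail.append(detail[-1])
    return haar_synthesis(approx, detail)[-2:]

print(hybrid_forecast([1.0, 2.0, 3.0, 4.0]))
```

In the real method each subband gets its own ARMA model, and the NARX network is trained on the residual of the wavelet-ARMA forecast; the structure (transform, predict per band, invert the transform) is the same.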

  13. Quantitative prediction process and evaluation method for seafloor polymetallic sulfide resources

    Directory of Open Access Journals (Sweden)

    Mengyi Ren

    2016-03-01

Seafloor polymetallic sulfide resources exhibit significant development potential. In 2011, China received the exploration rights for 10,000 km² of a polymetallic sulfides area in the Southwest Indian Ocean; China will be permitted to retain only 25% of the area in 2021. However, exploration of seafloor hydrothermal sulfide deposits in China remains at an initial stage. Drawing on quantitative prediction theory and the exploration status of seafloor sulfides, this paper systematically proposes a quantitative prediction and evaluation process for oceanic polymetallic sulfide resources, divided into three stages: prediction in a large area, prediction in the prospecting region, and verification and evaluation of targets. The first two stages of the prediction process have been employed in seafloor sulfide prospecting of the Chinese contract area. The results of stage one suggest that the Chinese contract area is located in a high posterior-probability area, indicating a promising prospecting region in the Indian Ocean. In stage two, the 48°–52°E part of the Chinese contract area has the highest posterior probability value and can be selected as the reserved region for additional exploration. In stage three, numerical simulation is employed to reproduce the ore-forming process of the sulfides in order to verify the accuracy of the reserved targets obtained from the prediction stages. By narrowing the exploration area and gradually improving the exploration accuracy, the prediction will provide a basis for the exploration and exploitation of seafloor polymetallic sulfide resources.

  14. Available Prediction Methods for Corrosion under Insulation (CUI): A Review

    OpenAIRE

    Burhani Nurul Rawaida Ain; Muhammad Masdi; Ismail Mokhtar Che

    2014-01-01

Corrosion under insulation (CUI) is an increasingly important issue for piping in industry, especially in petrochemical and chemical plants, because of the unexpected catastrophic failures it can cause. Attention towards maintenance and the prediction of CUI occurrence, particularly of corrosion rates, has therefore grown in recent years. In this study, a literature review on determining corrosion rates by using various prediction models and methods of the corrosion occurrence between the external su...

  15. Methods, apparatus and system for notification of predictable memory failure

    Energy Technology Data Exchange (ETDEWEB)

    Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2017-01-03

    A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
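
The claimed flow (monitor conditions, estimate a failure probability, signal when it crosses a threshold) can be sketched in a few lines. The logistic probability model below is purely illustrative; the patent does not disclose a specific functional form here:

```python
from math import exp

def check_memory(conditions, weights, threshold):
    """Sketch of the notification flow: estimate a failure probability
    from monitored memory conditions and signal when it exceeds the
    threshold. The logistic model is illustrative, not the patent's."""
    score = sum(w * c for w, c in zip(weights, conditions))
    probability = 1.0 / (1.0 + exp(-score))
    return probability > threshold, probability

# hypothetical normalized condition readings (e.g., ECC error rate, temperature)
signal, p = check_memory(conditions=[0.9, 0.7], weights=[2.0, 3.0], threshold=0.5)
```

In the patented method the threshold itself is also computed rather than fixed, which the `threshold` parameter only gestures at.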

  16. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only its sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as solutions to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first-principle methods without database information; (b) first-principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal in this work is to review the methods and computational strategies currently used in 3-D protein structure prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Development of a software for predicting the effects of nuclear and radiological terrorism events in city areas

    International Nuclear Information System (INIS)

    Luo Lijuan; Chen Bo; Zhuo Weihai; Lu Shuyu

    2011-01-01

Objective: To develop a new software system that can directly display predicted results on an electronic map, in order to give a directly perceived understanding of the areas affected by nuclear and radiological terrorism events in cities. Methods: Three event scenarios were assumed: spreading of radioactive materials, a dirty bomb attack, and explosion or arson attacks on radiation facilities. A Gaussian diffusion model was employed to predict the spread and deposition of radioactive pollutants, and both internal and external doses were estimated for the representative person using the corresponding dose conversion factors. Through integration of the computing system with the MapInfo geographic information system (GIS), the predicted results were visually displayed on the electronic maps of a city. Results: The new software system could visually display the predicted results on the electronic map of a city, and the predicted results were consistent with those calculated by the similar software Hotspot®; the deviation between the two systems was less than 0.2 km for the predicted isoplethic curves of downwind dose rate. Conclusions: The newly developed software system is of practical value in predicting the effects of nuclear and radiological terrorism events in city areas. (authors)
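
The Gaussian diffusion model mentioned above is, in its standard ground-reflected plume form, a closed formula. The sketch below evaluates that standard formula for a hypothetical release; the software's actual parameterization (dispersion coefficients, stability classes) is not given in the record:

```python
from math import exp, pi

def gaussian_plume(q_bq_s, u_m_s, sigma_y, sigma_z, y, z, h_release):
    """Ground-reflected Gaussian plume concentration (Bq/m^3) at crosswind
    offset y (m) and height z (m) for release height h (m), source strength
    q (Bq/s) and wind speed u (m/s). Standard dispersion formula of the kind
    the software's diffusion model builds on; inputs here are hypothetical."""
    lateral = exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (exp(-(z - h_release) ** 2 / (2 * sigma_z ** 2))
                + exp(-(z + h_release) ** 2 / (2 * sigma_z ** 2)))
    return q_bq_s / (2 * pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

# centerline, ground-level concentration for an illustrative release
c0 = gaussian_plume(q_bq_s=1e9, u_m_s=3.0, sigma_y=50.0, sigma_z=25.0,
                    y=0.0, z=0.0, h_release=10.0)
```

The σ_y and σ_z values grow with downwind distance in a full model; fixed values are used here only to keep the example to a single evaluation.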

  18. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    Science.gov (United States)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
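
The traditional SCS-CN runoff calculation that the distributed CN-VSA method redistributes spatially is a short closed-form computation. A sketch in the customary US units (inches, with the conventional initial-abstraction ratio of 0.2):

```python
def scs_runoff_inches(p_inches, cn, ia_ratio=0.2):
    """Runoff depth Q from the traditional SCS curve-number equation:
    S = 1000/CN - 10 (inches), Ia = 0.2*S, Q = (P - Ia)^2 / (P - Ia + S).
    Returns 0 when rainfall does not exceed the initial abstraction."""
    s = 1000.0 / cn - 10.0
    ia = ia_ratio * s
    if p_inches <= ia:
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

print(round(scs_runoff_inches(3.0, 75), 3))  # 0.961 inches of runoff
```

The distributed CN-VSA method keeps this total runoff volume but uses a topographic wetness index to decide which parts of the watershed generate it.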

  19. Methods and techniques for prediction of environmental impact

    International Nuclear Information System (INIS)

    1992-04-01

Environmental impact assessment (EIA) is the procedure that helps decision makers understand the environmental implications of their decisions. The prediction of environmental effects or impacts is an extremely important part of the EIA procedure, and improvements in existing capabilities are needed. Considerable attention is paid within environmental impact assessment, and in handbooks on EIA, to methods for identifying and evaluating environmental impacts; however, little attention is given to the distribution of information on impact prediction methods. Quantitative and qualitative methods for the prediction of environmental impacts are the two basic approaches for incorporating environmental concerns into the decision-making process; depending on the nature of the proposed activity and the environment likely to be affected, a combination of both is used. Within environmental impact assessment, the accuracy of impact prediction methods is of major importance because it underpins sound and well-balanced decision making. Pertinent and effective action on the problems of environmental protection, the rational use of natural resources and sustainable development is only possible given objective methods and techniques for the prediction of environmental impact. Therefore, the Senior Advisers to ECE Governments on Environmental and Water Problems decided to set up a task force, with the USSR as lead country, on methods and techniques for the prediction of environmental impacts, in order to review and analyse existing methodological approaches and to elaborate recommendations to ECE Governments. The work of the task force was completed in 1990, and the resulting report, with all relevant background material, was approved by the Senior Advisers to ECE Governments on Environmental and Water Problems in 1991. The present report reflects the situation, state of

  20. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    Science.gov (United States)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge; the others are the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity, and the reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset dates of significant earthquakes, the assumption being that each earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods, it has been shown that significant earthquakes tend to occur near planetary trigger dates (Sun and Moon conjunct or opposite, Moon conjunct or opposite the North or South Nodes). In order to test the improvement of the method we used all magnitude 8+ earthquakes recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds and examined the earthquake hit rates for a window of 3 days (i.e., ±1 day of the target date), and likewise for events below magnitude 6.5. The successes are counted for each one of the 86 earthquake seeds, and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is on the planetary trigger date prior to the earthquake; we observe no improvement only when a planetary trigger coincided with the earthquake date, in which case the FDL method coincides with the MFDL. Based on the MFDL method we present a prediction method capable of predicting global events or localized earthquakes, and we discuss the accuracy of the method with regard to both its prediction and location parts. We show example calendar-style predictions for global events as well as for the Greek region using

  1. Methods for early prediction of lactation flow in Holstein heifers

    Directory of Open Access Journals (Sweden)

    Vesna Gantner

    2010-12-01

The aim of this research was to define methods for the early prediction (based on the first milk recording) of the lactation flow in Holstein heifers, and to choose the optimal one in terms of prediction fit and application simplicity. A total of 304,569 daily yield records, automatically recorded on 1,136 first-lactation Holstein cows from March 2003 to August 2008, were included in the analysis. According to the test date, calving date, age at first calving, lactation stage at which the first milk recording occurred, and the average milk yield in the first 25 days (T1) or days 25-45 (T2) of lactation, measuring month-calving month-age-production-time-period subgroups were formed. The parameters of the analysed nonlinear and linear methods were estimated for each defined subgroup. As model evaluation measures, the adjusted coefficient of determination and the average and standard deviation of the prediction error were used. In terms of total variance explained (R²adj), the nonlinear Wood's method showed superiority over the linear ones (Wilmink's, Ali-Schaeffer's and Guo-Swalve's methods) in both time-period subgroups (T1: 97.5% of explained variability; T2: 98.1% of explained variability). Regarding the evaluation measures based on prediction error (e_avg ± e_SD), the lowest average error of daily milk yield prediction (less than 0.005 kg/day), as well as of lactation milk yield prediction (less than 50 kg/lactation in the T1 subgroup and less than 30 kg/lactation in the T2 subgroup), was obtained when Wood's nonlinear prediction method was applied. These results indicate that the estimated Wood's regression parameters could be used in routine work for the early prediction of Holstein heifers' lactation flow.
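
Wood's method referenced above is the incomplete-gamma lactation curve y(t) = a·t^b·e^(−ct), whose peak falls at t = b/c days. A small sketch evaluating the curve and a 305-day lactation total, with illustrative parameters rather than the study's estimates:

```python
from math import exp

def wood_yield(t_days, a, b, c):
    """Wood's incomplete-gamma lactation curve: y(t) = a * t^b * exp(-c*t),
    daily milk yield (kg) at day t. Peak yield occurs at t = b/c."""
    return a * t_days ** b * exp(-c * t_days)

# illustrative parameters, not the study's estimates
params = dict(a=15.0, b=0.20, c=0.004)
peak_day = params["b"] / params["c"]                                   # 50 days
lactation_total = sum(wood_yield(t, **params) for t in range(1, 306))  # 305-day yield, kg
```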

  2. Towards a unified fatigue life prediction method for marine structures

    CERN Document Server

    Cui, Weicheng; Wang, Fang

    2014-01-01

In order to apply the damage tolerance design philosophy to the design of marine structures, accurate prediction of fatigue crack growth under service conditions is required. It is now widely recognized that only a fatigue life prediction method based on fatigue crack propagation (FCP) theory has the potential to explain the various fatigue phenomena observed. In this book, the issues leading towards the development of a unified fatigue life prediction (UFLP) method based on FCP theory are addressed. Based on the philosophy of the UFLP method, the current inconsistency between the fatigue design and the inspection of marine structures could be resolved. This book presents the state of the art and recent advances, including those by the authors, in fatigue studies. It is designed to point out future directions and to provide a useful tool in many practical applications. It is intended for engineers, naval architects, research staff, professionals and graduates engaged in fatigue prevention design and survey ...
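
The canonical FCP relation that UFLP-style methods refine is the Paris law, da/dN = C·(ΔK)^m. The sketch below integrates it numerically for a centre crack in an infinite plate (ΔK = Δσ·√(πa), geometry factor 1); the material constants are illustrative, in MPa·√m units:

```python
from math import pi, sqrt

def cycles_to_failure(a0_m, af_m, delta_sigma_mpa, c_paris, m_paris, steps=100000):
    """Numerically integrate the Paris crack-growth law da/dN = C*(dK)^m
    with dK = dsigma*sqrt(pi*a) (infinite plate, geometry factor 1),
    from initial crack size a0 to final size af, via the midpoint rule.
    Constants are illustrative, not from the book."""
    da = (af_m - a0_m) / steps
    n_cycles = 0.0
    a = a0_m
    for _ in range(steps):
        dk = delta_sigma_mpa * sqrt(pi * (a + 0.5 * da))
        n_cycles += da / (c_paris * dk ** m_paris)
        a += da
    return n_cycles

# steel-like constants: C = 6.9e-12 m/cycle per (MPa*sqrt(m))^3, m = 3
n = cycles_to_failure(a0_m=1e-3, af_m=1e-2, delta_sigma_mpa=100.0,
                      c_paris=6.9e-12, m_paris=3.0)
```

For m ≠ 2 this integral also has a closed form, which makes a convenient cross-check on the numerical result.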

  3. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail; Soufan, Othman; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.

    2016-01-01

DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show, over six different DTI datasets, that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or only a few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of the necessary experimental verifications in the process of drug discovery. DASPfind can be accessed online at: http://www.cbrc.kaust.edu.sa/daspfind.

  4. Prediction of Protein–Protein Interactions by Evidence Combining Methods

    Directory of Open Access Journals (Sweden)

    Ji-Wei Chang

    2016-11-01

Most cellular functions involve proteins acting through physical interactions with partner proteins. Sketching a map of protein–protein interactions (PPIs) is therefore an important first step towards understanding the basics of cell functions. Several experimental techniques operating in vivo or in vitro have made significant contributions to screening large numbers of protein interaction partners, especially high-throughput experimental methods. However, computational approaches to PPI prediction, supported by the rapid accumulation of data generated from experimental techniques, 3D structure definitions, and genome sequencing, have boosted the map sketching of PPIs. In this review, we shed light on in silico PPI prediction methods that integrate evidence from multiple sources, including evolutionary relationships, function annotation, sequence/structure features, network topology and text mining. These methods are developed to integrate multi-dimensional evidence, to design strategies for predicting novel interactions, and to keep the results consistent as prediction coverage and accuracy increase.
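
One simple instance of the multi-evidence integration the review surveys is naive-Bayes-style combination: posterior odds equal prior odds times the product of per-source likelihood ratios, assuming the sources are independent. A sketch with invented likelihood ratios:

```python
from math import log, exp

def combine_evidence(prior, likelihood_ratios):
    """Naive-Bayes combination of independent evidence sources:
    posterior odds = prior odds * product of likelihood ratios.
    Worked in log-odds space for numerical stability."""
    log_odds = log(prior / (1.0 - prior)) + sum(log(lr) for lr in likelihood_ratios)
    odds = exp(log_odds)
    return odds / (1.0 + odds)

# invented likelihood ratios from, say, coexpression, text mining, domain data
p = combine_evidence(prior=0.01, likelihood_ratios=[8.0, 3.0, 5.0])
```

Real integrative predictors relax the independence assumption with learned weights, but the log-odds accumulation is the common backbone.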

  5. PREDICTION OF DROUGHT IMPACT ON RICE PADDIES IN WEST JAVA USING ANALOGUE DOWNSCALING METHOD

    Directory of Open Access Journals (Sweden)

    Elza Surmaini

    2015-09-01

Indonesia consistently experiences dry climatic conditions and droughts during El Niño, with significant consequences for rice production. To mitigate the impacts of such droughts, a robust, simple and timely rainfall forecast is critically important for predicting drought prior to planting time in the rice-growing areas of Indonesia. The main objective of this study was to predict drought in rice-growing areas using ensemble seasonal prediction. The skill of the National Oceanic and Atmospheric Administration's (NOAA's) seasonal prediction model, Climate Forecast System version 2 (CFSv2), for predicting rice drought in West Java was investigated in a series of hindcast experiments for 1989-2010. The Constructed Analogue (CA) method was employed to produce downscaled local rainfall predictions, with the stream function (ψ) and velocity potential (χ) at 850 hPa as predictors and observed rainfall as the predictand. We used forty-two rain gauges in the northern part of West Java, in the Indramayu, Cirebon, Sumedang and Majalengka Districts. To quantify the uncertainties, a multi-window scheme for the predictors was applied to obtain ensemble rainfall predictions. Drought events in the dry planting season were predicted by rainfall thresholds. The skill of the downscaled rainfall prediction was assessed using the Relative Operating Characteristic (ROC) method. Results of the study showed that the skill of the probabilistic seasonal prediction for early detection of rice-area drought ranged from 62% to 82%, with an improved lead time of 2-4 months. This lead time provides sufficient time for policy makers, extension workers and farmers to cope with drought by preparing suitable farming practices and equipment.

  6. Predicting Metabolic Syndrome Using the Random Forest Method

    Directory of Open Access Journals (Sweden)

    Apilak Worachartcheewan

    2015-01-01

Aims. This study proposes a computational method for determining the prevalence of metabolic syndrome (MS) and for predicting its occurrence using the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) criteria. The Random Forest (RF) method is also applied to identify significant health parameters. Materials and Methods. We used data from 5,646 adults aged 18-78 years residing in Bangkok who had received an annual health check-up in 2008. MS was identified using the NCEP ATP III criteria. The RF method was applied to predict the occurrence of MS and to identify important health parameters surrounding this disorder. Results. The overall prevalence of MS was 23.70% (34.32% for males and 17.74% for females). RF accuracy for predicting MS in an adult Thai population was 98.11%. Further, based on RF, triglyceride levels were the most important health parameter associated with MS. Conclusion. RF was shown to predict MS in an adult Thai population with an accuracy >98%, and triglyceride levels were identified as the most informative variable associated with MS. Therefore, using RF to predict MS may be potentially beneficial in identifying MS status and preventing the development of diabetes mellitus and cardiovascular diseases.

  7. NOAA ESRI Grid - sediment size predictions model in New York offshore planning area from Biogeography Branch

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents sediment size predictions from a sediment spatial model developed for the New York offshore spatial planning area. The model also includes...

  8. NOAA ESRI Grid - depth predictions bathymetry model in New York offshore planning area from Biogeography Branch

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents depth predictions from a bathymetric model developed for the New York offshore spatial planning area. The model also includes...

  9. NOAA ESRI Shapefile - sediment composition class predictions in New York offshore planning area from Biogeography Branch

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents sediment composition class predictions from a sediment spatial model developed for the New York offshore spatial planning area. The...

  10. Prediction of polymer flooding performance using an analytical method

    International Nuclear Information System (INIS)

    Tan Czek Hoong; Mariyamni Awang; Foo Kok Wai

    2001-01-01

The study investigated the applicability of an analytical method developed by El-Khatib to polymer flooding. Results from the simulator UTCHEM and from experiments were compared with the El-Khatib prediction method. In general, by assuming constant-viscosity polymer injection, the method gave much higher recovery values than the simulation runs and the experiments. A modification of the method gave a better correlation, albeit only for oil production. Investigation is continuing on modifying the method so that a better overall fit can be obtained for polymer flooding. (Author)

  11. A non-destructive method for estimating onion leaf area

    Directory of Open Access Journals (Sweden)

    Córcoles J.I.

    2015-06-01

Leaf area is one of the most important parameters for characterizing crop growth and development, and its measurement is useful for examining the effects of agronomic management on crop production. It is related to interception of radiation, photosynthesis, biomass accumulation, transpiration and gas exchange in crop canopies. Several direct and indirect methods have been developed for determining leaf area. The aim of this study was to develop an indirect method, based on a mathematical model, to compute leaf area in an onion crop from non-destructive measurements, with the condition that the model must be practical and useful as a decision-support tool to improve crop management. A field experiment was conducted in a 4.75 ha commercial onion plot irrigated with a centre-pivot system in Aguas Nuevas (Albacete, Spain) during the 2010 irrigation season. To determine onion crop leaf area in the laboratory, the crop was sampled on four occasions between 15 June and 15 September. At each sampling event, eight experimental plots of 1 m² were used, and the leaf area of individual leaves was computed using two indirect methods: one based on an automated infrared imaging system (LI-COR-3100C), and the other using a digital scanner (EPSON GT-8000), with the resulting images processed using ImageJ v1.43 software. A total of 1146 leaves were used. Before measuring the leaf area, 25 parameters related to leaf length and width were determined for each leaf. The combined application of principal components analysis and cluster analysis for grouping leaf parameters was used to reduce the number of variables from 25 to 12. The parameter derived from the product of the total leaf length (L) and the leaf diameter at a distance of 25% of the total leaf length (A25) gave the best results for estimating leaf area using a simple linear regression model. The model obtained was useful for computing leaf area using a non
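
The simple linear regression the study settled on, area as a function of the product L·A25, reduces to fitting one slope. A minimal no-intercept least-squares sketch, with invented (L·A25, measured area) pairs standing in for the study's data:

```python
def fit_slope_through_origin(x, y):
    """Least-squares slope k for the no-intercept model y = k*x:
    k = sum(x*y) / sum(x*x). Mirrors the study's simple regression of
    leaf area on L*A25; the data and fitted value here are invented."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# hypothetical (L * A25, measured leaf area) pairs, cm^2
predictor = [30.0, 45.0, 60.0, 75.0]
area = [24.5, 36.0, 48.5, 60.0]
k = fit_slope_through_origin(predictor, area)
estimate = k * 50.0   # predicted leaf area for a leaf with L*A25 = 50 cm^2
```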

  12. Preface to the Focus Issue: Chaos Detection Methods and Predictability

    International Nuclear Information System (INIS)

    Gottwald, Georg A.; Skokos, Charalampos

    2014-01-01

    This Focus Issue presents a collection of papers originating from the workshop Methods of Chaos Detection and Predictability: Theory and Applications held at the Max Planck Institute for the Physics of Complex Systems in Dresden, June 17–21, 2013. The main aim of this interdisciplinary workshop was to review comprehensively the theory and numerical implementation of the existing methods of chaos detection and predictability, as well as to report recent applications of these techniques to different scientific fields. The collection of twelve papers in this Focus Issue represents the wide range of applications, spanning mathematics, physics, astronomy, particle accelerator physics, meteorology and medical research. This Preface surveys the papers of this Issue

  13. Preface to the Focus Issue: chaos detection methods and predictability.

    Science.gov (United States)

    Gottwald, Georg A; Skokos, Charalampos

    2014-06-01

    This Focus Issue presents a collection of papers originating from the workshop Methods of Chaos Detection and Predictability: Theory and Applications held at the Max Planck Institute for the Physics of Complex Systems in Dresden, June 17-21, 2013. The main aim of this interdisciplinary workshop was to review comprehensively the theory and numerical implementation of the existing methods of chaos detection and predictability, as well as to report recent applications of these techniques to different scientific fields. The collection of twelve papers in this Focus Issue represents the wide range of applications, spanning mathematics, physics, astronomy, particle accelerator physics, meteorology and medical research. This Preface surveys the papers of this Issue.

  14. The energetic cost of walking: a comparison of predictive methods.

    Directory of Open Access Journals (Sweden)

    Patricia Ann Kramer

    Full Text Available BACKGROUND: The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. METHODOLOGY/PRINCIPAL FINDINGS: We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. CONCLUSION: Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. 
Although we

  15. The energetic cost of walking: a comparison of predictive methods.

    Science.gov (United States)

    Kramer, Patricia Ann; Sylvester, Adam D

    2011-01-01

    The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. 
Although we used modern humans as our model organism, these results can be extended

  16. Diagnostic test of predicted height model in Indonesian elderly: a study in an urban area

    Directory of Open Access Journals (Sweden)

    Fatmah Fatmah

    2010-08-01

    Full Text Available Aim In an anthropometric assessment, elderly people are frequently unable to have their height measured due to mobility problems and skeletal deformities. An alternative is to use a surrogate value of stature from arm span, knee height, or sitting height. Equations for predicting height in Indonesian elderly were developed using these three predictors and put in the nutritional assessment card (NSA) of older people. Before the card, the first new technology of its kind in Indonesia, is applied in the community, it should be tested. The aim of the study was to conduct a diagnostic test of the predicted height model in the card compared to actual height. Methods Model validation on 400 healthy elderly people was conducted in Jakarta City with a cross-sectional design. This was the second validation test of the model; the first study was undertaken in Depok City, representing a semi-urban area. Results Male elderly had higher mean age, height, weight, arm span, knee height, and sitting height as compared to female elderly. The highest correlation was between knee height and standing height, and was similar in women (r = 0.80; P < 0.001) and men (r = 0.78; P < 0.001), followed by arm span and sitting height. Knee height had the lowest difference from standing height in men (3.13 cm) and women (2.79 cm). Knee height had the highest sensitivity (92.2%), and sitting height the highest specificity (91.2%). Conclusion Stature prediction equations based on knee height, arm span, and sitting height are applicable for nutritional status assessment in Indonesian elderly. (Med J Indones 2010;19:199-204) Key words: diagnostic test, elderly, predicted height model
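
    The sensitivity and specificity figures above come from treating the height equation as a diagnostic test against measured height; a sketch of how such a table is computed follows (the cut-off and classification rule are illustrative assumptions, since the study's exact criteria are not given in the abstract):

```python
def diagnostic_stats(predicted, actual, cutoff):
    """Sensitivity/specificity of a height-prediction equation, treating
    'actual height below cutoff' as the condition and 'predicted height
    below cutoff' as the test result (an assumed classification rule)."""
    tp = fp = tn = fn = 0
    for p, a in zip(predicted, actual):
        condition = a < cutoff          # true short stature
        test = p < cutoff               # predicted short stature
        if condition and test:
            tp += 1
        elif condition and not test:
            fn += 1
        elif not condition and test:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity
```

    High sensitivity means short-statured subjects are rarely missed by the prediction equation; high specificity means normal-statured subjects are rarely misclassified as short.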

  17. Predicting fuel poverty at a small-area level in England

    International Nuclear Information System (INIS)

    Fahmy, Eldin; Gordon, David; Patsios, Demi

    2011-01-01

    This paper describes the development of a series of models for predicting the incidence of fuel poverty in England at a small-area level and examines the adequacy of the modelled results in informing our understanding of the geography of fuel poverty. This paper summarises the development of alternative approaches to model specification based upon different approaches to the treatment of household income. Since 2003 small-area fuel poverty estimates have been widely used to inform affordable warmth policies and local targeting of fuel poverty programs. Whilst improvements in data sources and methods in recent years provide an opportunity to better understand the spatial distribution of fuel poverty, these analyses suggest that our understanding of the incidence and spatial distribution of fuel poverty is highly sensitive to the way in which household incomes are measured. - Highlights: → The proposed models estimate fuel poverty incidence at a small-area level. → This is necessary in order to accurately target local fuel poverty interventions. → Fuel poverty estimates are highly sensitive to differences in income measurement. → Fewer children and more pensioners are fuel poor using EHCS income measures. → More children and fewer pensioners are fuel poor using HBAI income measures.

  18. Identification, prediction, and mitigation of sinkhole hazards in evaporite karst areas

    Science.gov (United States)

    Gutierrez, F.; Cooper, A.H.; Johnson, K.S.

    2008-01-01

    occurrence of sinkholes (number of sinkholes/km² per year). Such spatial and temporal predictions, frequently derived from limited records and based on the assumption that past sinkhole activity may be extrapolated to the future, are non-corroborated hypotheses. Validation methods allow us to assess the predictive capability of the susceptibility maps and to transform them into probability maps. Avoiding the most hazardous areas by preventive planning is the safest strategy for development in sinkhole-prone areas. Corrective measures could be applied to reduce the dissolution activity and subsidence processes. A more practical solution for safe development is to reduce the vulnerability of the structures by using subsidence-proof designs. © 2007 Springer-Verlag.

  19. Using ANN and EPR models to predict carbon monoxide concentrations in urban area of Tabriz

    Directory of Open Access Journals (Sweden)

    Mohammad Shakerkhatibi

    2015-09-01

    Full Text Available Background: Forecasting of air pollutants has become a popular topic of environmental research today. For this purpose, the artificial neural network (ANN) technique is widely used as a reliable method for forecasting air pollutants in urban areas. On the other hand, the evolutionary polynomial regression (EPR) model has recently been used as a forecasting tool in some environmental issues. In this research, we compared the ability of these models to forecast carbon monoxide (CO) concentrations in the urban area of Tabriz city. Methods: The dataset of CO concentrations measured at the fixed stations operated by the East Azerbaijan Environmental Office, along with meteorological data obtained from the East Azerbaijan Meteorological Bureau from March 2007 to March 2013, were used as input for the ANN and EPR models. Results: Based on the results, the performance of ANN is more reliable in comparison with EPR. Using the ANN model, the correlation coefficient values at all monitoring stations were above 0.85. Conversely, the R² values for these stations were below 0.41 using the EPR model. Conclusion: The EPR model could not overcome the nonlinearities of the input data. However, the ANN model displayed more accurate results compared to the EPR. Hence, ANN models are robust tools for predicting air pollutant concentrations.
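
    The ANN/EPR comparison above is stated in terms of correlation and R² against measured CO; a minimal sketch of the coefficient of determination used for such model scoring (variable names are illustrative, not from the paper):

```python
def r_squared(observed, predicted):
    """Coefficient of determination R² = 1 - SS_res / SS_tot between
    measured values (e.g. station CO concentrations) and model output."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

    R² = 1 indicates a perfect fit; a model that only reproduces the mean of the observations scores 0, which is why the sub-0.41 EPR values above signal a poor fit.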

  20. An analytical method for computing atomic contact areas in biomolecules.

    Science.gov (United States)

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc.
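
    The cap areas described above ultimately rest on sphere-sphere intersection geometry; the following is a minimal sketch of the spherical cap one atom's sphere cuts on another (this is not the BallContact implementation, which further partitions overlapping caps with a spherical Laguerre Voronoi diagram):

```python
import math

def cap_area_on_sphere(r1, r2, d):
    """Area of the spherical cap that a sphere of radius r2, whose centre is
    at distance d, cuts on the surface of a sphere of radius r1.
    Returns 0.0 for disjoint or fully nested spheres (a simplification)."""
    if d >= r1 + r2:                        # disjoint: no contact
        return 0.0
    if d + min(r1, r2) <= max(r1, r2):      # one sphere inside the other
        return 0.0
    # distance from centre 1 to the plane of the intersection circle
    x1 = (d * d + r1 * r1 - r2 * r2) / (2.0 * d)
    h = r1 - x1                             # cap height on sphere 1
    return 2.0 * math.pi * r1 * h           # cap area = 2*pi*r*h
```

    For two unit spheres at centre distance 1, the cap area on either sphere is exactly π.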

  1. Combining gene prediction methods to improve metagenomic gene annotation

    Directory of Open Access Journals (Sweden)

    Rosen Gail L

    2011-01-01

    Full Text Available Abstract Background Traditional gene annotation methods rely on characteristics that may not be available in short reads generated from next generation technology, resulting in suboptimal performance for metagenomic (environmental) samples. Therefore, in recent years, new programs have been developed that optimize performance on short reads. In this work, we benchmark three metagenomic gene prediction programs and combine their predictions to improve metagenomic read gene annotation. Results We not only analyze the programs' performance at different read-lengths like similar studies, but also separate different types of reads, including intra- and intergenic regions, for analysis. The main deficiencies are in the algorithms' ability to predict non-coding regions and gene edges, resulting in more false-positives and false-negatives than desired. In fact, the specificities of the algorithms are notably worse than the sensitivities. By combining the programs' predictions, we show significant improvement in specificity at minimal cost to sensitivity, resulting in a 4% improvement in accuracy for 100 bp reads and a ~1% improvement in accuracy for 200 bp reads and above. To correctly annotate the start and stop of the genes, we find that a consensus of all the predictors performs best for shorter read lengths while a unanimous agreement is better for longer read lengths, boosting annotation accuracy by 1-8%. We also demonstrate use of the classifier combinations on a real dataset. Conclusions To optimize the performance for both prediction and annotation accuracies, we conclude that the consensus of all methods (or a majority vote) is the best for reads 400 bp and shorter, while using the intersection of GeneMark and Orphelia predictions is the best for reads 500 bp and longer. We demonstrate that most methods predict over 80% of reads as coding (including partially coding reads) on a real human gut sample sequenced by Illumina technology.
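
    The majority-vote and unanimous ("intersection") combination rules discussed above can be sketched for per-read coding calls; modelling each predictor's output as a simple boolean per read is an assumption for illustration, not the paper's data format:

```python
def consensus_coding_calls(calls, mode="majority"):
    """Combine per-read coding/non-coding calls from several gene predictors.

    calls: list of per-predictor lists of booleans (True = read called coding).
    mode 'majority': coding if more than half the predictors agree;
    mode 'unanimous': coding only if every predictor agrees (intersection).
    """
    n_predictors = len(calls)
    combined = []
    for read_calls in zip(*calls):          # iterate reads across predictors
        votes = sum(read_calls)
        if mode == "majority":
            combined.append(votes * 2 > n_predictors)
        else:                               # 'unanimous' / intersection
            combined.append(votes == n_predictors)
    return combined
```

    As the abstract reports, the looser majority rule trades a little specificity for sensitivity on short reads, while the strict intersection is preferable for long reads.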

  2. Orthology prediction methods: a quality assessment using curated protein families.

    Science.gov (United States)

    Trachana, Kalliopi; Larsson, Tomas A; Powell, Sean; Chen, Wei-Hua; Doerks, Tobias; Muller, Jean; Bork, Peer

    2011-10-01

    The increasing number of sequenced genomes has prompted the development of several automated orthology prediction methods. Tests to evaluate the accuracy of predictions and to explore biases caused by biological and technical factors are therefore required. We used 70 manually curated families to analyze the performance of five public methods in Metazoa. We analyzed the strengths and weaknesses of the methods and quantified the impact of biological and technical challenges. From the latter part of the analysis, genome annotation emerged as the largest single influencer, affecting up to 30% of the performance. Generally, most methods did well in assigning orthologous group but they failed to assign the exact number of genes for half of the groups. The publicly available benchmark set (http://eggnog.embl.de/orthobench/) should facilitate the improvement of current orthology assignment protocols, which is of utmost importance for many fields of biology and should be tackled by a broad scientific community. Copyright © 2011 WILEY Periodicals, Inc.

  3. Fast Prediction Method for Steady-State Heat Convection

    KAUST Repository

    Wáng, Yì

    2012-03-14

    A reduced model by proper orthogonal decomposition (POD) and Galerkin projection methods for steady-state heat convection is established on a nonuniform grid. It was verified by thousands of examples that the results are in good agreement with the results obtained from the finite volume method. This model can also predict the cases where model parameters far exceed the sample scope. Moreover, the calculation time needed by the model is much shorter than that needed for the finite volume method. Thus, the nonuniform POD-Galerkin projection method exhibits high accuracy, good suitability, and fast computation. It has universal significance for accurate and fast prediction. Also, the methodology can be applied to more complex modeling in chemical engineering and technology, such as reaction and turbulence. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Hybrid robust predictive optimization method of power system dispatch

    Science.gov (United States)

    Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY

    2011-08-02

    A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule different assets in order to achieve global optimization and maintain the system normal operation.

  5. Available Prediction Methods for Corrosion under Insulation (CUI): A Review

    Directory of Open Access Journals (Sweden)

    Burhani Nurul Rawaida Ain

    2014-07-01

    Full Text Available Corrosion under insulation (CUI) is an increasingly important issue for piping in industries, especially petrochemical and chemical plants, because of the unexpected catastrophic failures it can cause. Therefore, attention towards maintenance and the prediction of CUI occurrence, particularly of corrosion rates, has grown in recent years. In this study, a literature review was carried out of the various prediction models and methods for determining the corrosion rates occurring between the external pipe surface and its insulation. The available prediction models and methods are presented as references for future research. However, most of the available prediction methods are based only on local industrial data, which may differ with plant location, environment, temperature and many other factors that affect the reliability of the developed models. Such models and methods would therefore be more reliable if supported by laboratory testing or simulation that includes the factors promoting CUI, such as environmental temperature, insulation type and operating temperature.

  6. Predicting proteasomal cleavage sites: a comparison of available methods

    DEFF Research Database (Denmark)

    Saxova, P.; Buus, S.; Brunak, Søren

    2003-01-01

    -terminal, in particular, of CTL epitopes is cleaved precisely by the proteasome, whereas the N-terminal is produced with an extension, and later trimmed by peptidases in the cytoplasm and in the endoplasmic reticulum. Recently, three publicly available methods have been developed for prediction of the specificity...

  7. Predicting the location of human perirhinal cortex, Brodmann's area 35, from MRI

    DEFF Research Database (Denmark)

    Augustinack, Jean C.; Huber, Kristen E.; Stevens, Allison A.

    2013-01-01

    resolution labels to the surface models to localize area 35 in fourteen cases. We validated the area boundaries determined using histological Nissl staining. To test the accuracy of the probabilistic mapping, we measured the Hausdorff distance between the predicted and true labels and found that the median...

  8. Predicting the Impact of Urban Green Areas on Microclimate Changes of Mashhad Residential Areas during the Hottest Period

    Directory of Open Access Journals (Sweden)

    zahra karimian

    2017-09-01

    Full Text Available Introduction: With regard to two adverse climatic phenomena, urban heat islands and global warming, which have been increasing temperatures in many cities of the world, providing human thermal comfort, especially in large cities with hot and dry climates, during the hottest periods of the year is crucial. Vegetation can affect the micro-climate mainly through three mechanisms: shading, evapotranspiration and wind breaking. The aim of this study was to assess and simulate the impact of existing and proposed vegetation on human thermal comfort and micro-climate changes in some residential areas of Mashhad during the hottest period of the year by using a modeling and computer simulation approach. Materials and Methods: This research was performed in the Ghasemabad residential area, Andisheh and Hesabi blocks, in the hottest period of the year 2012 in Mashhad. Data recorded at the residential sites, along with observations from the Mashhad weather station, included temperature, relative humidity, and wind speed and direction. Soil data (soil temperature and humidity, soil type), plant data (plant type, plant height, leaf area index) and building data (indoor temperature, building height and area) were used as input data in the ENVI-met model. Both sites, the Andisheh and Hesabi residential blocks, were simulated with about 20% vegetation (different trees and bush plants, for example acacia, ash, sycamore, mulberry, chinaberry, barberry, boxwood and Cotoneaster, all of them drought-tolerant or semi-tolerant). Three receptors were considered per site in the simulated area. Simulation commenced at 6 AM and continued until 6 PM, but only data for the hours 11-15 (the hours of peak traffic) were analysed. Results and Discussion: Analysis of the output data revealed that the temperatures at the two residential sites were almost the same at all three receptors throughout the study period. 
In general, the maximum temperature difference

  9. An auxiliary optimization method for complex public transit route network based on link prediction

    Science.gov (United States)

    Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian

    2018-02-01

    Inspired by the missing (new) link prediction and the spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route network (PTRN) based on link prediction. First, link prediction applied to PTRN is described, and based on reviewing the previous studies, the summary indices set and its algorithms set are collected for the link prediction experiment. Second, through analyzing the topological properties of Jinan’s PTRN established by the Space R method, we found that this is a typical small-world network with a relatively large average clustering coefficient. This phenomenon indicates that the structural similarity-based link prediction will show a good performance in this network. Then, based on the link prediction experiment of the summary indices set, three indices with maximum accuracy are selected for auxiliary optimization of Jinan’s PTRN. Furthermore, these link prediction results show that the overall layout of Jinan’s PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction. The above pattern conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on the missing (new) link prediction and the spurious existing link identification, we propose optimization schemes that can be used not only to optimize current PTRN but also to evaluate PTRN planning.
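
    The structural similarity indices referenced above can be illustrated with the simplest one, the common-neighbours index; in this sketch the index choice and the graph encoding are illustrative (the paper benchmarks a whole set of indices on the Space R representation of the network):

```python
def common_neighbors_scores(adj):
    """Score every non-adjacent node pair of an undirected graph by the
    common-neighbours index: score(x, y) = |N(x) & N(y)|.  High-scoring
    pairs are candidate 'missing' links for network optimization.

    adj: dict node -> set of neighbouring nodes.
    Returns candidate links sorted by descending score.
    """
    nodes = sorted(adj)
    scores = []
    for i, x in enumerate(nodes):
        for y in nodes[i + 1:]:
            if y in adj[x]:
                continue                    # existing link: skip
            score = len(adj[x] & adj[y])    # shared neighbours
            if score:
                scores.append(((x, y), score))
    scores.sort(key=lambda item: -item[1])
    return scores
```

    In a clustered small-world network like the one described above, pairs of stops sharing many neighbours are exactly the ones structural similarity flags for possible new connections.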

  10. MAPPIN: a method for annotating, predicting pathogenicity and mode of inheritance for nonsynonymous variants.

    Science.gov (United States)

    Gosalia, Nehal; Economides, Aris N; Dewey, Frederick E; Balasubramanian, Suganthi

    2017-10-13

    Nonsynonymous single nucleotide variants (nsSNVs) constitute about 50% of known disease-causing mutations and understanding their functional impact is an area of active research. Existing algorithms predict pathogenicity of nsSNVs; however, they are unable to differentiate heterozygous, dominant disease-causing variants from heterozygous carrier variants that lead to disease only in the homozygous state. Here, we present MAPPIN (Method for Annotating, Predicting Pathogenicity, and mode of Inheritance for Nonsynonymous variants), a prediction method which utilizes a random forest algorithm to distinguish between nsSNVs with dominant, recessive, and benign effects. We apply MAPPIN to a set of Mendelian disease-causing mutations and accurately predict pathogenicity for all mutations. Furthermore, MAPPIN predicts mode of inheritance correctly for 70.3% of nsSNVs. MAPPIN also correctly predicts pathogenicity for 87.3% of mutations from the Deciphering Developmental Disorders Study with a 78.5% accuracy for mode of inheritance. When tested on a larger collection of mutations from the Human Gene Mutation Database, MAPPIN is able to significantly discriminate between mutations in known dominant and recessive genes. Finally, we demonstrate that MAPPIN outperforms CADD and Eigen in predicting disease inheritance modes for all validation datasets. To our knowledge, MAPPIN is the first nsSNV pathogenicity prediction algorithm that provides mode of inheritance predictions, adding another layer of information for variant prioritization. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. The simple method of determination peaks areas in multiplets

    International Nuclear Information System (INIS)

    Loska, L.; Ptasinski, J.

    1991-01-01

    Semiconductor germanium detectors used in γ-spectrometry give spectra with well-separated peaks. In some cases, however, the energies of γ-lines are too close to produce resolved and undisturbed peaks, and a mathematical separation must then be performed. The method proposed here is based on the assumption that the areas of the peaks composing the analysed multiplet are proportional to their heights. The method can be applied to any number of interfering peaks, provided that the background function under the multiplet is accurately determined. The results of test calculations performed on a simulated spectrum are given. The method works successfully in a computer program used for neutron activation analysis data processing. (author). 9 refs, 1 fig, 1 tab
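
    The proportionality assumption above reduces multiplet decomposition to a simple partition of the net area; a minimal sketch follows (the function name, and the requirement that the background-subtracted multiplet area is computed beforehand, are assumptions of this illustration):

```python
def split_multiplet_area(total_area, peak_heights):
    """Partition a multiplet's net (background-subtracted) area among its
    component peaks, assuming each peak's area is proportional to its
    height, as in the method described above."""
    total_height = sum(peak_heights)
    return [total_area * h / total_height for h in peak_heights]
```

    For example, a multiplet of net area 1000 counts with component peak heights 30, 20 and 50 would be split into areas of 300, 200 and 500 counts.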

  12. Customer churn prediction using a hybrid method and censored data

    Directory of Open Access Journals (Sweden)

    Reza Tavakkoli-Moghaddam

    2013-05-01

    Full Text Available Customers are believed to be the main part of any organization’s assets, and customer retention as well as customer churn management are important responsibilities of organizations. In today’s competitive environment, organizations must do their best to retain their existing customers, since attracting new customers costs significantly more than taking care of existing ones. In this paper, we present a hybrid method based on a neural network and Cox regression analysis, where the neural network is used for handling outlier data and the Cox regression method is implemented for prediction of future events. The proposed model has been implemented on some data and the results are compared based on five criteria: prediction accuracy, type I and type II errors, root mean square error and mean absolute deviation. The preliminary results indicate that the proposed model performs better than alternative methods.
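
    Three of the five comparison criteria named above (prediction accuracy, root mean square error and mean absolute deviation) reduce to one-line formulas; a sketch, with illustrative names:

```python
import math

def rmse(actual, predicted):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mad(actual, predicted):
    """Mean absolute deviation between observed and predicted values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def accuracy(actual, predicted):
    """Fraction of churn labels (e.g. 1 = churned, 0 = retained) predicted
    correctly."""
    return sum(a == p for a, p in zip(actual, predicted)) / len(actual)
```

    Type I and type II error rates would be computed analogously from the false-positive and false-negative counts of the churn classification.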

  13. Cervical gland area as an ultrasound marker for prediction of preterm delivery: A cohort study

    Directory of Open Access Journals (Sweden)

    Vajiheh Marsoosi

    2017-11-01

    Full Text Available Background: Preterm labor is a major cause of perinatal morbidity and mortality, and it might be predicted by assessing cervical changes. Objective: To assess the association between absence of the cervical gland area (CGA) and spontaneous preterm labor (SPTL). Materials and Methods: This prospective cohort study was performed on 200 singleton pregnant women with a history of SPTL, second-trimester abortion in the previous pregnancy, or lower abdominal pain in the current pregnancy. Each patient underwent one transvaginal ultrasound examination between 14-28 wk of gestation. Cervical length was measured and the CGA was identified, and their relationship with SPTL before 35 and 37 wk gestation was evaluated using STATA software version 10. Results: The mean cervical length was 36.5 mm (SD=8.4); the shortest measurement was 9 mm, and the longest one was 61 mm. Short cervical length (≤18 mm) was significantly associated with SPTL before 35 and 37 wk gestation. The cervical gland area (the hypoechogenic or echogenic area around the cervical canal) was present in 189 (94.5%) patients. Absence of the CGA had a significant relationship with SPTL before 35 and 37 wk gestation (p=0.01 and p<0.001, respectively). Cervical length was shorter in women with absent CGA in comparison with subjects with present CGA: 37±10 mm in the CGA-present group and 23±9 mm in the CGA-absent group (p<0.001). Conclusion: Our study showed that the cervical gland area might be an important predictor of SPTL, which should be confirmed by further research.

  14. A method for evaluating transport energy consumption in suburban areas

    Energy Technology Data Exchange (ETDEWEB)

    Marique, Anne-Francoise, E-mail: afmarique@ulg.ac.be; Reiter, Sigrid, E-mail: Sigrid.Reiter@ulg.ac.be

    2012-02-15

    Urban sprawl is a major issue for sustainable development. It represents a significant contribution to energy consumption of a territory, especially due to transportation requirements. However, transport energy consumption is rarely taken into account when the sustainability of suburban structures is studied. In this context, the paper presents a method to estimate transport energy consumption in residential suburban areas. The study aimed, on this basis, at highlighting the most efficient strategies needed to promote awareness and to give practical hints on how to reduce transport energy consumption linked to urban sprawl in existing and future suburban neighborhoods. The method uses data collected by using empirical surveys and GIS. An application of this method is presented concerning the comparison of four suburban districts located in Belgium to demonstrate the advantages of the approach. The influence of several parameters, such as distance to work places and services, use of public transport and performance of the vehicles, are then discussed to allow a range of different development situations to be explored. The results of the case studies highlight that traveled distances, and thus a good mix between activities at the living area scale, are of primordial importance for the energy performance, whereas the means of transport used is only of little impact. Improving the performance of the vehicles and favoring home-work proximity also give significant energy savings. The method can be used when planning new areas or retrofitting existing ones, as well as promoting more sustainable lifestyles regarding transport habits. - Highlights: ► The method allows assessment of transport energy consumption in suburban areas and highlights the best strategies to reduce it. ► Home-to-work travels represent the most important part of calculated transport energy consumption. ► Energy savings can be achieved by

  15. A method for evaluating transport energy consumption in suburban areas

    International Nuclear Information System (INIS)

    Marique, Anne-Françoise; Reiter, Sigrid

    2012-01-01

    Urban sprawl is a major issue for sustainable development. It contributes significantly to the energy consumption of a territory, especially through transportation requirements. However, transport energy consumption is rarely taken into account when the sustainability of suburban structures is studied. In this context, the paper presents a method to estimate transport energy consumption in residential suburban areas. On this basis, the study aimed to highlight the most efficient strategies for promoting awareness and to give practical hints on how to reduce the transport energy consumption linked to urban sprawl in existing and future suburban neighborhoods. The method uses data collected through empirical surveys and GIS. An application of the method is presented, comparing four suburban districts located in Belgium to demonstrate the advantages of the approach. The influence of several parameters, such as distance to workplaces and services, use of public transport and performance of the vehicles, is then discussed to allow a range of different development situations to be explored. The case studies highlight that traveled distances, and thus a good mix of activities at the living-area scale, are of primary importance for energy performance, whereas the means of transport used has only a small impact. Improving the performance of the vehicles and favoring home-working also yield significant energy savings. The method can be used when planning new areas or retrofitting existing ones, as well as for promoting more sustainable lifestyles regarding transport habits. - Highlights: ► The method makes it possible to assess transport energy consumption in suburban areas and highlight the best strategies to reduce it. ► Home-to-work travel represents the largest part of the calculated transport energy consumption. ► Energy savings can be achieved by reducing distances to travel through a good mix of activities at the living-area scale.

  16. Method of predicting surface deformation in the form of sinkholes

    Energy Technology Data Exchange (ETDEWEB)

    Chudek, M.; Arkuszewski, J.

    1980-06-01

    Proposes a method for predicting the probability of sinkhole-shaped subsidence, the number of funnel-shaped subsidences, and the size of individual funnels. The following factors influencing sudden surface subsidence in the form of funnels are analyzed: geologic structure of the strata between the mine workings and the surface, mining depth, time factor, and geologic dislocations. Sudden surface subsidence is observed only for workings situated up to a few dozen meters from the surface. Use of the proposed method is explained with examples. It is suggested that the method produces correct results applicable in both coal mining and ore mining. (1 ref.) (In Polish)

  17. Polyadenylation site prediction using PolyA-iEP method.

    Science.gov (United States)

    Kavakiotis, Ioannis; Tzanis, George; Vlahavas, Ioannis

    2014-01-01

    This chapter presents a method called PolyA-iEP that has been developed for the prediction of polyadenylation sites. More precisely, PolyA-iEP is a method that recognizes mRNA 3'ends which contain polyadenylation sites. It is a modular system which consists of two main components. The first exploits the advantages of emerging patterns and the second is a distance-based scoring method. The outputs of the two components are finally combined by a classifier. The final results reach very high scores of sensitivity and specificity.

  18. Metallogenetic prospecting prediction of volcanic rock type of uranium deposit in Pucheng Area

    International Nuclear Information System (INIS)

    Xiao Bin; Wang Yong

    1998-01-01

    Based on the metallogenetic geological conditions of the Pucheng area, the existing metallogenetic geological model, and the information quality method, the logic vector length method and the weighted logic vector length method, favorable geological variables are selected. An assessment model is set up, and favorable metallogenetic areas are delineated according to the different contributions of the geological variables to mineralization. Geological assessment of these areas indicates that favorable metallogenetic conditions exist, and prospective prospecting targets with promising exploration potential are confirmed in the district

  19. Lattice gas methods for predicting intrinsic permeability of porous media

    Energy Technology Data Exchange (ETDEWEB)

    Santos, L.O.E.; Philippi, P.C. [Santa Catarina Univ., Florianopolis, SC (Brazil). Dept. de Engenharia Mecanica. Lab. de Propriedades Termofisicas e Meios Porosos]. E-mail: emerich@lmpt.ufsc.br; philippi@lmpt.ufsc.br; Damiani, M.C. [Engineering Simulation and Scientific Software (ESSS), Florianopolis, SC (Brazil). Parque Tecnologico]. E-mail: damiani@lmpt.ufsc.br

    2000-07-01

    This paper presents a method for predicting the intrinsic permeability of porous media based on Lattice Gas Cellular Automata methods. Two methods are presented. The first is based on a Boolean model (LGA). The second is a lattice Boltzmann method (LB) based on the Boltzmann relaxation equation. LGA is a relatively recent method developed to perform hydrodynamic calculations. The method, in its simplest form, consists of a regular lattice populated with particles that hop from site to site in discrete time steps in a process called propagation. After propagation, the particles in each site interact with each other in a process called collision, in which the number of particles and the momentum are conserved. An exclusion principle is imposed in order to achieve better computational efficiency. Despite its simplicity, this model evolves in agreement with the Navier-Stokes equation at low Mach numbers. LB methods were developed more recently for the numerical integration of the Navier-Stokes equation based on the discrete Boltzmann transport equation. Derived from LGA, LB is a powerful alternative to the standard methods in computational fluid dynamics. In recent years, it has received much attention and has been used in several applications such as simulations of flows through porous media, turbulent flows, and multiphase flows. It is important to emphasize some aspects that make Lattice Gas Cellular Automata methods very attractive for simulating flows through porous media. In fact, boundary conditions in flows through complex geometries are very easy to describe in simulations using these methods. In LGA methods, simulations are performed with integers, requiring less resident memory, and Boolean arithmetic reduces running time. The two methods are used to simulate flows through several Brazilian petroleum reservoir rocks, leading to intrinsic permeability predictions. Simulations are compared with experimental results. (author)
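    The lattice Boltzmann approach summarized in this abstract can be illustrated compactly. The sketch below is not the authors' code; the grid size, relaxation time, and forcing are arbitrary choices. It runs a D2Q9 BGK simulation of body-force-driven flow in a plane channel and recovers the intrinsic permeability, which for this geometry has the analytic value H²/12:

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and opposite directions (for bounce-back)
ex = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
ey = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

nx, ny = 4, 34                      # periodic in x; rows j=0 and j=ny-1 are solid walls
tau, g = 1.0, 1e-6                  # BGK relaxation time, body force along x
nu = (tau - 0.5) / 3.0              # lattice kinematic viscosity

solid = np.zeros((nx, ny), dtype=bool)
solid[:, 0] = solid[:, -1] = True
fluid = ~solid

f = w[:, None, None] * np.ones((9, nx, ny))   # start at rest with unit density
for step in range(15000):
    rho = f.sum(axis=0)
    ux = np.einsum('i,ixy->xy', ex, f) / rho
    uy = np.einsum('i,ixy->xy', ey, f) / rho
    # BGK collision toward local equilibrium plus a simple linear forcing term
    for i in range(9):
        eu = ex[i]*ux + ey[i]*uy
        feq = w[i]*rho*(1 + 3*eu + 4.5*eu**2 - 1.5*(ux**2 + uy**2))
        f[i][fluid] += (-(f[i] - feq)/tau + 3*w[i]*ex[i]*g)[fluid]
    # full bounce-back: reverse the populations sitting on solid nodes (no-slip)
    f[:, solid] = f[opp][:, solid]
    # propagation: particles hop to neighbouring sites (periodic wrap via np.roll)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], ex[i], axis=0), ey[i], axis=1)

u_mean = ux[fluid].mean()           # mean velocity driven by the body force
k_sim = nu * u_mean / g             # Darcy's law at unit density: u = k g / nu
H = ny - 2                          # effective channel width between bounce-back walls
k_exact = H**2 / 12                 # analytic permeability of a plane channel
```

For a real reservoir rock the solid mask would come from a binarized micro-tomography image rather than two flat walls; the permeability estimate k = ν⟨u⟩/g is the same Darcy-law step the paper applies.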

  20. Preliminary Groundwater Assessment using Electrical Method at Quaternary Deposits Area

    Science.gov (United States)

    Hazreek, Z. A. M.; Raqib, A. G. A.; Aziman, M.; Azhar, A. T. S.; Khaidir, A. T. M.; Fairus, Y. M.; Rosli, S.; Fakhrurrazi, I. M.; Izzaty, R. A.

    2017-08-01

    Demand for alternative water sources using groundwater has increased in recent years. In the past, proper and systematic study of groundwater potential varied due to several constraints. Conventionally, tube well points were drilled based on the subjective judgment of several parties, which may lead to uncertainty about the project's success. Hence, this study applied an electrical method to investigate the groundwater potential of a Quaternary deposit area, particularly using resistivity and induced polarization techniques. The electrical survey was performed using ABEM SAS4000 equipment with a pole-dipole array and 2.5 m electrode spacing. Resistivity raw data were analyzed using RES2DINV software. Groundwater could be detected based on resistivity and chargeability values, which varied over 10 - 100 Ωm and 0 - 1 ms respectively. Moreover, a suitable tube well location could be proposed, located 80 m from the first survey electrode in the west direction. Verification of the electrical results against established references showed good agreement, supporting the reliability of the results. Hence, the use of the electrical method in preliminary groundwater assessment can assist several parties in evaluating groundwater prospects at the study area efficiently in terms of cost, time, data coverage, and sustainability.

  1. Method of manufacturing a large-area segmented photovoltaic module

    Science.gov (United States)

    Lenox, Carl

    2013-11-05

    One embodiment of the invention relates to a segmented photovoltaic (PV) module which is manufactured from laminate segments. The segmented PV module includes rectangular-shaped laminate segments formed from rectangular-shaped PV laminates and further includes non-rectangular-shaped laminate segments formed from rectangular-shaped and approximately-triangular-shaped PV laminates. The laminate segments are mechanically joined and electrically interconnected to form the segmented module. Another embodiment relates to a method of manufacturing a large-area segmented photovoltaic module from laminate segments of various shapes. Other embodiments relate to processes for providing a photovoltaic array for installation at a site. Other embodiments and features are also disclosed.

  2. Keyhole imaging method for dynamic objects behind the occlusion area

    Science.gov (United States)

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

    A method of keyhole imaging based on a camera array is realized to obtain video imagery from behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to image the scene behind the keyhole from four directions. The multi-angle video images are saved as frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to perform edge detection and fill the images. The stitching of the four images is accomplished on the basis of a two-image stitching algorithm. In the two-image stitching algorithm, the SIFT method is adopted for the initial matching of images, and the RANSAC algorithm is then applied to eliminate wrong matching points and obtain a homography matrix. A method of optimizing the transformation matrix is also proposed. Finally, a video image with a larger field of view behind the keyhole can be synthesized from the frame sequence in which every single frame is stitched. The results show that the video is clear and natural and the brightness transitions are smooth. There are no obvious artificial stitching marks in the video, and the method can be applied in different engineering environments.
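    The SIFT-plus-RANSAC step of such a stitching pipeline can be sketched independently of any imaging hardware. The example below is illustrative only: it replaces SIFT matches with synthetic point correspondences (including deliberate mismatches), and production code would use a library such as OpenCV with proper coordinate normalization. It estimates a homography by the direct linear transform (DLT) inside a RANSAC loop that rejects wrong matches:

```python
import numpy as np

rng = np.random.default_rng(0)

def homography_dlt(src, dst):
    """Direct Linear Transform: solve dst ~ H @ src (homogeneous) by SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u*x, -u*y, -u])
        A.append([0, 0, 0, x, y, 1, -v*x, -v*y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    ph = np.c_[pts, np.ones(len(pts))] @ H.T
    return ph[:, :2] / ph[:, 2:]

def ransac_homography(src, dst, iters=500, thresh=2.0):
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)   # minimal 4-point sample
        try:
            H = homography_dlt(src[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers of the best minimal-sample model
    return homography_dlt(src[best_inliers], dst[best_inliers]), best_inliers

# synthetic correspondences: 40 true matches (small noise) + 10 gross mismatches
H_true = np.array([[1.1, 0.1, 5.0], [0.05, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = rng.uniform(0, 500, (50, 2))
dst = project(H_true, src) + rng.normal(0, 0.3, (50, 2))
dst[40:] = rng.uniform(0, 500, (10, 2))       # "wrong matching points"

H_est, inliers = ransac_homography(src, dst)
```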

  3. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    Science.gov (United States)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression model in the form of a response surface equation (RSE). Data obtained from the operations of a major airline, for a passenger transport aircraft type flying into Dallas/Fort Worth International Airport, are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
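    The response-surface idea, a multiple regression on polynomial terms of the predictor variables, can be shown on synthetic data. The variables, coefficients, and noise level below are invented for illustration and are not the airline data used in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic landing-speed data: speed driven by landing weight and headwind
n = 300
weight = rng.uniform(50, 80, n)        # tonnes (hypothetical transport aircraft)
wind = rng.uniform(0, 20, n)           # knots of headwind
vref = 90 + 0.9*weight - 0.4*wind + 0.004*weight**2 + rng.normal(0, 1.5, n)

# response-surface equation: full quadratic model fitted by least squares
X = np.column_stack([np.ones(n), weight, wind,
                     weight**2, wind**2, weight*wind])
beta, *_ = np.linalg.lstsq(X, vref, rcond=None)

pred = X @ beta
resid_sd = (vref - pred).std()         # error spread after the RSE fit
baseline_sd = vref.std()               # spread with no model at all
```

The fitted residual standard deviation approaches the noise floor of the synthetic data, which mirrors the paper's use of the error standard deviation as the figure of merit.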

  4. The Dissolved Oxygen Prediction Method Based on Neural Network

    Directory of Open Access Journals (Sweden)

    Zhong Xiao

    2017-01-01

    Full Text Available The dissolved oxygen (DO) is the oxygen dissolved in water, which is an important factor for aquaculture. A BP neural network method combining the purelin, logsig, and tansig activation functions is proposed for the prediction of aquaculture dissolved oxygen. The input layer, hidden layer, and output layer are introduced in detail, including the weight adjustment process. Breeding data from three ponds over 10 consecutive days were used for the experiments; these ponds are located in Beihai, Guangxi, a traditional aquaculture base in southern China. The data of the first 7 days are used for training, and the data of the last 3 days are used for testing. Compared with common prediction models, curve fitting (CF), autoregression (AR), grey model (GM), and support vector machines (SVM), the experimental results show that the prediction accuracy of the neural network is the highest, and all the predicted values are within the 5% error limit, which can meet the needs of practical applications, followed by AR, GM, SVM, and CF. The prediction model can help to improve the water quality monitoring level of aquaculture, which will prevent the deterioration of water quality and the outbreak of disease.
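    A minimal version of the described BP network, one tansig hidden layer and a purelin output trained by backpropagation on a sliding window of readings, can be sketched as follows. The DO series here is synthetic stand-in data; the pond measurements in the paper are not public, and the layer sizes and learning rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# activation functions named as in the abstract (MATLAB-style)
def tansig(x):  return np.tanh(x)
def logsig(x):  return 1.0 / (1.0 + np.exp(-x))   # alternative hidden activation
def purelin(x): return x

# toy DO series (mg/L): smooth cycle plus sensor noise
series = 6.0 + 1.5*np.sin(np.arange(80)/4.0) + rng.normal(0, 0.05, 80)
X = np.array([series[i:i+3] for i in range(len(series)-3)])   # window of 3 readings
y = series[3:].reshape(-1, 1)                                 # next reading
Xn = (X - X.mean(0)) / X.std(0)                               # normalised inputs

# one hidden tansig layer, purelin output, trained by plain backpropagation
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, losses = 0.1, []
for epoch in range(3000):
    h = tansig(Xn @ W1 + b1)
    out = purelin(h @ W2 + b2)
    err = out - y
    losses.append(float((err**2).mean()))
    # gradients via the chain rule; tansig'(x) = 1 - tanh(x)**2
    dW2 = h.T @ err / len(Xn); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = Xn.T @ dh / len(Xn); db1 = dh.mean(0)
    W2 -= lr*dW2; b2 -= lr*db2; W1 -= lr*dW1; b1 -= lr*db1
```

The training loss falls well below the variance of the raw series, which is the sense in which such a network outperforms a constant or naive predictor.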

  5. PatchSurfers: Two methods for local molecular property-based binding ligand prediction.

    Science.gov (United States)

    Shin, Woong-Hee; Bures, Mark Gregory; Kihara, Daisuke

    2016-01-15

    Protein function prediction is an active area of research in computational biology. Function prediction can help biologists form hypotheses for characterizing genes and help interpret biological assays, and thus is a productive area for collaboration between experimental and computational biologists. Among various function prediction methods, predicting the binding ligand molecules for a target protein is an important class, because ligand binding events for a protein are usually closely intertwined with the protein's biological function, and because predicted binding ligands can often be tested directly by biochemical assays. Binding ligand prediction methods can be classified into two types: those based on protein-protein (or pocket-pocket) comparison, and those that compare a target pocket directly to ligands. Recently, our group proposed two computational binding ligand prediction methods, Patch-Surfer, which is a pocket-pocket comparison method, and PL-PatchSurfer, which compares a pocket to ligand molecules. The two programs apply surface patch-based descriptions to calculate similarity or complementarity between molecules. A surface patch is characterized by physicochemical properties such as shape, hydrophobicity, and electrostatic potential. These properties on the surface are represented using three-dimensional Zernike descriptors (3DZD), which are based on a series expansion of a three-dimensional function. Utilizing 3DZD for describing the physicochemical properties has two main advantages: (1) rotational invariance and (2) fast comparison. Here, we introduce Patch-Surfer and PL-PatchSurfer with an emphasis on PL-PatchSurfer, which was developed more recently. Illustrative examples of PL-PatchSurfer's performance on binding ligand prediction as well as virtual drug screening are also provided. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. A method of predicting the reliability of CDM coil insulation

    International Nuclear Information System (INIS)

    Kytasty, A.; Ogle, C.; Arrendale, H.

    1992-01-01

    This paper presents a method of predicting the reliability of the Collider Dipole Magnet (CDM) coil insulation design. The method proposes a probabilistic treatment of electrical test data, stress analysis, material property variability, and loading uncertainties to give a reliability estimate. The approach taken to predict the reliability of design-related failure modes of the CDM is to form analytical models of the various possible failure modes and their related mechanisms or causes, and then statistically assess the contributions of the various contributing variables. The probability of a failure mode occurring is interpreted as the number of times one would expect certain extreme situations to combine and randomly occur. One of the more complex failure modes of the CDM is used to illustrate this methodology

  7. Drug-Target Interactions: Prediction Methods and Applications.

    Science.gov (United States)

    Anusuya, Shanmugam; Kesherwani, Manish; Priya, K Vishnu; Vimala, Antonydhason; Shanmugam, Gnanendra; Velmurugan, Devadasan; Gromiha, M Michael

    2018-01-01

    Identifying the interactions between drugs and target proteins is a key step in drug discovery. This not only aids in understanding the disease mechanism, but also helps to identify unexpected therapeutic activity or adverse side effects of drugs. Hence, drug-target interaction prediction has become an essential tool in the field of drug repurposing. The availability of heterogeneous biological data on known drug-target interactions has enabled many researchers to develop various computational methods to decipher unknown drug-target interactions. This review provides an overview of these computational methods for predicting drug-target interactions, along with available web servers and databases for drug-target interactions. Further, the applicability of drug-target interactions in various diseases for identifying lead compounds is outlined. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.

  8. Risk prediction, safety analysis and quantitative probability methods - a caveat

    International Nuclear Information System (INIS)

    Critchley, O.H.

    1976-01-01

    Views are expressed on the use of quantitative techniques for the determination of value judgements in nuclear safety assessments, hazard evaluation, and risk prediction. Caution is urged when attempts are made to quantify value judgements in the field of nuclear safety. Criteria are given for the meaningful application of reliability methods, but doubts are expressed about their application to safety analysis, risk prediction, and design guidance for experimental or prototype plant. Doubts are also expressed about some concomitant methods of population dose evaluation. The complexity of new nuclear power plant designs makes the problem of safety assessment more difficult, but some possible approaches are suggested as alternatives to the quantitative techniques criticized. (U.K.)

  9. Water hammer prediction and control: the Green's function method

    Science.gov (United States)

    Xuan, Li-Jun; Mao, Feng; Wu, Jie-Zhi

    2012-04-01

    Using the Green's function method, we show that the water hammer (WH) can be analytically predicted for both laminar and turbulent flows (for the latter, with an eddy viscosity depending solely on the space coordinates), and thus its hazardous effect can be rationally controlled and minimized. To this end, we generalize the laminar water hammer equation of Wang et al. (J. Hydrodynamics, B2, 51, 1995) to include an arbitrary initial condition and variable viscosity, and obtain its solution by the Green's function method. The characteristic WH behaviors predicted by the solutions are in excellent agreement with both direct numerical simulation of the original governing equations and, by adjusting the eddy viscosity coefficient, experimentally measured turbulent flow data. An optimal WH control principle is thereby constructed and demonstrated.

  10. Testing and intercomparison of model predictions of radionuclide migration from a hypothetical area source

    International Nuclear Information System (INIS)

    O'Brien, R.S.; Yu, C.; Zeevaert, T.; Olyslaegers, G.; Amado, V.; Setlow, L.W.; Waggitt, P.W.

    2008-01-01

    This work was carried out as part of the International Atomic Energy Agency's EMRAS program. One aim of the work was to develop scenarios for testing computer models designed to simulate radionuclide migration in the environment, and to use these scenarios for testing the models and comparing predictions from different models. This paper presents the results of the development and testing of a hypothetical area source of NORM waste/residue using two complex computer models and one screening model. There are significant differences between the complex models in the methods used to model groundwater flow. The hypothetical source was used because of its relative simplicity and because of difficulties encountered in finding comprehensive, well-validated data sets for real sites. The source consisted of a simple repository of uniform thickness, with 1 Bq g⁻¹ of uranium-238 (²³⁸U) (in secular equilibrium with its decay products) distributed uniformly throughout the waste. This approximates real situations such as engineered repositories, waste rock piles, tailings piles, and landfills. Specification of the site also included the physical layout, vertical stratigraphic details, soil type for each layer of material, precipitation and runoff details, groundwater flow parameters, and meteorological data. Calculations were carried out with and without a cover layer of clean soil above the waste, for people working and living at different locations relative to the waste. The predictions of the two complex models showed several differences which need more detailed examination. The scenario is available for testing by other modelers. It can also be used as a planning tool for remediation work or for repository design, by changing the scenario parameters and running the models for a range of different inputs. Further development will include applying the models to real scenarios and integrating environmental impact assessment methods with the safety assessment tools currently

  11. River Flow Prediction Using the Nearest Neighbor Probabilistic Ensemble Method

    Directory of Open Access Journals (Sweden)

    H. Sanikhani

    2016-02-01

    Full Text Available Introduction: In recent years, researchers have become interested in probabilistic forecasting of hydrologic variables such as river flow. A probabilistic approach aims at quantifying prediction reliability through a probability distribution function or a prediction interval for the unknown future value. Evaluation of the uncertainty associated with the forecast is seen as fundamental information, not only to correctly assess the prediction, but also to compare forecasts from different methods and to evaluate actions and decisions conditioned on the expected values. Several probabilistic approaches have been proposed in the literature, including (1) methods that use resampling techniques to assess parameter and model uncertainty, such as the Metropolis algorithm or the Generalized Likelihood Uncertainty Estimation (GLUE) methodology applied to runoff prediction, (2) methods based on processing the forecast errors of past data to produce the probability distributions of future values, and (3) methods that evaluate how the uncertainty propagates from the rainfall forecast to the river discharge prediction, such as the Bayesian forecasting system. Materials and Methods: In this study, two different probabilistic methods are used for river flow prediction, and the uncertainty related to the forecast is quantified. One approach is based on linear predictors; in the other, nearest neighbors are used. The nonlinear probabilistic ensemble can be used for nonlinear time series analysis using locally linear predictors, while the NNPE utilizes a method adapted for one-step-ahead nearest neighbor forecasting. In this regard, daily river discharge records (twelve years) of the Dizaj and Mashin stations on the Baranduz-Chay basin in West Azerbaijan province and the Zard-River basin in Khouzestan province were used, respectively. The first six years of data were applied for fitting the model. The next three years were used for calibration, and the remaining three years were utilized for testing the models
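    The nearest-neighbor probabilistic ensemble idea, forecasting the next discharge from the successors of the k most similar past flow patterns, can be sketched as follows. The discharge record below is synthetic, and the lag, k, and interval level are illustrative choices, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic daily discharge: seasonal signal plus noise, standing in for a gauge record
t = np.arange(4000)
q = 50.0 + 30.0*np.sin(2*np.pi*t/365.0) + rng.normal(0, 2.0, t.size)

def build_library(series, lag=3):
    """Lag-vectors of the historical record and the value that followed each one."""
    hist = np.lib.stride_tricks.sliding_window_view(series[:-1], lag)
    return hist, series[lag:]

def knn_ensemble_forecast(hist, succ, state, k=25):
    """One-step-ahead forecast: successors of the k nearest past states
    form the predictive ensemble (mean plus a 90% interval)."""
    d = np.linalg.norm(hist - state, axis=1)
    idx = np.argpartition(d, k)[:k]
    ens = succ[idx]
    return ens.mean(), np.percentile(ens, [5, 95])

hist, succ = build_library(q[:3600])          # library from the training period only
hits, abs_err = 0, []
for i in range(3600, 3995):                   # hold-out evaluation
    mean, (lo, hi) = knn_ensemble_forecast(hist, succ, q[i-3:i])
    abs_err.append(abs(mean - q[i]))
    hits += (lo <= q[i] <= hi)
coverage = hits / len(abs_err)                # fraction of observations inside the interval
mae = float(np.mean(abs_err))
```

The empirical coverage of the 90% interval is the kind of reliability check a probabilistic forecast is judged by.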

  12. Improving protein function prediction methods with integrated literature data

    Directory of Open Access Journals (Sweden)

    Gabow Aaron P

    2008-04-01

    Full Text Available Abstract Background Determining the function of uncharacterized proteins is a major challenge in the post-genomic era due to the problem's complexity and scale. Identifying a protein's function contributes to an understanding of its role in the involved pathways, its suitability as a drug target, and its potential for protein modifications. Several graph-theoretic approaches predict unidentified functions of proteins by using the functional annotations of better-characterized proteins in protein-protein interaction networks. We systematically consider the use of literature co-occurrence data, introduce a new method for quantifying the reliability of co-occurrence, and test how performance differs across species. We also quantify changes in performance as the prediction algorithms annotate with increased specificity. Results We find that including information on the co-occurrence of proteins within an abstract greatly boosts the performance of the Functional Flow graph-theoretic function prediction algorithm in yeast, fly, and worm. This increase in performance is not simply due to the presence of additional edges, since supplementing protein-protein interactions with co-occurrence data outperforms supplementing with a comparably sized genetic interaction dataset. Through the combination of protein-protein interactions and co-occurrence data, the neighborhood around unknown proteins is quickly connected to well-characterized nodes, which global prediction algorithms can exploit. Our method for quantifying co-occurrence reliability shows superior performance to the other methods, particularly at threshold values around 10%, which yield the best trade-off between coverage and accuracy. In contrast, the traditional way of asserting co-occurrence, when at least one abstract mentions both proteins, proves to be the worst method for generating co-occurrence data, introducing too many false positives. Annotating the functions with greater specificity is harder

  13. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  14. Comparison of RF spectrum prediction methods for dynamic spectrum access

    Science.gov (United States)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
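    A reduced version of this comparison, a first-order Markov predictor trained on occupancy simulated as an alternating renewal process with Poisson holding times, can be sketched as follows. The holding-time parameters are illustrative; the paper's HMM and neural network models are considerably more elaborate:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_occupancy(n, mean_busy=8.0, mean_idle=4.0):
    """Alternating renewal process: Poisson-distributed busy/idle run lengths (slots)."""
    state, out = 0, []
    while len(out) < n:
        mean = mean_busy if state else mean_idle
        out += [state] * max(1, int(rng.poisson(mean)))
        state ^= 1                      # toggle between idle (0) and busy (1)
    return np.array(out[:n])

occ = simulate_occupancy(20000)
train, test = occ[:15000], occ[15000:]

# first-order Markov model of the channel state: P[s' | s] from transition counts
counts = np.zeros((2, 2))
np.add.at(counts, (train[:-1], train[1:]), 1)
P = counts / counts.sum(axis=1, keepdims=True)

# predict the most probable next state and score it on the held-out slots
pred = P[test[:-1]].argmax(axis=1)
acc = float((pred == test[1:]).mean())
```

Because the simulated holding times are several slots long, the learned model essentially predicts "stay in the current state", and its accuracy reflects how rarely the channel switches; richer predictors earn their keep when the occupancy statistics are less stationary.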

  15. A comparison of methods to predict historical daily streamflow time series in the southeastern United States

    Science.gov (United States)

    Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.

    2015-01-01

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB).

  16. Methods for predicting isochronous stress-strain curves

    International Nuclear Information System (INIS)

    Kiyoshige, Masanori; Shimizu, Shigeki; Satoh, Keisuke.

    1976-01-01

    Isochronous stress-strain curves show the relation between stress and total strain at a given temperature, with time as a parameter; they are constructed from creep test results obtained at various stress levels at a fixed temperature. The concept of isochronous stress-strain curves was proposed by McVetty in the 1930s and has been used for the design of aero-engines. Recently, the high-temperature characteristics of materials have been presented as isochronous stress-strain curves in design guides for nuclear equipment and structures operating in the high-temperature creep region. It is prescribed that these curves be used as criteria for determining the design stress intensity or as data for analyzing the superposed effects of creep and fatigue. For isochronous stress-strain curves used in the design of nuclear equipment with very long service lives, it is impractical to determine the curves directly from long-time creep tests; accordingly, a method of predicting long-time stress-strain curves from short-time creep test results must be established. The method proposed by the authors, which uses creep constitutive equations taking the first and second creep stages into account, and the method using the Larson-Miller parameter were studied, and both methods were found to be reliable for the prediction. (Kako, I.)
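
    The Larson-Miller extrapolation mentioned above can be sketched numerically. The parameter constant C ≈ 20 and the test values below are illustrative assumptions, not data from the paper:

```python
import math

def larson_miller(temp_K, rupture_time_h, C=20.0):
    """Larson-Miller parameter: LMP = T * (C + log10(t_r))."""
    return temp_K * (C + math.log10(rupture_time_h))

def predicted_rupture_time(temp_K, lmp, C=20.0):
    """Invert the parameter to estimate rupture time at another temperature."""
    return 10.0 ** (lmp / temp_K - C)

# A short-time creep test: rupture after 1,000 h at 900 K (illustrative)
lmp = larson_miller(900.0, 1000.0)        # 900 * (20 + 3) = 20700

# The same LMP (i.e., the same stress level) at a lower service temperature
# of 850 K predicts a much longer rupture time: this is the extrapolation
# from short-time tests to long service lives.
t_service = predicted_rupture_time(850.0, lmp)
```

    The same constant-LMP assumption underlies tabulated isochronous curves: each short-time test point maps to a predicted long-time point at the service temperature.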

  17. Convergence on the Prediction of Ice Particle Mass and Projected Area in Ice Clouds

    Science.gov (United States)

    Mitchell, D. L.

    2013-12-01

    Ice particle mass- and area-dimensional power law (henceforth m-D and A-D) relationships are building blocks for formulating microphysical processes and optical properties in cloud and climate models, and they are critical for ice cloud remote sensing algorithms, affecting retrieval accuracy. They can be estimated by (1) directly measuring the sizes, masses and areas of individual ice particles at ground level and (2) using aircraft probes to simultaneously measure the ice water content (IWC) and ice particle size distribution. A third, indirect method is to use observations from method 1 to develop an m-A relationship representing mean conditions in ice clouds. Owing to a tighter correlation (relative to m-D data), this m-A relationship can be used to estimate m from aircraft probe measurements of A. This has the advantage of estimating m at small sizes, down to 10 μm using the 2D-S (stereo) probe. In this way, 2D-S measurements of maximum dimension D can be related to corresponding estimates of m to develop ice-cloud-type and temperature dependent m-D expressions. However, these expressions are no longer linear in log-log space, but are slowly varying curves covering most of the size range of natural ice particles. This work compares all three of the above methods and demonstrates close agreement between them. Regarding (1), 4869 ice particles and corresponding melted hemispheres were measured during a field campaign to obtain D and m. Selecting only those unrimed habits that formed between -20°C and -40°C, the mean mass values for selected size intervals are within 35% of the corresponding masses predicted by the Method 3 curve based on a similar temperature range. Moreover, the most recent m-D expression based on Method 2 differs by no more than 50% from the m-D curve from Method 3. Method 3 appears to be the most accurate over the observed ice particle size range (10-4000 μm). An m-D/A-D scheme was developed by which self-consistent m-D and A-D power laws
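
    A single m-D power law is a straight line in log-log space, which is why the slowly varying Method 3 curves cannot be captured by one power law. A minimal sketch with placeholder coefficients (the values of a and b below are illustrative, not the paper's fits):

```python
import math

def power_law_mass(D_um, a=2.0e-3, b=2.1):
    """m = a * D**b; a and b are placeholder coefficients for illustration."""
    return a * D_um ** b

# In log-log space the power law is a straight line with slope b:
# log10(m) = log10(a) + b * log10(D)
D1, D2 = 100.0, 1000.0
slope = (math.log10(power_law_mass(D2)) - math.log10(power_law_mass(D1))) \
        / (math.log10(D2) - math.log10(D1))
```

    A curved m-D relationship, by contrast, would give a slope that changes with D, so it must be represented piecewise or as a smoothly varying function.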

  18. RSARF: Prediction of residue solvent accessibility from protein sequence using random forest method

    KAUST Repository

    Ganesan, Pugalenthi; Kandaswamy, Krishna Kumar Umar; Chou, Kuo-Chen; Vivekanandan, Saravanan; Kolatkar, Prasanna R.

    2012-01-01

    Prediction of protein structure from its amino acid sequence is still a challenging problem. A complete physicochemical understanding of protein folding is essential for accurate structure prediction. Knowledge of residue solvent accessibility gives useful insights into protein structure prediction and function prediction. In this work, we propose a random forest method, RSARF, to predict residue accessible surface area from protein sequence information. The training and testing were performed using 120 proteins containing 22006 residues. For each residue, the buried or exposed state was computed at five thresholds (0%, 5%, 10%, 25%, and 50%). The prediction accuracies for the 0%, 5%, 10%, 25%, and 50% thresholds are 72.9%, 78.25%, 78.12%, 77.57% and 72.07% respectively. Further, comparison of RSARF with other methods on a benchmark dataset containing 20 proteins shows that our approach is useful for predicting residue solvent accessibility from protein sequence without using structural information. The RSARF program, datasets and supplementary data are available at http://caps.ncbs.res.in/download/pugal/RSARF/.
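
    The two-state labelling at a relative-accessibility threshold, and the accuracy metric reported above, can be sketched as follows. The RSA values and predictions below are invented for illustration:

```python
def label_states(rsa_values, threshold):
    """Two-state labelling: a residue is 'exposed' if its relative
    solvent accessibility (RSA) exceeds the threshold, else 'buried'."""
    return ["exposed" if rsa > threshold else "buried" for rsa in rsa_values]

def two_state_accuracy(predicted, observed):
    """Fraction of residues whose predicted state matches the observed state."""
    return sum(p == o for p, o in zip(predicted, observed)) / len(observed)

# Hypothetical relative accessibilities (fraction of maximum ASA) for 6 residues
rsa = [0.02, 0.30, 0.08, 0.55, 0.12, 0.41]
observed = label_states(rsa, 0.25)     # ground truth at the 25% threshold
predicted = ["buried", "exposed", "exposed", "exposed", "buried", "exposed"]
acc = two_state_accuracy(predicted, observed)
```

    Repeating this at each threshold (0%, 5%, 10%, 25%, 50%) yields the per-threshold accuracies quoted in the abstract.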

  19. A Lifetime Prediction Method for LEDs Considering Real Mission Profiles

    DEFF Research Database (Denmark)

    Qu, Xiaohui; Wang, Huai; Zhan, Xiaoqing

    2017-01-01

    The Light-Emitting Diode (LED) has become a very promising alternative lighting source, with the advantages of longer lifetime and higher efficiency than traditional ones. The lifetime prediction of LEDs is important to guide LED system designers in fulfilling the design specifications. Significant lifetime discrepancies may be observed in field operations due to the varying operational and environmental conditions during the entire service time (i.e., mission profiles). To overcome the challenge, this paper proposes an advanced lifetime prediction method, which takes into account the field operation mission profiles and also the statistical properties of the life data available from accelerated degradation testing. The electrical and thermal characteristics of LEDs are measured by a T3Ster system, used for the electro-thermal modeling. It also identifies key variables (e.g., heat sink parameters) that can be designed to achieve a specified...

  20. Long-Term Prediction of Satellite Orbit Using Analytical Method

    Directory of Open Access Journals (Sweden)

    Jae-Cheol Yoon

    1997-12-01

    Full Text Available A long-term prediction algorithm for geostationary orbit was developed using an analytical method. The perturbation force models include the geopotential up to degree and order five, luni-solar gravitation, and solar radiation pressure. All of the perturbation effects were analyzed in terms of secular, short-period, and long-period variations of the equinoctial elements: the semi-major axis, eccentricity vector, inclination vector, and mean longitude of the satellite. Results of the analytical orbit propagator were compared with those of a Cowell orbit propagator for KOREASAT. The comparison indicated that the analytical solution could predict the semi-major axis with an accuracy of better than ~35 meters over a period of 3 months.

  1. Prediction of Chloride Diffusion in Concrete Structure Using Meshless Methods

    Directory of Open Access Journals (Sweden)

    Ling Yao

    2016-01-01

    Full Text Available Degradation of RC structures due to chloride penetration followed by reinforcement corrosion is a serious problem in civil engineering. Current numerical simulation methods mainly involve finite element methods (FEM), which are based on mesh generation. In this study, element-free Galerkin (EFG) and meshless weighted least squares (MWLS) methods are used to simulate chloride diffusion in concrete. The range of a scaling parameter is presented using numerical examples based on meshless methods. One- and two-dimensional numerical examples validate the effectiveness and accuracy of the two meshless methods: results obtained by MWLS agree well with those computed by EFG and FEM and with an analytical solution. Good agreement is also obtained between the MWLS and EFG numerical simulations and experimental data from an existing marine concrete structure. These results indicate that MWLS and EFG are reliable meshless methods that can be used for the prediction of chloride ingress in concrete structures.
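
    The analytical solution that such simulations are typically checked against is the error-function solution of Fick's second law for a semi-infinite medium with a constant surface concentration. A minimal sketch; the diffusion coefficient and surface concentration below are illustrative values, not the paper's:

```python
import math

def chloride_concentration(x, t, C_s, D):
    """C(x, t) = C_s * erfc(x / (2 * sqrt(D * t))): Fick's second law for a
    semi-infinite medium, constant surface concentration C_s, zero initial
    chloride content. x in metres, t in seconds, D in m^2/s."""
    return C_s * math.erfc(x / (2.0 * math.sqrt(D * t)))

D = 1.0e-12                       # apparent diffusion coefficient (illustrative)
C_s = 0.50                        # surface chloride content, % by binder mass
t = 10 * 365.25 * 24 * 3600.0     # 10 years of exposure, in seconds

# concentration profile at 0, 1, 2, ... 7 cm depth
profile = [chloride_concentration(x_cm / 100.0, t, C_s, D) for x_cm in range(8)]
```

    Comparing a meshless or FEM solution against this profile is a standard one-dimensional validation before moving to two-dimensional geometries.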

  2. Development of the interfacial area concentration measurement method using a five sensor conductivity probe

    International Nuclear Information System (INIS)

    Euh, Dong Jin; Yun, Byong Jo; Song, Chul Hwa; Kwon, Tae Soon; Chung, Moon Ki; Lee, Un Chul

    2000-01-01

    The interfacial area concentration (IAC) is one of the most important parameters in the two-fluid model for two-phase flow analysis. The IAC can be measured by a local conductivity probe method that uses the difference in conductivity between water and air/steam. The number of sensors in the conductivity probe may be chosen according to the flow regime of the two-phase flow. The four-sensor conductivity probe method predicts the IAC without any assumptions about bubble shape. The local IAC can be obtained by measuring the three-dimensional velocity vector elements at the measuring point and the directional cosines of the sensors. The five-sensor conductivity probe method proposed in this study is based on the four-sensor probe method. With the five-sensor probe, the local IAC for a given measuring area of the probe can be predicted more accurately than with the four-sensor probe. In this paper, the mathematical approach of the five-sensor probe method for measuring the IAC is described, and a numerical simulation is carried out for ideal cap bubbles whose sizes and locations are determined by a random number generator.

  3. Amazon forest carbon dynamics predicted by profiles of canopy leaf area and light environment

    Science.gov (United States)

    S. C. Stark; V. Leitold; J. L. Wu; M. O. Hunter; C. V. de Castilho; F. R. C. Costa; S. M. McMahon; G. G. Parker; M. Takako Shimabukuro; M. A. Lefsky; M. Keller; L. F. Alves; J. Schietti; Y. E. Shimabukuro; D. O. Brandao; T. K. Woodcock; N. Higuchi; P. B de Camargo; R. C. de Oliveira; S. R. Saleska

    2012-01-01

    Tropical forest structural variation across heterogeneous landscapes may control above-ground carbon dynamics. We tested the hypothesis that canopy structure (leaf area and light availability) – remotely estimated from LiDAR – control variation in above-ground coarse wood production (biomass growth). Using a statistical model, these factors predicted biomass growth...

  4. Bicycle Frame Prediction Techniques with Fuzzy Logic Method

    Directory of Open Access Journals (Sweden)

    Rafiuddin Syam

    2015-03-01

    Full Text Available In general, an appropriately sized bike frame gives comfort to the rider while biking. This study aims to build a simulation system that predicts bike frame sizes using fuzzy logic. The testing method used is a simulation test. In this study, fuzzy logic is simulated in the Matlab language to test its performance. The Mamdani fuzzy inference uses three input variables and one output variable, with triangular membership functions for the inputs and output. The controller is designed as a Mamdani type with max-min composition, and defuzzification uses the center-of-gravity method. The results showed that height, inseam, and crank size generate a frame size appropriate for the rider's comfort. Height ranges between 142 cm and 201 cm, inseam between 64 cm and 97 cm, and crank size between 175 mm and 180 mm. The simulated frame sizes range between 13 inches and 22 inches. Fuzzy logic can thus be used to predict a bicycle frame size suitable for the rider.
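
    The Mamdani pipeline described (triangular memberships, max-min composition, centre-of-gravity defuzzification) can be sketched in a few lines. The membership ranges and the two rules below are invented for illustration; the paper's actual fuzzy sets are not given in the abstract:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# discretized universe of frame sizes, 13..22 inches (the abstract's range)
sizes = [s / 10.0 for s in range(130, 221)]

def frame_size(height_cm, inseam_cm):
    # hypothetical input fuzzy sets over the stated height/inseam ranges
    short = tri(height_cm, 140, 150, 170)
    tall  = tri(height_cm, 160, 190, 202)
    low   = tri(inseam_cm, 63, 70, 82)
    high  = tri(inseam_cm, 78, 90, 98)
    # two illustrative Mamdani rules; antecedents combined with min
    rules = [(min(short, low), (13, 15, 17)),   # -> small frame
             (min(tall, high), (18, 20, 22))]   # -> large frame
    # max-min composition: clip each consequent set, aggregate with max
    agg = [max(min(w, tri(s, *out)) for w, out in rules) for s in sizes]
    num = sum(m * s for m, s in zip(agg, sizes))
    den = sum(agg)
    return num / den if den else None           # centre of gravity
```

    A shorter rider then defuzzifies to a smaller frame than a taller one, mirroring the behaviour the abstract describes.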

  5. Bicycle Frame Prediction Techniques with Fuzzy Logic Method

    Directory of Open Access Journals (Sweden)

    Rafiuddin Syam

    2017-03-01

    Full Text Available In general, an appropriately sized bike frame gives comfort to the rider while biking. This study aims to build a simulation system that predicts bike frame sizes using fuzzy logic. The testing method used is a simulation test. In this study, fuzzy logic is simulated in the Matlab language to test its performance. The Mamdani fuzzy inference uses three input variables and one output variable, with triangular membership functions for the inputs and output. The controller is designed as a Mamdani type with max-min composition, and defuzzification uses the center-of-gravity method. The results showed that height, inseam, and crank size generate a frame size appropriate for the rider's comfort. Height ranges between 142 cm and 201 cm, inseam between 64 cm and 97 cm, and crank size between 175 mm and 180 mm. The simulated frame sizes range between 13 inches and 22 inches. Fuzzy logic can thus be used to predict a bicycle frame size suitable for the rider.

  6. Predicting forested catchment evapotranspiration and streamflow from stand sapwood area and Aridity Index

    Science.gov (United States)

    Lane, Patrick

    2016-04-01

    Estimating the water balance of ungauged catchments has been the subject of decades of research. An extension of the fundamental problem of estimating the hydrology is then understanding how changes in catchment attributes affect the water balance components. This is a particular issue in forest hydrology, where vegetation exerts such a strong influence on evapotranspiration (ET) and consequent streamflow (Q). Given the primacy of trees in the water balance, and the potential for change to species and density through logging, fire, pests and diseases, and drought, methods that directly relate ET/Q to vegetation structure, species, and stand density are very powerful. Plot studies of tree water use routinely use sapwood area (SA) to calculate transpiration and upscale to the stand/catchment scale. Recent work in south-eastern Australian forests has found stand-wide SA to be linearly correlated (R2 = 0.89) with long-term mean annual loss (P-Q), and hence with long-term mean annual catchment streamflow. Robust relationships can be built between basal area (BA), tree density, and stand SA; BA and density are common forest inventory measurements. Until now, no research has related the fundamental stand attribute of SA to streamflow. The data sets include catchments that have been thinned and catchments with varying age classes. Thus far these analyses have been for energy-limited systems in wetter forest types. SA has proven to be a more robust biometric than leaf area index, which varies seasonally. That long-term ET/Q is correlated with vegetation conforms to the Budyko framework. Use of a downscaled (20 m) Aridity Index (AI) has shown distinct correlations with stand SA, and therefore T. Structural patterns at the hillslope scale not only correlate with SA and T, but also with interception (I) and forest floor evaporation (Es). These correlations between AI and I and Es have given R2 > 0.8. The results of these studies suggest an ability to estimate mean annual ET fluxes at sub
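
    The core relationship described, a linear fit of long-term mean annual loss (P - Q) against stand sapwood area, can be sketched with ordinary least squares. The catchment values below are invented for illustration, not the study's data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# hypothetical catchments: stand sapwood area index vs mean annual loss P-Q (mm)
sapwood = [2.0, 3.5, 5.0, 6.5, 8.0]
loss    = [620, 700, 790, 860, 950]
a, b = fit_line(sapwood, loss)

def predict_streamflow(P_mm, sapwood_area):
    """Long-term mean annual streamflow Q = P - (fitted loss)."""
    return P_mm - (a + b * sapwood_area)
```

    With such a fit, thinning (which reduces stand sapwood area) translates directly into a predicted increase in mean annual streamflow.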

  7. Alternative Testing Methods for Predicting Health Risk from Environmental Exposures

    Directory of Open Access Journals (Sweden)

    Annamaria Colacci

    2014-08-01

    Full Text Available Alternative methods to animal testing are considered promising tools to support the prediction of toxicological risks from environmental exposure. Among the alternative testing methods, the cell transformation assay (CTA) appears to be one of the most appropriate approaches to predict the carcinogenic properties of single chemicals, complex mixtures and environmental pollutants. The BALB/c 3T3 CTA shows a good degree of concordance with the in vivo rodent carcinogenesis tests. Whole-genome transcriptomic profiling is performed to identify genes that are transcriptionally regulated by different kinds of exposures. Its use in cell models representative of target organs may help in understanding the mode of action and predicting the risk for human health. Aiming to associate environmental exposure with adverse health outcomes, we used an integrated approach including the 3T3 CTA and transcriptomics on target cells in order to evaluate the effects of airborne particulate matter (PM) on complex toxicological endpoints. Organic extracts obtained from PM2.5 and PM1 samples were evaluated in the 3T3 CTA in order to identify effects possibly associated with different aerodynamic diameters or airborne chemical components. The effects of the PM2.5 extracts on human health were assessed by using whole-genome 44 K oligo-microarray slides. Statistical analysis by GeneSpring GX identified genes whose expression was modulated in response to the cell treatment. Then, modulated genes were associated with pathways, biological processes and diseases through an extensive biological analysis. Data derived from in vitro methods and omics techniques could be valuable for monitoring exposure to toxicants, understanding the modes of action via exposure-associated gene expression patterns, and highlighting the role of genes in key events related to adversity.

  8. Application of statistical classification methods for predicting the acceptability of well-water quality

    Science.gov (United States)

    Cameron, Enrico; Pilla, Giorgio; Stella, Fabio A.

    2018-01-01

    The application of statistical classification methods is investigated—in comparison also to spatial interpolation methods—for predicting the acceptability of well-water quality in a situation where an effective quantitative model of the hydrogeological system under consideration cannot be developed. In the example area in northern Italy, in particular, the aquifer is locally affected by saline water and the concentration of chloride is the main indicator of both saltwater occurrence and groundwater quality. The goal is to predict if the chloride concentration in a water well will exceed the allowable concentration so that the water is unfit for the intended use. A statistical classification algorithm achieved the best predictive performances and the results of the study show that statistical classification methods provide further tools for dealing with groundwater quality problems concerning hydrogeological systems that are too difficult to describe analytically or to simulate effectively.
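
    A minimal nearest-neighbour classifier of the kind compared in such studies can be sketched as follows. The well coordinates and labels are invented for illustration, and a library such as scikit-learn would normally be used in practice:

```python
import math

def knn_classify(train_xy, train_labels, query, k=3):
    """Majority vote among the k nearest training wells (Euclidean distance)."""
    nearest = sorted(range(len(train_xy)),
                     key=lambda i: math.dist(train_xy[i], query))[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# hypothetical well locations, labelled by whether chloride exceeds the limit
wells  = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["acceptable"] * 3 + ["unfit"] * 3
```

    Classifying "acceptable" vs "unfit" directly, rather than interpolating the chloride concentration itself, is the distinction the abstract draws between classification and spatial interpolation approaches.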

  9. Method for predicting peptide detection in mass spectrometry

    Science.gov (United States)

    Kangas, Lars [West Richland, WA; Smith, Richard D [Richland, WA; Petritis, Konstantinos [Richland, WA

    2010-07-13

    A method of predicting whether a peptide present in a biological sample will be detected by analysis with a mass spectrometer. The method uses at least one mass spectrometer to perform repeated analysis of a sample containing peptides from proteins with known amino acids. The method then generates a data set of peptides identified as contained within the sample by the repeated analysis. The method then calculates the probability that a specific peptide in the data set was detected in the repeated analysis. The method then creates a plurality of vectors, where each vector has a plurality of dimensions, and each dimension represents a property of one or more of the amino acids present in each peptide and adjacent peptides in the data set. Using these vectors, the method then generates an algorithm from the plurality of vectors and the calculated probabilities that specific peptides in the data set were detected in the repeated analysis. The algorithm is thus capable of calculating the probability that a hypothetical peptide represented as a vector will be detected by a mass spectrometry based proteomic platform, given that the peptide is present in a sample introduced into a mass spectrometer.

  10. A lifetime prediction method for LEDs considering mission profiles

    DEFF Research Database (Denmark)

    Qu, Xiaohui; Wang, Huai; Zhan, Xiaoqing

    2016-01-01

    ...and to benchmark the cost-competitiveness of different lighting technologies. The existing lifetime data released by LED manufacturers or standard organizations are usually applicable only for specific temperature and current levels. Significant lifetime discrepancies may be observed in field operations due to the varying operational and environmental conditions during the entire service time (i.e., mission profiles). To overcome the challenge, this paper proposes an advanced lifetime prediction method, which takes into account the field operation mission profiles and the statistical properties of the life data...

  11. Predicted tyre-soil interface area and vertical stress distribution based on loading characteristics

    DEFF Research Database (Denmark)

    Schjønning, Per; Stettler, M.; Keller, Thomas

    2015-01-01

    The upper boundary condition for all models simulating stress patterns throughout the soil profile is the stress distribution at the tyre-soil interface. The so-called FRIDA model (Schjønning et al., 2008. Biosyst. Eng. 99, 119-133) treats the contact area as a superellipse and has been shown... ...of the actual to recommended inflation pressure ratio. We found that VT and Kr accounted for nearly all variation in the data with respect to the contact area. The contact area width was accurately described by a combination of tyre width and Kr, while the superellipse squareness parameter, n, diminished slightly with increasing Kr. Estimated values of the contact area length related to observed data with a standard deviation of about 0.06 m. A difference between traction and implement tyres called for separate prediction equations, especially for the contact area. The FRIDA parameters α and β, reflecting...
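
    A superellipse contact patch has a closed-form area once the semi-axes and the squareness parameter n are known, which is what makes it a convenient upper boundary condition. A sketch; the tyre dimensions below are illustrative, not from the study:

```python
import math

def superellipse_area(a, b, n):
    """Area enclosed by |x/a|**n + |y/b|**n = 1 (semi-axes a, b, squareness n):
    A = 4*a*b * Gamma(1 + 1/n)**2 / Gamma(1 + 2/n)."""
    g = math.gamma
    return 4.0 * a * b * g(1.0 + 1.0 / n) ** 2 / g(1.0 + 2.0 / n)

# n = 2 recovers the ellipse area pi*a*b; larger n approaches the 4ab rectangle
ellipse_like = superellipse_area(0.25, 0.20, 2.0)   # half-length 0.25 m, half-width 0.20 m
squarish     = superellipse_area(0.25, 0.20, 6.0)   # between ellipse and rectangle
```

    A squareness parameter that shrinks with increasing inflation pressure ratio, as reported above, thus shifts the patch from rectangle-like toward ellipse-like and reduces the contact area for the same axes.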

  12. SOFTWARE EFFORT PREDICTION: AN EMPIRICAL EVALUATION OF METHODS TO TREAT MISSING VALUES WITH RAPIDMINER ®

    OpenAIRE

    OLGA FEDOTOVA; GLADYS CASTILLO; LEONOR TEIXEIRA; HELENA ALVELOS

    2011-01-01

    Missing values are a common problem in data analysis in all areas, and software engineering is not an exception. In particular, missing data is a widespread phenomenon observed during the elaboration of effort prediction models (EPMs) required for budget, time and functionality planning. The current work presents the results of a study carried out in a Portuguese medium-sized software development organization in order to obtain a formal method for EPM elicitation in development processes. Thi...

  13. Methods for Finding Legacy Wells in Large Areas

    Energy Technology Data Exchange (ETDEWEB)

    Hammack, Richard W. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Veloski, Garret A. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Hodges, D. Greg [Fugro Airborne Surveys, Mississauga, ON (Canada); White, Jr., Curt M. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States)

    2016-06-16

    More than 10 million wells have been drilled during 150 years of oil and gas production in the United States. When abandoned, many wells were not adequately sealed and now provide a potential conduit for the vertical movement of liquids and gases. Today, groundwater aquifers can be contaminated by surface pollutants flowing down wells or by deep, saline water diffusing upwards. Likewise, natural gas, carbon dioxide (CO2), or radon can travel upwards via these wells to endanger structures or human health on the surface. Recently, the need to find and plug wells has become critical with the advent of carbon dioxide injection into geologic formations for enhanced oil recovery (EOR) or carbon storage. The potential for natural gas or brine leakage through existing wells has also been raised as a concern in regions where shale resources are hydraulically fractured for hydrocarbon recovery. In this study, the National Energy Technology Laboratory (NETL) updated existing, effective well finding techniques to be able to survey large areas quickly using helicopter or ground-vehicle-mounted magnetometers, combined with mobile methane detection. For this study, magnetic data were collected using airborne and ground vehicles equipped with two boom-mounted magnetometers, or on foot using a hand-held magnetometer with a single sensor. Data processing techniques were employed to accentuate well-casing-type magnetic signatures. To locate wells with no magnetic signature (wells where the steel well casing had been removed), the team monitored for anomalous concentrations of methane, which could indicate migration of volatile compounds from deeper sedimentary strata along a well or fracture pathway. Methane measurements were obtained using the ALPIS DIfferential Absorption Lidar (DIAL) sensor for helicopter surveys and the Apogee leak detection system (LDS) for ground surveys. These methods were evaluated at a 100-year-old oilfield in Wyoming, where a helicopter magnetic survey accurately located 93% of visible wells. In addition, 20% of the wells found by the survey were

  14. Methods for Finding Legacy Wells in Large Areas

    Energy Technology Data Exchange (ETDEWEB)

    Hammack, Richard [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Veloski, Garret [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Hodges, D. Greg [Fugro Airborne Surveys, Mississauga, ON (Canada); White, Jr., Charles E. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States)

    2016-06-16

    More than 10 million wells have been drilled during 150 years of oil and gas production in the United States. When abandoned, many wells were not adequately sealed and now provide a potential conduit for the vertical movement of liquids and gases. Today, groundwater aquifers can be contaminated by surface pollutants flowing down wells or by deep, saline water diffusing upwards. Likewise, natural gas, carbon dioxide (CO2), or radon can travel upwards via these wells to endanger structures or human health on the surface. Recently, the need to find and plug wells has become critical with the advent of carbon dioxide injection into geologic formations for enhanced oil recovery (EOR) or carbon storage. The potential for natural gas or brine leakage through existing wells has also been raised as a concern in regions where shale resources are hydraulically fractured for hydrocarbon recovery. In this study, the National Energy Technology Laboratory (NETL) updated existing, effective well finding techniques to be able to survey large areas quickly using helicopter or ground-vehicle-mounted magnetometers, combined with mobile methane detection. For this study, magnetic data were collected using airborne and ground vehicles equipped with two boom-mounted magnetometers, or on foot using a hand-held magnetometer with a single sensor. Data processing techniques were employed to accentuate well-casing-type magnetic signatures. To locate wells with no magnetic signature (wells where the steel well casing had been removed), the team monitored for anomalous concentrations of methane, which could indicate migration of volatile compounds from deeper sedimentary strata along a well or fracture pathway. Methane measurements were obtained using the ALPIS DIfferential Absorption Lidar (DIAL) sensor for helicopter surveys and the Apogee leak detection system (LDS) for ground surveys. These methods were evaluated at a 100-year-old oilfield in Wyoming, where a helicopter magnetic

  15. Prediction strategies in a TV recommender system - Method and experiments

    NARCIS (Netherlands)

    van Setten, M.J.; Veenstra, M.; van Dijk, Elisabeth M.A.G.; Nijholt, Antinus; Isaísas, P.; Karmakar, N.

    2003-01-01

    Predicting the interests of a user in information is an important process in personalized information systems. In this paper, we present a way to create prediction engines that allow prediction techniques to be easily combined into prediction strategies. Prediction strategies choose one or a

  16. Data Based Prediction of Blood Glucose Concentrations Using Evolutionary Methods.

    Science.gov (United States)

    Hidalgo, J Ignacio; Colmenar, J Manuel; Kronberger, Gabriel; Winkler, Stephan M; Garnica, Oscar; Lanchares, Juan

    2017-08-08

    Predicting glucose values on the basis of insulin and food intakes is a difficult task that people with diabetes need to do daily. This is necessary as it is important to maintain glucose levels at appropriate values to avoid not only short-term, but also long-term complications of the illness. Artificial intelligence in general, and machine learning techniques in particular, have already led to promising results in modeling and predicting glucose concentrations. In this work, several machine learning techniques are used for the modeling and prediction of glucose concentrations, using as inputs the values measured by a continuous glucose monitoring system as well as previous and estimated future carbohydrate intakes and insulin injections. In particular, we use the following four techniques: genetic programming, random forests, k-nearest neighbors, and grammatical evolution. We propose two new enhanced modeling algorithms for glucose prediction, namely (i) a variant of grammatical evolution which uses an optimized grammar, and (ii) a variant of tree-based genetic programming which uses a three-compartment model for carbohydrate and insulin dynamics. The predictors were trained and tested using data from ten patients of a public hospital in Spain. We analyze our experimental results using the Clarke error grid metric and see that 90% of the forecasts are correct (i.e., Clarke error categories A and B), but even the best methods still produce 5 to 10% serious errors (category D) and approximately 0.5% very serious errors (category E). We also propose an enhanced genetic programming algorithm that incorporates a three-compartment model into symbolic regression models to create smoothed time series of the original carbohydrate and insulin time series.
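
    The Clarke grid's zone A (clinically accurate) criterion used to score such forecasts is commonly stated as: a prediction falls in zone A if it is within 20% of the reference value, or if both reference and prediction are below 70 mg/dL. A sketch, with invented paired values:

```python
def in_zone_a(reference, predicted):
    """Clarke error grid zone A: within 20% of reference, or both hypoglycemic."""
    return (reference < 70 and predicted < 70) or \
           abs(predicted - reference) <= 0.2 * reference

def zone_a_fraction(refs, preds):
    """Fraction of forecasts falling in zone A."""
    return sum(in_zone_a(r, p) for r, p in zip(refs, preds)) / len(refs)

# hypothetical reference / predicted glucose pairs, mg/dL
refs  = [100.0, 200.0, 60.0, 150.0]
preds = [115.0, 150.0, 65.0, 165.0]
frac = zone_a_fraction(refs, preds)
```

    The remaining zones (B through E) depend on the clinical consequence of the error, not just its magnitude, which is why the grid is preferred over plain RMSE for glucose forecasting.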

  17. Decision tree methods: applications for classification and prediction.

    Science.gov (United States)

    Song, Yan-Yan; Lu, Ying

    2015-04-25

    Decision tree methodology is a commonly used data mining method for establishing classification systems based on multiple covariates or for developing prediction algorithms for a target variable. This method classifies a population into branch-like segments that construct an inverted tree with a root node, internal nodes, and leaf nodes. The algorithm is non-parametric and can efficiently deal with large, complicated datasets without imposing a complicated parametric structure. When the sample size is large enough, study data can be divided into training and validation datasets: the training dataset is used to build a decision tree model, and the validation dataset to decide on the appropriate tree size needed to achieve the optimal final model. This paper introduces frequently used algorithms for developing decision trees (including CART, C4.5, CHAID, and QUEST) and describes the SPSS and SAS programs that can be used to visualize tree structure.
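
    The core of CART-style tree growth is an exhaustive search for the split that minimizes weighted Gini impurity. A minimal sketch on a single covariate with invented toy data:

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Return (threshold, weighted impurity) for the best binary split x <= t."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left  = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# toy data: the covariate separates the two classes perfectly at x <= 3
t, score = best_split([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1])
```

    A full tree grows by applying this search recursively to each resulting segment, which is where the validation dataset comes in to prune the tree back to its optimal size.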

  18. Development of A Bayesian Geostatistical Data Assimilation Method and Application to the Hanford 300 Area

    Science.gov (United States)

    Murakami, Haruko

    Probabilistic risk assessment of groundwater contamination requires us to incorporate large and diverse datasets at the site into the stochastic modeling of flow and transport for prediction. In quantifying the uncertainty in our predictions, we must not only combine the best estimates of the parameters based on each dataset, but also integrate the uncertainty associated with each dataset caused by measurement errors and limited number of measurements. This dissertation presents a Bayesian geostatistical data assimilation method that integrates various types of field data for characterizing heterogeneous hydrological properties. It quantifies the parameter uncertainty as a posterior distribution conditioned on all the datasets, which can be directly used in stochastic simulations to compute possible outcomes of flow and transport processes. The goal of this framework is to remove the discontinuity between data analysis and prediction. Such a direct connection between data and prediction also makes it possible to evaluate the worth of each dataset or combined worth of multiple datasets. The synthetic studies described here confirm that the data assimilation method introduced in this dissertation successfully captures the true parameter values and predicted values within the posterior distribution. The shape of the inferred posterior distributions from the method indicates the importance of estimating the entire distribution in fully accounting for parameter uncertainty. The method is then applied to integrate multiple types of datasets at the Hanford 300 Area for characterizing a three-dimensional heterogeneous hydraulic conductivity field. 
Comparing the results based on different numbers or combinations of datasets shows that adding data does not always contribute in a straightforward way to improving the posterior distribution: increasing the number of measurements of the same data type is not necessarily beneficial above a certain number, and also the combined effect of

  19. Use of simplified methods for predicting natural resource damages

    International Nuclear Information System (INIS)

    Loreti, C.P.; Boehm, P.D.; Gundlach, E.R.; Healy, E.A.; Rosenstein, A.B.; Tsomides, H.J.; Turton, D.J.; Webber, H.M.

    1995-01-01

    To reduce transaction costs and save time, the US Department of the Interior (DOI) and the National Oceanic and Atmospheric Administration (NOAA) have developed simplified methods for assessing natural resource damages from oil and chemical spills. DOI has proposed the use of two computer models, the Natural Resource Damage Assessment Model for Great Lakes Environments (NRDAM/GLE) and a revised Natural Resource Damage Assessment Model for Coastal and Marine Environments (NRDAM/CME), for predicting monetary damages for spills of oils and chemicals into the Great Lakes and coastal and marine environments. NOAA has used versions of these models to create Compensation Formulas, which it has proposed for calculating natural resource damages for oil spills of up to 50,000 gallons anywhere in the US. Based on a review of the documentation supporting the methods, the results of hundreds of sample runs of DOI's models, and the outputs of the thousands of model runs used to create NOAA's Compensation Formulas, this presentation discusses the ability of these simplified assessment procedures to make realistic damage estimates. The limitations of these procedures are described, and the need for validating the assumptions used in predicting natural resource injuries is discussed.

  20. VAN method of short-term earthquake prediction shows promise

    Science.gov (United States)

    Uyeda, Seiya

    Although optimism prevailed in the 1970s, the present consensus on earthquake prediction appears to be quite pessimistic. However, short-term prediction based on geoelectric potential monitoring has stood the test of time in Greece for more than a decade [Varotsos and Kulhanek, 1993; Lighthill, 1996]. The method used is called the VAN method. The geoelectric potential changes constantly due to causes such as magnetotelluric effects, lightning, rainfall, leakage from manmade sources, and electrochemical instabilities of electrodes. All of this noise must be eliminated before preseismic signals are identified, if they exist at all. The VAN group apparently accomplished this task for the first time. They installed multiple short (100-200 m) dipoles with different lengths in both north-south and east-west directions and long (1-10 km) dipoles in appropriate orientations at their stations (one of their mega-stations, Ioannina, for example, now has 137 dipoles in operation) and found that practically all of the noise could be eliminated by applying a set of criteria to the data.

  1. Predictive ability of machine learning methods for massive crop yield prediction

    Directory of Open Access Journals (Sweden)

    Alberto Gonzalez-Sanchez

    2014-04-01

    Full Text Available An important issue for agricultural planning purposes is accurate yield estimation for the numerous crops involved in the planning. Machine learning (ML) is an essential approach for achieving practical and effective solutions for this problem. Many comparisons of ML methods for yield prediction have been made, seeking the most accurate technique. Generally, the number of evaluated crops and techniques is too low and does not provide enough information for agricultural planning purposes. This paper compares the predictive accuracy of ML and linear regression techniques for crop yield prediction in ten crop datasets. Multiple linear regression, M5-Prime regression trees, perceptron multilayer neural networks, support vector regression and k-nearest neighbor methods were ranked. Four accuracy metrics were used to validate the models: the root mean square error (RMSE), root relative square error (RRSE), normalized mean absolute error (MAE), and correlation factor (R). Real data from an irrigation zone of Mexico were used for building the models. Models were tested with samples of two consecutive years. The results show that M5-Prime and k-nearest neighbor techniques obtain the lowest average RMSE errors (5.14 and 4.91), the lowest RRSE errors (79.46% and 79.78%), the lowest average MAE errors (18.12% and 19.42%), and the highest average correlation factors (0.41 and 0.42). Since M5-Prime achieves the largest number of crop yield models with the lowest errors, it is a very suitable tool for massive crop yield prediction in agricultural planning.
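    The four accuracy metrics named in this abstract have standard definitions that can be written down directly. The small obs/pred vectors are made-up illustrative data, not the Mexican irrigation-zone dataset used in the study.

```python
import math

def rmse(obs, pred):
    # Root mean square error.
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def rrse(obs, pred):
    # Root relative square error: RMSE relative to predicting the mean.
    mean = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((o - mean) ** 2 for o in obs)
    return math.sqrt(num / den)  # often reported as a percentage

def mae(obs, pred):
    # Mean absolute error (normalize by the observed mean if desired).
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def pearson_r(obs, pred):
    # Correlation factor R between observations and predictions.
    mo, mp = sum(obs) / len(obs), sum(pred) / len(pred)
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

obs  = [4.2, 5.1, 3.8, 6.0, 5.5]  # illustrative yields
pred = [4.0, 5.3, 4.1, 5.7, 5.2]
```

An RRSE below 1 (100%) means the model beats the naive mean predictor, which is why the paper reports RRSE values near 80% as the best results.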

  2. Predicting the presence and cover of management relevant invasive plant species on protected areas.

    Science.gov (United States)

    Iacona, Gwenllian; Price, Franklin D; Armsworth, Paul R

    2016-01-15

    Invasive species are a management concern on protected areas worldwide. Conservation managers need to predict infestations of invasive plants they aim to treat if they want to plan for long term management. Many studies predict the presence of invasive species, but predictions of cover are more relevant for management. Here we examined how predictors of invasive plant presence and cover differ across species that vary in their management priority. To do so, we used data on management effort and cover of invasive plant species on central Florida protected areas. Using a zero-inflated multiple regression framework, we showed that protected area features can predict the presence and cover of the focal species but the same features rarely explain both. There were several predictors of either presence or cover that were important across multiple species. Protected areas with three days of frost per year or fewer were more likely to have occurrences of four of the six focal species. When invasive plants were present, their proportional cover was greater on small preserves for all species, and varied with surrounding household density for three species. None of the predictive features were clearly related to whether species were prioritized for management or not. Our results suggest that predictors of cover and presence can differ both within and across species but do not covary with management priority. We conclude that conservation managers need to select predictors of invasion with care as species identity can determine the relationship between predictors of presence and the more management relevant predictors of cover. Copyright © 2015 Elsevier Ltd. All rights reserved.
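    A zero-inflated (two-part) prediction of the kind used here combines a presence probability with a conditional cover estimate. The predictors mirror those named in the abstract (frost days, preserve size, household density), but all coefficients below are hypothetical placeholders, not fitted values from the study.

```python
import math

def presence_prob(frost_days, b0=1.5, b1=-0.6):
    # Logistic presence submodel: fewer frost days -> higher probability
    # of occurrence (coefficients are illustrative assumptions).
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * frost_days)))

def cover_given_present(area_ha, households_per_km2,
                        g0=0.30, g1=-0.0002, g2=0.0001):
    # Conditional-cover submodel: proportional cover shrinks with preserve
    # size and grows with household density (clamped to [0, 1]).
    return min(1.0, max(0.0, g0 + g1 * area_ha + g2 * households_per_km2))

def expected_cover(frost_days, area_ha, households_per_km2):
    # Unconditional expectation: P(present) * E[cover | present].
    return (presence_prob(frost_days)
            * cover_given_present(area_ha, households_per_km2))
```

Splitting the model this way is exactly why the paper can find that different features predict presence versus cover: the two submodels are fitted separately.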

  3. A method for managing re-identification risk from small geographic areas in Canada

    Directory of Open Access Journals (Sweden)

    Neisa Angelica

    2010-04-01

    Full Text Available Abstract Background A common disclosure control practice for health datasets is to identify small geographic areas and either suppress records from these small areas or aggregate them into larger ones. A recent study provided a method for deciding when an area is too small based on the uniqueness criterion. The uniqueness criterion stipulates that an area is no longer too small when the proportion of individuals who are unique on the relevant variables (the quasi-identifiers) approaches zero. However, a uniqueness value of zero is quite a stringent threshold, and is only suitable when the risks from data disclosure are quite high. Other uniqueness thresholds that have been proposed for health data are 5% and 20%. Methods We estimated uniqueness for urban Forward Sortation Areas (FSAs) using the 2001 long form Canadian census data representing 20% of the population. We then constructed two logistic regression models to predict when the uniqueness is greater than the 5% and 20% thresholds, and validated their predictive accuracy using 10-fold cross-validation. Predictor variables included the population size of the FSA and the maximum number of possible values on the quasi-identifiers (the number of equivalence classes). Results All model parameters were significant and the models had very high prediction accuracy, with specificity above 0.9, and sensitivity at 0.87 and 0.74 for the 5% and 20% threshold models respectively. The application of the models was illustrated with an analysis of the Ontario newborn registry and an emergency department dataset. At the higher thresholds considerably fewer records compared to the 0% threshold would be considered to be in small areas and therefore undergo disclosure control actions. We have also included concrete guidance for data custodians in deciding which one of the three uniqueness thresholds to use (0%, 5%, 20%), depending on the mitigating controls that the data recipients have in place, the
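    The uniqueness criterion itself is straightforward to compute when the microdata are available: count how many records are unique on the quasi-identifiers. The toy records and the threshold check below are purely illustrative; the paper's contribution is predicting uniqueness without microdata, via logistic regression on FSA population and equivalence-class counts.

```python
from collections import Counter

def uniqueness(records, quasi_identifiers):
    # Proportion of records that are unique on the quasi-identifiers.
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    counts = Counter(keys)
    return sum(1 for k in keys if counts[k] == 1) / len(records)

def too_small(records, quasi_identifiers, threshold=0.05):
    # Area flagged for disclosure control if uniqueness exceeds the
    # chosen threshold (0%, 5%, or 20% in the paper).
    return uniqueness(records, quasi_identifiers) > threshold

# Toy records keyed by FSA plus demographic quasi-identifiers.
records = [
    {"fsa": "K1A", "age": 34, "sex": "F"},
    {"fsa": "K1A", "age": 34, "sex": "F"},
    {"fsa": "K1A", "age": 71, "sex": "M"},
    {"fsa": "M5V", "age": 29, "sex": "F"},
]
u = uniqueness(records, ["fsa", "age", "sex"])  # 2 of 4 records are unique
```

Coarsening the quasi-identifiers (e.g. dropping age and sex) shrinks the number of equivalence classes and hence the uniqueness, which is the mechanism the logistic models exploit.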

  4. A Study on the Estimation Method of Risk Based Area for Jetty Safety Monitoring

    Directory of Open Access Journals (Sweden)

    Byeong-Wook Nam

    2015-09-01

    Full Text Available Recently, the importance of safety-monitoring systems was highlighted by the unprecedented collision between a ship and a jetty in Yeosu. Accordingly, in this study, we introduce the concept of risk based area and develop a methodology for a jetty safety-monitoring system. By calculating the risk based areas for a ship and a jetty, the risk of collision was evaluated. To calculate the risk based areas, we employed an automatic identification system for the ship, stopping-distance equations, and the regulation velocity near the jetty. In this paper, we suggest a risk calculation method for jetty safety monitoring that can determine the collision probability in real time and predict collisions using the amount of overlap between the two calculated risk based areas. A test was conducted at a jetty control center at GS Caltex, and the effectiveness of the proposed risk calculation method was verified. The method is currently applied to the jetty-monitoring system at GS Caltex in Yeosu for the prevention of collisions.
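    A minimal sketch of the overlap test described above, assuming a circular risk based area whose radius is a stopping distance (constant-deceleration model) plus a safety margin. All function names and numeric parameters are illustrative assumptions, not the actual GS Caltex system.

```python
import math

def stopping_distance(speed_mps, decel_mps2=0.05):
    # Kinematic stopping distance d = v^2 / (2a); the deceleration value
    # is an illustrative assumption for a large, slow-moving vessel.
    return speed_mps ** 2 / (2 * decel_mps2)

def risk_radius(speed_mps, safety_margin_m=50.0):
    # Ship's risk based area grows with speed via the stopping distance.
    return stopping_distance(speed_mps) + safety_margin_m

def areas_overlap(ship_pos, ship_speed_mps, jetty_pos, jetty_radius_m):
    # Collision warning when the ship's risk based area overlaps the
    # jetty's protective area (both modeled as circles here).
    dx = ship_pos[0] - jetty_pos[0]
    dy = ship_pos[1] - jetty_pos[1]
    return math.hypot(dx, dy) < risk_radius(ship_speed_mps) + jetty_radius_m
```

In an operational system the ship position and speed would come from AIS messages in real time; the amount of overlap (not just the boolean) can then be mapped to a collision probability.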

  5. A highly accurate predictive-adaptive method for lithium-ion battery remaining discharge energy prediction in electric vehicle applications

    International Nuclear Information System (INIS)

    Liu, Guangming; Ouyang, Minggao; Lu, Languang; Li, Jianqiu; Hua, Jianfeng

    2015-01-01

    Highlights: • An energy prediction (EP) method is introduced for battery E_RDE determination. • EP determines E_RDE through coupled prediction of future states, parameters, and output. • The PAEP combines parameter adaptation and prediction to update model parameters. • The PAEP provides improved E_RDE accuracy compared with DC and other EP methods. - Abstract: In order to estimate the remaining driving range (RDR) in electric vehicles, the remaining discharge energy (E_RDE) of the applied battery system needs to be precisely predicted. Strongly affected by the load profiles, the available E_RDE varies largely in real-world applications and requires specific determination. However, the commonly-used direct calculation (DC) method might result in certain energy prediction errors by relating the E_RDE directly to the current state of charge (SOC). To enhance the E_RDE accuracy, this paper presents a battery energy prediction (EP) method based on predictive control theory, in which a coupled prediction of future battery state variation, battery model parameter change, and voltage response is implemented on the E_RDE prediction horizon, and the E_RDE is subsequently accumulated and optimized in real time. Three EP approaches with different model parameter updating routes are introduced, and the predictive-adaptive energy prediction (PAEP) method, combining real-time parameter identification and future parameter prediction, offers the best potential. Based on a large-format lithium-ion battery, the performance of different E_RDE calculation methods is compared under various dynamic profiles. Results imply that the EP methods provide much better accuracy than the traditional DC method, and the PAEP could reduce the E_RDE error by more than 90% and keep the relative energy prediction error under 2%, proving a proper choice in online E_RDE prediction. The correlation of SOC estimation and E_RDE calculation is then discussed to illustrate the
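    The contrast between the two energy terms can be illustrated with two toy functions: a direct calculation (DC) tied to the current SOC, and an energy-prediction (EP) style accumulation of predicted power over the discharge horizon. Both are simplified sketches under stated assumptions, not the paper's PAEP algorithm.

```python
def e_rde_direct(soc, capacity_ah, nominal_voltage_v):
    # "Direct calculation" (DC): remaining energy taken as proportional to
    # the current SOC at a nominal voltage, ignoring how the future load
    # profile will depress the terminal voltage.
    return soc * capacity_ah * nominal_voltage_v  # Wh

def e_rde_predicted(currents_a, predicted_voltages_v, dt_h):
    # "Energy prediction" (EP) style: accumulate predicted power over the
    # prediction horizon, so the voltage response under load is included.
    return sum(i * v * dt_h
               for i, v in zip(currents_a, predicted_voltages_v))  # Wh

# Illustrative numbers only: a 60 Ah cell at 50% SOC, then a 1 h horizon
# discretized into ten 0.1 h steps at constant predicted load.
e_dc = e_rde_direct(0.5, 60.0, 3.6)
e_ep = e_rde_predicted([10.0] * 10, [3.5] * 10, 0.1)
```

The DC estimate is optimistic whenever the predicted voltages sag below nominal under load, which is exactly the error source the EP methods are designed to remove.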

  6. Methods and approaches to prediction in the meat industry

    Directory of Open Access Journals (Sweden)

    A. B. Lisitsyn

    2016-01-01

    Full Text Available The modern stage of the agro-industrial complex is characterized by increasing complexity and intensification of the technological processes for complex processing of materials of animal origin, as well as the need for a systematic analysis of the variety of determining factors and relationships between them, the complexity of the objective function of product quality, and severe restrictions on technological regimes. One of the main tasks facing the employees of the enterprises of the agro-industrial complex that are engaged in processing biotechnological raw materials is, besides an increase in production volume, the further organizational improvement of work at all stages of the food chain. The meat industry, as a part of the agro-industrial complex, has to use biological raw materials with maximum efficiency, while reducing and even eliminating losses at all stages of processing; rationally use raw material when selecting a type of processing products; steadily increase the quality, biological and food value of products; and broaden the assortment of manufactured products in order to satisfy increasing consumer requirements and extend the market for their realization under conditions of external uncertainty, due to the uneven receipt of raw materials, variations in its properties and parameters, limited sales time and fluctuations in demand for products. The challenges facing the meat industry cannot be solved without changes to the strategy for scientific and technological development of the industry. To achieve these tasks, it is necessary to use prediction as a method of constant improvement of all technological processes and their performance under rational and optimal regimes, while constantly controlling the quality of raw material, semi-prepared products and finished products at all stages of technological processing by physico-chemical, physico-mechanical (rheological), microbiological and organoleptic methods. 
The paper

  7. A method for reconstructing the development of the sapwood area of balsam fir.

    Science.gov (United States)

    Coyea, M R; Margolis, H A; Gagnon, R R

    1990-09-01

    Leaf area is commonly estimated as a function of sapwood area. However, because sapwood changes to heartwood over time, it has not previously been possible to reconstruct either the sapwood area or the leaf area of older trees into the past. In this study, we report a method for reconstructing the development of the sapwood area of dominant and codominant balsam fir (Abies balsamea (L.) Mill.). The technique is based on establishing a species-specific relationship between the number of annual growth rings in the sapwood area and tree age. Because the number of annual growth rings in the sapwood of balsam fir at a given age was found to be independent of site quality and stand density, the number of rings in sapwood (NRS) can be predicted from the age of a tree thus: NRS = 14.818 (1 - e^(-0.031 age)), unweighted R^2 = 0.80, and NRS = 2.490 (1 - e^(-0.038 age)), unweighted R^2 = 0.64, for measurements at breast height and at the base of the live crown, respectively. These nonlinear asymptotic regression models, based only on age, were not improved by adding other tree variables such as diameter at breast height, diameter at the base of the live crown, total tree height or percent live crown.
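    The reported NRS models can be applied directly; the sketch below just evaluates the two fitted equations from the abstract.

```python
import math

def rings_in_sapwood(age_years, at_breast_height=True):
    # NRS models reported in the abstract: asymptotic exponential rise,
    # fitted at breast height (R^2 = 0.80) or at the base of the live
    # crown (R^2 = 0.64).
    if at_breast_height:
        return 14.818 * (1 - math.exp(-0.031 * age_years))
    return 2.490 * (1 - math.exp(-0.038 * age_years))
```

For example, a 50-year-old tree is predicted to carry about 11.7 sapwood rings at breast height; the asymptotes (14.818 and 2.490 rings) bound the predictions at any age.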

  8. FREEZING AND THAWING TIME PREDICTION METHODS OF FOODS II: NUMERICAL METHODS

    Directory of Open Access Journals (Sweden)

    Yahya TÜLEK

    1999-03-01

    Full Text Available Freezing is one of the excellent methods for the preservation of foods. If the freezing and thawing processes and frozen storage are carried out correctly, the original characteristics of the foods can remain almost unchanged over an extended period of time. It is very important to determine the freezing and thawing times of foods, as they strongly influence both the quality of the food material and the productivity and economy of the process. For a simple and effectively usable mathematical model, a small number of process parameters and physical properties should be enrolled in the calculations. But it is difficult to have all of these in one prediction method. For this reason, various freezing and thawing time prediction methods have been proposed in the literature, and research studies are ongoing.
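    As one example of the simple analytical end of this spectrum, Plank's classical equation for the freezing time of a slab can be written in a few lines. The property values in the example call are illustrative, water-rich-food assumptions, not data from this paper.

```python
def plank_freezing_time(rho, latent_heat, t_freeze, t_medium,
                        thickness, h, k, P=0.5, R=0.125):
    # Plank's equation for an infinite slab (shape factors P = 1/2,
    # R = 1/8), SI units:
    #   t = (rho * L / (Tf - Ta)) * (P * a / h + R * a^2 / k)
    # rho: density [kg/m^3], latent_heat: latent heat of fusion [J/kg],
    # t_freeze/t_medium: freezing point and cooling-medium temp [degC],
    # thickness: slab thickness a [m], h: surface heat transfer
    # coefficient [W/m^2 K], k: thermal conductivity of frozen food [W/m K].
    dT = t_freeze - t_medium
    return rho * latent_heat / dT * (P * thickness / h + R * thickness ** 2 / k)

# Illustrative call: a 5 cm slab of a water-rich food in a -30 degC blast.
t_slab = plank_freezing_time(1050.0, 250e3, -1.0, -30.0, 0.05, 20.0, 1.5)
```

Plank's equation neglects sensible heat above and below the freezing point, which is precisely why the numerical methods surveyed in this paper were developed.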

  9. [Using sequential indicator simulation method to define risk areas of soil heavy metals in farmland.

    Science.gov (United States)

    Yang, Hao; Song, Ying Qiang; Hu, Yue Ming; Chen, Fei Xiang; Zhang, Rui

    2018-05-01

    The heavy metals in soil have serious impacts on safety, the ecological environment and human health due to their toxicity and accumulation. It is necessary to efficiently identify the risk areas of heavy metals in farmland soil, which is of great significance for environmental protection, pollution warning and farmland risk control. We collected 204 samples and analyzed the contents of seven heavy metals (Cu, Zn, Pb, Cd, Cr, As, Hg) in Zengcheng District of Guangzhou, China. In order to overcome the problems of the data, including the limitation of abnormal values and skewed distribution and the smoothing effect of traditional kriging methods, we used the sequential indicator simulation method (SISIM) to define the spatial distribution of heavy metals, and combined it with the Hakanson index method to identify potential ecological risk areas of heavy metals in farmland. The results showed that: (1) With similar accuracy of spatial prediction of soil heavy metals, the SISIM gave a better expression of detail than ordinary kriging in a small-scale area. Compared to indicator kriging, the SISIM had a lower error rate (4.9%-17.1%) in the uncertainty evaluation of heavy-metal risk identification. The SISIM had less smoothing effect and was more applicable to simulating the spatial uncertainty of soil heavy metals and risk identification. (2) There was no pollution in Zengcheng's farmland. Moderate potential ecological risk was found in the southern part of the study area due to enterprise production, human activities, and river sediments. This study combined sequential indicator simulation with the Hakanson risk index method, and effectively overcame the outlier information loss and smoothing effect of the traditional kriging method. It provides a new way to identify soil heavy metal risk areas of farmland under uneven sampling.
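    The Hakanson potential ecological risk index used alongside SISIM is a simple weighted sum: each metal's contamination factor (measured over background concentration) is scaled by a toxic-response factor and summed. The factors below are the commonly cited Hakanson values; the concentrations are made-up illustrative numbers, not the Zengcheng data.

```python
# Commonly cited Hakanson toxic-response factors for these seven metals.
TOXIC_RESPONSE = {"Cu": 5, "Zn": 1, "Pb": 5, "Cd": 30, "Cr": 2, "As": 10, "Hg": 40}

def potential_ecological_risk(measured_mg_kg, background_mg_kg):
    # Per-metal risk Er_i = Tr_i * (C_i / C_background_i); the overall
    # index RI is the sum over metals. (Hakanson's grading then classes
    # e.g. RI < 150 as low risk and 150-300 as moderate.)
    er = {m: TOXIC_RESPONSE[m] * measured_mg_kg[m] / background_mg_kg[m]
          for m in measured_mg_kg}
    return er, sum(er.values())

# Illustrative concentrations (mg/kg) for three of the seven metals.
measured = {"Cd": 0.30, "Hg": 0.15, "Pb": 60.0}
background = {"Cd": 0.10, "Hg": 0.05, "Pb": 30.0}
er, ri = potential_ecological_risk(measured, background)
```

Note how the large toxic-response factors for Cd and Hg dominate the index even at low absolute concentrations, which is why these two metals usually drive the moderate-risk zones.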

  10. Prediction of the area affected by earthquake-induced landsliding based on seismological parameters

    Science.gov (United States)

    Marc, Odin; Meunier, Patrick; Hovius, Niels

    2017-07-01

    We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95 % of the total landslide area. Without any empirical calibration the model explains 56 % of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95 % of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.
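    The proposed 95 % definition reduces to a weighted quantile of landslide distance from the seismic wave emission line: sort landslides by distance and find the smallest distance enclosing 95 % of the total landslide area. The sketch below uses made-up distances and areas; it is a reading of the definition, not the authors' code.

```python
def distance_containing(fraction, distances_km, areas_km2):
    # Smallest distance from the wave-emission line within which the
    # given fraction of the total landslide area falls.
    pairs = sorted(zip(distances_km, areas_km2))
    total = sum(areas_km2)
    acc = 0.0
    for d, a in pairs:
        acc += a
        if acc >= fraction * total:
            return d
    return pairs[-1][0]
```

Weighting by landslide area (rather than counting landslides) keeps a few large failures from being treated the same as many small ones, matching the paper's area-based definition.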

  11. Methods for Finding Legacy Wells in Residential and Commercial Areas

    Energy Technology Data Exchange (ETDEWEB)

    Hammack, Richard W. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Veloski, Garret A. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States)

    2016-06-16

    In 1919, the enthusiasm surrounding a short-lived gas play in Versailles Borough, Pennsylvania resulted in the drilling of many needless wells. The legacy of this activity exists today in the form of abandoned, unplugged gas wells that are a continuing source of fugitive methane in the midst of a residential and commercial area. Flammable concentrations of methane have been detected near building foundations, which have forced people from their homes and businesses until methane concentrations decreased. Despite mitigation efforts, methane problems persist and have caused some buildings to be permanently abandoned and demolished. This paper describes the use of magnetic and methane sensing methods by the National Energy Technology Laboratory (NETL) to locate abandoned gas wells in Versailles Borough where site access is limited and existing infrastructure can interfere. Here, wells are located between closely spaced houses and beneath buildings and parking lots. Wells are seldom visible, often because wellheads and internal casing strings have been removed, and external casing has been cut off below ground level. The magnetic survey of Versailles Borough identified 53 strong, monopole magnetic anomalies that are presumed to indicate the locations of steel-cased wells. This hypothesis was tested by excavating the location of one strong, monopole magnetic anomaly that was within an area of anomalous methane concentrations. The excavation uncovered an unplugged gas well that was within 0.2 m of the location of the maximum magnetic signal. Truck-mounted methane surveys of Versailles Borough detected numerous methane anomalies that were useful for narrowing search areas. Methane sources identified during truck-mounted surveys included strong methane sources such as sewers and methane mitigation vents. However, inconsistent wind direction and speed, especially between buildings, made locating weaker methane sources (such as leaking wells) difficult. 
Walking surveys with

  12. Reading a suspenseful literary text activates brain areas related to social cognition and predictive inference.

    Directory of Open Access Journals (Sweden)

    Moritz Lehne

    Full Text Available Stories can elicit powerful emotions. A key emotional response to narrative plots (e.g., novels, movies, etc.) is suspense. Suspense appears to build on basic aspects of human cognition such as processes of expectation, anticipation, and prediction. However, the neural processes underlying emotional experiences of suspense have not been previously investigated. We acquired functional magnetic resonance imaging (fMRI) data while participants read a suspenseful literary text (E.T.A. Hoffmann's "The Sandman") subdivided into short text passages. Individual ratings of experienced suspense obtained after each text passage were found to be related to activation in the medial frontal cortex, bilateral frontal regions (along the inferior frontal sulcus), lateral premotor cortex, as well as posterior temporal and temporo-parietal areas. The results indicate that the emotional experience of suspense depends on brain areas associated with social cognition and predictive inference.

  13. Method of predicting Splice Sites based on signal interactions

    Directory of Open Access Journals (Sweden)

    Deogun Jitender S

    2006-04-01

    Full Text Available Abstract Background Predicting and properly ranking canonical splice sites (SSs) is a challenging problem in the bioinformatics and machine learning communities. Any progress in SS recognition will lead to better understanding of the splicing mechanism. We introduce several new approaches for combining a priori knowledge for improved SS detection. First, we design a new Bayesian SS sensor based on oligonucleotide counting. To further enhance prediction quality, we applied our new de novo motif detection tool MHMMotif to intronic ends and exons. We combine the elements found with sensor information using a Naive Bayesian Network, as implemented in our new tool SpliceScan. Results According to our tests, the Bayesian sensor outperforms the contemporary Maximum Entropy sensor for 5' SS detection. We report a number of putative Exonic (ESE) and Intronic (ISE) Splicing Enhancers found by the MHMMotif tool. T-test statistics on mouse/rat intronic alignments indicate that the detected elements are on average more conserved than other oligos, which supports our assumption of their functional importance. The tool has been shown to outperform the SpliceView, GeneSplicer, NNSplice, Genio and NetUTR tools for a test set of human genes. SpliceScan outperforms all contemporary ab initio gene structure prediction tools on a set of 5' UTR gene fragments. Conclusion The designed methods have many attractive properties compared to existing approaches. The Bayesian sensor, MHMMotif program and SpliceScan tools are freely available on our web site. Reviewers This article was reviewed by Manyuan Long, Arcady Mushegian and Mikhail Gelfand.

  14. Artificial neural network for prediction of the area under the disease progress curve of tomato late blight

    Directory of Open Access Journals (Sweden)

    Daniel Pedrosa Alves

    Full Text Available ABSTRACT: Artificial neural networks (ANN) are computational models inspired by the neural systems of living beings, capable of learning from examples and using them to solve problems such as non-linear prediction and pattern recognition, in addition to several other applications. In this study, ANN were used to predict the value of the area under the disease progress curve (AUDPC) for the tomato late blight pathosystem. The AUDPC is widely used in epidemiologic studies of polycyclic diseases, especially those regarding quantitative resistance of genotypes. However, a series of six evaluations over time is necessary to obtain the final area value for this pathosystem. This study aimed to investigate the use of ANN to construct an AUDPC for the tomato late blight pathosystem using a reduced number of severity evaluations. For this, four independent experiments were performed, giving a total of 1836 plants infected with the Phytophthora infestans pathogen. They were assessed every three days, comprising six evaluations, and AUDPC calculations were performed by the conventional method. After the ANN were created, it was possible to predict the AUDPC with correlations of 0.97 and 0.84 when compared to conventional methods, using 50% and 67% of the genotype evaluations, respectively. When using the ANN created in one experiment to predict the AUDPC of the other experiments, the average correlation between the values predicted by the ANN and those observed in six evaluations was 0.94 with two evaluations and 0.96 with three evaluations. We present in this study a new paradigm for the use of AUDPC information in tomato experiments with P. infestans. This new proposed paradigm might be adapted to different pathosystems.
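    The conventional AUDPC calculation referred to above is the standard trapezoidal integration of severity over time; the assessment values below are illustrative, not data from the four experiments.

```python
def audpc(times, severities):
    # Area under the disease progress curve by the trapezoidal rule:
    # sum of mean severity between consecutive assessments times the
    # interval length.
    return sum((severities[i] + severities[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))
```

For example, severities of 0, 10 and 20 % at days 0, 3 and 6 give an AUDPC of 60 %-days; the paper's ANN approach aims to predict this final area from only the first two or three of the six assessments.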

  15. Evaluation of mathematical methods for predicting optimum dose of gamma radiation in sugarcane (Saccharum sp.)

    International Nuclear Information System (INIS)

    Wu, K.K.; Siddiqui, S.H.; Heinz, D.J.; Ladd, S.L.

    1978-01-01

    Two mathematical methods - the reversed logarithmic method and the regression method - were used to compare the predicted and the observed optimum gamma radiation dose (OD50) in vegetative propagules of sugarcane. The reversed logarithmic method, usually used in sexually propagated crops, showed the largest difference between the predicted and observed optimum dose. The regression method resulted in a better prediction of the observed values and is suggested as a better method for the prediction of optimum dose for vegetatively propagated crops. (author)
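    A regression approach of the kind compared here can be sketched as fitting a line to survival versus dose and solving for the dose at 50 % survival. The dose-response numbers below are illustrative only, not the sugarcane data.

```python
def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def od50(doses_gy, survival_pct):
    # Dose at which the fitted survival line crosses 50%.
    slope, intercept = fit_line(doses_gy, survival_pct)
    return (50.0 - intercept) / slope

# Illustrative linear dose-response: survival falls 1% per Gy from 100%.
dose_50 = od50([0, 20, 40, 60, 80], [100, 80, 60, 40, 20])
```

With this toy linear response the fitted OD50 is exactly 50 Gy; real dose-response data would be noisy, and the interpolation within the observed dose range is what makes the regression method more reliable than the reversed logarithmic extrapolation.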

  16. Prediction of bead area contact load at the tire-wheel interface using NASTRAN

    Science.gov (United States)

    Chen, C. H. S.

    1982-01-01

    The theoretical prediction of the bead area contact load at the tire-wheel interface using NASTRAN is reported. The application of the linear code to a basically nonlinear problem results in excessive deformation of the structure, and the tire-wheel contact conditions become impossible to achieve. A pseudo-nonlinear approach was adopted in which the moduli of the cord-reinforced composite are increased so that the computed key deformations match those of the experiment. Numerical results are presented and discussed.

  17. PREDICTION OF MEAT PRODUCT QUALITY BY THE MATHEMATICAL PROGRAMMING METHODS

    Directory of Open Access Journals (Sweden)

    A. B. Lisitsyn

    2016-01-01

    Full Text Available Abstract Use of prediction technologies is one of the directions of the research work carried out both in Russia and abroad. Meat processing is accompanied by complex physico-chemical, biochemical and mechanical processes. To predict the behavior of meat raw material during technological processing, a complex of physico-technological and structural-mechanical indicators, which objectively reflects its quality, is used. Among these indicators are pH value, water binding and fat holding capacities, water activity, adhesiveness, viscosity, plasticity and so on. The paper demonstrates the influence of animal proteins (beef and pork) on the physico-chemical and functional properties, before and after thermal treatment, of minced meat made from meat raw material with different contents of connective and fat tissue. On the basis of the experimental data, the model (stochastic dependence) parameters linking the quantitative resultant and factor variables were obtained using regression analysis, and the degree of correlation with the experimental data was assessed. The maximum allowable levels of replacement of meat raw material with animal proteins (beef and pork) were established by the methods of mathematical programming. Use of information technologies will significantly reduce the costs of the experimental search for, and substantiation of, the optimal level of replacement of meat raw material with animal proteins (beef, pork), and will also allow establishing a relationship between product quality indicators and the quantity and quality of minced meat ingredients.

  18. A comparison of different methods for predicting coal devolatilisation kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Arenillas, A.; Rubiera, F.; Pevida, C.; Pis, J.J. [Instituto Nacional del Carbon, CSIC, Apartado 73, 33080 Oviedo (Spain)

    2001-04-01

    Knowledge of the coal devolatilisation rate is of great importance because it exerts a marked effect on the overall combustion behaviour. Different approaches can be used to obtain the kinetics of the complex devolatilisation process. The simplest are empirical and employ global kinetics, where the Arrhenius expression is used to correlate rates of mass loss with temperature. In this study a high-volatile bituminous coal was devolatilised at four different heating rates in a thermogravimetric analyser (TG) linked to a mass spectrometer (MS). As a first approach, the Arrhenius kinetic parameters (the activation energy E and the pre-exponential factor A) were calculated from the experimental results, assuming a single-step process. Another approach is the distributed activation energy model, which is more complex because it assumes that devolatilisation occurs through several simultaneous first-order reactions. Recent advances in the understanding of coal structure have led to more fundamental approaches for modelling devolatilisation behaviour, such as network models, which are based on a physico-chemical description of coal structure. In the present study the FG-DVC (Functional Group-Depolymerisation, Vaporisation and Crosslinking) computer code was used as the network model, and the FG-DVC predicted evolution of volatile compounds was compared with the experimental results. In addition, the predicted rate of mass loss from the FG-DVC model was used to obtain a third devolatilisation kinetic approach. The three methods were compared and discussed, with the experimental results as a reference.
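    The single-step Arrhenius approach mentioned first can be sketched as a straight-line fit of ln k against 1/T. The data below are synthetic (generated from known parameters to check that the fit recovers them), not the paper's TG-MS measurements.

```python
import math

# Illustrative sketch (not the paper's data): recover Arrhenius parameters
# E and A from rate constants at several temperatures by fitting
# ln k = ln A - E/(R*T) with ordinary least squares.

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(temps_K, rate_constants):
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rate_constants]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    E = -slope * R           # activation energy, J/mol
    A = math.exp(intercept)  # pre-exponential factor, 1/s
    return E, A

# Synthetic data generated with E = 120 kJ/mol, A = 1e10 1/s
temps = [600.0, 700.0, 800.0, 900.0]
ks = [1e10 * math.exp(-120e3 / (R * T)) for T in temps]
E_fit, A_fit = fit_arrhenius(temps, ks)
print(E_fit, A_fit)  # recovers ~120000 J/mol and ~1e10 1/s
```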

  19. Predicting lattice thermal conductivity with help from ab initio methods

    Science.gov (United States)

    Broido, David

    2015-03-01

    The lattice thermal conductivity is a fundamental transport parameter that determines the utility of a material for specific thermal management applications. Materials with low thermal conductivity find applicability in thermoelectric cooling and energy harvesting. High thermal conductivity materials are urgently needed to help address the ever-growing heat dissipation problem in microelectronic devices. Predictive computational approaches can provide critical guidance in the search for and development of new materials for such applications. Ab initio methods for calculating lattice thermal conductivity have demonstrated predictive capability, but while they are becoming increasingly efficient, they are still computationally expensive, particularly for complex crystals with large unit cells. In this talk, I will review our work on first-principles phonon transport for which the intrinsic lattice thermal conductivity is limited only by phonon-phonon scattering arising from anharmonicity. I will examine the use of the phase space for anharmonic phonon scattering and the Grüneisen parameters as measures of the thermal conductivities for a range of materials, and compare these to the widely used guidelines stemming from the theory of Leibfried and Schlömann. This research was supported primarily by the NSF under Grant CBET-1402949, and by S3TEC, an Energy Frontier Research Center funded by the US DOE, Office of Basic Energy Sciences, under Award No. DE-SC0001299.
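    The Leibfried–Schlömann guideline referred to above is commonly quoted (prefactors and details vary across the literature; this is a sketch of the scaling, not the talk's own equations) as

```latex
\kappa_L \;\propto\; \frac{\bar{M}\,\delta\,\theta_D^{3}}{\gamma^{2}\,T}
```

    where \(\bar{M}\) is the average atomic mass, \(\delta^{3}\) the volume per atom, \(\theta_D\) the Debye temperature, \(\gamma\) the Grüneisen parameter, and \(T\) the temperature: stiff, light, weakly anharmonic crystals are expected to conduct best.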

  20. Predicting Nitrate Transport under Future Climate Scenarios beneath the Nebraska Management Systems Evaluation Area (MSEA) site

    Science.gov (United States)

    Li, Y.; Akbariyeh, S.; Gomez Peña, C. A.; Bartlet-Hunt, S.

    2017-12-01

    Understanding the impacts of future climate change on soil hydrological processes and solute transport is crucial to developing appropriate strategies to minimize the adverse impacts of agricultural activities on groundwater quality. The goal of this work is to evaluate the direct effects of climate change on the fate and transport of nitrate beneath a center-pivot irrigated corn field at the Nebraska Management Systems Evaluation Area (MSEA) site. Future groundwater recharge rates and actual evapotranspiration rates were predicted with an inverse modeling approach using climate data generated by the Weather Research and Forecasting (WRF) model under the RCP 8.5 scenario, downscaled from the global CCSM4 model to a resolution of 24 km × 24 km. A groundwater flow model was first calibrated against historical groundwater table measurements and was then applied to predict the future groundwater table for the period 2057-2060. Finally, the predicted future groundwater recharge rate, actual evapotranspiration rate, and groundwater level, together with future precipitation data from WRF, were used in a three-dimensional (3D) model, validated against a rich historical data set collected from 1993 to 1996, to predict nitrate concentrations in soil and groundwater from 2057 to 2060. Future groundwater recharge was found to decrease in the study area compared with average groundwater recharge data from the literature. Correspondingly, the groundwater elevation was predicted to decrease (1 to 2 ft) over the simulation period. Higher predicted transpiration from the climate model resulted in lower infiltration of nitrate through the root zone into the subsurface.

  1. Simulating urban-scale air pollutants and their predicting capabilities over the Seoul metropolitan area.

    Science.gov (United States)

    Park, Il-Soo; Lee, Suk-Jo; Kim, Cheol-Hee; Yoo, Chul; Lee, Yong-Hee

    2004-06-01

    Urban-scale concentrations of sulfur dioxide, nitrogen dioxide, particulate matter with an aerodynamic diameter of ≤10 μm (PM10), and ozone (O3) were simulated over the Seoul metropolitan area, Korea, for the period July 2-11, 2002, and their predictability was discussed. The Air Pollution Model (TAPM) and the highly disaggregated anthropogenic and biogenic gridded emissions (1 km x 1 km) recently prepared by the Korean Ministry of Environment were applied. Wind fields with observational nudging in the prognostic meteorological model TAPM were optionally adopted to comparatively examine the meteorological impact on the prediction capabilities for urban-scale air pollutants. The results show that the simulated concentrations of the secondary air pollutant largely agree with observed levels, with an index of agreement (IOA) of >0.6, whereas IOAs of approximately 0.4 are found for most primary pollutants in the major cities, reflecting the quality of the emission data in the urban area. The observationally nudged wind fields with higher IOAs have little effect on the predictions for both primary and secondary air pollutants, implying that a detailed wind field does not consistently improve urban air pollution model performance if emissions are not well specified. However, the robust highest concentrations are described better with respect to observations when observational nudging is imposed, suggesting the importance of wind fields for predicting extreme concentrations such as robust highest concentrations, maximum levels, and >90th percentiles of concentrations for both primary and secondary urban-scale air pollutants.
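    The index of agreement (IOA) used above as the evaluation statistic is, in Willmott's standard formulation, a bounded 0-1 score; the observation/prediction values below are made up purely to exercise the formula.

```python
# Willmott's index of agreement (IOA): 1 means perfect agreement, values
# toward 0 mean poor agreement. Sample numbers are illustrative only.

def index_of_agreement(obs, pred):
    obar = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - obar) + abs(o - obar)) ** 2
              for o, p in zip(obs, pred))
    return 1.0 - num / den

obs  = [30.0, 42.0, 55.0, 61.0, 48.0]   # e.g. observed hourly O3, ppb
pred = [28.0, 45.0, 50.0, 65.0, 46.0]   # modelled values
print(round(index_of_agreement(obs, pred), 3))  # -> 0.977
```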

  2. Extremely Randomized Machine Learning Methods for Compound Activity Prediction

    Directory of Open Access Journals (Sweden)

    Wojciech M. Czarnecki

    2015-11-01

    Speed, relatively low requirements for computational resources, and high effectiveness in evaluating the bioactivity of compounds have caused a rapid growth of interest in applying machine learning methods to virtual screening tasks. However, with the growth in the amount of data in cheminformatics and related fields, the aim of research has shifted not only towards developing algorithms of high predictive power but also towards simplifying existing methods to obtain results more quickly. In this study, we tested two approaches belonging to the group of so-called ‘extremely randomized methods’, the Extreme Entropy Machine and Extremely Randomized Trees, for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their ‘non-extreme’ competitors, i.e., the Support Vector Machine and Random Forest. The extreme approaches were not only found to improve the efficiency of the classification of bioactive compounds, but also proved to be less computationally complex, requiring fewer steps to perform the optimization procedure.
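    The "extreme" ingredient of Extremely Randomized Trees can be shown in a few lines: instead of exhaustively searching every threshold for the best split, draw one random threshold per candidate feature and keep the best of those few. The toy two-feature "compound activity" data and the scoring below are illustrative, not the study's descriptors or models.

```python
import random

# Minimal sketch of the 'extremely randomized' split rule: one random
# cut-point per feature, best of those kept. Averaging many such cheap
# trees is what makes the ensemble both fast and accurate.

def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def split_score(X, y, feat, thr):
    """Weighted Gini impurity of the two sides (lower is better)."""
    left  = [yi for xi, yi in zip(X, y) if xi[feat] < thr]
    right = [yi for xi, yi in zip(X, y) if xi[feat] >= thr]
    return (len(left) * gini(left) + len(right) * gini(right)) / len(y)

def extremely_random_split(X, y, rng):
    """Draw ONE random threshold per feature; return the best (score, feat, thr)."""
    best = None
    for feat in range(len(X[0])):
        lo = min(x[feat] for x in X)
        hi = max(x[feat] for x in X)
        thr = rng.uniform(lo, hi)
        score = split_score(X, y, feat, thr)
        if best is None or score < best[0]:
            best = (score, feat, thr)
    return best

rng = random.Random(42)
# Toy 'inactive (0) / active (1) compound' data: feature 0 separates classes.
X = [(0.1, 5.0), (0.2, 1.0), (0.9, 4.0), (0.8, 2.0)]
y = [0, 0, 1, 1]
score, feat, thr = extremely_random_split(X, y, rng)
print(feat, round(score, 3))
```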

  3. An efficient ray tracing method for propagation prediction along a mobile route in urban environments

    Science.gov (United States)

    Hussain, S.; Brennan, C.

    2017-07-01

    This paper presents an efficient ray tracing algorithm for propagation prediction in urban environments. The work presented in this paper builds upon previous work in which the maximum coverage area where rays can propagate after interaction with a wall or vertical edge is described by a lit polygon. The shadow regions formed by buildings within the lit polygon are described by shadow polygons. In this paper, the lit polygons of images are mapped to a coarse grid superimposed over the coverage area. This mapping reduces the active image tree significantly for a given receiver point to accelerate the ray finding process. The algorithm also presents an efficient method of quickly determining the valid ray segments for a mobile receiver moving along a linear trajectory. The validation results show considerable computation time reduction with good agreement between the simulated and measured data for propagation prediction in large urban environments.
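    The lit/shadow bookkeeping described above ultimately reduces to polygon membership tests: a receiver is reachable by an image's rays only if it lies inside the lit polygon and outside every shadow polygon. A standard even-odd ray-casting test, on made-up rectangular polygons, looks like this (a sketch of the geometric primitive, not the paper's algorithm):

```python
# Even-odd ray casting: count edge crossings of a horizontal ray from the
# point; an odd count means the point is inside the polygon.

def point_in_polygon(pt, poly):
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge spans the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Illustrative polygons: a square lit region with a square shadow inside it.
lit = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
shadow = [(4.0, 4.0), (6.0, 4.0), (6.0, 6.0), (4.0, 6.0)]

def receiver_is_lit(pt):
    return point_in_polygon(pt, lit) and not point_in_polygon(pt, shadow)

print(receiver_is_lit((2.0, 2.0)), receiver_is_lit((5.0, 5.0)))  # True False
```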

  4. Strain dyssynchrony index determined by three-dimensional speckle area tracking can predict response to cardiac resynchronization therapy

    Directory of Open Access Journals (Sweden)

    Onishi Tetsuari

    2011-04-01

    Background: We have previously reported that the strain dyssynchrony index assessed by two-dimensional speckle tracking strain, a marker of both dyssynchrony and residual myocardial contractility, can predict response to cardiac resynchronization therapy (CRT). A newly developed three-dimensional (3-D) speckle tracking system can quantify the endocardial area change ratio (area strain), which couples both longitudinal and circumferential strain, from all 16 standard left ventricular (LV) segments using complete 3-D pyramidal datasets. Our objective was to test the hypothesis that a strain dyssynchrony index using area tracking (ASDI) can quantify dyssynchrony and predict response to CRT. Methods: We studied 14 heart failure patients with an ejection fraction of 27 ± 7% (all ≤35%) and a QRS duration of 172 ± 30 ms (all ≥120 ms) who underwent CRT. Echocardiography was performed before and 6 months after CRT. ASDI was calculated as the average difference between peak and end-systolic area strain of the LV endocardium obtained from 3-D speckle tracking imaging using 16 segments. Conventional dyssynchrony measures were assessed by interventricular mechanical delay, the Yu Index, and two-dimensional radial dyssynchrony by speckle-tracking strain. Response was defined as a ≥15% decrease in LV end-systolic volume 6 months after CRT. Results: ASDI ≥ 3.8% was the best predictor of response to CRT, with a sensitivity of 78%, specificity of 100% and area under the curve (AUC) of 0.93. Conclusions: ASDI can predict responders and LV reverse remodeling following CRT. This novel index using the 3-D speckle tracking system, which reflects circumferential and longitudinal LV dyssynchrony and residual endocardial contractility, may thus have clinical significance for CRT patients.
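    The ASDI definition quoted above (average difference between peak and end-systolic area strain over the 16 LV segments) is simple enough to sketch directly; the strain values below are invented placeholders, not patient data.

```python
# Hedged sketch of the ASDI computation as described in the abstract:
# per-segment |peak - end-systolic| area strain, averaged over 16 segments.
# All strain values here are illustrative.

def asdi(peak_strain, end_systolic_strain):
    assert len(peak_strain) == len(end_systolic_strain) == 16
    diffs = [abs(p - e) for p, e in zip(peak_strain, end_systolic_strain)]
    return sum(diffs) / len(diffs)

peak = [-22.0 + i * 0.5 for i in range(16)]               # peak area strain, %
endsys = [p + 4.0 if i % 2 else p for i, p in enumerate(peak)]
print(round(asdi(peak, endsys), 2))  # average peak-to-end-systolic gap, %
```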

  5. Predicting community structure in snakes on Eastern Nearctic islands using ecological neutral theory and phylogenetic methods.

    Science.gov (United States)

    Burbrink, Frank T; McKelvy, Alexander D; Pyron, R Alexander; Myers, Edward A

    2015-11-22

    Predicting species presence and richness on islands is important for understanding the origins of communities and how likely it is that species will disperse and resist extinction. The equilibrium theory of island biogeography (ETIB) and, as a simple model of sampling abundances, the unified neutral theory of biodiversity (UNTB), predict that in situations where mainland to island migration is high, species-abundance relationships explain the presence of taxa on islands. Thus, more abundant mainland species should have a higher probability of occurring on adjacent islands. In contrast to UNTB, if certain groups have traits that permit them to disperse to islands better than other taxa, then phylogeny may be more predictive of which taxa will occur on islands. Taking surveys of 54 island snake communities in the Eastern Nearctic along with mainland communities that have abundance data for each species, we use phylogenetic assembly methods and UNTB estimates to predict island communities. Species richness is predicted by island area, whereas turnover from the mainland to island communities is random with respect to phylogeny. Community structure appears to be ecologically neutral and abundance on the mainland is the best predictor of presence on islands. With regard to young and proximate islands, where allopatric or cladogenetic speciation is not a factor, we find that simple neutral models following UNTB and ETIB predict the structure of island communities. © 2015 The Author(s).

  6. Mountain Pine Beetles, Salvage Logging, and Hydrologic Change: Predicting Wet Ground Areas

    Directory of Open Access Journals (Sweden)

    John Rex

    2013-04-01

    The mountain pine beetle epidemic in British Columbia has covered 18.1 million hectares of forest land, showing the potential for an exceptionally large-scale disturbance to influence watershed hydrology. Pine stands killed by the epidemic can experience reduced levels of evapotranspiration and precipitation interception, which can translate into an increase in soil moisture, as observed by some forest practitioners during salvage logging in the epicenter of the outbreak. They reported the replacement of summer ground (dry, firm soil areas) with winter ground (wetter, less firm soils upon which forestry equipment operation is difficult or impossible before winter freeze-up). To decrease the likelihood of soil disturbance from harvesting, a set of hazard indicators was developed to predict wet ground areas in areas heavily infested by the mountain pine beetle. Hazard indicators were based on available GIS data, aerial photographs, and local knowledge. Indicators were selected by an iterative process that began with office-based selection of potential indicators, followed by model development and prediction, field verification, and model refinement to select those indicators that explained most of the variability in the field data. Findings indicate that the most effective indicators were lodgepole pine content, understory, drainage density, soil texture, and the topographic index.
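    Of the indicators listed, the topographic index is the one with a standard closed form, the topographic wetness index TWI = ln(a / tan β), where a is the specific upslope contributing area and β the local slope. The inputs below are illustrative cell values, not the study's GIS layers.

```python
import math

# Topographic wetness index sketch: high values flag flat, convergent
# (wet-prone) ground; low values flag steep, well-drained ground.

def topographic_index(upslope_area_m2, contour_width_m, slope_deg):
    a = upslope_area_m2 / contour_width_m      # specific catchment area
    beta = math.radians(slope_deg)
    return math.log(a / math.tan(beta))

# A gentle, convergent cell scores wetter (higher TWI) than a steep one.
print(round(topographic_index(5000.0, 10.0, 2.0), 2),
      round(topographic_index(500.0, 10.0, 20.0), 2))
```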

  7. Developing a Long Short-Term Memory (LSTM) based model for predicting water table depth in agricultural areas

    Science.gov (United States)

    Zhang, Jianfeng; Zhu, Yan; Zhang, Xiaoping; Ye, Ming; Yang, Jinzhong

    2018-06-01

    Predicting water table depth over the long term in agricultural areas presents great challenges because these areas have complex and heterogeneous hydrogeological characteristics, boundary conditions, and human activities, and nonlinear interactions occur among these factors. Therefore, a new time series model based on Long Short-Term Memory (LSTM) was developed in this study as an alternative to computationally expensive physical models. The proposed model is composed of an LSTM layer with a fully connected layer on top of it, with a dropout method applied in the first LSTM layer. The model was applied and evaluated in five sub-areas of the Hetao Irrigation District in arid northwestern China using 14 years of data (2000-2013). It uses monthly water diversion, evaporation, precipitation, temperature, and time as input data to predict water table depth. A simple but effective standardization method was employed to pre-process the data so that they were on the same scale. The 14 years of data were separated into a training set (2000-2011) and a validation set (2012-2013). As expected, the proposed model achieves higher R2 scores (0.789-0.952) in water table depth prediction than a traditional feed-forward neural network (FFNN), which only reaches relatively low R2 scores (0.004-0.495), showing that the proposed model can preserve and learn from previous information well. Furthermore, the validity of the dropout method and of the proposed model's architecture are discussed. The experiments show that the dropout method can prevent overfitting significantly. In addition, comparisons between the R2 scores of the proposed model and a Double-LSTM model (R2 scores range from 0.170 to 0.864) further show that the proposed model's architecture is reasonable and can contribute to a strong learning ability on time series data. Thus, one can conclude that the proposed model can
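    The "preserve previous information" claim rests on the LSTM's gating. A minimal single-unit LSTM step in plain Python makes the mechanics concrete; the weights are arbitrary illustrative numbers, not a trained water-table model, and a real model would use a deep-learning framework.

```python
import math

# One LSTM time step for scalar input/state: the forget gate f decides how
# much of the old cell state to keep, which is what lets the network carry
# information across many months of input.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """w holds (input, recurrent, bias) weights for each gate."""
    f = sigmoid(w['f'][0] * x + w['f'][1] * h_prev + w['f'][2])    # forget gate
    i = sigmoid(w['i'][0] * x + w['i'][1] * h_prev + w['i'][2])    # input gate
    g = math.tanh(w['g'][0] * x + w['g'][1] * h_prev + w['g'][2])  # candidate
    o = sigmoid(w['o'][0] * x + w['o'][1] * h_prev + w['o'][2])    # output gate
    c = f * c_prev + i * g       # cell state carries long-term memory
    h = o * math.tanh(c)         # hidden state is the step's output
    return h, c

# Arbitrary illustrative weights, not trained values.
w = {'f': (0.5, 0.1, 0.0), 'i': (0.6, 0.2, 0.0),
     'g': (1.0, 0.3, 0.0), 'o': (0.7, 0.1, 0.0)}

h, c = 0.0, 0.0
for x in [0.2, 0.4, 0.1]:        # e.g. standardized monthly inputs
    h, c = lstm_step(x, h, c, w)
print(round(h, 4), round(c, 4))
```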

  8. MASW Seismic Method in Brebu Landslide Area, Romania

    Science.gov (United States)

    Mihai, Marinescu; Paul, Cristea; Cristian, Marunteanu; Matei, Mezincescu

    2017-12-01

    This paper is focused on assessing the possibility of enhancing geotechnical information in perimeters with landslides, especially through applications of the Multichannel Analysis of Surface Waves (MASW) method. The technology enables the determination of the phase velocities of Rayleigh waves and, recursively, the evaluation of shear wave velocities (Vs) with depth. Finally, using longitudinal wave velocities (Vp) derived from the seismic refraction measurements, in situ dynamic elastic properties in a shallow section can be obtained. The investigation was carried out in the Brebu landslide (bedrock at 3-5 m depth), located on the southern flank of the Slanic Syncline (110 km north of Bucharest), and included a drilling program and geotechnical laboratory observations. The seismic refraction records (seismic sources placed at the centre, ends, and outside of the geophone spread) were acquired on two lines (23 m and 46 m long, respectively), approximately perpendicular to the downslope direction of the landslide and on different local morpho-structures. A Geode Geometrics seismograph was set for a 1 ms sampling rate and real-time pulse summation over five blows. Twenty-four vertical Geometrics SpaceTech geophones (14 Hz resonance frequency) were deployed at 1 m spacing. The seismic source was the impact of an 8 kg sledge hammer on a metal plate. Regarding seismic data processing, the distinctive feature is the more detailed analysis of the MASW records. The proposed procedure consists of splitting the spread into groups with fewer receivers, with several geophone intervals overlapping. A 2D Fourier analysis, giving the f-k (frequency-wavenumber) spectrum for each of these groups, ensures continuity of the information and, moreover, accuracy in picking the amplitude maxima of the f-k spectra. Finally, combining both values VS (calculated from 2D spectral analyses of Rayleigh waves) and VP (obtained from seismic refraction records

  9. Real-time Dynamic Simulation and Prediction of Groundwater in Typical Arid Area Based on SPASS Improvement

    Science.gov (United States)

    Wang, Xiao-ming

    2018-03-01

    When a traditional groundwater numerical simulation model is established and its parameters are identified and checked, the fitted water levels can differ considerably from the observed values. Statistical analysis of a large number of numerical simulation results in SPASS shows that the complexity of the terrain, the distribution of lithology, and the model parameters all have a great influence on the simulated groundwater level in the study area. Through multi-factor analysis and adjustment, the simulated groundwater flow was brought close to the actual observations. The final result was then taken as the standard value, and the groundwater in the study area was simulated and predicted in real time. The simulation results provide technical support for the further development and utilization of the local water resources.

  10. Assessment method to predict the rate of unresolved false alarms

    International Nuclear Information System (INIS)

    Reardon, P.T.; Eggers, R.F.; Heaberlin, S.W.

    1982-06-01

    A method has been developed to predict the rate of unresolved false alarms of material loss in a nuclear facility, and the computer program DETRES-1 was developed to implement it. The program first assigns the true values of the control unit components: receipts, shipments, and beginning and ending inventories. A normal random number generator is used to generate measured values of each component, and a loss estimator is calculated from the control unit's measured values. If the loss estimator triggers a detection alarm, a response is simulated in two phases. The first phase simulates remeasurement of the components of the detection loss estimator, using the same or better measurement methods or inferences from surrounding control units. If this phase continues to indicate a material loss, a second phase simulating a production shutdown and comprehensive cleanout is initiated. A new loss estimator is found and tested against the alarm thresholds. If the estimator value is below the threshold, the original detection alarm is considered resolved; if above the threshold, an unresolved alarm has occurred. A tally is kept of valid alarms, unresolved false alarms, and failures to alarm upon a true loss.
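    The two-phase logic can be caricatured in a few lines of Monte Carlo: draw a measured loss estimator around the true value, and on an alarm, simulate one remeasurement with a better method. This is a schematic sketch of the idea, not the DETRES-1 code, and all parameter values are illustrative.

```python
import random

# Schematic Monte Carlo of the two-phase alarm-resolution idea: with no
# true loss, a Gaussian estimator occasionally exceeds the threshold
# (false alarm); a more precise remeasurement usually resolves it.

def simulate(n_trials, sigma=1.0, remeasure_sigma=0.5, threshold=2.0, seed=1):
    rng = random.Random(seed)
    false_alarms = unresolved = 0
    for _ in range(n_trials):
        true_loss = 0.0                          # no actual material loss
        est = true_loss + rng.gauss(0.0, sigma)  # phase-0 loss estimator
        if abs(est) > threshold:                 # detection alarm
            false_alarms += 1
            est2 = true_loss + rng.gauss(0.0, remeasure_sigma)  # phase 1
            if abs(est2) > threshold:            # still above: unresolved
                unresolved += 1
    return false_alarms / n_trials, unresolved / n_trials

fa_rate, unresolved_rate = simulate(100000)
print(round(fa_rate, 3), round(unresolved_rate, 4))
```

    With these numbers the initial false-alarm rate is near the Gaussian tail probability P(|Z| > 2) ≈ 0.046, while the unresolved rate is orders of magnitude smaller because the remeasurement halves the measurement error.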

  11. A novel time series link prediction method: Learning automata approach

    Science.gov (United States)

    Moradabadi, Behnaz; Meybodi, Mohammad Reza

    2017-09-01

    Link prediction is a main social network challenge that uses the network structure to predict future links. Common link prediction approaches use a static graph representation, in which a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a common traditional approach that calculates a similarity metric for each non-connected pair, sorts the pairs by their similarity metrics, and labels those with higher similarity scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the networks changes over time, deterministic graphs may not be appropriate for modeling and analysis of social networks. In the time-series link prediction problem, time series of link occurrences are used to predict future links. In this paper, we propose a new time series link prediction method based on learning automata. In the proposed algorithm there is one learning automaton for each link that must be predicted, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T − 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series of link occurrences are considered.
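    A two-action learning automaton of the kind assigned to each candidate link can be sketched with the classic linear reward-inaction (L_RI) scheme: keep a probability of predicting "link exists" and reinforce whichever action the environment rewards. The toy environment (a link that recurs in every snapshot) and the update parameter are assumptions for illustration, not the paper's scheme.

```python
import random

# Linear reward-inaction (L_RI) sketch: on reward, move the action
# probability toward the chosen action; on penalty, leave it unchanged.

def l_ri_update(p, action, rewarded, a=0.1):
    if not rewarded:
        return p
    if action == 1:              # predicted 'link exists'
        return p + a * (1.0 - p)
    return p * (1.0 - a)         # predicted 'no link'

rng = random.Random(0)
p = 0.5                          # P(predict 'link exists')
for _ in range(200):
    action = 1 if rng.random() < p else 0
    truth = 1                    # this toy link recurs in every snapshot
    p = l_ri_update(p, action, rewarded=(action == truth))
print(round(p, 3))               # converges toward 1
```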

  12. Applicability of a Single Time Point Strategy for the Prediction of Area Under the Concentration Curve of Linezolid in Patients

    DEFF Research Database (Denmark)

    Srinivas, Nuggehally R; Syed, Muzeeb

    2016-01-01

    Background and Objectives: Linezolid, an oxazolidinone, was the first in its class to be approved for the treatment of bacterial infections arising from both susceptible and resistant strains of Gram-positive bacteria. Since overt exposure to linezolid may precipitate serious toxicity issues, therapeutic drug monitoring (TDM) may be required in certain situations, especially in patients who are prescribed other co-medications. Methods: Using appropriate oral pharmacokinetic data (single dose and steady state) for linezolid, both the maximum plasma drug concentration (Cmax) versus area under the plasma concentration–time curve (AUC) and the minimum plasma drug concentration (Cmin) versus AUC relationships were established by linear regression models. Predictions of the AUC values were performed using published mean/median Cmax or Cmin data and the appropriate regression lines. The quotient of observed and predicted...
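    The single-time-point strategy amounts to fitting AUC against one concentration measurement and then reading a new patient's AUC off that line. The numbers below are invented to exercise the regression mechanics; they are not linezolid pharmacokinetic data.

```python
# Illustrative single-time-point sketch (made-up numbers): fit
# AUC = m*Cmax + b on reference profiles, then predict AUC for a new
# patient from a single measured Cmax.

def fit_line(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return m, ybar - m * xbar

cmax = [12.0, 15.0, 18.0, 21.0]        # mg/L (illustrative)
auc  = [110.0, 140.0, 165.0, 200.0]    # mg*h/L (illustrative)
m, b = fit_line(cmax, auc)
predicted_auc = m * 16.5 + b           # one observed Cmax of 16.5 mg/L
print(round(predicted_auc, 1))
```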

  13. Genomic prediction based on data from three layer lines: a comparison between linear methods

    NARCIS (Netherlands)

    Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.

    2014-01-01

    Background: The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods: Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we

  14. Prediction of residual stress using explicit finite element method

    Directory of Open Access Journals (Sweden)

    W.A. Siswanto

    2015-12-01

    This paper presents the residual stress behaviour under various friction coefficients and scratching displacement amplitudes. The investigation is based on a numerical solution using the explicit finite element method under quasi-static conditions. Two different aeroengine materials, Super CMV (a Cr-Mo-V steel) and the titanium alloy Ti-6Al-4V, are examined. The FEM analysis of a plate under normal contact is validated against the Hertzian theoretical solution in terms of contact pressure distributions. The residual stress distributions, along with the normal and shear stresses in the elastic and plastic regimes of the materials, are studied for a simple cylinder-on-flat contact configuration subjected to normal loading, scratching, and unloading. The investigated friction coefficients are 0.3, 0.6 and 0.9, while the scratching displacement amplitudes are 0.05 mm, 0.10 mm and 0.20 mm. It is found that a friction coefficient of 0.6 results in higher residual stress for both materials. Meanwhile, the predicted residual stress is proportional to the scratching displacement amplitude, with higher displacement amplitudes resulting in higher residual stress. Less residual stress is predicted for the Super CMV material than for Ti-6Al-4V because of its high yield stress and ultimate strength. Super CMV with a friction coefficient of 0.3 and a scratching displacement amplitude of 0.10 mm is recommended for contact engineering applications due to its minimum likelihood of fatigue.
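    The Hertzian check used for validation has a closed form for line (cylinder-on-flat) contact: half-width b = sqrt(4 P' R / (π E*)) and peak pressure p_max = 2 P' / (π b), with E* the effective modulus of the pair. The load, radius, and elastic constants below are illustrative stand-ins, not the paper's model inputs.

```python
import math

# Hertzian line-contact sketch for a cylinder (radius R) pressed onto a
# flat with load P' per unit length. Inputs are illustrative.

def effective_modulus(E1, nu1, E2, nu2):
    """1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2."""
    return 1.0 / ((1 - nu1 ** 2) / E1 + (1 - nu2 ** 2) / E2)

def hertz_line_contact(P_line, R, E_star):
    b = math.sqrt(4.0 * P_line * R / (math.pi * E_star))  # half-width, m
    p_max = 2.0 * P_line / (math.pi * b)                  # peak pressure, Pa
    return b, p_max

# Nominal steel-like vs Ti-6Al-4V-like elastic constants (illustrative).
E_star = effective_modulus(210e9, 0.3, 114e9, 0.34)
b, p_max = hertz_line_contact(P_line=1.0e5, R=0.01, E_star=E_star)
print(b, p_max)
```

    A useful consistency check, as in the FEM validation, is that the two expressions combine to p_max = sqrt(P' E* / (π R)).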

  15. Methods of Measuring and Mapping of Landslide Areas

    Science.gov (United States)

    Skrzypczak, Izabela; Kokoszka, Wanda; Kogut, Janusz; Oleniacz, Grzegorz

    2017-12-01

    The demand for new investment areas and the shortage of suitable land in current zoning plans explain why building on landslide areas cannot be ruled out completely. Monitoring areas at risk of landslides therefore becomes an important issue. Only appropriate monitoring, and maps of at-risk areas properly developed from the resulting measurements, make it possible to estimate the risk and to carry out the relevant economic calculations for investments planned in such areas. The results of surface and in-depth monitoring of the landslides are supplemented with constant observation of precipitation. Previous analyses and monitoring of landslides show that some of them are continuously active. GPS measurements, especially combined with laser scanning, provide unique data on the activity of the surface of each individual landslide. The development of high-resolution numerical terrain models, and the creation of differential models based on subsequent measurements, provide information on the magnitude of deformation, both in units of distance (displacements) and of volume. The consistency of these data with information from in-depth monitoring allows a very reliable in-depth model of the landslide to be generated and, as a result, a proper calculation of the volume of colluvium. The programs presented in the article are a very effective tool for generating in-depth landslide models. In Poland, the steps taken under the SOPO project, i.e. the monitoring and description of landslides, are absolutely necessary for social and economic reasons, and they may have a significant impact on the economy and finances of individual municipalities and of the country as a whole.

  16. Using the area under the curve to reduce measurement error in predicting young adult blood pressure from childhood measures.

    Science.gov (United States)

    Cook, Nancy R; Rosner, Bernard A; Chen, Wei; Srinivasan, Sathanur R; Berenson, Gerald S

    2004-11-30

    Tracking correlations of blood pressure, particularly childhood measures, may be attenuated by within-person variability. Combining multiple measurements can reduce this error substantially. The area under the curve (AUC) computed from longitudinal growth curve models can be used to improve the prediction of young adult blood pressure from childhood measures. Quadratic random-effects models over unequally spaced repeated measures were used to compute the area under the curve separately within the age periods 5-14 and 20-34 years in the Bogalusa Heart Study. This method adjusts for the uneven age distribution and captures the underlying or average blood pressure, leading to improved estimates of correlation and risk prediction. Tracking correlations were computed by race and gender, and were approximately 0.6 for systolic, 0.5-0.6 for K4 diastolic, and 0.4-0.6 for K5 diastolic blood pressure. The AUC can also be used to regress young adult blood pressure on childhood blood pressure and childhood and young adult body mass index (BMI). In these data, while childhood blood pressure and young adult BMI were generally directly predictive of young adult blood pressure, childhood BMI was negatively correlated with young adult blood pressure when childhood blood pressure was in the model. In addition, racial differences in young adult blood pressure were reduced, but not eliminated, after controlling for childhood blood pressure, childhood BMI, and young adult BMI, suggesting that other genetic or lifestyle factors contribute to this difference. 2004 John Wiley & Sons, Ltd.
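    The core AUC idea, integrating blood pressure over an age window and dividing by the window length to get an "average" level that damps within-person variability, can be sketched with the trapezoidal rule on unequally spaced visits (the study itself uses quadratic random-effects growth curves; the numbers below are illustrative, not Bogalusa data).

```python
# Trapezoidal 'area under the curve' average over unequally spaced exams:
# integrate the measurements over age, then divide by the window length.

def auc_average(ages, values):
    total = 0.0
    for i in range(len(ages) - 1):
        dt = ages[i + 1] - ages[i]
        total += 0.5 * (values[i] + values[i + 1]) * dt
    return total / (ages[-1] - ages[0])

ages = [5.0, 7.0, 10.5, 14.0]        # unequally spaced exam ages, years
sbp  = [98.0, 101.0, 106.0, 112.0]   # systolic BP at each exam, mmHg
print(round(auc_average(ages, sbp), 2))  # -> 104.75
```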

  17. Analysis of the uranium price predicted to 24 months, implementing neural networks and the Monte Carlo method as predictive tools

    International Nuclear Information System (INIS)

    Esquivel E, J.; Ramirez S, J. R.; Palacios H, J. C.

    2011-11-01

    The present work shows uranium prices predicted using a neural network. Predicting the financial indexes of an energy resource allows budgetary measures to be established, as well as the medium-term costs of the resource. Uranium is one of the main energy-generating fuels and, as such, its price features in financial analyses; predictive methods are therefore used to obtain an outline of the financial behaviour it will have over a certain period. In this study, two methodologies are used for the prediction of the uranium price: the Monte Carlo method and neural networks. These methods predict the monthly cost indexes for a two-year period, starting from the second bimester of 2011. The prediction uses uranium costs recorded since 2005. (Author)
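    A Monte Carlo price projection of the kind referred to above can be sketched with geometric-Brownian-motion paths of a monthly price index; the paper's actual stochastic model is not specified here, and the starting price, drift, and volatility below are illustrative assumptions.

```python
import math
import random

# Hedged Monte Carlo sketch: simulate lognormal monthly price paths and
# report the mean projected price at the 24-month horizon.

def mc_price(start, drift, vol, months, n_paths, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        p = start
        for _ in range(months):
            # one month of geometric Brownian motion
            p *= math.exp((drift - 0.5 * vol ** 2) + vol * rng.gauss(0.0, 1.0))
        total += p
    return total / n_paths

mean_24m = mc_price(start=55.0, drift=0.005, vol=0.06, months=24,
                    n_paths=20000)
print(round(mean_24m, 2))  # near the analytic mean 55*exp(0.005*24) ~ 62
```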

  18. Use of predictive models and rapid methods to nowcast bacteria levels at coastal beaches

    Science.gov (United States)

    Francy, Donna S.

    2009-01-01

    The need for rapid assessments of recreational water quality to better protect public health is well accepted throughout the research and regulatory communities. Rapid analytical methods, such as quantitative polymerase chain reaction (qPCR) and immunomagnetic separation/adenosine triphosphate (ATP) analysis, are being tested but are not yet ready for widespread use. Another solution is the use of predictive models, wherein variables that are easily and quickly measured serve as surrogates for concentrations of fecal-indicator bacteria. Rainfall-based alerts, the simplest type of model, have been used by several communities for a number of years. Deterministic models use mathematical representations of the processes that affect bacteria concentrations; this type of model is being used for beach-closure decisions at one location in the USA. Multivariable statistical models are being developed and tested in many areas of the USA; however, they are only used in three areas of the Great Lakes to aid in notifications of beach advisories or closings. These “operational” statistical models can result in more accurate assessments of recreational water quality than use of the previous day's Escherichia coli (E. coli) concentration as determined by traditional culture methods. The Ohio Nowcast, at Huntington Beach, Bay Village, Ohio, is described in this paper as an example of an operational statistical model. Because predictive modeling is a dynamic process, water-resource managers continue to collect additional data to improve the predictive ability of the nowcast and to expand the nowcast to other Ohio beaches and a recreational river. Although predictive models have been shown to work well at some beaches and are becoming more widely accepted, implementation in many areas is limited by funding, lack of coordinated technical leadership, and lack of supporting epidemiological data.
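
As a sketch of how such an operational multivariable model is applied (the coefficients, predictors and advisory threshold below are illustrative assumptions, not the Ohio Nowcast's actual model):

```python
# Hypothetical coefficients for a multivariable nowcast model of the kind
# described: log10(E. coli) ~ b0 + b1*turbidity + b2*rain48h + b3*wave height
COEF = {"intercept": 1.20, "turbidity": 0.015, "rain48": 0.30, "wave": 0.50}
THRESHOLD = 235  # assumed advisory level in CFU/100 mL; check local regulations

def nowcast(turbidity_ntu, rain48_cm, wave_m):
    """Predict the E. coli concentration and whether to post an advisory."""
    log10_ec = (COEF["intercept"]
                + COEF["turbidity"] * turbidity_ntu
                + COEF["rain48"] * rain48_cm
                + COEF["wave"] * wave_m)
    predicted = 10 ** log10_ec
    return predicted, predicted > THRESHOLD

pred, advisory = nowcast(turbidity_ntu=30.0, rain48_cm=2.5, wave_m=0.6)
```

The regression is fitted on the log scale because indicator bacteria counts are strongly right-skewed; the decision step simply compares the back-transformed prediction against the advisory level.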

  19. Debris-flows scale predictions based on basin spatial parameters calculated from Remote Sensing images in Wenchuan earthquake area

    International Nuclear Information System (INIS)

    Zhang, Huaizhen; Chi, Tianhe; Liu, Tianyue; Wang, Wei; Yang, Lina; Zhao, Yuan; Shao, Jing; Yao, Xiaojing; Fan, Jianrong

    2014-01-01

    Debris flow is a common hazard in the Wenchuan earthquake area. Collapse and Landslide Regions (CLR) caused by earthquakes can be located from Remote Sensing images. CLR are the direct material source regions for debris flow, and the Spatial Distribution of Collapse and Landslide Regions (SDCLR) strongly impacts debris-flow formation. In order to depict SDCLR, we referred to Strahler's hypsometric analysis method and developed three functional models to depict SDCLR quantitatively. These models mainly depict SDCLR relative to altitude, the basin mouth and the main gullies of debris flow. We used the integrals of the functions as the spatial parameters of SDCLR, and these parameters were employed in debris-flow scale predictions. The group-occurring debris flows triggered by the rainstorm of September 24th, 2008 in Beichuan County, Sichuan Province, China were selected to build the empirical equations for debris-flow scale predictions. Given the existing data, only debris-flow runout zone parameters (maximum runout distance L and lateral width B) were estimated in this paper. The results indicate that the predictions were more accurate when the spatial parameters were used. Accordingly, we suggest that the spatial parameters of SDCLR be considered in debris-flow scale prediction, and we propose several strategies to prevent debris flow in the future.

  20. Experimental method to predict avalanches based on neural networks

    Directory of Open Access Journals (Sweden)

    V. V. Zhdanov

    2016-01-01

    The article presents results of experimental use of currently available statistical methods to classify avalanche-dangerous precipitation and snowfalls in the Kishi Almaty river basin. The avalanche service of Kazakhstan uses graphical methods for the prediction of avalanches developed by I.V. Kondrashov and E.I. Kolesnikov. The main objective of this work was to develop a modern model that could be used directly at the avalanche stations. Winter precipitation events were classified into dangerous and non-dangerous snowfalls in two ways: by a linear discriminant function (canonical analysis) and by artificial neural networks. Observational data on weather and avalanches in the Kishi Almaty gorge were used as a training sample. Coefficients for the canonical variables were calculated with the software Statistica (version 6.0), from which the necessary formula was constructed. The accuracy of this classification was 96%. The simulator by L.N. Yasnitsky and F.M. Cherepanov was used to train the neural networks; the trained neural network demonstrated 98% classification accuracy. The resulting statistical models are recommended for testing at the snow-avalanche stations. Results of the tests will be used to estimate the model quality and its readiness for operational work. In the future, we plan to apply these models to classification of avalanche danger on the five-point international scale.
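
The first of the two classifiers, a linear discriminant over canonical variables, can be sketched with a two-feature Fisher discriminant. The snowfall data below are invented toy values, not the Kishi Almaty observations:

```python
def fisher_discriminant(class0, class1):
    """Two-feature Fisher linear discriminant: w = Sw^-1 (m1 - m0),
    with a threshold at the projected midpoint of the class means."""
    def mean(rows):
        n = len(rows)
        return [sum(r[i] for r in rows) / n for i in (0, 1)]
    def scatter(rows, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in (0, 1):
                for j in (0, 1):
                    s[i][j] += d[i] * d[j]
        return s
    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in (0, 1)] for i in (0, 1)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    c = 0.5 * (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1]))
    return w, c

# Toy features: (precipitation mm/day, snow depth increase cm)
safe = [(2, 1), (4, 2), (3, 1), (5, 3)]
dangerous = [(12, 9), (15, 11), (11, 8), (14, 10)]
w, c = fisher_discriminant(safe, dangerous)
classify = lambda x: w[0] * x[0] + w[1] * x[1] > c   # True = dangerous
```

The canonical analysis in the study plays the same role: it projects the weather variables onto a single axis and thresholds the projection.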

  1. A METHOD OF PREDICTING BREAST CANCER USING QUESTIONNAIRES

    Directory of Open Access Journals (Sweden)

    V. N. Malashenko

    2017-01-01

    Purpose. To simplify and increase the accuracy of the questionnaire method of predicting breast cancer (BC), enabling subsequent computer processing and automated dispensary follow-up of risk groups without a doctor's direct involvement. Materials and methods. The work was based on statistical data obtained by surveying 305 women. The questionnaire included 63 items: 17 open-ended questions and 46 with a choice of response. A multifactor model was established; in addition to the survey data, its development used materials from the medical histories of patients and respondents' immunohistochemical study data. Data analysis was performed using the Statistica 10.0 and MedCalc 12.7.0 programs. Results. ROC analysis was performed, and the questionnaire data revealed 8 significant predictors of breast cancer. On their basis, a formula was created for calculating the prognostic factor of breast cancer risk, with a sensitivity of 83.12% and a specificity of 91.43%. Conclusions. These developments make it possible to create a computer program for automated processing of questionnaires, forming breast cancer risk groups for clinical supervision. The introduction of a screening questionnaire over the Internet, with subsequent computer processing of the results and without the direct involvement of doctors, will increase the coverage of the female population of the Russian Federation by breast cancer prevention activities. It can free up physicians' time for primary patient visits and improve the oncological vigilance of the female population of the Russian Federation.
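
The reported sensitivity and specificity come from thresholding the prognostic score found by ROC analysis. A small sketch of that evaluation step, on invented scores rather than the study's data:

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of the rule: score >= threshold → positive."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Toy prognostic-factor scores (label 1 = case, 0 = control)
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   1,   0,   0,   0,   0]
sens, spec = sens_spec(scores, labels, threshold=0.5)
```

Sweeping the threshold over all observed scores and plotting sensitivity against (1 - specificity) yields the ROC curve from which the study's operating point was chosen.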

  2. Validating computationally predicted TMS stimulation areas using direct electrical stimulation in patients with brain tumors near precentral regions.

    Science.gov (United States)

    Opitz, Alexander; Zafar, Noman; Bockermann, Volker; Rohde, Veit; Paulus, Walter

    2014-01-01

    The spatial extent of transcranial magnetic stimulation (TMS) is of paramount interest for all studies employing this method. It is generally assumed that the induced electric field is the crucial parameter to determine which cortical regions are excited. While it is difficult to directly measure the electric field, one usually relies on computational models to estimate the electric field distribution. Direct electrical stimulation (DES) is a local brain stimulation method generally considered the gold standard to map structure-function relationships in the brain. Its application is typically limited to patients undergoing brain surgery. In this study we compare the computationally predicted stimulation area in TMS with the DES area in six patients with tumors near precentral regions. We combine a motor evoked potential (MEP) mapping experiment for both TMS and DES with realistic individual finite element method (FEM) simulations of the electric field distribution during TMS and DES. On average, stimulation areas in TMS and DES show an overlap of up to 80%, thus validating our computational physiology approach to estimate TMS excitation volumes. Our results can help in understanding the spatial spread of TMS effects and in optimizing stimulation protocols to more specifically target certain cortical regions based on computational modeling.

  3. Sustainability of three modified soil conservation methods in an agricultural area

    Science.gov (United States)

    Setiawan, M. A.; Sara, F. H.; Christanto, N.; Sartohadi, J.; Samodra, G.; Widicahyono, A.; Ardiana, N.; Widiyati, C. N.; Astuti, E. M.; Martha, G. K.; Malik, R. F.; Sambodo, A. P.; Rokhmaningtyas, R. P.; Swastanto, G. A.; Gomez, C.

    2018-04-01

    Recent innovations in soil conservation methods do not present any breakthrough, yet providing soil conservation methods that are more attractive from the farmer's perspective is still of critical importance. Contributing to this research gap, we evaluate the sustainable use of three modified conservation methods, namely JALAPA (Jala Sabut Kelapa, a geotextile made of coconut fibres), a wood sediment trap, and a polybag system, compared to traditional tillage without any conservation method. This research provides both qualitative and quantitative analyses of the performance of each conservation measure. Therefore, in addition to the total sediment yield and the investment cost (the quantitative analysis), we also evaluate qualitatively the indicators of soil loss, installation, maintenance, and the durability of the conservation medium. These criteria define the sustainable use of each conservation method. The results show that JALAPA is the most effective method for controlling soil loss, but it also requires the highest installation cost. However, our findings confirm that the geotextile is sensitive to sun heating, by which the coconut fibre can become dry and shrink. The wood sediment trap is the cheapest and easiest to install; however, it is easily damaged by termites. The polybag method results in the highest productivity, but requires more time during the first installation. From the farmer's perspective, soil conservation using the polybag system was the most accepted technique due to its high benefits, even though it is less effective at reducing soil loss than JALAPA.

  4. [The strategic research areas of a University Hospital: proposal of a quali-quantitative method].

    Science.gov (United States)

    Iezzi, Elisa; Ardissino, Diego; Ferrari, Carlo; Vitale, Marco; Caminiti, Caterina

    2018-02-01

    This work aimed to objectively identify the main research areas at the University Hospital of Parma. To this end, a multidisciplinary working group, comprising clinicians, researchers, and hospital management, was formed to develop a shared quali-quantitative method. Easily retrievable performance indicators were selected from the literature (concerning bibliometric data and grant acquisition), and a scoring system developed to assign weights to each indicator. Subsequently, Research Team Leaders were identified from the hospital's "Research Plan", a document produced every three years which contains information on the main research themes carried out at each Department, involved staff and available resources, provided by health care professionals themselves. The selected performance indicators were measured for each Team Leader, and scores assigned, thus creating a ranking list. Through the analyses of the research themes of top Team Leaders, the Working Group identified the following five strategic research areas: (a) personalized treatment in oncology and hematology; (b) chronicization mechanisms in immunomediate diseases; (c) old and new risk factors for cardiovascular diseases; (d) nutritional disorders, metabolic and chronic-degenerative diseases; (e) molecular diagnostic and predictive markers. We have developed an objective method to identify a hospital's main research areas. Its application can guide resource allocation and can offer ways to value the work of professionals involved in research.

  5. Application of GIS on determination of flood-prone areas and critical arterial road network by using the CHAID method in the Bandung area

    Directory of Open Access Journals (Sweden)

    Darwin

    2018-01-01

    Floods in the Bandung area often occur when rainfall is high and the water volume exceeds the capacity of the Citarum watershed. Floods cause economic and social losses. The purpose of this research is to obtain a GIS application model for estimating the inundated area and the disturbed road network in the Bandung Metropolitan Area. The geospatial map preparation methodology used statistical data from 11041 flood points, divided into two groups: 7729 flood points to estimate the decision tree model and 3312 flood points to validate the model. Flood vulnerability maps were produced with the Chi-square Automatic Interaction Detection (CHAID) method and validated using the Receiver Operating Characteristic (ROC) method. Validation gave an area under the curve of 93.1% for the success rate and 92.7% for the prediction rate. The CHAID classification yields the class 0-0.047 covering 76.68% of the area; 0.047-0.307 covering 5.37%; 0.307-0.599 (low) covering 5.36%; 0.599-0.844 covering 5.31%; and 0.844-1 (high) covering 7.27% of the research area. The flood-prone road network comprises the link from Rancaekek (the PT Kahatex area), the link from Solokan Jeruk (Cicalengka-Majalaya), the Baleendah link, and the Dayeuhkolot link (M.Toha - Andir).

  6. A prediction method for the wax deposition rate based on a radial basis function neural network

    Directory of Open Access Journals (Sweden)

    Ying Xie

    2017-06-01

    The radial basis function neural network is a popular supervised learning tool based on machine learning technology. Its high precision having been proven, the radial basis function neural network has been applied in many areas. The accumulation of deposited material in a pipeline may lead to the need for increased pumping power, a decreased flow rate or even total blockage of the line, with losses of production and capital investment, so research on predicting the wax deposition rate is significant for the safe and economical operation of an oil pipeline. This paper adopts the radial basis function neural network to predict the wax deposition rate from four main influencing factors selected by the grey correlational analysis method: the pipe wall temperature gradient, the pipe wall wax crystal solubility coefficient, the pipe wall shear stress and the crude oil viscosity. MATLAB software is employed to establish the RBF neural network. Compared with the previous literature, favorable consistency exists between the predicted outcomes and the experimental results, with a relative error of 1.5%. It can be concluded that the prediction method for the wax deposition rate based on the RBF neural network is feasible.
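
The paper builds its model in MATLAB; as an illustration only, an interpolating RBF network with Gaussian basis functions can be written in a few lines. The inputs and target values below are hypothetical normalized data, not the paper's measurements:

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

class RBFNet:
    """Interpolating RBF network: centers at the training points,
    Gaussian basis phi(r) = exp(-(r/width)^2)."""
    def __init__(self, X, y, width=1.0):
        self.X, self.width = X, width
        Phi = [[self._phi(a, b) for b in X] for a in X]
        self.w = gauss_solve(Phi, y)
    def _phi(self, a, b):
        r2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-r2 / self.width ** 2)
    def predict(self, x):
        return sum(w * self._phi(x, c) for w, c in zip(self.w, self.X))

# Hypothetical normalized inputs: (temperature gradient, solubility
# coefficient, shear stress, viscosity) → wax deposition rate
X = [(0.1, 0.2, 0.3, 0.4), (0.5, 0.4, 0.6, 0.2), (0.9, 0.7, 0.2, 0.8)]
y = [0.15, 0.40, 0.70]
net = RBFNet(X, y, width=1.0)
```

A practical model would use far more training points, cross-validated basis widths and regularized weights; this sketch only shows the structure of the Gaussian-basis expansion.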

  7. Simple models for predicting leaf area of mango (Mangifera indica L.)

    Directory of Open Access Journals (Sweden)

    Maryam Ghoreishi

    2012-01-01

    Mango (Mangifera indica L.), one of the most popular tropical fruits, is cultivated in a considerable part of southern Iran. Leaf area is a valuable parameter in mango research, especially in plant physiology and nutrition studies. Most available methods for estimating plant leaf area are difficult to apply, expensive and destructive, which can damage the canopy and consequently make it difficult to perform further tests on the same plant. Therefore, a non-destructive method that is simple and inexpensive and yields an accurate estimate of leaf area would greatly benefit researchers. A regression analysis was performed to determine the relationship between leaf area and leaf width, leaf length, and dry and fresh weight. For this purpose, 50 mango seedlings of local selections were randomly taken from a nursery in Hormozgan province, and different parts of the plants were separated in the laboratory. Leaf area was measured by several methods, including a leaf area meter, a planimeter and a ruler (length and width), and the fresh and dry weights of leaves were also measured. The best regression models were selected statistically using the determination coefficient, maximum error, model efficiency, root mean square error and coefficient of residual mass. Overall, based on the regression equations, a satisfactory estimate of leaf area was obtained by measuring the non-destructive parameters, i.e. the number of leaves per seedling and the length of the longest and the width of the widest leaf (R2 = 0.88), and also the destructive parameters, i.e. dry weight (R2 = 0.94) and fresh weight (R2 = 0.94) of leaves.
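
The non-destructive models reduce to ordinary least squares on simple leaf measurements. A sketch regressing leaf area on the length-by-width product, with invented calibration values (the study's own coefficients are not reproduced here):

```python
def fit_line(x, y):
    """Ordinary least squares fit y ≈ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical calibration sample: leaf length*width (cm^2) vs measured area
lw   = [20.0, 35.0, 50.0, 65.0, 80.0]
area = [14.2, 24.9, 35.1, 45.8, 56.0]
a, b = fit_line(lw, area)
estimate = lambda length, width: a + b * (length * width)
```

Once calibrated on a destructive sample, the fitted line lets later leaf areas be estimated from a ruler alone, which is exactly the appeal of the non-destructive approach.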

  8. Using Monte Carlo/Gaussian Based Small Area Estimates to Predict Where Medicaid Patients Reside.

    Science.gov (United States)

    Behrens, Jess J; Wen, Xuejin; Goel, Satyender; Zhou, Jing; Fu, Lina; Kho, Abel N

    2016-01-01

    Electronic Health Records (EHR) are rapidly becoming accepted as tools for planning and population health 1,2 . With the national dialogue around Medicaid expansion 12 , the role of EHR data has become even more important. For their potential to be fully realized and to contribute to these discussions, techniques for creating accurate small area estimates are vital. As such, we examined the efficacy of developing small area estimates for Medicaid patients in two locations, Albuquerque and Chicago, by using a Monte Carlo/Gaussian technique that has worked in accurately locating registered voters in North Carolina 11 . The Albuquerque data, which include patient addresses, will first be used to assess the accuracy of the methodology. Subsequently, they will be combined with the EHR data from Chicago to develop a regression that predicts Medicaid patients by US Block Group. We seek to create a tool that is effective in translating EHR data's potential for population health studies.

  9. Distributed Model Predictive Load Frequency Control of Multi-area Power System with DFIGs

    Institute of Scientific and Technical Information of China (English)

    Yi Zhang; Xiangjie Liu; Bin Qu

    2017-01-01

    Reliable load frequency control (LFC) is crucial to the operation and design of modern electric power systems. Considering the LFC problem of a four-area interconnected power system with wind turbines, this paper presents a distributed model predictive control (DMPC) based on a coordination scheme. The proposed algorithm solves a series of local optimization problems to minimize a performance objective for each control area. The generation rate constraints (GRCs), load disturbance changes, and the wind speed constraints are considered. Furthermore, the DMPC algorithm may effectively reduce the impact of the randomness and intermittence of the wind turbines. A performance comparison between the proposed controller with and without the participation of the wind turbines is carried out. Analysis and simulation results show possible improvements in closed-loop performance and computational burden under the physical constraints.

  10. Prediction of interfacial area transport in a scaled 8×8 BWR rod bundle

    Energy Technology Data Exchange (ETDEWEB)

    Yang, X.; Schlegel, J.P.; Liu, Y.; Paranjape, S.; Hibiki, T.; Ishii, M. [School of Nuclear Engineering, Purdue University, 400 Central Dr., West Lafayette, IN 47907-2017 (United States); Bajorek, S.; Ireland, A. [U.S. Nuclear Regulatory Commission, Washington, DC 20555-0001 (United States)

    2016-12-15

    In the two-fluid model, it is important to give an accurate prediction of the interfacial area concentration. In order to achieve this goal, the interfacial area transport equation (IATE) has been developed. This study focuses on benchmarking IATE performance in a rod bundle geometry. A set of interfacial area concentration source and sink term models is proposed for a rod bundle geometry based on the confined channel IATE model, which was selected as a basis because of the relative similarity of the two geometries. The new model is benchmarked against interfacial area concentration data from an 8×8 rod bundle test section which has been scaled from an actual BWR fuel bundle. The model shows good agreement in bubbly and cap-bubbly flows, which are similar in many types of geometries, while it shows some discrepancy in the churn-turbulent flow regime. This discrepancy may be due to the geometrical differences between the actual rod bundle test facility and the facility used to collect the data against which the original source and sink models were benchmarked.

  11. The Comparison Study of Short-Term Prediction Methods to Enhance the Model Predictive Controller Applied to Microgrid Energy Management

    Directory of Open Access Journals (Sweden)

    César Hernández-Hernández

    2017-06-01

    Electricity load forecasting, optimal power system operation and energy management play key roles that can bring significant operational advantages to microgrids. This paper studies how methods based on time series and neural networks can be used to predict energy demand and production, allowing them to be combined with model predictive control. Comparisons of different prediction methods and different optimum energy distribution scenarios are provided, permitting us to determine when short-term energy prediction models should be used. The proposed prediction models in addition to the model predictive control strategy appear as a promising solution to energy management in microgrids. The controller has the task of performing the management of electricity purchase and sale to the power grid, maximizing the use of renewable energy sources and managing the use of the energy storage system. Simulations were performed with different weather conditions of solar irradiation. The obtained results are encouraging for future practical implementation.
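
A minimal example of the kind of short-term comparison described, pitting simple exponential smoothing against naive persistence on an invented demand series:

```python
def exp_smooth_forecast(series, alpha=0.5):
    """One-step-ahead forecasts from simple exponential smoothing."""
    level = series[0]
    forecasts = []
    for obs in series[1:]:
        forecasts.append(level)          # forecast issued before seeing obs
        level = alpha * obs + (1 - alpha) * level
    return forecasts

def mae(forecasts, actual):
    """Mean absolute error of a forecast sequence."""
    return sum(abs(f - a) for f, a in zip(forecasts, actual)) / len(forecasts)

# Hypothetical hourly microgrid demand (kW); compare smoothing with the
# naive persistence forecast (next value = current value)
demand = [5.0, 5.2, 5.6, 6.1, 6.0, 5.8, 5.5]
smooth = exp_smooth_forecast(demand, alpha=0.6)
naive = demand[:-1]
err_smooth = mae(smooth, demand[1:])
err_naive = mae(naive, demand[1:])
```

On this short toy series persistence happens to score the lower error, which illustrates why the choice of short-term predictor feeding the model predictive controller is worth testing rather than assuming.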

  12. Gathering Data in the Health Area: Capture-Recapture Method

    Directory of Open Access Journals (Sweden)

    Isil Irem Budakoglu

    2008-02-01

    Knowledge of the frequency of diseases, accidents and other health-related conditions constitutes the basis of the control and intervention programs applied in public health. Measures of the frequency of these conditions, and the data needed to determine them, are therefore required. Even when a country's surveillance or registration system is very good, it can be insufficient to capture some health-related conditions; in fact, many countries cannot determine even their basic figures, such as the numbers of births and deaths. There are many methods for collecting health data, such as registration systems, surveys, etc. Another method, which has recently come into use in epidemiology, is called the capture-recapture method. [TAF Prev Med Bull. 2008; 7(1): 75-80]
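
The two-source version of the method is the Lincoln-Petersen estimator; shown here in Chapman's nearly unbiased form, with invented registry counts:

```python
def lincoln_petersen(n1, n2, m):
    """Chapman's nearly unbiased form of the Lincoln-Petersen two-source
    capture-recapture estimator: n1 cases in source A, n2 in source B,
    m found in both."""
    if m == 0:
        raise ValueError("sources share no cases; the total cannot be estimated")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical example: 150 cases in a hospital registry, 120 in a
# notification registry, 60 appearing in both
estimated_total = lincoln_petersen(150, 120, 60)
```

The intuition is that the overlap fraction between the two incomplete lists reveals how much of the true population each list misses; the estimate here (about 299 cases) exceeds the 210 unique cases actually observed.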

  13. GPS surveying method applied to terminal area navigation flight experiments

    Energy Technology Data Exchange (ETDEWEB)

    Murata, M; Shingu, H; Satsushima, K; Tsuji, T; Ishikawa, K; Miyazawa, Y; Uchida, T [National Aerospace Laboratory, Tokyo (Japan)

    1993-03-01

    With the objective of evaluating the accuracy of new landing and navigation systems, such as the microwave landing guidance system and the global positioning satellite (GPS) system, flight experiments are being carried out using an experimental aircraft. This aircraft carries a GPS receiver, and its accuracy is evaluated by comparing the navigation results with reference orbits estimated by a Kalman filter from laser tracking data on the aircraft. The GPS outputs position and speed information in an earth-centered, earth-fixed system called the World Geodetic System 1984 (WGS84). However, in order to compare the navigation results with the output from a reference orbit sensor or another navigation sensor, it is necessary to construct a high-precision reference coordinate system based on the WGS84. A method that applies GPS phase interference measurement to this problem was proposed and used in analyzing flight experiment data. In a case where the method was applied to evaluating stand-alone navigation accuracy, it was verified to be sufficiently effective and reliable not only for navigation analysis but also for navigational operations. 12 refs., 10 figs., 5 tabs.

  14. Prediction of adiabatic bubbly flows in TRACE using the interfacial area transport equation

    International Nuclear Information System (INIS)

    Talley, J.; Worosz, T.; Kim, S.; Mahaffy, J.; Bajorek, S.; Tien, K.

    2011-01-01

    The conventional thermal-hydraulic reactor system analysis codes utilize a two-field, two-fluid formulation to model two-phase flows. To close this model, static flow regime transition criteria and algebraic relations are utilized to estimate the interfacial area concentration (a_i). To better reflect the continuous evolution of two-phase flow, an experimental version of TRACE is being developed which implements the interfacial area transport equation (IATE) to replace the flow regime based approach. Dynamic estimation of a_i is provided through the use of mechanistic models for bubble coalescence and disintegration. To account for the differences in bubble interactions and drag forces, two-group bubble transport is sought. As such, Group 1 accounts for the transport of spherical and distorted bubbles, while Group 2 accounts for the cap, slug, and churn-turbulent bubbles. Based on this categorization, a two-group IATE applicable to the range of dispersed two-phase flows has been previously developed. Recently, a one-group, one-dimensional, adiabatic IATE has been implemented into the TRACE code with mechanistic models accounting for: (1) bubble breakup due to turbulent impact of an eddy on a bubble, (2) bubble coalescence due to random collision driven by turbulent eddies, and (3) bubble coalescence due to the acceleration of a bubble in the wake region of a preceding bubble. To demonstrate the enhancement of the code's capability using the IATE, experimental data for a_i, void fraction, and bubble velocity measured by a multi-sensor conductivity probe are compared to both the IATE and flow regime based predictions. In total, 50 air-water vertical co-current upward and downward bubbly flow conditions in pipes with diameters ranging from 2.54 to 20.32 cm are evaluated. It is found that TRACE, using the conventional flow regime relation, always underestimates a_i. Moreover, the axial trend of the a_i prediction is always quasi-linear because a_i in the

  15. Skill forecasting from different wind power ensemble prediction methods

    International Nuclear Information System (INIS)

    Pinson, Pierre; Nielsen, Henrik A; Madsen, Henrik; Kariniotakis, George

    2007-01-01

    This paper presents an investigation of alternative approaches to providing uncertainty estimates associated with point predictions of wind generation. Focus is given to skill forecasts in the form of prediction risk indices, aiming at giving a comprehensive signal on the expected level of forecast uncertainty. Ensemble predictions of wind generation are used as input. A proposal for the definition of prediction risk indices is given. Such skill forecasts are based on the dispersion of ensemble members for a single prediction horizon, or over a set of successive look-ahead times. It is shown on the test case of a Danish offshore wind farm how prediction risk indices may be related to several levels of forecast uncertainty (and energy imbalances). Wind power ensemble predictions are derived from the transformation of ECMWF and NCEP ensembles of meteorological variables to power, as well as by a lagged average approach alternative. The ability of risk indices calculated from the various types of ensemble forecasts to resolve among situations with different levels of uncertainty is discussed
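
A prediction risk index of the kind proposed can be built from ensemble dispersion. A toy sketch using the coefficient of variation averaged over look-ahead times (one plausible definition, not the paper's exact index):

```python
import statistics

def risk_index(ensemble_by_horizon):
    """Prediction risk index: the coefficient of variation of the ensemble
    members, averaged over a set of successive look-ahead times."""
    cvs = []
    for members in ensemble_by_horizon:
        mean = statistics.mean(members)
        cvs.append(statistics.pstdev(members) / mean)
    return statistics.mean(cvs)

# Toy wind power ensembles (MW): four members at each of three horizons
tight = [[10.0, 10.2, 9.9, 10.1], [11.0, 11.1, 10.9, 11.0], [12.0, 12.1, 11.9, 12.2]]
wide = [[8.0, 12.0, 10.0, 14.0], [7.0, 13.0, 11.0, 15.0], [6.0, 12.0, 10.0, 16.0]]
low_risk = risk_index(tight)
high_risk = risk_index(wide)
```

A tightly clustered ensemble yields a low index (low expected uncertainty), while a widely dispersed ensemble flags situations where large energy imbalances are more likely.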

  16. Improved USLE-K factor prediction: A case study on water erosion areas in China

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2016-09-01

    Soil erodibility (the K-factor) is an essential factor in soil erosion prediction and conservation practice. The major obstacles to accurate, large-scale soil erodibility estimation are the lack of necessary data on soil characteristics and the misuse of K-factor calculators. In this study, we assessed the performance of the available erodibility estimators, the Universal Soil Loss Equation (USLE), the Revised Universal Soil Loss Equation (RUSLE), the Erosion Productivity Impact Calculator (EPIC) and the geometric mean diameter based (Dg) model, for different geographic regions based on the Chinese soil erodibility database (CSED). Results showed that previous estimators overestimated almost all K-values. Furthermore, only the USLE and Dg approaches could be directly and reliably applied to the black soil and loess regions. Based on nonlinear best fitting techniques, we improved soil erodibility prediction by combining Dg and soil organic matter (SOM). The NSE, R2 and RE values were 0.94, 0.67 and 9.5% after calibrating the results independently; similar model performance was shown for the validation process. The results obtained via the proposed approach were more accurate than the former K-value predictions. Moreover, these improvements allowed us to establish a regional soil erodibility map (1:250,000 scale) of the water erosion areas in China. The mean K-value of the Chinese water erosion regions was 0.0321 t ha h (ha MJ mm)-1 with a standard deviation of 0.0107 t ha h (ha MJ mm)-1; K-values show a decreasing trend from north to south in the water erosion areas of China. The resulting soil erodibility dataset also corresponded satisfactorily to former K-values at different scales (local, regional and national).
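
For reference, the Dg-based estimator is commonly quoted (after Romkens et al.) as K = 0.0034 + 0.0405 exp[-0.5((log10 Dg + 1.659)/0.7101)^2]; the coefficients below should be treated as an assumption to verify against the original source:

```python
import math

def k_dg(dg_mm):
    """Geometric-mean-diameter (Dg) soil erodibility model, with the
    coefficients commonly quoted after Romkens et al. (verify against the
    source). Dg in mm; K in t ha h (ha MJ mm)^-1."""
    return 0.0034 + 0.0405 * math.exp(
        -0.5 * ((math.log10(dg_mm) + 1.659) / 0.7101) ** 2)

k_silt = k_dg(0.025)   # a fine-textured soil, Dg = 0.025 mm
k_sand = k_dg(1.0)     # a coarse soil, Dg = 1.0 mm
```

The lognormal shape makes silty soils (Dg near 0.02-0.03 mm) the most erodible, with K falling off for both coarser and finer textures, which matches the behaviour the study exploits when combining Dg with SOM.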

  17. Limited-area short-range ensemble predictions targeted for heavy rain in Europe

    Directory of Open Access Journals (Sweden)

    K. Sattler

    2005-01-01

    Inherent uncertainties in short-range quantitative precipitation forecasts (QPF) from the high-resolution, limited-area numerical weather prediction model DMI-HIRLAM (LAM) are addressed using two different approaches to creating a small ensemble of LAM simulations, with focus on the prediction of extreme rainfall events over European river basins. The first ensemble type is designed to represent uncertainty in the atmospheric state of the initial condition and at the lateral LAM boundaries. The global ensemble prediction system (EPS) from ECMWF serves as host model to the LAM and provides the state perturbations, from which a small set of significant members is selected. The significance is estimated on the basis of accumulated precipitation over a target area of interest, which contains the river basin(s) under consideration. The selected members provide the initial and boundary data for the ensemble integration in the LAM. A second ensemble approach tries to address a portion of the model-inherent uncertainty responsible for errors in the forecasted precipitation field by utilising different parameterisation schemes for condensation and convection in the LAM. Three periods around historical heavy rain events that caused or contributed to disastrous river flooding in Europe are used to study the performance of the LAM ensemble designs. The three cases exhibit different dynamic and synoptic characteristics and provide an indication of the ensemble qualities in different weather situations. Precipitation analyses from the Deutsche Wetterdienst (DWD) are used as the verifying reference and a comparison of daily rainfall amounts is referred to the respective river basins of the historical cases.

  18. Model to predict radiological consequences of transportation accidents involving dispersal of radioactive material in urban areas

    International Nuclear Information System (INIS)

    Taylor, J.M.; Daniel, S.L.

    1978-01-01

    The analysis of accidental releases of radioactive material which may result from transportation accidents in high-density urban areas is influenced by several urban characteristics which make computer simulation the calculational method of choice. These urban features fall into four categories. Each of these categories contains time- and location-dependent parameters which must be coupled to the actual time and location of the release in the calculation of the anticipated radiological consequences. Due to the large number of dependent parameters, a computer model, METRAN, has been developed to quantify these radiological consequences. Rather than attempt to describe an urban area as a single entity, a specific urban area is subdivided into a set of cells of fixed size to permit more detailed characterization. Initially, the study area is subdivided into a set of 2-dimensional cells. A uniform set of time-dependent physical characteristics describing the land use, population distribution, traffic density, etc., within each cell is then computed from various data sources. The METRAN code incorporates several details of urban areas. A principal limitation of the analysis is the limited availability of accurate information to use as input data. Although the code was originally developed to analyze dispersal of radioactive material, it is currently being evaluated for use in analyzing the effects of dispersal of other hazardous materials in both urban and rural areas.

  19. Fault diagnosis method for area gamma monitors in Nuclear Facilities

    International Nuclear Information System (INIS)

    Srinivas Reddy, P.; Amudhu Ramesh Kumar, R.; Geo Mathews, M.; Amarendra, G.

    2016-01-01

    Area Gamma Monitors (AGMs) using Geiger-Muller (GM) counters are deployed in nuclear facilities for the detection of gamma radiation. The AGMs display the dose rate locally and in the Data Acquisition System (DAS) at the central monitoring station. They also provide local visual and audio alarms in case the dose rate exceeds the alarm set point. Regular surveillance checking, testing and calibration of AGMs are mandatory as per safety guidelines. This paper describes a method for quickly testing AGMs without using a radioactive source. Four-point High Voltage (HV) and Discriminator Bias (DB) voltage characteristics are used to diagnose the state of health of the GM counter. The HV and DB voltage profiles are applied during testing of the AGMs.

  20. An improved method for predicting brittleness of rocks via well logs in tight oil reservoirs

    Science.gov (United States)

    Wang, Zhenlin; Sun, Ting; Feng, Cheng; Wang, Wei; Han, Chuang

    2018-06-01

    There can be no industrial oil production in tight oil reservoirs until fracturing is undertaken. Under such conditions, the brittleness of the rocks is a very important factor. However, it has so far been difficult to predict. In this paper, the selected study area is the tight oil reservoirs of the Lucaogou formation (Permian), Jimusaer sag, Junggar basin. Based on the transformation of dynamic and static rock mechanics parameters and the correction for confining pressure, an improved method is proposed for quantitatively predicting the brittleness of rocks via well logs in tight oil reservoirs. First, 19 typical tight oil core samples are selected in the study area. Their static Young’s modulus, static Poisson’s ratio and petrophysical parameters are measured. In addition, the static brittleness indices of four other tight oil cores are measured under different confining pressure conditions. Second, the dynamic Young’s modulus, Poisson’s ratio and brittleness index are calculated from the compressional and shear wave velocities. By combining the measured and calculated results, the transformation model between the dynamic and static brittleness index is built, based on the influence of porosity and clay content. Comparison of the predicted brittleness indices with the measured results shows that the model has high accuracy. Third, on the basis of the experimental data under different confining pressure conditions, an amplifying factor of the brittleness index is proposed to correct for the influence of confining pressure on the brittleness index. Finally, the above improved models are applied to formation evaluation via well logs. Compared with the results before correction, the results of the improved models agree better with the experimental data, which indicates that the improved models have better application effects. The brittleness index prediction method for tight oil reservoirs is improved in this research. It is of great importance in the optimization of
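The dynamic moduli mentioned above are conventionally computed from compressional and shear wave velocities and bulk density via the standard isotropic-elasticity relations. The sketch below uses those relations plus a commonly used normalized brittleness index; the paper's own brittleness definition, transformation model and confining-pressure correction are not given in the abstract, so the normalization bounds here are placeholders:

```python
def dynamic_moduli(vp, vs, rho):
    """Dynamic elastic moduli from P-wave velocity vp (m/s), S-wave velocity
    vs (m/s) and bulk density rho (kg/m^3), assuming isotropic elasticity."""
    vp2, vs2 = vp * vp, vs * vs
    nu = (vp2 - 2.0 * vs2) / (2.0 * (vp2 - vs2))            # dynamic Poisson's ratio
    e = rho * vs2 * (3.0 * vp2 - 4.0 * vs2) / (vp2 - vs2)   # dynamic Young's modulus, Pa
    return e, nu

def brittleness_index(e, nu, e_min, e_max, nu_min, nu_max):
    """Normalized brittleness on a 0-100 scale: stiff rocks with low
    Poisson's ratio score high. The min/max bounds are calibration
    placeholders, not values from the paper."""
    en = (e - e_min) / (e_max - e_min)
    nn = (nu_max - nu) / (nu_max - nu_min)
    return 50.0 * (en + nn)
```

A typical tight-rock input (vp ≈ 4500 m/s, vs ≈ 2700 m/s, rho ≈ 2600 kg/m³) yields a Poisson's ratio near 0.22 and a Young's modulus of a few tens of GPa.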

  1. 3D transient model to predict temperature and ablated areas during laser processing of metallic surfaces

    Directory of Open Access Journals (Sweden)

    Babak. B. Naghshine

    2017-02-01

    Full Text Available Laser processing is one of the most popular small-scale patterning methods and has many applications in semiconductor device fabrication and biomedical engineering. Numerical modelling of this process can be used for better understanding of the process, optimization, and predicting the quality of the final product. An accurate 3D model is presented here for short laser pulses that can predict the ablation depth and temperature distribution on any section of the material in a minimal amount of time. In this transient model, variations of thermal properties, plasma shielding, and phase change are considered. Ablation depth was measured using a 3D optical profiler. Calculated depths are in good agreement with measured values on laser treated titanium surfaces. The proposed model can be applied to a wide range of materials and laser systems.

  2. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized for predictive model parameter optimization. An auto-regressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictive values of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused result is less than that of any single model, so prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz and Mackey–Glass time series. The simulation results show that the prediction method in this paper achieves better prediction accuracy.
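The decomposition step can be sketched with a one-level Haar wavelet transform (the abstract does not say which wavelet the paper uses); the LSSVM and ARIMA models would then be fitted to the approximation and detail series separately, and their forecasts recombined:

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_decompose(x):
    """One-level Haar DWT: returns (approximation, detail).
    len(x) must be even."""
    approx = [(x[i] + x[i + 1]) / SQRT2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / SQRT2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse one-level Haar DWT (perfect reconstruction)."""
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / SQRT2)
        x.append((a - d) / SQRT2)
    return x
```

The approximation series carries the smooth trend (suited to a smooth regression model such as LSSVM) while the detail series carries the high-frequency residual (suited to ARIMA), which is the division of labor the paper describes.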

  3. Vegetation cover, tidal amplitude and land area predict short-term marsh vulnerability in Coastal Louisiana

    Science.gov (United States)

    Schoolmaster, Donald; Stagg, Camille L.; Sharp, Leigh Anne; McGinnis, Tommy S.; Wood, Bernard; Piazza, Sarai

    2018-01-01

    The loss of coastal marshes is a topic of great concern, because these habitats provide tangible ecosystem services and are at risk from sea-level rise and human activities. In recent years, significant effort has gone into understanding and modeling the relationships between the biological and physical factors that contribute to marsh stability. Simulation-based process models suggest that marsh stability is the product of a complex feedback between sediment supply, flooding regime and vegetation response, resulting in elevation gains sufficient to match the combination of relative sea-level rise and losses from erosion. However, there have been few direct, empirical tests of these models, because long-term datasets that have captured sufficient numbers of marsh loss events in the context of a rigorous monitoring program are rare. We use a multi-year data set collected by the Coastwide Reference Monitoring System (CRMS) that includes transitions of monitored vegetation plots to open water to build and test a predictive model of near-term marsh vulnerability. We found that, despite the conclusions of previous process models, elevation change had no ability to predict the transition of vegetated marsh to open water. However, the processes that drive elevation change were significant predictors of transitions. Specifically, vegetation cover in the prior year, land area in the surrounding 1 km2 (an estimate of marsh fragmentation), and the interaction of tidal amplitude and position in the tidal frame were all significant factors predicting marsh loss. This suggests that 1) elevation change is likely a better predictor of marsh loss at time scales longer than those considered in this study, and 2) the significant predictive factors affect marsh vulnerability through pathways other than elevation change, such as resistance to erosion. 
In addition, we found that, while sensitivity of marsh vulnerability to the predictive factors varied spatially across coastal Louisiana
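A plot-level vulnerability model of the kind described can be sketched as a logistic regression on the reported predictors. The coefficients below are hypothetical illustrations, not the fitted CRMS values:

```python
import math

def marsh_loss_probability(veg_cover, land_frac, tidal_amp, tidal_pos, coef):
    """Logistic model of a vegetation plot transitioning to open water.
    Predictors follow the abstract: prior-year vegetation cover (0-1),
    land fraction in the surrounding 1 km^2 (0-1), and a tidal amplitude x
    position-in-tidal-frame interaction. `coef` holds hypothetical fitted
    coefficients (intercept, b_veg, b_land, b_tide)."""
    b0, b_veg, b_land, b_tide = coef
    z = b0 + b_veg * veg_cover + b_land * land_frac + b_tide * (tidal_amp * tidal_pos)
    return 1.0 / (1.0 + math.exp(-z))
```

With negative coefficients on vegetation cover and land fraction, the model reproduces the qualitative finding: sparsely vegetated, fragmented plots are predicted to be more vulnerable.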

  4. Improved Storm Monitoring and Prediction for the San Francisco Bay Area

    Science.gov (United States)

    Cifelli, R.; Chandrasekar, V.; Anderson, M.; Davis, G.

    2017-12-01

    The Advanced Quantitative Precipitation Information (AQPI) System is a multi-faceted project to improve precipitation and hydrologic monitoring, prediction, and decision support for the San Francisco Bay Area. The Bay Area faces a multitude of threats from extreme events, including transportation disrupted by flooded roads and railroad lines, water management challenges related to storm water, river and reservoir management, and storm-related damage demanding emergency response. The threats occur on spatial scales ranging from local communities to the entire region and time scales ranging from hours to days. These challenges will be exacerbated by future sea level rise, more extreme weather events and increased vulnerabilities. AQPI is a collaboration of federal, state and local governments with assistance from the research community. Led by NOAA's Earth System Research Laboratory, in partnership with the Cooperative Institute for Research in the Atmosphere, USGS, and Scripps, AQPI is a four-year effort funded in part by a grant from the California Department of Water Resources' Integrated Regional Water Management Program. The Sonoma County Water Agency is serving as the local sponsor of the project. Other local participants include the Santa Clara Valley Water District, San Francisco Public Utilities Commission, and the Bay Area Flood Protection Agencies Association. AQPI will provide both improved observing capabilities and a suite of numerical forecast models to produce accurate and timely information for the benefit of flood management, emergency response, water quality, ecosystem services, water supply and transportation management for the Bay Area. The resulting information will support decision making to mitigate flood risks, secure water supplies, minimize water quality impacts to the Bay from combined sewer overflows, and improve lead time on coastal and Bay inundation from extreme storms like Atmospheric Rivers (ARs). The project is expected to

  5. PROXIMAL: a method for Prediction of Xenobiotic Metabolism.

    Science.gov (United States)

    Yousofshahi, Mona; Manteiga, Sara; Wu, Charmian; Lee, Kyongbum; Hassoun, Soha

    2015-12-22

    Contamination of the environment with bioactive chemicals has emerged as a potential public health risk. These substances, which may cause distress or disease in humans, can be found in air, water and food supplies. An open question is whether these chemicals transform into potentially more active or toxic derivatives via xenobiotic metabolizing enzymes expressed in the body. We present a new prediction tool, which we call PROXIMAL (Prediction of Xenobiotic Metabolism), for identifying possible transformation products of xenobiotic chemicals in the liver. Using reaction data from DrugBank and KEGG, PROXIMAL builds look-up tables that catalog the sites and types of structural modifications performed by Phase I and Phase II enzymes. Given a compound of interest, PROXIMAL searches for substructures that match the sites cataloged in the look-up tables, applies the corresponding modifications to generate a panel of possible transformation products, and ranks the products based on the activity and abundance of the enzymes involved. PROXIMAL generates transformations that are specific to the chemical of interest by analyzing the chemical's substructures. We evaluate the accuracy of PROXIMAL's predictions through case studies on two environmental chemicals with suspected endocrine-disrupting activity, bisphenol A (BPA) and 4-chlorobiphenyl (PCB3). Comparisons with published reports confirm 5 out of 7 and 17 out of 26 of the predicted derivatives for BPA and PCB3, respectively. We also compare biotransformation predictions generated by PROXIMAL with those generated by METEOR and Metaprint2D-react, two other prediction tools. PROXIMAL can predict transformations of chemicals that contain substructures recognizable by human liver enzymes. It also has the ability to rank the predicted metabolites based on the activity and abundance of the enzymes involved in xenobiotic transformation.
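The look-up-table mechanic can be sketched as follows. The enzyme names are real Phase I/II enzymes, but the substructure keys, rule entries and activity weights are invented for illustration; PROXIMAL itself matches actual chemical substructures derived from DrugBank/KEGG reaction data:

```python
# Hypothetical rule table: substructure key -> list of
# (enzyme, transformation label, relative activity weight).
RULES = {
    "phenol-OH": [("UGT1A1", "O-glucuronidation", 0.9),
                  ("SULT1A1", "O-sulfation", 0.6)],
    "aromatic-CH": [("CYP1A2", "aromatic hydroxylation", 0.7)],
}

def predict_transformations(substructures):
    """Match a compound's substructures against the rule table and rank the
    candidate transformation products by enzyme activity (descending)."""
    candidates = []
    for sub in substructures:
        for enzyme, transform, activity in RULES.get(sub, []):
            candidates.append((activity, enzyme, transform, sub))
    return sorted(candidates, reverse=True)
```

For a BPA-like input carrying both a phenolic hydroxyl and aromatic C-H sites, the ranking places the most active enzyme's transformation first.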

  6. Computer-aided method of airborne uranium in working areas

    International Nuclear Information System (INIS)

    Dagen, E.; Ringel, V.; Rossbach, H.

    1981-09-01

    The described procedure allows the routine determination of uranium aerosols with low personnel and technical effort. The activity deposited on the filters is measured automatically twice a night. The computerized evaluation, including the elimination of radon and thoron daughter products, is made off-line with the aid of the code ULK1. The results are available at the beginning of the following working day and can be used for radiation protection planning. The sensitivity of the method of eliminating the airborne natural activity is 4 times less than that of measurements made after its complete decay. This, however, is not significant for radiation protection purposes.

  7. Development of wide area environment accelerator operation and diagnostics method

    Science.gov (United States)

    Uchiyama, Akito; Furukawa, Kazuro

    2015-08-01

    Remote operation and diagnostic systems for particle accelerators have been developed for beam operation and maintenance in various situations. Even though fully remote experiments are not necessary, the remote diagnosis and maintenance of the accelerator is required. Considering remote-operation operator interfaces (OPIs), the use of standard protocols such as the hypertext transfer protocol (HTTP) is advantageous, because system-dependent protocols are unnecessary between the remote client and the on-site server. Here, we have developed a client system based on WebSocket, which is a new protocol provided by the Internet Engineering Task Force for Web-based systems, as a next-generation Web-based OPI using the Experimental Physics and Industrial Control System Channel Access protocol. As a result of this implementation, WebSocket-based client systems have become available for remote operation. Also, as regards practical application, the remote operation of an accelerator via a wide area network (WAN) faces a number of challenges, e.g., the accelerator has both experimental device and radiation generator characteristics. Any error in remote control system operation could result in an immediate breakdown. Therefore, we propose the implementation of an operator intervention system for remote accelerator diagnostics and support that can obviate any differences between the local control room and remote locations. Here, remote-operation Web-based OPIs, which resolve security issues, are developed.

  8. Development of wide area environment accelerator operation and diagnostics method

    Directory of Open Access Journals (Sweden)

    Akito Uchiyama

    2015-08-01

    Full Text Available Remote operation and diagnostic systems for particle accelerators have been developed for beam operation and maintenance in various situations. Even though fully remote experiments are not necessary, the remote diagnosis and maintenance of the accelerator is required. Considering remote-operation operator interfaces (OPIs), the use of standard protocols such as the hypertext transfer protocol (HTTP) is advantageous, because system-dependent protocols are unnecessary between the remote client and the on-site server. Here, we have developed a client system based on WebSocket, which is a new protocol provided by the Internet Engineering Task Force for Web-based systems, as a next-generation Web-based OPI using the Experimental Physics and Industrial Control System Channel Access protocol. As a result of this implementation, WebSocket-based client systems have become available for remote operation. Also, as regards practical application, the remote operation of an accelerator via a wide area network (WAN) faces a number of challenges, e.g., the accelerator has both experimental device and radiation generator characteristics. Any error in remote control system operation could result in an immediate breakdown. Therefore, we propose the implementation of an operator intervention system for remote accelerator diagnostics and support that can obviate any differences between the local control room and remote locations. Here, remote-operation Web-based OPIs, which resolve security issues, are developed.

  9. Cesium residue leachate migration in the tailings management area of a mine site : predicted vs. actual

    Energy Technology Data Exchange (ETDEWEB)

    Solylo, P.; Ramsey, D. [Wardrop Engineering, Winnipeg, MB (Canada). Mining and Minerals Section

    2009-07-01

    This paper reported on a study at a cesium products facility (CPF) that manufactures a non-toxic cesium-formate drilling fluid. The facility operates adjacent to a pollucite/tantalum/spodumene mine. The CPF was developed as a closed system, with the residue tailings slurry from the CPF process discharged to double-lined containment cells. Groundwater monitoring has shown that leachate has affected near-surface porewater quality within the tailings management area (TMA). Elevated concentrations of calcium, sulphate, strontium, cesium, and rubidium were used to identify the leachate. Porewater at the base of the tailings and in the overburden beneath the tailings has not been affected. A geochemical investigation was initiated to determine how the leachate behaves in the groundwater/tailings porewater system. Over the past 7 years of residue placement in the TMA, the footprint of the residue placement area has changed, making the comparison of the predicted versus actual rate of leachate migration very subjective and difficult to quantify. Based solely on the analytical data, the source of the leachate is unknown: it may be either the original residue pile or the 2007 residue placement area. For purposes of long-term residue management, an investigation of the geochemical behaviour of residue leachate in the groundwater/tailings system of the TMA is currently underway. 5 refs., 1 tab., 2 figs.

  10. MLP based models to predict PM10, O3 concentrations, in Sines industrial area

    Science.gov (United States)

    Durao, R.; Pereira, M. J.

    2012-04-01

    Sines is an important Portuguese industrial area located on the southwest coast of Portugal, with important protected natural areas nearby. The main economic activities are related to this industrial area, the deep-water port, and the petrochemical and thermo-electric industries. Nevertheless, tourism is also an important economic activity, especially in summer, with potential to grow. The aim of this study is to develop prediction models of pollutant concentration categories (e.g. low concentration and high concentration) in order to provide early warnings to the competent authorities responsible for air quality management. Knowing in advance that high pollutant concentrations will occur allows the implementation of mitigation actions and the release of precautionary alerts to the population. The regional air quality monitoring network consists of three monitoring stations where a set of pollutant concentrations is registered on a continuous basis. Tropospheric ozone (O3) and particulate matter (PM10) stand out from this set due to the high concentrations occurring in the region and their adverse effects on human health. Moreover, the major industrial plants of the region monitor SO2, NO2 and particle flows emitted at the principal chimneys (point sources), also on a continuous basis. Artificial neural networks (ANNs) were therefore the methodology applied to predict next-day pollutant concentrations; owing to their structure, ANNs have the ability to capture the non-linear relationships between predictor variables. Hence the first step of this study was to apply multivariate exploratory techniques to select the best predictor variables. The classification trees methodology (CART) was revealed to be the most appropriate in this case. Results showed that pollutant atmospheric concentrations mainly depend on industrial emissions and a complex combination of meteorological factors and the time of year. In the second step, the Multi

  11. Predicting Solar Activity Using Machine-Learning Methods

    Science.gov (United States)

    Bobra, M.

    2017-12-01

    Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections. However, we do not, as of yet, fully understand the physical mechanism that triggers solar eruptions. A machine-learning algorithm, which is favorable in cases where the amount of data is large, is one way to [1] empirically determine the signatures of this mechanism in solar image data and [2] use them to predict solar activity. In this talk, we discuss the application of various machine learning algorithms - specifically, a Support Vector Machine, a sparse linear regression (Lasso), and Convolutional Neural Network - to image data from the photosphere, chromosphere, transition region, and corona taken by instruments aboard the Solar Dynamics Observatory in order to predict solar activity on a variety of time scales. Such an approach may be useful since, at the present time, there are no physical models of flares available for real-time prediction. We discuss our results (Bobra and Couvidat, 2015; Bobra and Ilonidis, 2016; Jonas et al., 2017) as well as other attempts to predict flares using machine-learning (e.g. Ahmed et al., 2013; Nishizuka et al. 2017) and compare these results with the more traditional techniques used by the NOAA Space Weather Prediction Center (Crown, 2012). We also discuss some of the challenges in using machine-learning algorithms for space science applications.
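Flare-forecast studies such as those cited are commonly scored with the true skill statistic (TSS), which is insensitive to the strong class imbalance between flaring and non-flaring active regions; a minimal implementation of this standard verification metric:

```python
def true_skill_statistic(tp, fn, fp, tn):
    """TSS = hit rate - false alarm rate, computed from the counts of a
    2x2 contingency table (true/false positives and negatives).
    Ranges from -1 to 1; 1 is a perfect forecast, 0 is no skill."""
    hit_rate = tp / (tp + fn)
    false_alarm_rate = fp / (fp + tn)
    return hit_rate - false_alarm_rate
```

Because both terms are rates within their own class, a classifier cannot inflate its TSS simply by exploiting the rarity of flares, which is why it is preferred over raw accuracy in this literature.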

  12. Mixed price and load forecasting of electricity markets by a new iterative prediction method

    International Nuclear Information System (INIS)

    Amjady, Nima; Daraeepour, Ali

    2009-01-01

    Load and price forecasting are the two key issues for the participants of current electricity markets. However, load and price in electricity markets have complex characteristics such as nonlinearity, non-stationarity and multiple seasonality, to name a few (usually, more volatility is seen in the behavior of the electricity price signal). For these reasons, much research has been devoted to load and price forecasting, especially in recent years. However, previous research works in the area predict load and price signals separately. In this paper, a mixed model for load and price forecasting is presented, which can consider the interactions of these two forecast processes. The mixed model is based on an iterative neural-network-based prediction technique. It is shown that the proposed model produces lower forecast errors for both load and price compared with the previous separate frameworks. Another advantage of the mixed model is that all required forecast features (from load or price) are predicted within the model without assuming known values for these features, so the proposed model can be better adapted to the real conditions of an electricity market. The forecast accuracy of the proposed mixed method is evaluated by means of real data from the New York and Spanish electricity markets. The method is also compared with some of the most recent load and price forecast techniques. (author)

  13. Geostatistical methods for rock mass quality prediction using borehole and geophysical survey data

    Science.gov (United States)

    Chen, J.; Rubin, Y.; Sege, J. E.; Li, X.; Hehua, Z.

    2015-12-01

    For long, deep tunnels, the number of geotechnical borehole investigations during the preconstruction stage is generally limited. Yet tunnels are often constructed in geological structures with complex geometries, and in which the rock mass is fragmented from past structural deformations. Tunnel Geology Prediction (TGP) is a geophysical technique widely used during tunnel construction in China to ensure safety during construction and to prevent geological disasters. In this paper, geostatistical techniques were applied in order to integrate seismic velocity from TGP and borehole information into spatial predictions of RMR (Rock Mass Rating) in unexcavated areas. This approach is intended to apply conditional probability methods to transform seismic velocities to directly observed RMR values. The initial spatial distribution of RMR, inferred from the boreholes, was updated by including geophysical survey data in a co-kriging approach. The method applied to a real tunnel project shows significant improvements in rock mass quality predictions after including geophysical survey data, leading to better decision-making for construction safety design.

  14. Visceral fat area predicts survival in patients with advanced hepatocellular carcinoma treated with tyrosine kinase inhibitors.

    Science.gov (United States)

    Nault, Jean-Charles; Pigneur, Frédéric; Nelson, Anaïs Charles; Costentin, Charlotte; Tselikas, Lambros; Katsahian, Sandrine; Diao, Guoqing; Laurent, Alexis; Mallat, Ariane; Duvoux, Christophe; Luciani, Alain; Decaens, Thomas

    2015-10-01

    Anthropometric measurements have been linked to resistance to anti-angiogenic treatment and survival. Patients with advanced hepatocellular carcinoma treated with sorafenib or brivanib in 2008-2011 were included in this retrospective study. Anthropometric measurements were assessed using computed tomography and were correlated with drug toxicity, radiological response, and overall survival. 52 patients were included, Barcelona Clinic Liver Classification B (38%) and C (62%), with a mean α-fetoprotein value of 29,554±85,654 ng/mL and a median overall survival of 10.5 months. Sarcopenia was associated with a greater rate of hand-foot syndrome (P=0.049). Modified Response Evaluation Criteria In Solid Tumours (mRECIST) and Choi criteria were significantly associated with survival, but RECIST criteria were not. An absence of hand-foot syndrome and a high visceral fat area were associated with progressive disease as assessed by RECIST and mRECIST criteria. In multivariate analyses, high visceral fat area (HR=3.6; P=0.002), low lean body mass (HR=2.4; P=0.015), and presence of hand-foot syndrome (HR=1.8; P=0.004) were significantly associated with overall survival. In time-dependent multivariate analyses, only high visceral fat area was associated with survival. Visceral fat area is associated with survival and seems to be a predictive marker for primary resistance to tyrosine kinase inhibitors in patients with advanced hepatocellular carcinoma. Copyright © 2015 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  15. Right Brodmann area 18 predicts tremor arrest after Vim radiosurgery: a voxel-based morphometry study.

    Science.gov (United States)

    Tuleasca, Constantin; Witjas, Tatiana; Van de Ville, Dimitri; Najdenovska, Elena; Verger, Antoine; Girard, Nadine; Champoudry, Jerome; Thiran, Jean-Philippe; Cuadra, Meritxell Bach; Levivier, Marc; Guedj, Eric; Régis, Jean

    2018-03-01

    Drug-resistant essential tremor (ET) can benefit from open standard stereotactic procedures, such as deep-brain stimulation or radiofrequency thalamotomy. Non-surgical candidates can be offered either high-intensity focused ultrasound (HIFU) or radiosurgery (RS). All procedures aim at the same thalamic target, the ventro-intermediate nucleus (Vim). The mechanisms by which tremor stops after Vim RS or HIFU remain unknown. We used voxel-based morphometry (VBM) on pretherapeutic neuroimaging data and assessed which anatomical site would best correlate with tremor arrest 1 year after Vim RS. Fifty-two patients (30 male, 22 female; mean age 71.6 years, range 49-82) with right-sided ET benefited from left unilateral Vim RS in Marseille, France. Targeting was performed in a uniform manner, using 130 Gy and a single 4-mm collimator. Neurological (pretherapeutic and at 1 year) and neuroimaging (baseline) assessments were completed. Tremor score on the treated hand (TSTH) at 1 year after Vim RS was included in a statistical parametric mapping analysis of variance (ANOVA) model as a continuous variable with pretherapeutic neuroimaging data. Pretherapeutic gray matter density (GMD) was further correlated with TSTH improvement. No a priori hypothesis was used in the statistical model. The only statistically significant region was right Brodmann area (BA) 18 (visual association area V2, p = 0.05, cluster size Kc = 71). Higher baseline GMD correlated with better TSTH improvement at 1 year after Vim RS (Spearman's rank correlation coefficient = 0.002). Routine baseline structural neuroimaging predicts TSTH improvement 1 year after Vim RS. The relevant anatomical area is the right visual association cortex (BA 18, V2). The question whether visual areas should be included in the targeting remains open.

  16. Development of a predictive methodology for identifying high radon exhalation potential areas

    International Nuclear Information System (INIS)

    Ielsch, G.

    2001-01-01

    Radon-222 is a radioactive natural gas originating from the decay of radium-226, which itself originates from the decay of uranium-238, naturally present in rocks and soil. Inhalation of radon gas and its decay products is a potential health risk for man. Radon can accumulate in confined environments such as buildings, and is responsible for one third of the total radiological exposure of the general public. The problem of how to manage this risk then arises. The main difficulty encountered is due to the large variability of exposure to radon across the country, so the areas with the highest density of buildings with high radon levels need to be predicted. Exposure to radon varies depending on the degree of confinement of the habitat, the lifestyle of the occupants and, particularly, the emission of radon from the surface of the soil on which the building is built. The purpose of this thesis is to elaborate a methodology for determining areas presenting a high potential for radon exhalation at the surface of the soil. The methodology adopted is based on quantification of radon exhalation at the surface, starting from a precise characterization of the main local geological and pedological parameters that control the radon source and its transport to the ground/atmosphere interface. The methodology proposed is innovative in that it combines a cartographic analysis, parameters integrated into a Geographic Information System, and a simplified model for the vertical transport of radon by diffusion through the pores of the soil. This methodology has been validated on two typical areas, in different geological contexts, and gives forecasts that generally agree with field observations. This makes it possible to identify areas with a high exhalation potential within a range of a few square kilometers. (author)
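The simplified vertical diffusion model is not specified in the abstract. As a textbook one-dimensional sketch only: for a semi-infinite homogeneous soil at steady state with interstitial concentration C(z) = C_inf(1 − e^(−z/L)) and diffusion length L = √(D/λ), the surface exhalation flux is J = ε·D·C_inf/L; all parameter values in the usage below are illustrative, not from the thesis:

```python
import math

RN222_DECAY = 2.1e-6  # s^-1, radon-222 decay constant (half-life ~3.82 days)

def surface_radon_flux(c_inf, d_eff, porosity, lam=RN222_DECAY):
    """Surface radon exhalation flux (Bq m^-2 s^-1) for a semi-infinite,
    homogeneous soil column under steady-state diffusion.

    c_inf    : deep-soil interstitial radon concentration (Bq m^-3)
    d_eff    : effective diffusion coefficient (m^2 s^-1)
    porosity : soil porosity (dimensionless)
    """
    diffusion_length = math.sqrt(d_eff / lam)          # L = sqrt(D / lambda)
    return porosity * d_eff * c_inf / diffusion_length  # J = eps * D * C'(0)
```

The flux scales linearly with the deep-soil concentration and as the square root of D·λ, which is why the source term (radium content, emanation) and soil permeability are the key mapped parameters in such a methodology.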

  17. Main research areas and methods in social entrepreneurship

    Directory of Open Access Journals (Sweden)

    Hadad Shahrazad

    2017-07-01

Full Text Available The main specific objective of this paper is to explore the content of research as well as methodological issues on social entrepreneurship in the context of corporate social economics and entrepreneurship. Therefore, in order to obtain an overview of the research done on this theme, we conducted a literature review using exploratory analysis as the methodology. We focused on studies and articles published in the most important academic periodicals covering management, economics and business. The articles were identified based on the presence of selected keywords in their title, abstract and body: ‘social entrepreneur’, ‘social enterprise’, ‘social entrepreneurship’, ‘corporate social entrepreneurship’ and ‘social economy’. Using this method, articles and studies published from the last decade of the 1990s up to 2015 were selected. We were also interested in international publications on the topic, as well as books that approached social entrepreneurship.

  18. Broca's region and Visual Word Form Area activation differ during a predictive Stroop task

    DEFF Research Database (Denmark)

    Wallentin, Mikkel; Gravholt, Claus Højbjerg; Skakkebæk, Anne

    2015-01-01

    displayed in green or red (incongruent vs congruent colors). One of the colors, however, was presented three times as often as the other, making it possible to study both congruency and frequency effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible...... to study frequency effects across modalities. We found significant behavioral effects of both incongruency and frequency. A significant effect (p effect of frequency was observed and no interaction. Conjoined effects of incongruency...... and frequency were found in parietal regions as well as in the Visual Word Form Area (VWFA). No interaction between perceptual modality and frequency was found in VWFA suggesting that the region is not strictly visual. These findings speak against a strong version of the prediction error processing hypothesis...

  19. Predicting Plasma Glucose From Interstitial Glucose Observations Using Bayesian Methods

    DEFF Research Database (Denmark)

    Hansen, Alexander Hildenbrand; Duun-Henriksen, Anne Katrine; Juhl, Rune

    2014-01-01

    One way of constructing a control algorithm for an artificial pancreas is to identify a model capable of predicting plasma glucose (PG) from interstitial glucose (IG) observations. Stochastic differential equations (SDEs) make it possible to account both for the unknown influence of the continuous...... glucose monitor (CGM) and for unknown physiological influences. Combined with prior knowledge about the measurement devices, this approach can be used to obtain a robust predictive model. A stochastic-differential-equation-based gray box (SDE-GB) model is formulated on the basis of an identifiable...

  20. A comparison of methods of predicting maximum oxygen uptake.

    OpenAIRE

    Grant, S; Corbett, K; Amjad, A M; Wilson, J; Aitchison, T

    1995-01-01

The aim of this study was to compare the results from a Cooper walk run test, a multistage shuttle run test, and a submaximal cycle test with the direct measurement of maximum oxygen uptake on a treadmill. Three predictive tests of maximum oxygen uptake--linear extrapolation of heart rate and VO2 collected from a submaximal cycle ergometer test (predicted L/E), the Cooper 12 min walk run test, and a multi-stage progressive shuttle run test (MST)--were performed by 22 young healthy males (mean...

  1. NOAA ESRI Grid - seafloor hardbottom occurrence predictions model in New York offshore planning area from Biogeography Branch

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents hard bottom occurrence predictions from a spatial model developed for the New York offshore spatial planning area. This model builds upon the...

  2. NOAA ESRI Grid - depth uncertainty predictions in New York offshore planning area from Biogeography Branch bathymetry model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents depth uncertainty predictions from a bathymetric model developed for the New York offshore spatial planning area. The model also includes...

  3. NOAA ESRI Grid - predictions of seabird diversity in the New York offshore planning area made by the NOAA Biogeography Branch

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents seabird diversity predictions from spatial models developed for the New York offshore spatial planning area. This raster was derived from...

  4. Wavefront coherence area for predicting visual acuity of post-PRK and post-PARK refractive surgery patients

    Science.gov (United States)

    Garcia, Daniel D.; van de Pol, Corina; Barsky, Brian A.; Klein, Stanley A.

    1999-06-01

    Many current corneal topography instruments (called videokeratographs) provide an `acuity index' based on corneal smoothness to analyze expected visual acuity. However, post-refractive surgery patients often exhibit better acuity than is predicted by such indices. One reason for this is that visual acuity may not necessarily be determined by overall corneal smoothness but rather by having some part of the cornea able to focus light coherently onto the fovea. We present a new method of representing visual acuity by measuring the wavefront aberration, using principles from both ray and wave optics. For each point P on the cornea, we measure the size of the associated coherence area whose optical path length (OPL), from a reference plane to P's focus, is within a certain tolerance of the OPL for P. We measured the topographies and vision of 62 eyes of patients who had undergone the corneal refractive surgery procedures of photorefractive keratectomy (PRK) and photorefractive astigmatic keratectomy (PARK). In addition to high contrast visual acuity, our vision tests included low contrast and low luminance to test the contribution of the PRK transition zone. We found our metric for visual acuity to be better than all other metrics at predicting the acuity of low contrast and low luminance. However, high contrast visual acuity was poorly predicted by all of the indices we studied, including our own. The indices provided by current videokeratographs sometimes fail for corneas whose shape differs from simple ellipsoidal models. This is the case with post-PRK and post-PARK refractive surgery patients. Our alternative representation that displays the coherence area of the wavefront has considerable advantages, and promises to be a better predictor of low contrast and low luminance visual acuity than current shape measures.

  5. What Predicts Method Effects in Child Behavior Ratings

    Science.gov (United States)

    Low, Justin A.; Keith, Timothy Z.; Jensen, Megan

    2015-01-01

    The purpose of this research was to determine whether child, parent, and teacher characteristics such as sex, socioeconomic status (SES), parental depressive symptoms, the number of years of teaching experience, number of children in the classroom, and teachers' disciplinary self-efficacy predict deviations from maternal ratings in a…

  6. A method for predicting the probability of business network profitability

    NARCIS (Netherlands)

    Johnson, P.; Iacob, Maria Eugenia; Välja, M.; van Sinderen, Marten J.; Magnusson, C; Ladhe, T.

    2014-01-01

    In the design phase of business collaboration, it is desirable to be able to predict the profitability of the business-to-be. Therefore, techniques to assess qualities such as costs, revenues, risks, and profitability have been previously proposed. However, they do not allow the modeler to properly

  7. Statistical tests for equal predictive ability across multiple forecasting methods

    DEFF Research Database (Denmark)

    Borup, Daniel; Thyrsgaard, Martin

    We develop a multivariate generalization of the Giacomini-White tests for equal conditional predictive ability. The tests are applicable to a mixture of nested and non-nested models, incorporate estimation uncertainty explicitly, and allow for misspecification of the forecasting model as well as ...

  8. Genomic breeding value prediction:methods and procedures

    NARCIS (Netherlands)

    Calus, M.P.L.

    2010-01-01

    Animal breeding faces one of the most significant changes of the past decades – the implementation of genomic selection. Genomic selection uses dense marker maps to predict the breeding value of animals with reported accuracies that are up to 0.31 higher than those of pedigree indexes, without the

  9. Link Prediction Methods and Their Accuracy for Different Social Networks and Network Metrics

    Directory of Open Access Journals (Sweden)

    Fei Gao

    2015-01-01

Full Text Available Currently, we are experiencing rapid growth in the number of social-based online systems. The availability of the vast amounts of data gathered in those systems brings new challenges when trying to analyse it. One of the intensively researched topics is the prediction of social connections between users. Although a lot of effort has been made to develop new prediction approaches, the existing methods have not been comprehensively analysed. In this paper we investigate the correlation between network metrics and the accuracy of different prediction methods. We selected six time-stamped real-world social networks and the ten most widely used link prediction methods. The results of the experiments show that the performance of some methods has a strong correlation with certain network metrics. We managed to distinguish “prediction friendly” networks, for which most of the prediction methods give good performance, as well as “prediction unfriendly” networks, for which most of the methods result in high prediction error. Correlation analysis between network metrics and prediction accuracy may form the basis of a meta-learning system that, based on network characteristics, recommends the right prediction method for a given network.
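
    As a concrete illustration of the similarity-based predictors such studies compare, here is a minimal sketch of two classic methods, common neighbours and the Jaccard coefficient, on a toy undirected graph. The graph and scores are illustrative, not data from the paper.

    ```python
    from itertools import combinations

    # Toy undirected graph as an adjacency-set dictionary (symmetric)
    graph = {
        "a": {"b", "c", "d", "e"},
        "b": {"a", "c"},
        "c": {"a", "b", "d"},
        "d": {"a", "c"},
        "e": {"a"},
    }

    def common_neighbours(u, v):
        """Number of neighbours shared by u and v."""
        return len(graph[u] & graph[v])

    def jaccard(u, v):
        """Shared neighbours normalised by the size of the neighbourhood union."""
        union = graph[u] | graph[v]
        return len(graph[u] & graph[v]) / len(union) if union else 0.0

    # Score every non-adjacent pair and rank: the highest-scoring pairs
    # are the predicted future links.
    candidates = [(u, v) for u, v in combinations(graph, 2) if v not in graph[u]]
    ranked = sorted(candidates, key=lambda p: jaccard(*p), reverse=True)
    print(ranked)
    ```

    Here the top-ranked pair is ("b", "d"), since b and d share their entire neighbourhoods; correlating such rankings with the links that actually appear later is what yields the accuracy figures the paper analyses.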

  10. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

    2012-01-01

    PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator

  11. Hybrid Prediction Method for Aircraft Interior Noise, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the project is research and development of methods for application of the Hybrid FE-SEA method to aircraft vibro-acoustic problems. This proposal...

  12. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    Science.gov (United States)

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  13. Prediction of Solvent Physical Properties using the Hierarchical Clustering Method

    Science.gov (United States)

Recently a QSAR (Quantitative Structure Activity Relationship) method, the hierarchical clustering method, was developed to estimate acute toxicity values for large, diverse datasets. This methodology has now been applied to estimate solvent physical properties including sur...

  14. Indoor Radon Concentration Related to Different Radon Areas and Indoor Radon Prediction

    Science.gov (United States)

    Juhásová Šenitková, Ingrid; Šál, Jiří

    2017-12-01

Indoor radon has been observed in buildings in areas with different radon risk potential. Preventive measures are based on control of the main potential radon sources (soil gas, building material and supplied water) to keep new houses below the recommended indoor radon level of 200 Bq/m3. Radon risk (index) estimation of the bedrock at an individual building site when siting a new house, and building protection according to the technical building code, are obligatory. Remedial actions in buildings built in high radon risk areas were carried out principally by unforced ventilation and anti-radon insulation. Significant differences were found in the level of radon concentration between rooms where radon reduction techniques were designed and those where they were not. A mathematical model based on radon exhalation from soil has been developed to describe the physical processes determining indoor radon concentration. The model is focused on combined radon diffusion through the slab and advection through gaps from the sub-slab soil. In this model, radon emanating from building materials is considered not to make a significant contribution to indoor radon concentration. Dimensional analysis and Gauss-Newton nonlinear least squares parametric regression were used to simplify the problem, identify essential input variables and find parameter values. A verification case study is presented for real buildings with various underground construction types. The paper thus outlines a possible mathematical approach to indoor radon concentration prediction.

  15. Identification and Prediction of Large Pedestrian Flow in Urban Areas Based on a Hybrid Detection Approach

    Directory of Open Access Journals (Sweden)

    Kaisheng Zhang

    2016-12-01

Full Text Available Recently, population density has grown quickly with the accelerating pace of urbanization. At the same time, overcrowded situations are more likely to occur in populous urban areas, increasing the risk of accidents. This paper proposes a synthetic approach to recognize and identify large pedestrian flows. In particular, a hybrid pedestrian flow detection model was constructed by analyzing real data from major mobile phone operators in China, including information from smartphones and base stations (BS). With the hybrid model, the Log Distance Path Loss (LDPL) model was used to estimate pedestrian density from raw network data, and information was retrieved with a Gaussian Process (GP) through supervised learning. Temporal-spatial prediction of the pedestrian data was carried out with Machine Learning (ML) approaches. Finally, a case study of a real Central Business District (CBD) scenario in Shanghai, China, using records of millions of cell phone users was conducted. The results showed that the new approach significantly increases the utility and capacity of the mobile network. A more reasonable overcrowding detection and alert system can be developed to improve safety in subway lines and other hotspot landmark areas, such as the Bund, People’s Square or Disneyland, where a large passenger flow generally exists.

  16. Optimal recall from bounded metaplastic synapses: predicting functional adaptations in hippocampal area CA3.

    Directory of Open Access Journals (Sweden)

    Cristina Savin

    2014-02-01

Full Text Available A venerable history of classical work on autoassociative memory has significantly shaped our understanding of several features of the hippocampus, and most prominently of its CA3 area, in relation to memory storage and retrieval. However, existing theories of hippocampal memory processing ignore a key biological constraint affecting memory storage in neural circuits: the bounded dynamical range of synapses. Recent treatments based on the notion of metaplasticity provide a powerful model for individual bounded synapses; however, their implications for the ability of the hippocampus to retrieve memories well and the dynamics of neurons associated with that retrieval are both unknown. Here, we develop a theoretical framework for memory storage and recall with bounded synapses. We formulate the recall of a previously stored pattern from a noisy recall cue and limited-capacity (and therefore lossy) synapses as a probabilistic inference problem, and derive neural dynamics that implement approximate inference algorithms to solve this problem efficiently. In particular, for binary synapses with metaplastic states, we demonstrate for the first time that memories can be efficiently read out with biologically plausible network dynamics that are completely constrained by the synaptic plasticity rule, and the statistics of the stored patterns and of the recall cue. Our theory organises into a coherent framework a wide range of existing data about the regulation of excitability, feedback inhibition, and network oscillations in area CA3, and makes novel and directly testable predictions that can guide future experiments.

  17. Variable importance and prediction methods for longitudinal problems with missing variables.

    Directory of Open Access Journals (Sweden)

    Iván Díaz

Full Text Available We present prediction and variable importance (VIM) methods for longitudinal data sets containing continuous and binary exposures subject to missingness. We demonstrate the use of these methods for prognosis of medical outcomes of severe trauma patients, a field in which current medical practice involves rules of thumb and scoring methods that only use a few variables and ignore the dynamic and high-dimensional nature of trauma recovery. Well-principled prediction and VIM methods can provide a tool to make care decisions informed by the high-dimensional patient's physiological and clinical history. Our VIM parameters are analogous to slope coefficients in adjusted regressions, but are not dependent on a specific statistical model, nor do they require a certain functional form of the prediction regression to be estimated. In addition, under causal and statistical assumptions they can be causally interpreted as the expected outcome under time-specific clinical interventions, related to changes in the mean of the outcome if each individual experiences a specified change in the variable (keeping other variables in the model fixed). Better yet, the targeted MLE used is doubly robust and locally efficient. Because the proposed VIM does not constrain the prediction model fit, we use a very flexible ensemble learner (the SuperLearner), which returns a linear combination of a list of user-given algorithms. Not only is such a prediction algorithm intuitively appealing, it has theoretical justification as being asymptotically equivalent to the oracle selector. The results of the analysis show effects whose size and significance would not have been found using a parametric approach (such as stepwise regression or LASSO). In addition, the procedure is even more compelling as the predictor on which it is based showed significant improvements in cross-validated fit, for instance area under the curve (AUC) for a receiver operating characteristic (ROC) curve. Thus, given that (1) our VIM
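
    The cross-validated fit measure mentioned above, the area under the ROC curve, can be computed directly from its rank interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative case. A minimal sketch with illustrative scores and labels (not data from the study):

    ```python
    def auc(scores, labels):
        """AUC via the rank (Mann-Whitney) formulation: the fraction of
        positive/negative pairs in which the positive outranks the negative,
        counting ties as half a win."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Illustrative predictor scores and binary outcomes
    scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
    labels = [1,   1,   0,   1,   0,    0,   1,   0]
    print(f"AUC = {auc(scores, labels):.3f}")
    ```

    An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why improvements in cross-validated AUC are used above as evidence that the flexible ensemble predictor outperforms parametric alternatives.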

  18. Regional Characterization of the Crust in Metropolitan Areas for Prediction of Strong Ground Motion

    Science.gov (United States)

    Hirata, N.; Sato, H.; Koketsu, K.; Umeda, Y.; Iwata, T.; Kasahara, K.

    2003-12-01

Introduction: After the 1995 Kobe earthquake, the Japanese government increased its focus on, and funding of, earthquake hazard evaluation, studies of man-made structure integrity, and emergency response planning in the major urban centers. A new agency, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), started a five-year program titled Special Project for Earthquake Disaster Mitigation in Urban Areas (abbreviated to Dai-dai-toku in Japanese) in 2002. The project includes four programs: I. Regional characterization of the crust in metropolitan areas for prediction of strong ground motion. II. Significant improvement of seismic performance of structures. III. Advanced disaster management systems. IV. Investigation of earthquake disaster mitigation research results. We will present the results from the first program, conducted in 2002 and 2003. Regional Characterization of the Crust in Metropolitan Areas for Prediction of Strong Ground Motion: A long-term goal is to produce maps of reliable estimations of strong ground motion. This requires accurate determination of the ground motion response, which includes the source process, the effect of the propagation path, and the near-surface response. The new five-year project aims to characterize the "source" and "propagation path" in the Kanto (Tokyo) region and Kinki (Osaka) region. The 1923 Kanto Earthquake is one of the important targets to be addressed in the project. The proximity of the subducting Pacific and Philippine Sea plates requires study of the relationship between earthquakes and regional tectonics. This project focuses on identification and geometry of: 1) source faults, 2) subducting plates and mega-thrust faults, 3) crustal structure, 4) the seismogenic zone, 5) sedimentary basins, 6) 3D velocity properties. We have conducted a series of seismic reflection and refraction experiments in the Kanto region. In 2002 we completed deployment of seismic profiling lines in the Boso peninsula (112 km) and the

  19. Prediction of Human Drug Targets and Their Interactions Using Machine Learning Methods: Current and Future Perspectives.

    Science.gov (United States)

    Nath, Abhigyan; Kumari, Priyanka; Chaube, Radha

    2018-01-01

    Identification of drug targets and drug target interactions are important steps in the drug-discovery pipeline. Successful computational prediction methods can reduce the cost and time demanded by the experimental methods. Knowledge of putative drug targets and their interactions can be very useful for drug repurposing. Supervised machine learning methods have been very useful in drug target prediction and in prediction of drug target interactions. Here, we describe the details for developing prediction models using supervised learning techniques for human drug target prediction and their interactions.

  20. Comparison of Uncertainty of Two Precipitation Prediction Models at Los Alamos National Lab Technical Area 54

    Energy Technology Data Exchange (ETDEWEB)

    Shield, Stephen Allan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dai, Zhenxue [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-08-18

Meteorological inputs are an important part of subsurface flow and transport modeling. The choice of source for the meteorological data used as inputs has significant impacts on the results of subsurface flow and transport studies. One method of obtaining the meteorological data required for flow and transport studies is the use of weather generating models. This paper compares the performance of two weather generating models at Technical Area 54 of Los Alamos National Lab. Technical Area 54 contains several waste pits for low-level radioactive waste and is the site of subsurface flow and transport studies. This makes the comparison of the performance of the two weather generators at this site particularly valuable.

  1. Theoretical prediction method of subcooled flow boiling CHF

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min; Chang, Soon Heung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1999-12-31

A theoretical critical heat flux (CHF) model, based on lateral bubble coalescence on the heated wall, is proposed to predict the subcooled flow boiling CHF in a uniformly heated vertical tube. The model is based on the concept that a single layer of bubbles in contact with the heated wall prevents the bulk liquid from reaching the wall near the CHF condition. Comparisons between the model predictions and experimental data show satisfactory agreement, with less than 9.73% root-mean-square error given an appropriate choice of the critical void fraction in the bubbly layer. The present model shows performance comparable to the CHF look-up table of Groeneveld et al. 28 refs., 11 figs., 1 tab. (Author)

  2. Machine learning methods in predicting the student academic motivation

    Directory of Open Access Journals (Sweden)

    Ivana Đurđević Babić

    2017-01-01

Full Text Available Academic motivation is closely related to academic performance. For educators, it is equally important to detect early students with a lack of academic motivation as it is to detect those with a high level of academic motivation. In endeavouring to develop a classification model for predicting student academic motivation based on their behaviour in learning management system (LMS) courses, this paper intends to establish links between the predicted student academic motivation and their behaviour in the LMS course. Students from all years at the Faculty of Education in Osijek participated in this research. Three machine learning classifiers (neural networks, decision trees, and support vector machines) were used. To establish whether a significant difference in the performance of the models exists, a t-test of the difference in proportions was used. Although all classifiers were successful, the neural network model was shown to be the most successful in detecting student academic motivation based on their behaviour in the LMS course.
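
    The significance test described above compares two classifier accuracies treated as proportions. A minimal sketch of the large-sample form of such a test (the paper reports a t-test of the difference in proportions; the counts below are illustrative, not the study's results):

    ```python
    import math

    def two_proportion_z(correct1, n1, correct2, n2):
        """z statistic for the difference between two proportions,
        using the pooled estimate under the null of equal accuracy."""
        p1, p2 = correct1 / n1, correct2 / n2
        pooled = (correct1 + correct2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # Illustrative counts: neural network 85/100 correct vs decision tree 72/100
    z = two_proportion_z(85, 100, 72, 100)
    print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
    ```

    With these made-up counts the difference would be declared significant; the same comparison applied pairwise to the three classifiers is what lets the authors single out the neural network as the best model.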

  3. Theoretical prediction method of subcooled flow boiling CHF

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min; Chang, Soon Heung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A theoretical critical heat flux (CHF ) model, based on lateral bubble coalescence on the heated wall, is proposed to predict the subcooled flow boiling CHF in a uniformly heated vertical tube. The model is based on the concept that a single layer of bubbles contacted to the heated wall prevents a bulk liquid from reaching the wall at near CHF condition. Comparisons between the model predictions and experimental data result in satisfactory agreement within less than 9.73% root-mean-square error by the appropriate choice of the critical void fraction in the bubbly layer. The present model shows comparable performance with the CHF look-up table of Groeneveld et al.. 28 refs., 11 figs., 1 tab. (Author)

  4. Cellular Automata Modelling in Predicting the Development of Settlement Areas, A Case Study in The Eastern District of Pontianak Waterfront City

    Science.gov (United States)

    Nurhidayati, E.; Buchori, I.; Mussadun; Fariz, T. R.

    2017-07-01

Pontianak waterfront city, as a water-based urban area, has the potential of water resources, socio-economics, culture, tourism and riverine settlements. Settlement areas in the eastern district of Pontianak waterfront city are located in the triangle of the Kapuas and Landak rivers. This study uses quantitative GIS methods that integrate binary logistic regression and Cellular Automata-Markov models. The data used in this study include Quickbird 2003 and Ikonos 2008 satellite imagery and elevation contours at a 1 meter interval. This study aims to discover the settlement land use changes in 2003-2014 and to predict the settlement areas in 2020. In predicting changes in settlement areas, the model achieves an overall accuracy of 79.74% and a highest kappa index of 0.55. The prediction results show settlement areas of 481.98 ha in 2020, with settlement areas increasing by 6.80 ha/year over 2014-2020. The development of settlement areas in 2020 shows the highest land expansion in Parit Mayor Village. The regression coefficient of the flooding variable is 0, so flooding did not influence the development of settlement areas in the eastern district of Pontianak, because the rumah panggung (stilt house) settlements are well adapted to the height of tidal floods.
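
    A single update step of a Cellular Automata-Markov model of the kind used above can be sketched as follows. The grid, transition probability, and neighbourhood rule are illustrative assumptions, not the calibrated model from the study:

    ```python
    import random

    random.seed(7)
    P_DEVELOP = 0.5  # assumed Markov probability of non-settlement -> settlement

    def step(grid):
        """One CA-Markov step: a cell converts to settlement (1) with the
        Markov probability scaled by the share of developed neighbours."""
        rows, cols = len(grid), len(grid[0])
        new = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 1:
                    continue  # settlement cells persist
                neigh = [grid[rr][cc]
                         for rr in range(max(0, r - 1), min(rows, r + 2))
                         for cc in range(max(0, c - 1), min(cols, c + 2))
                         if (rr, cc) != (r, c)]
                suitability = sum(neigh) / len(neigh)  # CA neighbourhood term
                if random.random() < P_DEVELOP * suitability:
                    new[r][c] = 1
        return new

    # Toy 2014 land-use grid: 1 = settlement, 0 = non-settlement
    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 0]]
    grid_2020 = step(grid)
    print(sum(map(sum, grid_2020)), "settlement cells after one step")
    ```

    In a real application the transition probabilities come from the Markov analysis of the 2003-2014 maps and the suitability term from the logistic regression on drivers such as distance to roads and rivers; here both are replaced by toy values to show the mechanics.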

  5. Improved Methods for Pitch Synchronous Linear Prediction Analysis of Speech

    OpenAIRE

    劉, 麗清

    2015-01-01

Linear prediction (LP) analysis has been applied to speech systems over the last few decades. The LP technique is well suited for speech analysis due to its ability to approximately model the speech production process. Hence LP analysis has been widely used for speech enhancement, low-bit-rate speech coding in cellular telephony, speech recognition, characteristic parameter extraction (vocal tract resonance frequencies, fundamental frequency called pitch) and so on. However, the performance of the co...

  6. Application of subinterval area median contrast filtering method in the recognizing of geochemical anomalies

    International Nuclear Information System (INIS)

    Zhao Ningbo; Fu Jin; Zhang Chuan; Liu Huan

    2012-01-01

Traditional geochemical processing methods sometimes lose weak anomalies related to mineralization. With the subinterval area median contrast filtering method, the influence of the geological background can be avoided and weak anomalies in low-background and high-background areas can be recognized. In an area of Jiangxi Province, several new anomalies were identified by this method, and uranium mineralized prospects were found among them. (authors)

  7. Development of an integrated method for long-term water quality prediction using seasonal climate forecast

    Directory of Open Access Journals (Sweden)

    J. Cho

    2016-10-01

Full Text Available The APEC Climate Center (APCC) produces climate prediction information utilizing a multi-climate model ensemble (MME) technique. In this study, four different downscaling methods, in accordance with the degree to which they utilize the seasonal climate prediction information, were developed in order to improve predictability and to refine the spatial scale. These methods include: (1) the Simple Bias Correction (SBC) method, which directly uses APCC's dynamic prediction data with a 3 to 6 month lead time; (2) the Moving Window Regression (MWR) method, which indirectly utilizes dynamic prediction data; (3) the Climate Index Regression (CIR) method, which predominantly uses observation-based climate indices; and (4) the Integrated Time Regression (ITR) method, which uses predictors selected from both CIR and MWR. Then, a sampling-based temporal downscaling was conducted using the Mahalanobis distance method in order to create daily weather inputs to the Soil and Water Assessment Tool (SWAT) model. Long-term predictability of water quality within the Wecheon watershed of the Nakdong River Basin was evaluated. According to the Korean Ministry of Environment's Provisions of Water Quality Prediction and Response Measures, modeling-based predictability was evaluated by using 3-month lead prediction data issued in February, May, August, and November as model input to SWAT. Finally, an integrated approach, which takes into account various climate information and downscaling methods for water quality prediction, was presented. This integrated approach can be used to prevent potential problems caused by extreme climate in advance.
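
    The first of the four methods, Simple Bias Correction, can be illustrated with a minimal sketch: shift each new forecast by the mean bias of past model output against matching observations. The values below are illustrative, not APCC data:

    ```python
    def simple_bias_correction(forecasts, hindcasts, observations):
        """Subtract the mean hindcast-minus-observation bias from each forecast."""
        bias = sum(h - o for h, o in zip(hindcasts, observations)) / len(hindcasts)
        return [f - bias for f in forecasts]

    # Toy training period: the model ran about 2 mm/day too wet on average
    hindcasts    = [5.0, 7.0, 6.0, 8.0]   # mm/day, past model output
    observations = [3.2, 5.1, 3.9, 5.8]   # mm/day, matching station data
    forecasts    = [6.5, 7.2]             # new 3 to 6 month lead forecasts

    print(simple_bias_correction(forecasts, hindcasts, observations))
    ```

    Correcting only the mean leaves the forecast's variance and spatial pattern untouched, which is why the study also develops the regression-based MWR, CIR, and ITR variants.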

  8. Predicted high-water elevations for selected flood events at the Albert Pike Recreation Area, Ouachita National Forest

    Science.gov (United States)

    D.A. Marion

    2012-01-01

    The hydraulic characteristics are determined for the June 11, 2010, flood on the Little Missouri River at the Albert Pike Recreation Area in Arkansas. These characteristics are then used to predict the high-water elevations for the 10-, 25-, 50-, and 100-year flood events in the Loop B, C, and D Campgrounds of the recreation area. The peak discharge and related...

  9. Family differences in equations for predicting biomass and leaf area in Douglas-fir (Pseudotsuga menziesii var. menziesii).

    Science.gov (United States)

    J.B. St. Clair

    1993-01-01

    Logarithmic regression equations were developed to predict component biomass and leaf area for an 18-yr-old genetic test of Douglas-fir (Pseudotsuga menziesii [Mirb.] Franco var. menziesii) based on stem diameter or cross-sectional sapwood area. Equations did not differ among open-pollinated families in slope, but intercepts...

  10. EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.

    Science.gov (United States)

    Lian, Yao; Ge, Meng; Pan, Xian-Ming

    2014-12-19

    B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise in the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, precision of 64.1%, and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server, EPMLR, has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/.
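A sliding-window scoring scheme of the kind underlying sequence-based linear epitope predictors can be sketched as follows. The per-residue weights below are invented illustrative values standing in for fitted regression coefficients; they are not the EPMLR model:

```python
# Hypothetical per-residue propensity weights (illustrative only, not
# EPMLR's fitted MLR coefficients).
WEIGHT = {"A": 0.36, "G": 0.54, "P": 0.66, "S": 0.57, "D": 0.51,
          "K": 0.47, "E": 0.50, "N": 0.46, "T": 0.44, "L": 0.22,
          "V": 0.27, "I": 0.25, "F": 0.20, "W": 0.19, "Y": 0.21,
          "C": 0.35, "M": 0.30, "H": 0.31, "R": 0.53, "Q": 0.49}

def window_scores(seq, w=7):
    """Score each w-residue window by its mean weight; windows scoring
    above a chosen threshold would be flagged as candidate epitopes."""
    return [sum(WEIGHT[a] for a in seq[i:i + w]) / w
            for i in range(len(seq) - w + 1)]

scores = window_scores("GSDKPNGAALLVIF")
# The flexible, hydrophilic N-terminal window scores higher than the
# hydrophobic C-terminal one.
```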

  11. A Geometrical-based Vertical Gain Correction for Signal Strength Prediction of Downtilted Base Station Antennas in Urban Areas

    DEFF Research Database (Denmark)

    Rodriguez, Ignacio; Nguyen, Huan Cong; Sørensen, Troels Bundgaard

    2012-01-01

    …-based extension to standard empirical path loss prediction models can give quite reasonable accuracy in predicting the signal strength from tilted base station antennas in small urban macro-cells. Our evaluation is based on measurements on several sectors in a 2.6 GHz Long Term Evolution (LTE) cellular network, with electrical antenna downtilt in the range from 0 to 10 degrees, as well as predictions based on ray-tracing and 3D building databases covering the measurement area. Although the calibrated ray-tracing predictions are highly accurate compared with the measured data, the combined LOS/NLOS COST-WI model…

  12. Water Pollution Prediction in the Three Gorges Reservoir Area and Countermeasures for Sustainable Development of the Water Environment.

    Science.gov (United States)

    Li, Yinghui; Huang, Shuaijin; Qu, Xuexin

    2017-10-27

    The Three Gorges Project was implemented in 1994 to promote sustainable water resource use and development of the water environment in the Three Gorges Reservoir Area (hereafter "Reservoir Area"). However, massive discharge of wastewater along the river threatens these goals; therefore, this study employs a grey prediction model (GM) to predict the annual emissions of primary pollution sources, including industrial wastewater, domestic wastewater, and oily and domestic wastewater from ships, that influence the Three Gorges Reservoir Area water environment. First, we optimize the initial values of a traditional GM (1,1) model, and build a new GM (1,1) model that minimizes the sum of squares of the relative simulation errors. Second, we use the new GM (1,1) model to simulate historical annual emissions data for the four pollution sources and thereby test the effectiveness of the model. Third, we predict the annual emissions of the four pollution sources in the Three Gorges Reservoir Area for a future period. The prediction results reveal the annual emission trends for the major wastewater types, and indicate the primary sources of water pollution in the Three Gorges Reservoir Area. Based on our predictions, we suggest several countermeasures against water pollution and towards the sustainable development of the water environment in the Three Gorges Reservoir Area.
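The GM(1,1) grey model at the core of such predictions can be sketched as follows. This is the standard textbook formulation; the paper's optimized-initial-value variant is not reproduced here:

```python
import math

def gm11(x0, n_forecast):
    """Classic GM(1,1) grey model: fit on series x0, forecast n_forecast
    steps ahead. Assumes a non-constant, positive series (a = 0 for a
    flat series makes the model degenerate)."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    # Least-squares fit of x0[k] = -a*z1[k-1] + b via the normal equations
    m = n - 1
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    det = m * szz - sz * sz
    a = -(m * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # response of the whitened equation, 0-based index
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # Difference the accumulated prediction back to the original scale
    fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1)
                        for k in range(1, n + n_forecast)]
    return fitted[n:]  # forecast values only

forecast = gm11([1.0, 2.0, 4.0, 8.0], 2)  # roughly continues the growth trend
```

The model captures approximately exponential trends from very short series, which is why it is popular for annual emissions data.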

  13. Predictability and Spatial Characteristics of New-York-City-Area Heat Waves

    Science.gov (United States)

    Raymond, C.; Horton, R. M.

    2016-12-01

    The origins, characteristics, and predictability of extreme heat waves in the Northeast U.S. are simultaneously examined at multiple scales, using hourly observational data from 1948 to 2014 and focusing in particular on the region surrounding New York City. A novel definition of heat waves, incorporating both temperature and moisture at hourly resolution, is used to identify 3-to-5-day heat waves whose dynamics are then analyzed from 3 weeks prior to 3 weeks subsequent to the event. Inter-event differences in dynamics are examined, such as the strength and position of geopotential-height anomalies; the strength, persistence, and orientation of sea breezes; and the dominant 850-hPa wind azimuth, all of which are filtered via local terrain and land use to create differences in conditions between events at specific locations. In particular, using composite maps and back trajectories, these factors are found to play an important role in creating mesoscale differences in low-level moisture content from one side of the metropolitan area to the other. Evidence is presented supporting the influence of coastline orientation in explaining the differences in the relationships between wind azimuth and temperature and moisture advection between New York City proper and northern New Jersey. Self-organizing maps are employed to classify heat waves based on the small-scale differences in temperature and moisture between events, and the results of this classification are then used in correlations with synoptic- and hemispheric-scale geopotential-height anomalies. Considerable predictability of event type on the small scale (as well as of the occurrence of a heat wave of any kind) is found, originating primarily from central Pacific and western Atlantic SSTs.

  14. Simple methods for predicting gas leakage flows through cracks

    International Nuclear Information System (INIS)

    Ewing, D.J.F.

    1989-01-01

    This report presents closed-form approximate analytical formulae with which the flow rate out of a through-wall crack can be estimated. The crack is idealised as a rough, tapering, wedge-shaped channel and the fluid is idealised as an isothermal or polytropically-expanding perfect gas. In practice, uncertainties about the wall friction factor dominate over uncertainties caused by the fluid-dynamics simplifications. The formulae take account of crack taper, and for outwardly-diverging cracks they predict flows within 12% of mathematically more accurate one-dimensional numerical models. Upper and lower estimates of wall friction are discussed. (author)

  15. Underwater Sound Propagation Modeling Methods for Predicting Marine Animal Exposure.

    Science.gov (United States)

    Hamm, Craig A; McCammon, Diana F; Taillefer, Martin L

    2016-01-01

    The offshore exploration and production (E&P) industry requires comprehensive and accurate ocean acoustic models for determining the exposure of marine life to the high levels of sound used in seismic surveys and other E&P activities. This paper reviews the types of acoustic models most useful for predicting the propagation of undersea noise sources and describes current exposure models. The severe problems caused by model sensitivity to the uncertainty in the environment are highlighted to support the conclusion that it is vital that risk assessments include transmission loss estimates with statistical measures of confidence.

  16. Recurrence predictive models for patients with hepatocellular carcinoma after radiofrequency ablation using support vector machines with feature selection methods.

    Science.gov (United States)

    Liang, Ja-Der; Ping, Xiao-Ou; Tseng, Yi-Ju; Huang, Guan-Tarn; Lai, Feipei; Yang, Pei-Ming

    2014-12-01

    Recurrence of hepatocellular carcinoma (HCC) is an important issue despite effective treatments with tumor eradication. Identification of patients who are at high risk for recurrence may provide more efficacious screening and detection of tumor recurrence. The aim of this study was to develop recurrence predictive models for HCC patients who received radiofrequency ablation (RFA) treatment. From January 2007 to December 2009, 83 newly diagnosed HCC patients receiving RFA as their first treatment were enrolled. Five feature selection methods, including the genetic algorithm (GA), the simulated annealing (SA) algorithm, random forests (RF), and hybrid methods (GA+RF and SA+RF), were utilized for selecting an important subset of features from a total of 16 clinical features. These feature selection methods were combined with a support vector machine (SVM) to develop predictive models with better performance. Five-fold cross-validation was used to train and test the SVM models. The SVM-based predictive models developed with hybrid feature selection methods and 5-fold cross-validation achieved average sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the ROC curve of 67%, 86%, 82%, 69%, 90%, and 0.69, respectively. The derived SVM predictive model can identify patients at high risk of recurrence, who should be closely followed up after complete RFA treatment. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
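The performance measures reported above all derive from the confusion matrix; a small helper illustrates the definitions (the counts are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix summary metrics for a binary classifier."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
    }

# Hypothetical pooled counts from a cross-validation run
m = diagnostic_metrics(tp=20, fp=9, tn=55, fn=10)
```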

  17. Regional-scale Predictions of Agricultural N Losses in an Area with a High Livestock Density

    Directory of Open Access Journals (Sweden)

    Carlo Grignani

    2011-02-01

    The quantification of the N losses in territories characterised by intensive animal stocking is of primary importance. The development of simulation models coupled to a GIS, or of simple environmental indicators, is strategic to suggest the best specific management practices. The aims of this work were: (a) to couple a GIS to a simulation model in order to predict N losses; (b) to estimate leaching and gaseous N losses from a territory with intensive livestock farming; and (c) to derive a simplified empirical metamodel from the model output that could be used to rank the relative importance of the variables which influence N losses and to extend the results to homogeneous situations. The work was carried out in a 7773 ha area in the Western Po plain in Italy. This area was chosen because it is characterised by intensive animal husbandry and might soon be included in the nitrate vulnerable zones. The high N load, the shallow water table and the coarse type of sub-soil sediments contribute to the vulnerability to N leaching. The CropSyst simulation model was coupled to a GIS to account for the soil surface N budget. A linear multiple regression approach was used to describe the influence of a series of independent variables on the N leaching, the gaseous N losses (including volatilisation and denitrification), and the sum of the two. Despite the fact that the available GIS was very detailed, a great deal of information necessary to run the model was lacking. Further soil measurements concerning soil hydrology, soil nitrate content and water table depth proved very valuable to integrate the data contained in the GIS in order to produce reliable input for the model. The results showed that the soils influence both the quantity and the pathways of the N losses to a great extent. The ratio between the N losses and the N supplied varied between 20 and 38%. The metamodel shows that manure input always played the most important role in determining the N losses.

  18. Regional-scale Predictions of Agricultural N Losses in an Area with a High Livestock Density

    Directory of Open Access Journals (Sweden)

    Dario Sacco

    2006-12-01

    The quantification of the N losses in territories characterised by intensive animal stocking is of primary importance. The development of simulation models coupled to a GIS, or of simple environmental indicators, is strategic to suggest the best specific management practices. The aims of this work were: (a) to couple a GIS to a simulation model in order to predict N losses; (b) to estimate leaching and gaseous N losses from a territory with intensive livestock farming; and (c) to derive a simplified empirical metamodel from the model output that could be used to rank the relative importance of the variables which influence N losses and to extend the results to homogeneous situations. The work was carried out in a 7773 ha area in the Western Po plain in Italy. This area was chosen because it is characterised by intensive animal husbandry and might soon be included in the nitrate vulnerable zones. The high N load, the shallow water table and the coarse type of sub-soil sediments contribute to the vulnerability to N leaching. The CropSyst simulation model was coupled to a GIS to account for the soil surface N budget. A linear multiple regression approach was used to describe the influence of a series of independent variables on the N leaching, the gaseous N losses (including volatilisation and denitrification), and the sum of the two. Despite the fact that the available GIS was very detailed, a great deal of information necessary to run the model was lacking. Further soil measurements concerning soil hydrology, soil nitrate content and water table depth proved very valuable to integrate the data contained in the GIS in order to produce reliable input for the model. The results showed that the soils influence both the quantity and the pathways of the N losses to a great extent. The ratio between the N losses and the N supplied varied between 20 and 38%. The metamodel shows that manure input always played the most important role in determining the N losses.

  19. Two methods for isolating the lung area of a CT scan for density information

    International Nuclear Information System (INIS)

    Hedlund, L.W.; Anderson, R.F.; Goulding, P.L.; Beck, J.W.; Effmann, E.L.; Putman, C.E.

    1982-01-01

    Extracting density information from irregularly shaped tissue areas of CT scans requires automated methods when many scans are involved. We describe two computer methods that automatically isolate the lung area of a CT scan. Each starts from a single, operator specified point in the lung. The first method follows the steep density gradient boundary between lung and adjacent tissues; this tracking method is useful for estimating the overall density and total area of lung in a scan because all pixels within the lung area are available for statistical sampling. The second method finds all contiguous pixels of lung that are within the CT number range of air to water and are not a part of strong density gradient edges; this method is useful for estimating density and area of the lung parenchyma. Structures within the lung area that are surrounded by strong density gradient edges, such as large blood vessels, airways and nodules, are excluded from the lung sample while lung areas with diffuse borders, such as an area of mild or moderate edema, are retained. Both methods were tested on scans from an animal model of pulmonary edema and were found to be effective in isolating normal and diseased lungs. These methods are also suitable for isolating other organ areas of CT scans that are bounded by density gradient edges
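The second method, collecting contiguous pixels within a CT-number range while stopping at strong density-gradient edges, can be sketched as a region-growing pass. The values and thresholds below (e.g. `max_gradient`) are illustrative assumptions, not those of the original implementation:

```python
from collections import deque

def grow_lung_region(ct, seed, lo=-1000, hi=0, max_gradient=300):
    """Contiguous-pixel sketch: starting from an operator-chosen seed,
    collect 4-connected pixels whose CT number lies in [lo, hi] (air to
    water) and which are not separated by a strong density-gradient edge."""
    rows, cols = len(ct), len(ct[0])
    seen = {seed}
    queue = deque([seed])
    region = []
    while queue:
        r, c = queue.popleft()
        region.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                if lo <= ct[nr][nc] <= hi and abs(ct[nr][nc] - ct[r][c]) <= max_gradient:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
    return region

# Toy 4x4 "scan" in Hounsfield units: lung tissue (~-800) with a dense
# vessel (+40) inside it and chest wall (+100) around it.
scan = [
    [-800, -790, -795, 100],
    [-805,   40, -790, 100],
    [-810, -800, -785, 100],
    [ 100,  100,  100, 100],
]
lung = grow_lung_region(scan, seed=(0, 0))
# The vessel at (1, 1) and the wall are excluded from the lung sample.
```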

  20. Specification and prediction of nickel mobilization using artificial intelligence methods

    Science.gov (United States)

    Gholami, Raoof; Ziaii, Mansour; Ardejani, Faramarz Doulati; Maleki, Shahoo

    2011-12-01

    Groundwater and soil pollution from pyrite oxidation, acid mine drainage generation, and the release and transport of toxic metals are common environmental problems associated with the mining industry. Nickel is one toxic metal considered to be a key pollutant in some mining settings; to date, its formation mechanism has not been fully evaluated. The goals of this study are 1) to describe the process of nickel mobilization in waste dumps by introducing a novel conceptual model, and 2) to predict nickel concentration using two algorithms, namely the support vector machine (SVM) and the general regression neural network (GRNN). The results obtained from this study show that a considerable amount of nickel can enter the water flow system during the oxidation of pyrite and subsequent acid mine drainage (AMD) generation. It was concluded that pyrite, water, and oxygen are the most important factors for nickel pollution generation, while pH, SO4, HCO3, TDS, EC, Mg, Fe, Zn, and Cu are measured quantities playing a significant role in nickel mobilization. SVM and GRNN predicted nickel concentration with a high degree of accuracy. Hence, SVM and GRNN can be considered appropriate tools for environmental risk assessment.
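A GRNN is essentially Gaussian-kernel-weighted regression, which can be sketched in a few lines. The samples below are hypothetical, and `sigma` is an assumed smoothing parameter, not the study's calibration:

```python
import math

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """General regression neural network (Nadaraya-Watson kernel
    regression): the prediction is a Gaussian-weighted average of the
    training targets, weighted by distance to the query point."""
    weights = []
    for xi in x_train:
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x_query))
        weights.append(math.exp(-d2 / (2.0 * sigma ** 2)))
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, y_train)) / total

# Hypothetical samples: (pH, SO4 in g/L) -> nickel concentration (mg/L)
x_train = [(3.0, 1.2), (4.0, 0.8), (6.5, 0.2)]
y_train = [0.9, 0.5, 0.05]
pred = grnn_predict(x_train, y_train, (3.1, 1.1), sigma=0.5)
# A query near the acidic, high-sulfate sample predicts high nickel.
```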

  1. Verifying a computational method for predicting extreme ground motion

    Science.gov (United States)

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  2. Using deuterated PAH amendments to validate chemical extraction methods to predict PAH bioavailability in soils

    International Nuclear Information System (INIS)

    Gomez-Eyles, Jose L.; Collins, Chris D.; Hodson, Mark E.

    2011-01-01

    Validating chemical methods to predict bioavailable fractions of polycyclic aromatic hydrocarbons (PAHs) by comparison with accumulation bioassays is problematic. Concentrations accumulated in soil organisms not only depend on the bioavailable fraction but also on contaminant properties. A historically contaminated soil was freshly spiked with deuterated PAHs (dPAHs). dPAHs have a similar fate to their respective undeuterated analogues, so chemical methods that give good indications of bioavailability should extract the fresh more readily available dPAHs and historic more recalcitrant PAHs in similar proportions to those in which they are accumulated in the tissues of test organisms. Cyclodextrin and butanol extractions predicted the bioavailable fraction for earthworms (Eisenia fetida) and plants (Lolium multiflorum) better than the exhaustive extraction. The PAHs accumulated by earthworms had a larger dPAH:PAH ratio than that predicted by chemical methods. The isotope ratio method described here provides an effective way of evaluating other chemical methods to predict bioavailability. - Research highlights: → Isotope ratios can be used to evaluate chemical methods to predict bioavailability. → Chemical methods predicted bioavailability better than exhaustive extractions. → Bioavailability to earthworms was still far from that predicted by chemical methods. - A novel method using isotope ratios to assess the ability of chemical methods to predict PAH bioavailability to soil biota.

  3. Using deuterated PAH amendments to validate chemical extraction methods to predict PAH bioavailability in soils

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Eyles, Jose L., E-mail: j.l.gomezeyles@reading.ac.uk [University of Reading, School of Human and Environmental Sciences, Soil Research Centre, Reading, RG6 6DW Berkshire (United Kingdom); Collins, Chris D.; Hodson, Mark E. [University of Reading, School of Human and Environmental Sciences, Soil Research Centre, Reading, RG6 6DW Berkshire (United Kingdom)

    2011-04-15

    Validating chemical methods to predict bioavailable fractions of polycyclic aromatic hydrocarbons (PAHs) by comparison with accumulation bioassays is problematic. Concentrations accumulated in soil organisms not only depend on the bioavailable fraction but also on contaminant properties. A historically contaminated soil was freshly spiked with deuterated PAHs (dPAHs). dPAHs have a similar fate to their respective undeuterated analogues, so chemical methods that give good indications of bioavailability should extract the fresh more readily available dPAHs and historic more recalcitrant PAHs in similar proportions to those in which they are accumulated in the tissues of test organisms. Cyclodextrin and butanol extractions predicted the bioavailable fraction for earthworms (Eisenia fetida) and plants (Lolium multiflorum) better than the exhaustive extraction. The PAHs accumulated by earthworms had a larger dPAH:PAH ratio than that predicted by chemical methods. The isotope ratio method described here provides an effective way of evaluating other chemical methods to predict bioavailability. - Research highlights: → Isotope ratios can be used to evaluate chemical methods to predict bioavailability. → Chemical methods predicted bioavailability better than exhaustive extractions. → Bioavailability to earthworms was still far from that predicted by chemical methods. - A novel method using isotope ratios to assess the ability of chemical methods to predict PAH bioavailability to soil biota.

  4. [Prediction model of meteorological grade of wheat stripe rust in winter-reproductive area, Sichuan Basin, China].

    Science.gov (United States)

    Guo, Xiang; Wang, Ming Tian; Zhang, Guo Zhi

    2017-12-01

    The winter reproductive areas of Puccinia striiformis var. striiformis in the Sichuan Basin are often the places most affected by wheat stripe rust. Using data on the meteorological conditions and stripe rust situation at typical stations in the winter reproductive area in the Sichuan Basin from 1999 to 2016, this paper classified the meteorological conditions inducing wheat stripe rust into 5 grades, based on the incidence area ratio of the disease. The meteorological factors that were biologically related to wheat stripe rust were determined through multiple analytical methods, and a meteorological grade model for forecasting wheat stripe rust was created. The results showed that wheat stripe rust in the Sichuan Basin was significantly correlated with many meteorological factors, such as the average (maximum and minimum) temperature, precipitation and its anomaly percentage, relative humidity and its anomaly percentage, average wind speed, and sunshine duration. Among these, the average temperature and the anomaly percentage of relative humidity were the determining factors. According to a historical retrospective test, the accuracy of the forecast based on the model was 64% for samples in the county-level test, and 89% for samples in the municipal-level test. In a meteorological grade forecast of wheat stripe rust in the winter reproductive areas in the Sichuan Basin in 2017, the prediction was accurate for 62.8% of the samples, with 27.9% in error by one grade and only 9.3% in error by two or more grades. As a result, the model delivers satisfactory forecast results and can predict future wheat stripe rust from a meteorological point of view.

  5. Method to predict process signals to learn for SVM

    International Nuclear Information System (INIS)

    Minowa, Hirotsugu; Gofuku, Akio

    2013-01-01

    Research on diagnostic systems using machine learning is advancing because plant accidents cause large human, economic, and social losses. A known problem is that the classification performance and the generalization performance of a diagnostic machine are mutually exclusive. However, a multi-agent diagnostic system makes it possible to use diagnostic machines specialized for either performance, since multiple diagnostic machines can be combined. We propose a method to select optimized variables to improve classification performance. The method can also be applied to supervised learning machines other than the Support Vector Machine. This paper reports our method and the results of an evaluation experiment in which the method was applied to 40% of the output signals of Monju. (author)

  6. Kinetic mesh-free method for flutter prediction in turbomachines

    Indian Academy of Sciences (India)

    -based mesh-free method for unsteady flows. ... Council for Scientific and Industrial Research, National Aerospace Laboratories, Computational and Theoretical Fluid Dynamics Division, Bangalore 560 017, India; Engineering Mechanics Unit, ...

  7. Digital photography and transparency-based methods for measuring wound surface area.

    Science.gov (United States)

    Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh

    2013-04-01

    To compare and determine a credible method of measuring wound surface area among the linear, transparency, and photographic methods, for monitoring the progress of wound healing accurately, and to ascertain whether these methods are significantly different. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sores, venous ulcers, and incision and drainage. Wound surface areas were measured by the three methods (linear, transparency, and photographic) simultaneously on alternate days. The linear method differed statistically significantly from the transparency and photographic methods (P < 0.05), whereas there was no statistically significant difference between the transparency and photographic methods (P > 0.05). The photographic and transparency methods provided measurements of wound surface area with equivalent results.

  8. Flash Flood Hazard Susceptibility Mapping Using Frequency Ratio and Statistical Index Methods in Coalmine Subsidence Areas

    Directory of Open Access Journals (Sweden)

    Chen Cao

    2016-09-01

    This study focused on producing flash flood hazard susceptibility maps (FFHSM) using frequency ratio (FR) and statistical index (SI) models in the Xiqu Gully (XQG) of Beijing, China. First, a total of 85 flash flood hazard locations (n = 85) were surveyed in the field and plotted using geographic information system (GIS) software. Based on the flash flood hazard locations, a flood hazard inventory map was built. Seventy percent (n = 60) of the flood hazard locations were randomly selected for building the models. The remaining 30% (n = 25) of the flood hazard locations were used for validation. Considering that the XQG used to be a coal mining area, coalmine caves and subsidence caused by coal mining exist in this catchment, as well as many ground fissures. Thus, this study took the subsidence risk level into consideration for the FFHSM. The ten conditioning parameters were elevation, slope, curvature, land use, geology, soil texture, subsidence risk area, stream power index (SPI), topographic wetness index (TWI), and short-term heavy rain. This study also tested different classification schemes for the values of each conditioning parameter and checked their impacts on the results. The accuracy of the FFHSM was validated using area under the curve (AUC) analysis. Classification accuracies were 86.61%, 83.35%, and 78.52% using the FR-natural breaks, SI-natural breaks, and FR-manual classification schemes, respectively. Associated prediction accuracies were 83.69%, 81.22%, and 74.23%, respectively. It was found that FR modeling using a natural breaks classification method was more appropriate for generating the FFHSM for the Xiqu Gully.
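The frequency ratio statistic used in such mapping compares each class's share of hazard locations with its share of the study area; values above 1 indicate classes over-represented among hazards. A sketch with hypothetical slope classes:

```python
def frequency_ratio(hazard_counts, pixel_counts):
    """Frequency ratio per class of a conditioning factor:
    FR = (share of hazard locations in the class) /
         (share of all map pixels in the class)."""
    total_h = sum(hazard_counts.values())
    total_p = sum(pixel_counts.values())
    return {
        c: (hazard_counts[c] / total_h) / (pixel_counts[c] / total_p)
        for c in hazard_counts
    }

# Hypothetical slope classes: flash-flood point counts vs. map pixel counts
fr = frequency_ratio(
    hazard_counts={"gentle": 10, "moderate": 30, "steep": 20},
    pixel_counts={"gentle": 5000, "moderate": 3000, "steep": 2000},
)
# "moderate" and "steep" slopes are over-represented among hazard points.
```

Summing the FR values of a pixel's classes across all conditioning factors yields its susceptibility index.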

  9. Predicting areas of sustainable error growth in quasigeostrophic flows using perturbation alignment properties

    Science.gov (United States)

    Rivière, G.; Hua, B. L.

    2004-10-01

    A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double-gyre circulation. This method is based on recent analytical results on Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast and whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted as the AI method. In terms of spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palinstrophy norm, and significantly more efficient than that for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palinstrophy singular-vector forecast sometimes leads to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.

  10. Is this car looking at you? How anthropomorphism predicts fusiform face area activation when seeing cars.

    Science.gov (United States)

    Kühn, Simone; Brick, Timothy R; Müller, Barbara C N; Gallinat, Jürgen

    2014-01-01

    Anthropomorphism encompasses the attribution of human characteristics to non-living objects. In particular, the human tendency to see faces in cars has long been noticed, yet its neural correlates are unknown. We set out to investigate whether the fusiform face area (FFA) is associated with seeing human features in car fronts, or whether the higher-level theory-of-mind (ToM) network, namely the temporoparietal junction (TPJ) and medial prefrontal cortex (MPFC), shows a link to anthropomorphism. Twenty participants underwent fMRI scanning during a passive car-front viewing task. We extracted brain activity from the FFA, TPJ, and MPFC. After the fMRI session, participants were asked to spontaneously list adjectives that characterize each car front. Five raters judged the degree to which each adjective can be applied as a characteristic of human beings. By means of linear mixed models, we found that the implicit tendency to anthropomorphize individual car fronts predicts FFA, but not TPJ or MPFC, activity. The results point to an important role of the FFA in the phenomenon of ascribing human attributes to non-living objects. Interestingly, brain regions that have been associated with thinking about beliefs and mental states of others (TPJ, MPFC) do not seem to be related to anthropomorphism of car fronts.

  11. Computer prediction of subsurface radionuclide transport: an adaptive numerical method

    International Nuclear Information System (INIS)

    Neuman, S.P.

    1983-01-01

    Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of one- and two-dimensional dispersion in a uniform steady state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1.
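The "single-step reverse particle tracking" idea can be illustrated with a minimal semi-Lagrangian advection step in 1D: each grid node is traced back along the characteristic by u·dt and the concentration is interpolated at the departure point. The uniform grid, constant velocity, linear interpolation, and edge-value boundary treatment are simplifying assumptions for illustration, not Neuman's full adaptive scheme.

```python
def advect_reverse_tracking(c, u, dx, dt):
    """One advection step: trace each node back by u*dt and interpolate."""
    n = len(c)
    out = []
    for i in range(n):
        x_dep = i * dx - u * dt          # departure point of the characteristic
        j = int(x_dep // dx)             # cell containing the departure point
        frac = (x_dep - j * dx) / dx
        # clamp to the domain: a simple boundary that holds the edge value
        if j < 0:
            out.append(c[0]); continue
        if j >= n - 1:
            out.append(c[-1]); continue
        out.append((1 - frac) * c[j] + frac * c[j + 1])
    return out

c0 = [1.0] * 5 + [0.0] * 5               # step front
c1 = advect_reverse_tracking(c0, u=0.5, dx=1.0, dt=1.0)
```

Because the departure point may lie many cells upstream, Courant numbers above 1 pose no stability problem for this kind of step, consistent with the abstract's claim.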

  12. DDR: Efficient computational method to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-11-23

    Motivation: Finding drug-target interactions (DTIs) computationally is a convenient strategy to identify new DTIs at low cost with reasonable accuracy. However, current DTI prediction methods suffer from a high false positive prediction rate. Results: We developed DDR, a novel method that improves DTI prediction accuracy. DDR is based on the use of a heterogeneous graph that contains known DTIs together with multiple similarities between drugs and multiple similarities between target proteins. DDR applies a non-linear similarity fusion method to combine the different similarities. Before fusion, DDR performs a pre-processing step in which a subset of similarities is selected in a heuristic process to obtain an optimized combination of similarities. Then, DDR applies a random forest model using different graph-based features extracted from the DTI heterogeneous graph. Using five repeats of 10-fold cross-validation, three testing setups, and the weighted average of area under the precision-recall curve (AUPR) scores, we show that DDR significantly reduces the AUPR score error relative to the next best state-of-the-art method for predicting DTIs: by 34% when the drugs are new, by 23% when the targets are new, and by 34% when the drugs and the targets are known but not all DTIs between them are known. Using independent sources of evidence, we verify as correct 22 out of the top 25 DDR novel predictions. This suggests that DDR can be used as an efficient method to identify correct DTIs.
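The evaluation metric used above, area under the precision-recall curve (AUPR), can be computed directly from ranked prediction scores by step-wise integration. The tiny score/label lists below are made up for illustration; this is a generic metric sketch, not DDR's pipeline.

```python
def aupr(labels, scores):
    """AUPR via step-wise integration over the ranked predictions."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    n_pos = sum(labels)
    tp = fp = 0
    area, prev_recall = 0.0, 0.0
    for _, y in ranked:
        if y:
            tp += 1
        else:
            fp += 1
        recall = tp / n_pos
        precision = tp / (tp + fp)
        area += (recall - prev_recall) * precision   # rectangle per recall step
        prev_recall = recall
    return area

labels = [1, 0, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
```

For highly imbalanced problems such as DTI prediction, AUPR penalizes false positives far more sharply than ROC-based scores, which is why it is the headline metric here.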

  13. Comparison of four statistical and machine learning methods for crash severity prediction.

    Science.gov (United States)

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash costs-based approach for comparison of crash severity prediction methods; and investigating the effects of data clustering methods, K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States were obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate, and a proposed crash costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM had the next best performance and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost the exact opposite results compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
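A crash-costs-based accuracy measure of the kind proposed above can be sketched by penalizing each misclassification by the cost gap between the predicted and observed severity levels. The severity levels and dollar costs below are illustrative placeholders, not the paper's Nebraska figures.

```python
# Hypothetical unit costs per severity level (illustrative only).
SEVERITY_COST = {"PDO": 4_000, "injury": 80_000, "fatal": 1_500_000}

def cost_weighted_score(observed, predicted):
    """1 - (total misclassification cost / worst-case cost per crash)."""
    worst = max(SEVERITY_COST.values()) - min(SEVERITY_COST.values())
    penalty = sum(abs(SEVERITY_COST[p] - SEVERITY_COST[o])
                  for o, p in zip(observed, predicted))
    return 1.0 - penalty / (worst * len(observed))

obs  = ["PDO", "injury", "fatal", "PDO"]
pred = ["PDO", "PDO", "fatal", "injury"]
score = cost_weighted_score(obs, pred)
```

Unlike a plain correct-prediction rate, this measure barely penalizes confusing two cheap severity levels but heavily penalizes missing a fatal crash, which is the asymmetry the paper's comparison exploits.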

  14. Performance prediction of electrohydrodynamic thrusters by the perturbation method

    International Nuclear Information System (INIS)

    Shibata, H.; Watanabe, Y.; Suzuki, K.

    2016-01-01

    In this paper, we present a novel method for analyzing electrohydrodynamic (EHD) thrusters. The method is based on a perturbation technique applied to a set of drift-diffusion equations, similar to the one introduced in our previous study on estimating breakdown voltage. The thrust-to-current ratio is generalized to represent the performance of EHD thrusters. We have compared the thrust-to-current ratio obtained theoretically with that obtained from the proposed method under atmospheric air conditions, and we have obtained good quantitative agreement. Also, we have conducted a numerical simulation in more complex thruster geometries, such as the dual-stage thruster developed by Masuyama and Barrett [Proc. R. Soc. A 469, 20120623 (2013)]. We quantitatively clarify the fact that if the magnitude of a third electrode voltage is low, the effective gap distance shortens, whereas if the magnitude of the third electrode voltage is sufficiently high, the effective gap distance lengthens.

  15. Evaluation of four methods for estimating leaf area of isolated trees

    Science.gov (United States)

    P.J. Peper; E.G. McPherson

    2003-01-01

    The accurate modeling of the physiological and functional processes of urban forests requires information on the leaf area of urban tree species. Several non-destructive, indirect leaf area sampling methods have shown good performance for homogenous canopies. These methods have not been evaluated for use in urban settings where trees are typically isolated and...

  16. A computer-based method for precise detection and calculation of affected skin areas

    DEFF Research Database (Denmark)

    Henriksen, Sille Mølvig; Nybing, Janus Damm; Bouert, Rasmus

    2016-01-01

    BACKGROUND: The aim of this study was to describe and validate a method to obtain reproducible and comparable results concerning extension of a specific skin area, unaffected by individual differences in body surface area. METHODS: A phantom simulating the human torso was equipped with three irre...

  17. A Method for Derivation of Areas for Assessment in Marital Relationships.

    Science.gov (United States)

    Broderick, Joan E.

    1981-01-01

    Expands upon factor-analytic and rational methods and introduces a third method for determining content areas to be assessed in marital relationships. Definitions of a "good marriage" were content analyzed, and a number of areas were added. Demographic subgroup differences were found not to be influential factors. (Author)

  18. Surface area of antimony oxide by isotope exchange and other methods

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Y.K.; Acharya, B.V.; Rangamannar, B.

    1985-06-17

    Specific surface areas of antimony oxide samples, one commercial, the other prepared from antimony trichloride were measured by heterogeneous isotope exchange, gas adsorption, air permeability and microscopic methods. Specific surface areas obtained by these four methods for the two samples were compared and the observed differences are explained.

  19. Water Pollution Prediction in the Three Gorges Reservoir Area and Countermeasures for Sustainable Development of the Water Environment

    Directory of Open Access Journals (Sweden)

    Yinghui Li

    2017-10-01

    Full Text Available The Three Gorges Project was implemented in 1994 to promote sustainable water resource use and development of the water environment in the Three Gorges Reservoir Area (hereafter “Reservoir Area”). However, massive discharge of wastewater along the river threatens these goals; therefore, this study employs a grey prediction model (GM) to predict the annual emissions of primary pollution sources, including industrial wastewater, domestic wastewater, and oily and domestic wastewater from ships, that influence the Three Gorges Reservoir Area water environment. First, we optimize the initial values of a traditional GM (1,1) model, and build a new GM (1,1) model that minimizes the sum of squares of the relative simulation errors. Second, we use the new GM (1,1) model to simulate historical annual emissions data for the four pollution sources and thereby test the effectiveness of the model. Third, we predict the annual emissions of the four pollution sources in the Three Gorges Reservoir Area for a future period. The prediction results reveal the annual emission trends for the major wastewater types, and indicate the primary sources of water pollution in the Three Gorges Reservoir Area. Based on our predictions, we suggest several countermeasures against water pollution and towards the sustainable development of the water environment in the Three Gorges Reservoir Area.

  20. Water Pollution Prediction in the Three Gorges Reservoir Area and Countermeasures for Sustainable Development of the Water Environment

    Science.gov (United States)

    Huang, Shuaijin; Qu, Xuexin

    2017-01-01

    The Three Gorges Project was implemented in 1994 to promote sustainable water resource use and development of the water environment in the Three Gorges Reservoir Area (hereafter “Reservoir Area”). However, massive discharge of wastewater along the river threatens these goals; therefore, this study employs a grey prediction model (GM) to predict the annual emissions of primary pollution sources, including industrial wastewater, domestic wastewater, and oily and domestic wastewater from ships, that influence the Three Gorges Reservoir Area water environment. First, we optimize the initial values of a traditional GM (1,1) model, and build a new GM (1,1) model that minimizes the sum of squares of the relative simulation errors. Second, we use the new GM (1,1) model to simulate historical annual emissions data for the four pollution sources and thereby test the effectiveness of the model. Third, we predict the annual emissions of the four pollution sources in the Three Gorges Reservoir Area for a future period. The prediction results reveal the annual emission trends for the major wastewater types, and indicate the primary sources of water pollution in the Three Gorges Reservoir Area. Based on our predictions, we suggest several countermeasures against water pollution and towards the sustainable development of the water environment in the Three Gorges Reservoir Area. PMID:29077006
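The classical GM (1,1) construction that the two records above extend can be sketched as follows: accumulate the series, form background values, fit the development coefficient `a` and grey input `b` by least squares, and extrapolate through the time response function. This is the textbook grey model, not the authors' error-minimizing variant, and the input series is invented.

```python
import math

def gm11_predict(x0, steps):
    """Classical GM(1,1): fit x0[k+1] + a*z[k] = b, then extrapolate."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated (AGO) series
    z = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]  # background values
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy = sum(x0[1:])
    szy = sum(zi * yi for zi, yi in zip(z, x0[1:]))
    denom = m * szz - sz * sz
    a = (sz * sy - m * szy) / denom        # development coefficient
    b = (szz * sy - sz * szy) / denom      # grey input
    def x1_hat(k):                         # time response function
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    # inverse accumulation gives the forecast of the original series
    return [x1_hat(k + 1) - x1_hat(k) for k in range(n - 1, n - 1 + steps)]

forecast = gm11_predict([1.0, 1.1, 1.21, 1.331], steps=2)
```

On a near-exponential series like this 10%-growth example, the next value should come out close to 1.4641; grey models are designed exactly for such short, roughly exponential emission series.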

  1. A critical pressure based panel method for prediction of unsteady loading of marine propellers under cavitation

    International Nuclear Information System (INIS)

    Liu, P.; Bose, N.; Colbourne, B.

    2002-01-01

    A simple numerical procedure is established and implemented into a time domain panel method to predict hydrodynamic performance of marine propellers with sheet cavitation. This paper describes the numerical formulations and procedures to construct this integration. Predicted hydrodynamic loads were compared with both a previous numerical model and experimental measurements for a propeller in steady flow. The current method gives a substantial improvement in thrust and torque coefficient prediction over a previous numerical method at low cavitation numbers of less than 2.0, where severe cavitation occurs. Predicted pressure coefficient distributions are also presented. (author)

  2. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    Science.gov (United States)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and polar motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results obtained show that the proposed method provides better accuracy at different prediction lengths.

  3. Methods to compute reliabilities for genomic predictions of feed intake

    Science.gov (United States)

    For new traits without historical reference data, cross-validation is often the preferred method to validate reliability (REL). Time truncation is less useful because few animals gain substantial REL after the truncation point. Accurate cross-validation requires separating genomic gain from pedigree...

  4. Comparison of Four Weighting Methods in Fuzzy-based Land Suitability to Predict Wheat Yield

    Directory of Open Access Journals (Sweden)

    Fatemeh Rahmati

    2017-06-01

    Full Text Available Introduction: Land suitability evaluation is a process to examine the degree of land fitness for a specific use, and it also makes it possible to estimate land productivity potential. In 1976, the FAO provided a general framework for land suitability classification, but the framework does not prescribe a specific method for performing this classification. In later years, a collection of methods was presented based on the FAO framework. In the parametric method, different land suitability aspects are defined as completely discrete groups separated from each other by distinct, consistent ranges. Therefore, land units of moderate suitability can only be assigned to one of the predefined land suitability classes. Fuzzy logic is an extension of Boolean logic introduced by Lotfi Zadeh in 1965 based on the mathematical theory of fuzzy sets, a generalization of classical set theory. By introducing the notion of degree in the verification of a condition, the fuzzy method enables a condition to be in a state other than true or false, and provides a very valuable flexibility for reasoning, which makes it possible to take into account inaccuracies and uncertainties. One advantage of fuzzy logic for formalizing human reasoning is that the rules are set in natural language. In evaluation methods based on fuzzy logic, weights are used for the land characteristics. The objective of this study was to compare four methods of weight calculation in fuzzy logic to predict the yield of wheat in the study area covering 1500 ha in Kian town in Shahrekord (Chahrmahal and Bakhtiari province, Iran). Materials and Methods: In such investigations, climatic factors, and soil physical and chemical characteristics are studied. This investigation involves several studies including a lab study, and qualitative and quantitative land suitability evaluation with fuzzy logic for wheat. Factors affecting the wheat production consist of
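A fuzzy land-suitability score of the general kind compared above can be sketched as follows: each land characteristic receives a membership degree in [0, 1] via a trapezoidal membership function, and the weighted memberships are aggregated into a suitability index. The characteristics, breakpoints, and weights below are invented for illustration, not the study's wheat criteria.

```python
def trapezoid(x, a, b, c, d):
    """Membership: 0 below a, rising to 1 on [b, c], falling to 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def suitability(values, params, weights):
    """Weighted-average aggregation of fuzzy membership degrees."""
    keys = sorted(values)
    mu = [trapezoid(values[k], *params[k]) for k in keys]
    w = [weights[k] for k in keys]
    return sum(wi * mi for wi, mi in zip(w, mu)) / sum(w)

values  = {"pH": 7.2, "slope_pct": 3.0}
params  = {"pH": (4.5, 6.0, 7.5, 9.0), "slope_pct": (0.0, 0.1, 2.0, 8.0)}
weights = {"pH": 0.6, "slope_pct": 0.4}
score = suitability(values, params, weights)
```

The four weighting methods the paper compares would each produce a different `weights` dictionary; the aggregation step itself stays the same.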

  5. Prediction of IRI in short and long terms for flexible pavements: ANN and GMDH methods

    NARCIS (Netherlands)

    Ziari, H.; Sobhani, J.; Ayoubinejad, J.; Hartmann, Timo

    2015-01-01

    Prediction of pavement condition is one of the most important issues in pavement management systems. In this paper, capabilities of artificial neural networks (ANNs) and group method of data handling (GMDH) methods in predicting flexible pavement conditions were analysed in three levels: in 1 year,

  6. Ensemble approach combining multiple methods improves human transcription start site prediction.

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-01-01

    The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.

  7. Modification of an Existing In vitro Method to Predict Relative Bioavailable Arsenic in Soils

    Science.gov (United States)

    The soil matrix can sequester arsenic (As) and reduces its exposure by soil ingestion. In vivo dosing studies and in vitro gastrointestinal (IVG) methods have been used to predict relative bioavailable (RBA) As. Originally, the Ohio State University (OSU-IVG) method predicted R...

  8. Signal predictions for a proposed fast neutron interrogation method

    International Nuclear Information System (INIS)

    Sale, K.E.

    1992-12-01

    We have applied the Monte Carlo radiation transport code COG to assess the utility of a proposed explosives detection scheme based on neutron emission. In this scheme a pulsed neutron beam is generated by an approximately seven MeV deuteron beam incident on a thick Be target. A scintillation detector operating in current mode measures the neutrons transmitted through the object as a function of time. The flight time of unscattered neutrons from the source to the detector is simply related to the neutron energy. This information, along with neutron cross section excitation functions, is used to infer the densities of H, C, N and O in the volume sampled. The code we have chosen to use enables us to create very detailed and realistic models of the geometrical configuration of the system, the neutron source, and the detector response. By calculating the signals that will be observed for several configurations and compositions of the interrogated object, we can investigate and begin to understand how a system that could actually be fielded will perform. Using this modeling capability, many questions can be answered early on, with substantial savings in time and cost and with improvements in performance. We will present our signal predictions for simple single-element test cases and for explosive compositions. From these studies it is clear that the interpretation of the signals from such an explosives identification system will pose a substantial challenge.

  9. Predicted Infiltration for Sodic/Saline Soils from Reclaimed Coastal Areas: Sensitivity to Model Parameters

    Directory of Open Access Journals (Sweden)

    Dongdong Liu

    2014-01-01

    Full Text Available This study was conducted to assess the influences of soil surface conditions and initial soil water content on water movement in unsaturated sodic soils of reclaimed coastal areas. Data was collected from column experiments in which two soils from a Chinese coastal area reclaimed in 2007 (Soil A, saline) and 1960 (Soil B, nonsaline) were used, with bulk densities of 1.4 or 1.5 g/cm3. A 1D-infiltration model was created using a finite difference method and its sensitivity to hydraulic related parameters was tested. The model well simulated the measured data. The results revealed that soil compaction notably affected the water retention of both soils. Model simulations showed that increasing the ponded water depth had little effect on the infiltration process, since the increases in cumulative infiltration and wetting front advancement rate were small. However, the wetting front advancement rate increased and the cumulative infiltration decreased to a greater extent when θ0 was increased. Soil physical quality was described better by the S parameter than by the saturated hydraulic conductivity since the latter was also affected by the physical chemical effects on clay swelling occurring in the presence of different levels of electrolytes in the soil solutions of the two soils.

  10. Predicted infiltration for sodic/saline soils from reclaimed coastal areas: sensitivity to model parameters.

    Science.gov (United States)

    Liu, Dongdong; She, Dongli; Yu, Shuang'en; Shao, Guangcheng; Chen, Dan

    2014-01-01

    This study was conducted to assess the influences of soil surface conditions and initial soil water content on water movement in unsaturated sodic soils of reclaimed coastal areas. Data was collected from column experiments in which two soils from a Chinese coastal area reclaimed in 2007 (Soil A, saline) and 1960 (Soil B, nonsaline) were used, with bulk densities of 1.4 or 1.5 g/cm(3). A 1D-infiltration model was created using a finite difference method and its sensitivity to hydraulic related parameters was tested. The model well simulated the measured data. The results revealed that soil compaction notably affected the water retention of both soils. Model simulations showed that increasing the ponded water depth had little effect on the infiltration process, since the increases in cumulative infiltration and wetting front advancement rate were small. However, the wetting front advancement rate increased and the cumulative infiltration decreased to a greater extent when θ₀ was increased. Soil physical quality was described better by the S parameter than by the saturated hydraulic conductivity since the latter was also affected by the physical chemical effects on clay swelling occurring in the presence of different levels of electrolytes in the soil solutions of the two soils.
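A much-simplified sketch of the 1D finite-difference infiltration model type used above: explicit finite differences on the moisture-diffusivity form of the flow equation, with constant diffusivity D, gravity neglected, and a fixed surface water content representing ponding. The grid, D, and boundary values are illustrative assumptions, not the paper's calibrated soils.

```python
def infiltrate(theta, D, dz, dt, steps, theta_surface):
    """Explicit FD step for d(theta)/dt = D * d2(theta)/dz2 (stable if D*dt/dz^2 <= 0.5)."""
    theta = theta[:]
    for _ in range(steps):
        theta[0] = theta_surface              # ponded surface boundary condition
        new = theta[:]
        for i in range(1, len(theta) - 1):
            new[i] = theta[i] + D * dt / dz ** 2 * (
                theta[i + 1] - 2 * theta[i] + theta[i - 1])
        theta = new
    return theta

# 20 nodes, 1 cm spacing, initially dry profile (theta = 0.10), wet surface
profile = infiltrate([0.10] * 20, D=1e-4, dz=0.01, dt=0.2, steps=100,
                     theta_surface=0.45)
```

Here D·dt/dz² = 0.2, inside the explicit-scheme stability limit of 0.5; the wetting front appears as the monotone decrease of water content with depth.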

  11. An SEU rate prediction method for microprocessors of space applications

    International Nuclear Information System (INIS)

    Gao Jie; Li Qiang

    2012-01-01

    In this article, the relationship between the static SEU (Single Event Upset) rate and the dynamic SEU rate in microprocessors for satellites is studied using the process duty cycle concept and a fault injection technique. The results are compared to in-orbit flight monitoring data. They show that the dynamic SEU rate obtained using the process duty cycle can reasonably estimate the in-orbit SEU rate of a microprocessor, and that fault injection is a workable method for estimating the SEU rate. (authors)
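The duty-cycle idea in the record above can be illustrated as follows: the dynamic SEU rate of a processor resource is its static upset rate scaled by the fraction of time the application actually makes that resource sensitive. The static rate, resource names, bit counts, and duty cycles below are invented numbers for illustration only.

```python
STATIC_RATE = 1.2e-6          # assumed upsets / bit / day from static testing
resources = {                 # resource: (bits, duty cycle = fraction of time sensitive)
    "register_file": (1024, 0.35),
    "cache":         (8192, 0.10),
    "config_regs":   (256,  0.90),
}

def dynamic_seu_rate(resources, static_rate):
    """Sum of per-resource static rates weighted by their duty cycles."""
    return sum(bits * duty * static_rate for bits, duty in resources.values())

rate = dynamic_seu_rate(resources, STATIC_RATE)   # upsets / day, whole device
```

Fault injection then serves to measure those duty cycles empirically: injected upsets that never propagate to an error correspond to time windows where the resource was not sensitive.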

  12. Experimentally aided development of a turbine heat transfer prediction method

    International Nuclear Information System (INIS)

    Forest, A.E.; White, A.J.; Lai, C.C.; Guo, S.M.; Oldfield, M.L.G.; Lock, G.D.

    2004-01-01

    In the design of cooled turbomachinery blading a central role is played by the computer methods used to optimise the aerodynamic and thermal performance of the turbine aerofoils. Estimates of the heat load on the turbine blading should be as accurate as possible, in order that adequate life may be obtained with the minimum cooling air requirement. Computer methods are required which are able to model transonic flows, which are a mixture of high temperature combustion gases and relatively cool air injected through holes in the aerofoil surface. These holes may be of complex geometry, devised after empirical studies of the optimum shape and the most cost effective manufacturing technology. The method used here is a further development of the heat transfer design code (HTDC), originally written by Rolls-Royce plc under subcontract to Rolls-Royce Inc for the United States Air Force. The physical principles of the modelling employed in the code are explained without extensive mathematical details. The paper describes the calibration of the code in conjunction with a series of experimental measurements on a scale model of a high-pressure nozzle guide vane at non-dimensionally correct engine conditions. The results are encouraging, although indicating that some further work is required in modelling highly accelerated pressure surface flow

  13. Prediction of periodically correlated processes by wavelet transform and multivariate methods with applications to climatological data

    Science.gov (United States)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2015-05-01

    This article studies the prediction of periodically correlated process using wavelet transform and multivariate methods with applications to climatological data. Periodically correlated processes can be reformulated as multivariate stationary processes. Considering this fact, two new prediction methods are proposed. In the first method, we use stepwise regression between the principal components of the multivariate stationary process and past wavelet coefficients of the process to get a prediction. In the second method, we propose its multivariate version without principal component analysis a priori. Also, we study a generalization of the prediction methods dealing with a deterministic trend using exponential smoothing. Finally, we illustrate the performance of the proposed methods on simulated and real climatological data (ozone amounts, flows of a river, solar radiation, and sea levels) compared with the multivariate autoregressive model. The proposed methods give good results as we expected.

  14. An Electrochemical Method to Predict Corrosion Rates in Soils

    Energy Technology Data Exchange (ETDEWEB)

    Dafter, M. R. [Hunter Water Australia Pty Ltd, Newcastle (Australia)

    2016-10-15

    Linear polarization resistance (LPR) testing of soils has been used extensively by a number of water utilities across Australia for many years now to determine the condition of buried ferrous water mains. The LPR test itself is a relatively simple, inexpensive test that serves as a substitute for actual exhumation and physical inspection of buried water mains to determine corrosion losses. LPR testing results (and the corresponding pit depth estimates) in combination with proprietary pipe failure algorithms can provide a useful predictive tool in determining the current and future conditions of an asset [1]. A number of LPR tests have been developed on soil by various researchers over the years [1], but few have gained widespread commercial use, partly due to the difficulty in replicating the results. This author developed an electrochemical cell that was suitable for LPR soil testing and utilized this cell to test a series of soil samples obtained through an extensive program of field exhumations. The objective of this testing was to examine the relationship between short-term electrochemical testing and long-term in-situ corrosion of buried water mains, utilizing an LPR test that could be robustly replicated. Forty-one soil samples and related corrosion data were obtained from ad hoc condition assessments of buried water mains located throughout the Hunter region of New South Wales, Australia. Each sample was subjected to the electrochemical test developed by the author, and the resulting polarization data were compared with long-term pitting data obtained from each water main. The results of this testing program enabled the author to undertake a comprehensive review of the LPR technique as it is applied to soils and to examine whether correlations can be made between LPR testing results and long-term field corrosion.
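The electrochemistry underlying LPR testing is the Stern-Geary relation: corrosion current density is inversely proportional to the measured polarization resistance, and the current converts to a penetration rate via Faraday's law. The Tafel slopes and the sample Rp below are typical textbook values for iron in soil, assumed here rather than taken from the study.

```python
def corrosion_rate_mm_per_year(rp_ohm_cm2, beta_a=0.12, beta_c=0.12,
                               eq_weight=27.92, density=7.87):
    """LPR -> corrosion rate for iron (equivalent weight g/eq, density g/cm3)."""
    # Stern-Geary constant B (volts) from anodic/cathodic Tafel slopes
    B = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))
    i_corr = B / rp_ohm_cm2                  # corrosion current density, A/cm2
    # Faraday-based conversion: 3.27e3 * (A/cm2) * EW / density -> mm/year
    return 3.27e3 * i_corr * eq_weight / density

rate = corrosion_rate_mm_per_year(rp_ohm_cm2=5_000.0)
```

A polarization resistance of a few kΩ·cm² thus maps to a uniform rate of a few hundredths of a millimetre per year; relating that short-term rate to long-term pitting depth is exactly the correlation the study investigates.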

  15. Numerical evaluation of the five sensor probe method for measurement of local interfacial area concentration of cap bubbles

    International Nuclear Information System (INIS)

    Euh, D.J.; Yun, B.J.; Song, C.H.; Kwon, T.S.; Chung, M.K.; Lee, U.C.

    2000-01-01

    The interfacial area concentration (IAC) is one of the most important parameters in the two-fluid model for two-phase flow analysis. The IAC can be measured by a local conductivity probe method that uses the difference in conductivity between water and air/steam. The number of sensors in the conductivity probe may be chosen according to the flow regime of the two-phase flow. The four-sensor conductivity probe method predicts the IAC without any assumptions about bubble shape. The local IAC can be obtained by measuring the three-dimensional velocity vector elements at the measuring point, and the directional cosines of the sensors. The five-sensor conductivity probe method proposed in this study is based on the four-sensor probe method. With the five-sensor probe, the local IAC for a given measuring area of the probe can be predicted more exactly. In this paper, the mathematical approach of the five-sensor probe method for measuring the IAC is described, and a numerical simulation is carried out for ideal cap bubbles whose sizes and locations are determined by a random number generator. (author)

  16. Prediction of methylmercury accumulation in rice grains by chemical extraction methods

    International Nuclear Information System (INIS)

    Zhu, Dai-Wen; Zhong, Huan; Zeng, Qi-Long; Yin, Ying

    2015-01-01

    To explore the possibility of using chemical extraction methods to predict the phytoavailability/bioaccumulation of soil-bound MeHg, MeHg extractions by three widely-used extractants (CaCl2, DTPA, and (NH4)2S2O3) were compared with MeHg accumulation in rice grains. Despite variations in the characteristics of the different soils, MeHg extracted by (NH4)2S2O3 (highly affinitive to MeHg) correlated well with grain MeHg levels. Thus (NH4)2S2O3 extraction, solubilizing not only weakly-bound but also strongly-bound MeHg, may provide a measure of the ‘phytoavailable MeHg pool’ for rice plants. Moreover, a better prediction of grain MeHg levels was obtained when the growing condition of the rice plants was also considered. However, MeHg extracted by CaCl2 or DTPA, possibly quantifying the ‘exchangeable MeHg pool’ or ‘weakly-complexed MeHg pool’ in soils, may not indicate phytoavailable MeHg or predict grain MeHg levels. Our results demonstrate the possibility of predicting MeHg phytoavailability/bioaccumulation by (NH4)2S2O3 extraction, which could be useful in screening soils for rice cultivation in contaminated areas. - Highlights: • MeHg extraction by (NH4)2S2O3 correlates well with its accumulation in rice grains. • MeHg extraction by (NH4)2S2O3 provides a measure of phytoavailable MeHg in soils. • Some strongly-bound MeHg could be desorbed from soils and become available to rice plants. • MeHg extraction by CaCl2 or DTPA could not predict grain MeHg levels. - Methylmercury extraction from soils by (NH4)2S2O3 could possibly be used for predicting methylmercury phytoavailability and its bioaccumulation in rice grains.

  17. [Predictive methods versus clinical titration for the initiation of lithium therapy. A systematic review].

    Science.gov (United States)

    Geeraerts, I; Sienaert, P

    2013-01-01

    When lithium is administered, the clinician needs to know when the lithium in the patient’s blood has reached a therapeutic level. At the initiation of treatment the level is usually achieved gradually through the application of the titration method. In order to increase the efficacy of this procedure several methods for dosing lithium and for predicting lithium levels have been developed. To conduct a systematic review of the publications relating to the various methods for dosing lithium or predicting lithium levels at the initiation of therapy. We searched Medline systematically for articles published in English, French or Dutch between 1966 and April 2012 which described or studied a method for dosing lithium or for predicting the lithium level reached following a specific dosage. We screened the reference lists of relevant articles in order to locate additional papers. We found 38 lithium prediction methods, in addition to the clinical titration method. These methods can be divided into two categories: the ‘a priori’ methods and the ‘test-dose’ methods, the latter requiring the administration of a test dose of lithium. The lithium prediction methods generally achieve a therapeutic blood level faster than the clinical titration method, but none of the methods achieves convincing results. On the basis of our review, we propose that the titration method should be used as the standard method in clinical practice.

  18. Ensemble Methods in Data Mining Improving Accuracy Through Combining Predictions

    CERN Document Server

    Seni, Giovanni

    2010-01-01

    This book is aimed at novice and advanced analytic researchers and practitioners -- especially in Engineering, Statistics, and Computer Science. Those with little exposure to ensembles will learn why and how to employ this breakthrough method, and advanced practitioners will gain insight into building even more powerful models. Throughout, snippets of code in R are provided to illustrate the algorithms described and to encourage the reader to try the techniques. The authors are industry experts in data mining and machine learning who are also adjunct professors and popular speakers. Although e
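    The core claim of the record above, that combining predictions improves accuracy, is easy to demonstrate. The book's snippets are in R; the sketch below is our own minimal Python illustration of why a majority vote over independent classifiers beats any single one of them (the accuracy of 0.70 and ensemble size of 11 are arbitrary illustration values).

```python
import random

def ensemble_accuracy(p_single: float, n_models: int, trials: int = 20000) -> float:
    """Monte Carlo estimate of majority-vote accuracy for an ensemble of
    independent classifiers, each correct with probability p_single."""
    random.seed(42)
    correct = 0
    for _ in range(trials):
        # each model votes correctly with probability p_single
        votes = sum(random.random() < p_single for _ in range(n_models))
        if votes > n_models / 2:  # majority votes for the correct label
            correct += 1
    return correct / trials

# A single 70%-accurate model vs. a majority vote of 11 such models
single = ensemble_accuracy(0.70, 1)
ensemble = ensemble_accuracy(0.70, 11)
```

The gain relies on the errors being (approximately) independent, which is why practical ensembles diversify their base models.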

  19. A Novel Method to Predict Circulation Control Noise

    Science.gov (United States)

    2016-03-17

    and incompressibility of the flow are automatically satisfied (Sirovich, 1987). The difference between these two methods stems from the...From top left to bottom right: 1 kHz, 2 kHz, 4 kHz, 8 kHz, 12 kHz, and 16 kHz. blowing and lowest blowing (Cμ = 0 and Cμ = 0.004, respectively)...also shown that at low frequencies for the lower blowing conditions (Cμ = 0, Cμ = 0.004, and Cμ = 0.017) the levels are quite similar. By using

  20. Shelf life prediction of apple brownies using accelerated method

    Science.gov (United States)

    Pulungan, M. H.; Sukmana, A. D.; Dewi, I. A.

    2018-03-01

    The aim of this research was to determine shelf life of apple brownies. Shelf life was determined with Accelerated Shelf Life Testing method and Arrhenius equation. Experiment was conducted at 25, 35, and 45°C for 30 days. Every five days, the sample was analysed for free fatty acid (FFA), water activity (Aw), and organoleptic acceptance (flavour, aroma, and texture). The shelf life of the apple brownies based on FFA were 110, 54, and 28 days at temperature of 25, 35, and 45°C, respectively.
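    The accelerated-testing step in the record above can be sketched numerically: fit an Arrhenius-type relation ln(t_shelf) = a + b/T to the reported FFA-based shelf lives (110, 54, and 28 days at 25, 35, and 45 °C) and extrapolate to other storage temperatures. The regression form is a standard ASLT assumption, not something taken verbatim from the paper.

```python
import math

# FFA-based shelf lives (days) at three storage temperatures (deg C), from the abstract
temps_c = [25, 35, 45]
shelf_days = [110, 54, 28]

# Arrhenius-type model: ln(t_shelf) = a + b / T, with T in kelvin
xs = [1.0 / (t + 273.15) for t in temps_c]
ys = [math.log(d) for d in shelf_days]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

def predict_shelf_life(temp_c: float) -> float:
    """Extrapolated shelf life (days) at an arbitrary storage temperature."""
    return math.exp(a + b / (temp_c + 273.15))
```

With these three points the fit is close to log-linear, so the back-prediction at 25 °C lands near the measured 110 days.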

  1. Comparison of diffusion charging and mobility-based methods for measurement of aerosol agglomerate surface area.

    Science.gov (United States)

    Ku, Bon Ki; Kulkarni, Pramod

    2012-05-01

    We compare different approaches to measuring the surface area of aerosol agglomerates. The objective was to compare field methods, such as mobility- and diffusion-charging-based approaches, with a laboratory approach, the Brunauer-Emmett-Teller (BET) method used for bulk powder samples. To allow intercomparison of the various surface area measurements, we defined the 'geometric surface area' of agglomerates (assuming agglomerates are made up of ideal spheres), and compared the various surface area measurements to the geometric surface area. Four different approaches for measuring the surface area of agglomerate particles in the size range of 60-350 nm were compared, using (i) diffusion charging-based sensors from three different manufacturers, (ii) the mobility diameter of an agglomerate, (iii) the mobility diameter of an agglomerate assuming a linear chain morphology with uniform primary particle size, and (iv) surface area estimation based on tandem mobility-mass measurement and microscopy. Our results indicate that the tandem mobility-mass measurement, which can be applied directly to airborne particles unlike the BET method, agrees well with the BET method. It was also shown that the three diffusion charging-based surface area measurements of silver agglomerates agreed within a factor of 2 and were lower than those obtained from the tandem mobility-mass and microscopy method by a factor of 3-10 in the size range studied. Surface area estimated using the mobility diameter depended on the structure or morphology of the agglomerate, with significant underestimation at high fractal dimensions approaching 3.
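    The 'geometric surface area' reference quantity in the record above treats an agglomerate as a collection of ideal spheres, so it reduces to N times the surface of one primary sphere. A minimal sketch (the function name and the example particle count/diameter are our own illustration, not values from the study):

```python
import math

def geometric_surface_area(n_primary: int, d_primary_nm: float) -> float:
    """Surface area (nm^2) of an agglomerate idealized as n_primary
    ideal spheres of diameter d_primary_nm, ignoring contact-point overlap."""
    return n_primary * math.pi * d_primary_nm ** 2

# e.g. an agglomerate of 50 primary particles of 20 nm diameter
area = geometric_surface_area(50, 20.0)
```

Real agglomerates lose some area at sintered necks between primaries, which is one reason charging-based measurements can fall below this idealized value.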

  2. Short-term prediction method of wind speed series based on fractal interpolation

    International Nuclear Information System (INIS)

    Xiu, Chunbo; Wang, Tiantian; Tian, Meng; Li, Yanqing; Cheng, Yi

    2014-01-01

    Highlights: • An improved fractal interpolation prediction method is proposed. • The chaos optimization algorithm is used to obtain the iterated function system. • Fractal extrapolation interpolation prediction of wind speed series is performed. - Abstract: In order to improve the prediction performance for wind speed series, rescaled range analysis is used to analyze the fractal characteristics of the wind speed series. An improved fractal interpolation prediction method is proposed to predict wind speed series whose Hurst exponents are close to 1. An optimization function is designed, composed of the interpolation error and constraint terms on the vertical scaling factors of the fractal interpolation iterated function system. The chaos optimization algorithm is used to optimize this function and resolve the optimal vertical scaling factors. Exploiting self-similarity and scale invariance, fractal extrapolation prediction can be performed by extending the fractal characteristic from the internal interval to the external interval. Simulation results show that the fractal interpolation prediction method achieves better prediction results than other methods for wind speed series with fractal characteristics, and its performance improves further when the fractal characteristic of its iterated function system is similar to that of the predicted wind speed series
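    The first step named in the abstract above, rescaled range (R/S) analysis, can be sketched compactly: for each window size, the range of mean-adjusted cumulative deviations is divided by the window standard deviation, and the Hurst exponent is the slope of log(R/S) against log(window size). This is a generic textbook R/S implementation, not the paper's code; window sizes and the white-noise check are our choices.

```python
import math
import random

def rs_hurst(series, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            mean = sum(w) / n
            # range of the mean-adjusted cumulative sum within the window
            dev, cum = [], 0.0
            for v in w:
                cum += v - mean
                dev.append(cum)
            r = max(dev) - min(dev)
            s = math.sqrt(sum((v - mean) ** 2 for v in w) / n)
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(math.log(n))
            log_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
    # slope of log(R/S) vs log(n) is the Hurst exponent estimate
    mx = sum(log_n) / len(log_n)
    my = sum(log_rs) / len(log_rs)
    return sum((x - mx) * (y - my) for x, y in zip(log_n, log_rs)) / \
           sum((x - mx) ** 2 for x in log_n)

random.seed(0)
white_noise = [random.gauss(0, 1) for _ in range(1024)]
h = rs_hurst(white_noise)  # near 0.5 for uncorrelated noise
```

Series with H close to 1 are strongly persistent, which is what motivates the paper's choice of fractal extrapolation for such wind speed records.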

  3. Limited Area Predictability: Is There A Limit To The Operational Usefulness of A Lam

    Science.gov (United States)

    Mesinger, F.

    The issue of limited area predictability in the context of the operational experience of the Eta Model, driven by the LBCs of the NCEP global spectral (Avn) model, is examined. The traditional view is that "the contamination at the lateral boundaries ... limits the operational usefulness of the LAM beyond some forecast time range". In the case of the Eta this contamination consists not only of the lower resolution of the Avn LBCs and the much discussed mathematical "lateral boundary error", but also of the use of the LBCs of the previous Avn run, which at 0000 and 1200 UTC is estimated to amount to about an 8 h loss in accuracy. Looking for signs of the Eta accuracy falling, in relative terms, behind that of the Avn, we have examined the trend of the Eta vs Avn precipitation scores, the rms fits to raobs of the two models as a function of time, and the errors of these models at extended forecast times in placing the centers of major lows. In none of these efforts, some including forecasts out to 84 h, were we able to detect signs of the Eta accuracy being visibly affected by the inflow of lateral boundary errors. It is therefore hypothesized that some of the Eta design features compensate for the increasing influence of the Avn LBC errors. Candidate features are discussed, with the eta coordinate being a contender to play a major role. This situation being possible for the pair of models discussed, the existence of a general limit to the operational usefulness of a LAM seems questionable.

  4. A Method for Driving Route Predictions Based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Ning Ye

    2015-01-01

    Full Text Available We present a driving route prediction method based on the Hidden Markov Model (HMM). This method can accurately predict a vehicle's entire route as early in a trip's lifetime as possible, without requiring origins and destinations as input. Firstly, we propose the route recommendation system architecture, in which route predictions play an important role. Secondly, we define a road network model, normalize each driving route in a rectangular coordinate system, and build the HMM in preparation for route predictions, using a training-set extension method based on K-means++ and the add-one (Laplace) smoothing technique. Thirdly, we present the route prediction algorithm. Finally, we show experimental results on the effectiveness of the HMM-based route predictions.
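    The transition-probability and Laplace-smoothing machinery in the record above can be illustrated with a deliberately simplified first-order Markov sketch (the full paper builds an HMM; the toy road segments and training routes below are hypothetical):

```python
from collections import defaultdict

# Toy training routes: sequences of road-segment ids in a hypothetical network
training_routes = [
    ["A", "B", "C", "D"],
    ["A", "B", "C", "E"],
    ["A", "B", "C", "D"],
    ["F", "B", "C", "D"],
]

segments = sorted({s for r in training_routes for s in r})
counts = defaultdict(lambda: defaultdict(int))
for route in training_routes:
    for cur, nxt in zip(route, route[1:]):
        counts[cur][nxt] += 1

def transition_prob(cur: str, nxt: str) -> float:
    """Transition probability with add-one (Laplace) smoothing, as the
    paper applies to its HMM parameters."""
    total = sum(counts[cur].values())
    return (counts[cur][nxt] + 1) / (total + len(segments))

def predict_next(cur: str) -> str:
    """Most probable next segment given the current one."""
    return max(segments, key=lambda s: transition_prob(cur, s))

nxt = predict_next("C")  # "D" follows "C" in 3 of the 4 training routes
```

Smoothing keeps unseen transitions at a small nonzero probability, so the predictor never rules out a road segment merely because it was absent from the training set.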

  5. Machine Learning Methods for Prediction of CDK-Inhibitors

    Science.gov (United States)

    Ramana, Jayashree; Gupta, Dinesh

    2010-01-01

    Progression through the cell cycle involves the coordinated activities of a suite of cyclin/cyclin-dependent kinase (CDK) complexes. The activities of the complexes are regulated by CDK inhibitors (CDKIs). Apart from their role as cell cycle regulators, CDKIs are involved in apoptosis, transcriptional regulation, cell fate determination, cell migration and cytoskeletal dynamics. As the complexes perform crucial and diverse functions, they are important drug targets for tumour and stem cell therapeutic interventions. However, CDKIs are represented by proteins with considerable sequence heterogeneity and may fail to be identified by simple similarity search methods. In this work we have evaluated and developed machine learning methods for the identification of CDKIs. We used different compositional features and evolutionary information in the form of PSSMs, from CDKIs and non-CDKIs, to generate SVM and ANN classifiers. In the first stage, both the ANN and SVM models were evaluated using leave-one-out cross-validation, and in the second stage they were tested on independent data sets. The PSSM-based SVM model emerged as the best classifier in both stages and is publicly available through a user-friendly web interface at http://bioinfo.icgeb.res.in/cdkipred. PMID:20967128
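    The simplest of the "compositional features" mentioned in the record above is the amino acid composition vector, the fraction of each of the 20 standard residues in a sequence, which can feed an SVM or ANN. A minimal sketch (the example sequence is arbitrary, and this is only the feature-extraction step, not the paper's classifier):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_vector(seq: str) -> list:
    """20-dimensional amino acid composition feature: the fraction of
    each standard residue in the sequence."""
    seq = seq.upper()
    return [seq.count(aa) / len(seq) for aa in AMINO_ACIDS]

vec = composition_vector("MKVLAAGK")  # arbitrary 8-residue example
```

Because the vector is length-normalized, sequences of very different lengths become directly comparable classifier inputs.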

  6. Prediction of skin sensitizers using alternative methods to animal experimentation.

    Science.gov (United States)

    Johansson, Henrik; Lindstedt, Malin

    2014-07-01

    Regulatory frameworks within the European Union demand that chemical substances are investigated for their ability to induce sensitization, an adverse health effect caused by the human immune system in response to chemical exposure. A recent ban on the use of animal tests within the cosmetics industry has led to an urgent need for alternative animal-free test methods that can be used for assessment of chemical sensitizers. To date, no such alternative assay has yet completed formal validation. However, a number of assays are in development and the understanding of the biological mechanisms of chemical sensitization has greatly increased during the last decade. In this MiniReview, we aim to summarize and give our view on the recent progress of method development for alternative assessment of chemical sensitizers. We propose that integrated testing strategies should comprise complementary assays, providing measurements of a wide range of mechanistic events, to perform well-educated risk assessments based on weight of evidence. © 2014 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).

  7. A simple method for improving predictions of nuclear masses

    International Nuclear Information System (INIS)

    Yamada, Masami; Tsuchiya, Susumu; Tachibana, Takahiro

    1991-01-01

    No formula for atomic masses exactly reproduces all nuclides, and none can be expected for the time being. At present the masses of many nuclides are known experimentally with good accuracy, but the values of every mass formula differ more or less from the experimental values, apart from a small number of accidental coincidences. Under such circumstances, how can the mass of an unknown nuclide best be forecast? Generally speaking, taking the value of a mass formula itself seems not to be the best approach. It may be better to take the difference between the values of the mass formula and experiment for nuclides close to the one under consideration, and to correct the forecast value of the mass formula accordingly. In this report, a simple method for this correction is proposed. A formula is proposed that interpolates between two extreme cases: that the difference between the true mass and the value of a mass formula is the sum of a proton part and a neutron part, and that the difference is distributed randomly around zero. The procedure for its concrete application is explained. This method can also be applied to physical quantities other than mass, for example the half-life of beta decay. (K.I.)
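    The first extreme case named in the record above, a deviation that separates into a proton part and a neutron part, implies a concrete neighbour-based correction: if delta(Z, N) = p(Z) + n(N), then delta(Z, N) can be estimated from three measured neighbours as delta(Z, N0) + delta(Z0, N) - delta(Z0, N0). A sketch of that arithmetic (the deviation values below are synthetic illustrations, not nuclear data):

```python
# Synthetic deviations (MeV) between measured and formula masses,
# keyed by (Z, N); purely illustrative values.
measured_deviation = {
    (50, 70): 0.8,   # delta(Z0, N0)
    (50, 72): 1.1,   # delta(Z0, N)
    (52, 70): 0.5,   # delta(Z,  N0)
}

def estimated_deviation(z, n, z0, n0):
    """If delta(Z, N) ~= p(Z) + n(N), the unknown deviation follows from
    three measured neighbours: delta(Z,N0) + delta(Z0,N) - delta(Z0,N0)."""
    return (measured_deviation[(z, n0)] + measured_deviation[(z0, n)]
            - measured_deviation[(z0, n0)])

def corrected_mass(formula_mass, z, n, z0=50, n0=70):
    """Formula prediction plus the neighbour-based correction."""
    return formula_mass + estimated_deviation(z, n, z0, n0)
```

The paper's actual formula interpolates between this separable case and the purely random case, so the correction above is only the limiting behaviour at one end.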

  8. Maximum Likelihood Method for Predicting Environmental Conditions from Assemblage Composition: The R Package bio.infer

    Directory of Open Access Journals (Sweden)

    Lester L. Yuan

    2007-06-01

    Full Text Available This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from only biological data, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.
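    The maximum likelihood inference that bio.infer automates can be sketched in a few lines: given presence/absence of taxa and each taxon's probability of occurrence as a function of an environmental gradient, pick the gradient value that maximizes the joint likelihood. Everything below (the taxa, their logistic response parameters, the grid) is a hypothetical illustration in Python, not the R package's code.

```python
import math

# Hypothetical taxon-environment responses: logistic presence probability
# along an environmental gradient x; (optimum, slope) are made-up values.
taxa = {
    "mayfly": (2.0, -1.5),   # declines as x increases
    "midge":  (6.0,  1.2),   # increases with x
}

def presence_prob(taxon, x):
    opt, slope = taxa[taxon]
    return 1.0 / (1.0 + math.exp(-slope * (x - opt)))

def ml_inference(assemblage, grid=None):
    """Grid-search maximum likelihood estimate of the environmental value
    given which taxa were present (1) or absent (0)."""
    grid = grid or [i * 0.1 for i in range(0, 101)]
    def loglik(x):
        ll = 0.0
        for taxon, present in assemblage.items():
            p = presence_prob(taxon, x)
            ll += math.log(p if present else 1.0 - p)
        return ll
    return max(grid, key=loglik)

# Mayfly present, midge absent: the likelihood favours low gradient values
x_hat = ml_inference({"mayfly": 1, "midge": 0})
```

The real package fits the taxon-environment relationships from calibration data and handles many taxa at once; the statistical rigour the abstract mentions comes from maximizing exactly this kind of joint likelihood rather than weighted averaging.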

  9. Extensions of Island Biogeography Theory predict the scaling of functional trait composition with habitat area and isolation.

    Science.gov (United States)

    Jacquet, Claire; Mouillot, David; Kulbicki, Michel; Gravel, Dominique

    2017-02-01

    The Theory of Island Biogeography (TIB) predicts how area and isolation influence species richness equilibrium on insular habitats. However, the TIB remains silent about functional trait composition and provides no information on the scaling of functional diversity with area, an observation that is now documented in many systems. To fill this gap, we develop a probabilistic approach to predict the distribution of a trait as a function of habitat area and isolation, extending the TIB beyond the traditional species-area relationship. We compare model predictions to the body-size distribution of piscivorous and herbivorous fishes found on tropical reefs worldwide. We find that small and isolated reefs have a higher proportion of large-sized species than large and connected reefs. We also find that knowledge of species body-size and trophic position improves the predictions of fish occupancy on tropical reefs, supporting both the allometric and trophic theory of island biogeography. The integration of functional ecology to island biogeography is broadly applicable to any functional traits and provides a general probabilistic approach to study the scaling of trait distribution with habitat area and isolation. © 2016 John Wiley & Sons Ltd/CNRS.

  10. Prediction of 5-year overall survival in cervical cancer patients treated with radical hysterectomy using computational intelligence methods.

    Science.gov (United States)

    Obrzut, Bogdan; Kusy, Maciej; Semczuk, Andrzej; Obrzut, Marzanna; Kluska, Jacek

    2017-12-12

    Computational intelligence methods, including non-linear classification algorithms, can be used in medical research and practice as decision-making tools. This study aimed to evaluate the usefulness of artificial intelligence models for predicting 5-year overall survival in patients with cervical cancer treated by radical hysterectomy. The data set was collected from 102 patients with cervical cancer, FIGO stage IA2-IIB, who underwent primary surgical treatment. Twenty-three demographic and tumor-related parameters, together with selected perioperative data, were collected for each patient. The simulations involved six computational intelligence methods: the probabilistic neural network (PNN), multilayer perceptron network, gene expression programming classifier, support vector machines algorithm, radial basis function neural network and k-means algorithm. The prediction ability of the models was determined based on accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve. The results of the computational intelligence methods were compared with the results of linear regression analysis as a reference model. The best results were obtained by the PNN model. This neural network provided very high prediction ability, with an accuracy of 0.892 and a sensitivity of 0.975. The area under the receiver operating characteristic curve of the PNN was also high, at 0.818. The outcomes obtained by the other classifiers were markedly worse. The PNN model is an effective tool for predicting 5-year overall survival in cervical cancer patients treated with radical hysterectomy.
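    The four evaluation criteria in the record above (accuracy, sensitivity, specificity, ROC AUC) are standard and easy to compute from labels and scores. A self-contained sketch with made-up toy data (1 = survived 5 years, 0 = did not), using the rank-based Mann-Whitney formulation for AUC:

```python
def classification_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, sensitivity, specificity at a threshold, plus ROC AUC
    via the rank (Mann-Whitney) formulation."""
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # AUC = probability a random positive outscores a random negative
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    return accuracy, sensitivity, specificity, auc

# Illustrative scores only, not the study's data
acc, sens, spec, auc = classification_metrics(
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0.9, 0.8, 0.6, 0.4, 0.3, 0.7, 0.55, 0.2])
```

Reporting AUC alongside threshold-dependent metrics matters because, as in the toy data, a classifier can rank cases perfectly yet still misclassify some at a fixed cutoff.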

  11. A noninvasive method for the prediction of fetal hemolytic disease

    Directory of Open Access Journals (Sweden)

    E. N. Kravchenko

    2017-01-01

    Full Text Available Objective: to improve the diagnosis of fetal hemolytic disease. Subjects and methods. The study group consisted of 42 pregnant women whose newborn infants had varying degrees of hemolytic disease. The women were divided into 3 subgroups according to the severity of neonatal hemolytic disease: (1) pregnant women whose neonates were born with severe hemolytic disease (n = 14); (2) those who gave birth to babies with moderate hemolytic disease (n = 11); (3) those who delivered infants with mild hemolytic disease (n = 17). A comparison group included 42 pregnant women whose babies were born without signs of hemolytic disease. Curves for blood flow velocity in the middle cerebral artery were analyzed in fetuses of 25 to 39 weeks' gestation. Results. The peak systolic blood flow velocity was highest in Subgroup 1; however, the indicator did not exceed 1.5 MoM even in severe fetal anemic syndrome. The fetal middle cerebral artery blood flow velocity rating scale was divided into 2 zones: (1) the boundary values of peak systolic blood flow velocity from the median to the obtained midscore; (2) the boundary values of peak systolic blood flow velocity from the obtained values up to 1.5 MoM. Conclusion. A value of peak systolic blood flow velocity in Zone 2, or a dynamic change transiting into this zone, can serve as a prognostic factor for the development of severe fetal hemolytic disease.
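    The zoning logic in the record above rests on expressing the measured peak systolic velocity (PSV) as multiples of the gestational-age median (MoM) and checking which zone it falls in. A minimal sketch; the median PSV values and the midscore boundary of 1.25 below are placeholders (the study derived its own boundary from its data), so do not read them as clinical reference values.

```python
# Placeholder medians of fetal MCA peak systolic velocity (cm/s) by
# gestational week; illustrative numbers only, NOT clinical data.
MEDIAN_PSV = {25: 30.7, 30: 38.0, 35: 46.9, 39: 55.0}

def psv_mom(measured_psv: float, week: int) -> float:
    """Multiples of the median (MoM) for the measured PSV."""
    return measured_psv / MEDIAN_PSV[week]

def risk_zone(mom: float, midscore: float = 1.25) -> int:
    """Zone 1: from the median up to the midscore; Zone 2: from the
    midscore up to 1.5 MoM, the zone the study links to severe disease."""
    return 1 if mom < midscore else 2

zone = risk_zone(psv_mom(52.0, 30))  # 52 / 38 is about 1.37 MoM
```

Normalizing to MoM is what makes a single boundary usable across gestational ages, since the raw PSV median itself rises steadily through pregnancy.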

  12. Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods

    OpenAIRE

    Shan, Min

    2017-01-01

    With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted on churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods for defining churn prediction models and to apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...

  13. A method of quantitative prediction for sandstone type uranium deposit in Russia and its application

    International Nuclear Information System (INIS)

    Chang Shushuai; Jiang Minzhong; Li Xiaolu

    2008-01-01

    The paper presents the foundational principles of quantitative prediction for sandstone-type uranium deposits in Russia. Some key methods, such as physical-mathematical model construction and deposit prediction, are described. The method has been applied to deposit prediction in the Dahongshan region of the Chaoshui basin. It is concluded that the technique can strengthen the methodology of quantitative prediction for sandstone-type uranium deposits, and that it could be used as a new technique in China. (authors)

  14. Validity of a Manual Soft Tissue Profile Prediction Method Following Mandibular Setback Osteotomy

    OpenAIRE

    Kolokitha, Olga-Elpis

    2007-01-01

    Objectives The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery and compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism taken at the end of pre-surgical orthodontics and approximately one year after surgery were used. Methods To test the validity of the manu...

  15. Plant management in natural areas: balancing chemical, mechanical, and cultural control methods

    Science.gov (United States)

    Steven Manning; James. Miller

    2011-01-01

    After determining the best course of action for control of an invasive plant population, it is important to understand the variety of methods available to the integrated pest management professional. A variety of methods are now widely used in managing invasive plants in natural areas, including chemical, mechanical, and cultural control methods. Once the preferred...

  16. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  17. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
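    The LASSO method recommended in the two NTCP records above owes its variable-selection behaviour to the soft-thresholding operator, which drives weak coefficients exactly to zero. A self-contained cyclic coordinate-descent sketch on toy data (this is a generic textbook LASSO, not the study's modeling pipeline; the data are synthetic):

```python
def soft_threshold(rho: float, lam: float) -> float:
    """Soft-thresholding operator: shrinks toward zero, zeroing small values."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 by cyclic coordinate descent."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            rho = 0.0  # correlation of feature j with the partial residual
            z = 0.0    # sum of squares of feature j
            for i in range(n):
                r_ij = y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                rho += X[i][j] * r_ij
                z += X[i][j] ** 2
            w[j] = soft_threshold(rho / n, lam) / (z / n)
    return w

# Toy data: y depends on feature 0 only; LASSO should zero out feature 1
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, -0.1], [5.0, 0.15]]
y = [2.0, 4.1, 5.9, 8.2, 9.8]
w = lasso_coordinate_descent(X, y, lam=0.5)
```

The zeroed coefficient is exactly why the study found LASSO models as interpretable as stepwise selection: irrelevant predictors drop out of the model entirely rather than lingering with small weights.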

  18. Limited Sampling Strategy for the Prediction of Area Under the Curve (AUC) of Statins: Reliability of a Single Time Point for AUC Prediction for Pravastatin and Simvastatin.

    Science.gov (United States)

    Srinivas, N R

    2016-02-01

    Statins are widely prescribed medicines and are also available in fixed-dose combinations with other drugs to treat several chronic ailments. Given the safety issues associated with statins, it may be important to assess the feasibility of a single-time-point concentration strategy for prediction of exposure (area under the curve; AUC). The peak concentration (Cmax) was used to establish a relationship with AUC separately for pravastatin and simvastatin using published pharmacokinetic data. The regression equations generated for the statins were used to predict AUC values from various literature references. The fold difference of the observed divided by predicted values, along with the correlation coefficient (r), was used to judge the feasibility of the single-time-point approach. Both pravastatin and simvastatin showed excellent correlation of Cmax vs. AUC values, with r ≥ 0.9638 for the AUC predictions, and >81% of the predicted values were within a narrower range of >0.75-fold of the observed values; AUC values showed excellent correlation for pravastatin (r=0.9708, n=115) for the AUC predictions. On the basis of the present work, it is feasible to develop a single concentration time point strategy that coincides with Cmax occurrence for both pravastatin and simvastatin from a therapeutic drug monitoring perspective. © Georg Thieme Verlag KG Stuttgart · New York.
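    The single-time-point strategy in the record above reduces to a linear regression of AUC on Cmax in a calibration set, followed by a fold-difference check of observed against predicted AUC. A sketch with entirely hypothetical concentration data (the study's actual pravastatin/simvastatin values are in the cited literature, not reproduced here):

```python
# Hypothetical calibration data (not the study's values)
cmax = [10.0, 15.0, 22.0, 30.0, 41.0]   # peak concentration, ng/mL
auc = [52.0, 80.0, 115.0, 160.0, 210.0]  # exposure, ng*h/mL

n = len(cmax)
mean_x = sum(cmax) / n
mean_y = sum(auc) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(cmax, auc)) / \
        sum((x - mean_x) ** 2 for x in cmax)
intercept = mean_y - slope * mean_x

def predict_auc(c: float) -> float:
    """AUC predicted from a single Cmax observation."""
    return intercept + slope * c

def fold_difference(observed: float, predicted: float) -> float:
    """Observed/predicted ratio used to judge prediction quality."""
    return observed / predicted

fold = fold_difference(118.0, predict_auc(22.0))
```

Judging predictions by whether the fold difference stays inside a band around 1.0 is what makes the approach usable for therapeutic drug monitoring, where only one well-timed sample may be practical.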

  19. Real-time prediction of respiratory motion based on local regression methods

    International Nuclear Information System (INIS)

    Ruan, D; Fessler, J A; Balter, J M

    2007-01-01

    Recent developments in modulation techniques enable conformal delivery of radiation doses to small, localized target volumes. One of the challenges in using these techniques is real-time tracking and prediction of target motion, which is necessary to accommodate system latencies. For image-guided-radiotherapy systems, it is also desirable to minimize sampling rates to reduce imaging dose. This study focuses on predicting respiratory motion, which can significantly affect lung tumours. Predicting respiratory motion in real time is challenging, due to the complexity of breathing patterns and the many sources of variability. We propose a prediction method based on local regression. There are three major ingredients of this approach: (1) forming an augmented state space to capture system dynamics, (2) local regression in the augmented space to train the predictor from previous observation data, exploiting the semi-periodicity of respiratory motion, (3) local weighting adjustment to incorporate fading temporal correlations. To evaluate prediction accuracy, we computed the root mean square error between the predicted tumor motion and its observed location for ten patients. For comparison, we also applied commonly used predictive methods, namely linear prediction, neural networks and Kalman filtering, to the same data. The proposed method reduced the prediction error for all imaging rates and latency lengths, particularly for long prediction lengths
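    The first two ingredients named in the record above, an augmented (delay-embedded) state space and local regression over past observations, can be sketched as a nearest-neighbour predictor: embed the recent history as a short vector, find similar past states, and average their successors with inverse-distance weights. This is a simplified stand-in for the paper's method (embedding dimension, neighbour count, and the toy sinusoidal trace are our choices).

```python
import math

def predict_next(series, dim=3, k=5):
    """One-step-ahead prediction by local regression: delay-embed the
    history, find the k nearest past states to the current state, and
    average their successors weighted by inverse distance."""
    cur = series[-dim:]
    candidates = []
    for i in range(dim, len(series) - 1):
        past = series[i - dim:i]          # a past state window
        d = math.dist(past, cur)
        candidates.append((d, series[i]))  # successor of that window
    candidates.sort(key=lambda t: t[0])
    num = den = 0.0
    for d, succ in candidates[:k]:
        w = 1.0 / (d + 1e-9)
        num += w * succ
        den += w
    return num / den

# Semi-periodic toy breathing trace: 0.25 Hz sine sampled at 10 Hz
trace = [math.sin(2 * math.pi * 0.25 * t / 10.0) for t in range(300)]
pred = predict_next(trace)
true_next = math.sin(2 * math.pi * 0.25 * 300 / 10.0)
```

Semi-periodicity is what makes this work: the current breathing state has many close matches earlier in the trace, and their successors are informative about what comes next.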

  20. A Novel Method for Predicting Anisakid Nematode Infection of Atlantic Cod Using Rough Set Theory.

    Science.gov (United States)

    Wąsikowska, Barbara; Sobecka, Ewa; Bielat, Iwona; Legierko, Monika; Więcaszek, Beata

    2018-03-01

    Atlantic cod (Gadus morhua L.) is one of the most important fish species in the fisheries industries of many countries; however, these fish are often infected with parasites. The detection of pathogenic larval nematodes is usually performed in fish processing facilities by visual examination using candling or by digesting muscles in artificial digestive juices, but these methods are both time and labor intensive. This article presents an innovative approach to the analysis of cod parasites from both the Atlantic and Baltic Sea areas through the application of rough set theory, one of the methods of artificial intelligence, for the prediction of food safety in a food production chain. The parasitological examinations focused on nematode larvae pathogenic to humans, e.g., Anisakis simplex, Contracaecum osculatum, and Pseudoterranova decipiens. The analysis allowed the identification of protocols with which it is possible to make preliminary estimates of the quantity and quality of parasites found in cod catches before detailed analyses are performed. The results indicate that the method used can be an effective analytical tool for these types of data. To achieve this goal, a database is needed that contains the patterns of intensity of parasite infections and the condition of commercial fish species in different localities across their distributions.
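    The rough set machinery behind the record above rests on two constructions: the lower approximation (objects whose attribute pattern always co-occurs with the target decision) and the upper approximation (objects whose pattern at least sometimes does). A minimal sketch with hypothetical catch records (attributes and values are invented for illustration):

```python
from collections import defaultdict

def approximations(objects, condition_attrs, decision_attr, target_value):
    """Lower and upper approximations of a decision class in rough set
    theory, using indiscernibility classes over the condition attributes."""
    classes = defaultdict(list)
    for obj in objects:
        key = tuple(obj[a] for a in condition_attrs)
        classes[key].append(obj)
    lower, upper = [], []
    for members in classes.values():
        decisions = {m[decision_attr] for m in members}
        if decisions == {target_value}:   # certainly in the class
            lower.extend(members)
        if target_value in decisions:     # possibly in the class
            upper.extend(members)
    return lower, upper

# Hypothetical catch records, not the study's data
fish = [
    {"area": "Baltic", "season": "spring", "infected": "yes"},
    {"area": "Baltic", "season": "spring", "infected": "yes"},
    {"area": "Baltic", "season": "autumn", "infected": "no"},
    {"area": "Atlantic", "season": "spring", "infected": "yes"},
    {"area": "Atlantic", "season": "spring", "infected": "no"},
]
low, up = approximations(fish, ["area", "season"], "infected", "yes")
```

The gap between the two approximations (the boundary region) is what quantifies the uncertainty of a preliminary, attribute-based estimate made before detailed laboratory analysis.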

  1. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    Science.gov (United States)

    Anggraeni, Novia Antika

    2015-04-01

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially for a volcano's inhabited slope area, such as that of Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as seismicity, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method for determining the time of volcanic eruption introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990 - 2012. The data were plotted as graphs of the inverse seismic energy rate versus time; the FFM graphical technique fits these with simple linear regression. As data quality control, to increase the time precision, the correlation coefficient of the inverse seismic energy rate versus time is used. From the results of the graph analysis, the deviation of the predicted eruption time from the actual eruption time varies between -2.86 and 5.49 days.

  2. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    International Nuclear Information System (INIS)

    Anggraeni, Novia Antika

    2015-01-01

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially for a volcano’s inhabited slope area, such as that of Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as seismicity, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method for determining the time of volcanic eruption introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990 – 2012. The data were plotted as graphs of the inverse seismic energy rate versus time; the FFM graphical technique fits these with simple linear regression. As data quality control, to increase the time precision, the correlation coefficient of the inverse seismic energy rate versus time is used. From the results of the graph analysis, the deviation of the predicted eruption time from the actual eruption time varies between −2.86 and 5.49 days

  3. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    Energy Technology Data Exchange (ETDEWEB)

    Anggraeni, Novia Antika, E-mail: novia.antika.a@gmail.com [Geophysics Sub-department, Physics Department, Faculty of Mathematic and Natural Science, Universitas Gadjah Mada. BLS 21 Yogyakarta 55281 (Indonesia)

    2015-04-24

    Testing eruption time prediction is an effort to support volcanic disaster mitigation, especially on inhabited volcano slopes such as those of Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as seismicity, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method for determining the time of a volcanic eruption introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990 to 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, following the FFM graphical technique, and fitted by simple linear regression. To improve the precision of the predicted time, data quality was controlled using the correlation coefficient of the inverse seismic energy rate versus time. From the graph analysis, the predicted times deviate from the actual eruption times by between −2.86 and 5.49 days.
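
The FFM graphical technique described above can be sketched in a few lines: fit a straight line to the inverse seismic energy rate and take its zero crossing as the predicted eruption time. This is a hedged illustration with invented data, not the study's actual Merapi series.

```python
# Hedged sketch of the FFM graphical technique: fit a straight line to the
# inverse seismic energy rate and take its zero crossing as the predicted
# eruption time. All data values are invented, not the Merapi series.
def ffm_predict(times, rates):
    """Least-squares line through (t, 1/rate); returns t where the line reaches zero."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    mt, mi = sum(times) / n, sum(inv) / n
    slope = sum((t - mt) * (y - mi) for t, y in zip(times, inv)) / \
            sum((t - mt) ** 2 for t in times)
    intercept = mi - slope * mt
    return -intercept / slope  # the inverse rate extrapolates to zero here

# synthetic accelerating sequence: rising energy rate, linearly decaying inverse rate
times = [0, 1, 2, 3, 4]  # days
rates = [1 / y for y in (0.10, 0.08, 0.06, 0.04, 0.02)]
print(round(ffm_predict(times, rates), 3))  # ≈ 5.0 (predicted eruption day)
```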

  4. Improvement of gas entrainment prediction method. Introduction of surface tension effect

    International Nuclear Information System (INIS)

    Ito, Kei; Sakai, Takaaki; Ohshima, Hiroyuki; Uchibori, Akihiro; Eguchi, Yuzuru; Monji, Hideaki; Xu, Yongze

    2010-01-01

    A gas entrainment (GE) prediction method has been developed to establish design criteria for the large-scale sodium-cooled fast reactor (JSFR) systems. The prototype of the GE prediction method was already confirmed to give reasonable gas core lengths by simple calculation procedures. However, for simplification, the surface tension effects were neglected. In this paper, the evaluation accuracy of gas core lengths is improved by introducing the surface tension effects into the prototype GE prediction method. First, the mechanical balance between gravitational, centrifugal, and surface tension forces is considered. Then, the shape of a gas core tip is approximated by a quadratic function. Finally, using the approximated gas core shape, the authors determine the gas core length satisfying the mechanical balance. This improved GE prediction method is validated by analyzing the gas core lengths observed in simple experiments. Results show that the analytical gas core lengths calculated by the improved GE prediction method become shorter in comparison to the prototype GE prediction method, and are in good agreement with the experimental data. In addition, the experimental data under different temperature and surfactant concentration conditions are reproduced by the improved GE prediction method. (author)
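
The balance step described above can be illustrated generically: treat the mechanical balance at the gas core tip as a scalar residual in the core length and find the root by bisection. The residual below is a hypothetical stand-in, not the paper's actual gravity/centrifugal/surface-tension balance.

```python
# Generic sketch of the balance step: treat the mechanical balance at the gas
# core tip as a scalar residual in the core length L and solve residual(L) = 0
# by bisection. The residual below is a hypothetical stand-in, not the paper's
# actual force balance.
def solve_length(residual, lo, hi, tol=1e-10):
    """Bisection root finder; assumes residual changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# hypothetical residual: a driving (vortex) term minus a restoring term
res = lambda L: 2.0 * L - 1.0 - 0.5 * L * L
print(round(solve_length(res, 0.0, 1.0), 6))  # ≈ 0.585786
```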

  5. Allometric relationships predicting foliar biomass and leaf area:sapwood area ratio from tree height in five Costa Rican rain forest species.

    Science.gov (United States)

    Calvo-Alvarado, J C; McDowell, N G; Waring, R H

    2008-11-01

    We developed allometric equations to predict whole-tree leaf area (A(l)), leaf biomass (M(l)) and leaf area to sapwood area ratio (A(l):A(s)) in five rain forest tree species of Costa Rica: Pentaclethra macroloba (Willd.) Kuntze (Fabaceae/Mim), Carapa guianensis Aubl. (Meliaceae), Vochysia ferruginea Mart. (Vochysiaceae), Virola koshnii Warb. (Myristicaceae) and Tetragastris panamensis (Engl.) Kuntze (Burseraceae). By destructive analyses (n = 11-14 trees per species), we observed strong nonlinear allometric relationships (r(2) ≥ 0.9) for predicting A(l) or M(l) from stem diameters or A(s) measured at breast height. Linear relationships were less accurate. In general, A(l):A(s) at breast height increased linearly with tree height except for Pentaclethra, which showed a negative trend. All species, however, showed increased total A(l) with height. The observation that four of the five species increased in A(l):A(s) with height is consistent with hypotheses about trade-offs between morphological and anatomical adaptations that favor efficient water flow through variation in the amount of leaf area supported by sapwood and those imposed by the need to respond quickly to light gaps in the canopy.
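
The kind of nonlinear allometric fit reported above is commonly estimated as a power law in log-log space; the sketch below shows that step with invented diameter/leaf-area pairs (the paper's species-specific coefficients are not reproduced).

```python
import math

# Hedged sketch of the allometric fitting step: a power law A_l = a * D^b
# estimated by ordinary least squares in log-log space. The diameter and
# leaf-area pairs are invented, not the paper's Costa Rican data.
def fit_power_law(d, a_l):
    x = [math.log(v) for v in d]
    y = [math.log(v) for v in a_l]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = math.exp(my - b * mx)
    return a, b  # A_l ≈ a * D**b

diam = [5.0, 10.0, 20.0, 40.0]  # stem diameter at breast height (cm)
leaf = [2.0, 8.0, 32.0, 128.0]  # whole-tree leaf area (m^2), exact D^2 scaling
a, b = fit_power_law(diam, leaf)
print(round(a, 3), round(b, 3))  # ≈ 0.08 2.0
```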

  6. Machine learning methods to predict child posttraumatic stress: a proof of concept study.

    Science.gov (United States)

    Saxe, Glenn N; Ma, Sisi; Ren, Jiwen; Aliferis, Constantin

    2017-07-10

    The care of traumatized children would benefit significantly from accurate predictive models for Posttraumatic Stress Disorder (PTSD), using information available around the time of trauma. Machine Learning (ML) computational methods have yielded strong results in recent applications across many diseases and data types, yet they have not been previously applied to childhood PTSD. Since these methods have not been applied to this complex and debilitating disorder, there is a great deal that remains to be learned about their application. The first step is to prove the concept: Can ML methods - as applied in other fields - produce predictive classification models for childhood PTSD? Additionally, we seek to determine if specific variables can be identified - from the aforementioned predictive classification models - with putative causal relations to PTSD. ML predictive classification methods - with causal discovery feature selection - were applied to a data set of 163 children hospitalized with an injury and PTSD was determined three months after hospital discharge. At the time of hospitalization, 105 risk factor variables were collected spanning a range of biopsychosocial domains. Seven percent of subjects had a high level of PTSD symptoms. A predictive classification model was discovered with significant predictive accuracy. A predictive model constructed based on subsets of potentially causally relevant features achieves similar predictivity compared to the best predictive model constructed with all variables. Causal Discovery feature selection methods identified 58 variables of which 10 were identified as most stable. In this first proof-of-concept application of ML methods to predict childhood Posttraumatic Stress we were able to determine both predictive classification models for childhood PTSD and identify several causal variables. 
This set of techniques has great potential for enhancing the methodological toolkit in the field and future studies should seek to

  7. Validity of a manual soft tissue profile prediction method following mandibular setback osteotomy.

    Science.gov (United States)

    Kolokitha, Olga-Elpis

    2007-10-01

    The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery, and to compare it with a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism taken at the end of pre-surgical orthodontics and approximately one year after surgery were used. To test the validity of the manual method the prediction tracings were compared to the actual post-operative tracings. The Dentofacial Planner software was used to develop the computerized post-surgical prediction tracings. Both manual and computerized prediction printouts were analyzed by using the cephalometric system PORDIOS. Statistical analysis was performed by means of t-tests. Comparison between manual prediction tracings and the actual post-operative profile showed that the manual method results in more convex soft tissue profiles; the upper lip was found in a more prominent position, upper lip thickness was increased and the mandible and lower lip were found in a less posterior position than in the actual profiles. Comparison between computerized and manual prediction methods showed that in the manual method upper lip thickness was increased, the upper lip was found in a more anterior position and the lower anterior facial height was increased as compared to the computerized prediction method. Cephalometric simulation of post-operative soft tissue profile following orthodontic-surgical management of mandibular prognathism imposes certain limitations related to the methods employed. However, both manual and computerized prediction methods remain a useful tool for patient communication.

  8. Using small area estimation and Lidar-derived variables for multivariate prediction of forest attributes

    Science.gov (United States)

    F. Mauro; Vicente Monleon; H. Temesgen

    2015-01-01

    Small area estimation (SAE) techniques have been successfully applied in forest inventories to provide reliable estimates for domains where the sample size is small (i.e. small areas). Previous studies have explored the use of either Area Level or Unit Level Empirical Best Linear Unbiased Predictors (EBLUPs) in a univariate framework, modeling each variable of interest...

  9. Bayesian Methods for Predicting the Shape of Chinese Yam in Terms of Key Diameters

    Directory of Open Access Journals (Sweden)

    Mitsunori Kayano

    2017-01-01

    Full Text Available This paper proposes Bayesian methods for the shape estimation of Chinese yam (Dioscorea opposita) using a few key diameters of yam. Shape prediction of yam is applicable to determining optimal cutoff positions of a yam for producing seed yams. Our Bayesian method, which is a combination of Bayesian estimation model and predictive model, enables automatic, rapid, and low-cost processing of yam. After the construction of the proposed models using a sample data set in Japan, the models provide whole shape prediction of yam based on only a few key diameters. The Bayesian method performed well on the shape prediction in terms of minimizing the mean squared error between measured shape and the prediction. In particular, a multiple regression method with key diameters at two fixed positions attained the highest performance for shape prediction. We have developed automatic, rapid, and low-cost yam-processing machines based on the Bayesian estimation model and predictive model. Development of such shape prediction approaches, including our Bayesian method, can be a valuable aid in reducing the cost and time in food processing.
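
The two-diameter multiple regression that the abstract reports as the best shape predictor can be sketched as ordinary least squares with diameters at two fixed positions as predictors; all measurements below are invented, and the helper name `ols2` is hypothetical.

```python
# Hedged sketch of a two-predictor multiple regression of the kind the
# abstract reports: diameters at two fixed positions predict the diameter at
# another position. All measurements are invented; `ols2` is a made-up helper.
def ols2(x1, x2, y):
    """Ordinary least squares with two predictors, via the 2x2 normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    a1 = [v - m1 for v in x1]
    a2 = [v - m2 for v in x2]
    ay = [v - my for v in y]
    s11 = sum(v * v for v in a1)
    s22 = sum(v * v for v in a2)
    s12 = sum(u * v for u, v in zip(a1, a2))
    s1y = sum(u * v for u, v in zip(a1, ay))
    s2y = sum(u * v for u, v in zip(a2, ay))
    det = s11 * s22 - s12 * s12
    b1 = (s1y * s22 - s2y * s12) / det
    b2 = (s2y * s11 - s1y * s12) / det
    b0 = my - b1 * m1 - b2 * m2
    return b0, b1, b2

# diameters (cm) at two fixed positions, and a third diameter to predict
d1 = [3.0, 4.0, 5.0, 6.0, 4.5]
d2 = [2.0, 2.5, 3.5, 4.0, 3.0]
dm = [0.5 * u + 0.8 * v + 1.0 for u, v in zip(d1, d2)]  # exact linear relation
print(ols2(d1, d2, dm))  # recovers ≈ (1.0, 0.5, 0.8)
```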

  10. A Novel Grey Wave Method for Predicting Total Chinese Trade Volume

    Directory of Open Access Journals (Sweden)

    Kedong Yin

    2017-12-01

    Full Text Available The total trade volume of a country is an important way of appraising its international trade situation. A prediction based on trade volume will help enterprises arrange production efficiently and promote the sustainability of international trade. Because the total Chinese trade volume fluctuates over time, this paper proposes a Grey wave forecasting model with a Hodrick–Prescott filter (HP filter) to forecast it. This novel model first parses the time series into a long-term trend and a short-term cycle. Second, the model uses a general GM(1,1) to predict the trend term and the Grey wave forecasting model to predict the cycle term. Empirical analysis shows that the improved Grey wave prediction method provides a much more accurate forecast than the basic Grey wave prediction method, achieving better prediction results than the autoregressive moving average (ARMA) model.
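
The trend-term step described above can be sketched with a minimal GM(1,1) implementation; the HP-filter decomposition and the Grey wave cycle model are omitted, and the input series is invented.

```python
import math

# Hedged sketch of the trend-term step: a minimal GM(1,1) fit and one-step-
# ahead prediction. The HP-filter decomposition and the Grey wave cycle model
# are omitted, and the input series is invented.
def gm11_next(x0):
    """Fit GM(1,1) to series x0 and return the one-step-ahead prediction."""
    x1, s = [], 0.0
    for v in x0:                # accumulated generating operation (cumsum)
        s += v
        x1.append(s)
    z = [(x1[k] + x1[k - 1]) / 2 for k in range(1, len(x1))]  # background values
    y = x0[1:]                  # grey model: x0(k) + a*z1(k) = b
    n = len(z)
    mz, my = sum(z) / n, sum(y) / n
    u = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) / \
        sum((zi - mz) ** 2 for zi in z)
    a, b = -u, my - u * mz
    k = len(x0)
    return (x0[0] - b / a) * (1 - math.exp(a)) * math.exp(-a * k)

series = [1.0, 1.5, 2.25, 3.375]    # geometric growth, ratio 1.5
print(round(gm11_next(series), 3))  # ≈ 4.9; the true next value is 5.0625
```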

  11. An influence function method based subsidence prediction program for longwall mining operations in inclined coal seams

    Energy Technology Data Exchange (ETDEWEB)

    Yi Luo; Jian-wei Cheng [West Virginia University, Morgantown, WV (United States). Department of Mining Engineering

    2009-09-15

    The distribution of the final surface subsidence basin induced by longwall operations in inclined coal seams can differ significantly from that in flat seams and demands special prediction methods. Though many empirical prediction methods have been developed, these methods are inflexible for varying geological and mining conditions. An influence function method has been developed to take advantage of its fundamentally sound nature and flexibility. In developing this method, significant modifications have been made to the original Knothe function to produce an asymmetrical influence function. The empirical equations for final subsidence parameters derived from US subsidence data and Chinese empirical values have been incorporated into the mathematical models to improve the prediction accuracy. A corresponding computer program has been developed. A number of subsidence cases for longwall mining operations in coal seams with varying inclination angles have been used to demonstrate the applicability of the developed subsidence prediction model. 9 refs., 8 figs.
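
The influence function idea described above can be sketched with the classical symmetric Knothe function; the paper's asymmetrical modification for inclined seams is not reproduced, and the panel geometry and parameters below are invented.

```python
import math

# Hedged sketch of the influence function idea, using the classical symmetric
# Knothe function. The paper's asymmetrical modification for inclined seams is
# not reproduced; panel geometry and parameters are invented.
def subsidence(x, panel_left, panel_right, s_max, r, n=2000):
    """Final subsidence at surface point x from a fully extracted 2-D panel."""
    dx = (panel_right - panel_left) / n
    total = 0.0
    for i in range(n):
        xi = panel_left + (i + 0.5) * dx   # center of an extraction element
        # Knothe influence of the element at xi on the surface point x
        total += (s_max / r) * math.exp(-math.pi * (x - xi) ** 2 / r ** 2) * dx
    return total

# above the center of a wide panel the full s_max develops; far outside it vanishes
print(round(subsidence(0.0, -500, 500, 2.0, 100), 3))    # ≈ 2.0
print(round(subsidence(900.0, -500, 500, 2.0, 100), 3))  # ≈ 0.0
```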

  12. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. An Influence Function Method for Predicting Store Aerodynamic Characteristics during Weapon Separation,

    Science.gov (United States)

    1981-05-14

    Grumman Aerospace Corp., Bethpage, NY. R. Meyer, A. Cenko, S. Yards. "An Influence Function Method for Predicting Store Aerodynamic Characteristics during Weapon Separation." ...extended to their logical conclusion, one is led quite naturally to consideration of an "Influence Function Method" for predicting store aerodynamic characteristics.

  14. Studies of the Raman Spectra of Cyclic and Acyclic Molecules: Combination and Prediction Spectrum Methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taijin; Assary, Rajeev S.; Marshall, Christopher L.; Gosztola, David J.; Curtiss, Larry A.; Stair, Peter C.

    2012-04-02

    A combination of Raman spectroscopy and density functional methods was employed to investigate the spectral features of selected molecules: furfural, 5-hydroxymethyl furfural (HMF), methanol, acetone, acetic acid, and levulinic acid. The computed spectra and measured spectra are in excellent agreement, consistent with previous studies. Using the combination and prediction spectrum method (CPSM), we were able to predict the important spectral features of two platform chemicals, HMF and levulinic acid. The results have shown that CPSM is a useful alternative method for predicting vibrational spectra of complex molecules in the biomass transformation process.

  15. Development of laboratory acceleration test method for service life prediction of concrete structures

    International Nuclear Information System (INIS)

    Cho, M. S.; Song, Y. C.; Bang, K. S.; Lee, J. S.; Kim, D. K.

    1999-01-01

    Service life prediction of nuclear power plants depends on the operating history of structures, field inspections and tests, the development of laboratory acceleration tests, their analysis methods and predictive models. In this study, a laboratory acceleration test method for service life prediction of concrete structures and the application of experimental test results are introduced. This study is concerned with the environmental conditions of concrete structures and aims to develop acceleration test methods for the durability factors of concrete structures, e.g. carbonation, sulfate attack, freeze-thaw cycles and shrinkage-expansion.

  16. Transient stability enhancement of modern power grid using predictive Wide-Area Monitoring and Control

    Science.gov (United States)

    Yousefian, Reza

    This dissertation presents a real-time Wide-Area Control (WAC) designed based on artificial intelligence for transient stability enhancement of large-scale modern power systems. Using the measurements available from Phasor Measurement Units (PMUs) at generator buses, the WAC monitors the global oscillations in the system and optimally augments the local excitation systems of the synchronous generators. The complexity of the power system stability problem, along with uncertainties and nonlinearities, makes conventional modeling impractical or inaccurate. In this work a Reinforcement Learning (RL) algorithm built on Neural Networks (NNs) is used to map the nonlinearities of the system in real time. This method, different from both centralized and decentralized control schemes, employs a number of semi-autonomous agents that collaborate with each other to perform optimal control in a manner well-suited to WAC applications. Also, to handle the delays in Wide-Area Monitoring (WAM) and adapt the RL toward a robust control design, Temporal Difference (TD) is proposed as the solver for the RL problem, i.e. the optimal cost function. However, the main drawback of such a WAC design is that it is challenging to determine whether an offline-trained network remains valid for assessing the stability of the power system once the system has evolved to a different operating state or network topology. In order to address the generality issue of NNs, a value priority scheme is proposed in this work to design hybrid linear and nonlinear controllers. The algorithm, so-called supervised RL, is based on a mixture of experts: it is initialized with the linear controller and switches to the other controller as the performance and identification of the RL controller improve in real time. This work also focuses on transient stability and develops Lyapunov energy functions for synchronous generators to monitor the stability stress of the system.
Using such energies as a cost function guarantees the convergence
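
The TD learning mentioned above can be illustrated in its simplest tabular form, TD(0) value estimation on a toy two-state chain; the dissertation's NN-based supervised-RL controller is far richer, and everything below is an invented minimal example.

```python
# Hedged sketch of TD learning, reduced to tabular TD(0) value estimation on
# a toy two-state chain (s0 -> s1 -> end, reward 1 at the end). The
# dissertation's NN-based supervised-RL controller is far richer than this.
def td0(episodes, alpha=0.1, gamma=0.9):
    v = {"s0": 0.0, "s1": 0.0}
    for _ in range(episodes):
        s = "s0"
        while s != "end":
            s2, r = ("s1", 0.0) if s == "s0" else ("end", 1.0)
            v[s] += alpha * (r + gamma * v.get(s2, 0.0) - v[s])  # TD(0) update
            s = s2
    return v

vals = td0(2000)
print(round(vals["s1"], 2), round(vals["s0"], 2))  # ≈ 1.0 and 0.9
```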

  17. Groundwater vulnerability assessment: from overlay methods to statistical methods in the Lombardy Plain area

    Directory of Open Access Journals (Sweden)

    Stefania Stevenazzi

    2017-06-01

    Full Text Available Groundwater is among the most important freshwater resources. Worldwide, aquifers are experiencing an increasing threat of pollution from urbanization, industrial development, agricultural activities and mining enterprises. Thus, practical actions, strategies and solutions to protect groundwater from these anthropogenic sources are widely required. The most efficient tool, which helps support land use planning while protecting groundwater from contamination, is groundwater vulnerability assessment. Over the years, several methods for assessing groundwater vulnerability have been developed: overlay and index methods, statistical methods and process-based methods. All methods are means to synthesize complex hydrogeological information into a unique document, a groundwater vulnerability map, usable by planners, decision and policy makers, geoscientists and the public. Although it is not possible to identify an approach that is best for all situations, the final product should always be scientifically defensible, meaningful and reliable. Nevertheless, various methods may produce very different results at any given site. Thus, reasons for similarities and differences need to be deeply investigated. This study demonstrates the reliability and flexibility of a spatial statistical method to assess groundwater vulnerability to contamination at a regional scale. The Lombardy Plain case study is particularly interesting for its long history of groundwater monitoring (quality and quantity), availability of hydrogeological data, and combined presence of various anthropogenic sources of contamination. Recent updates of the regional water protection plan have raised the necessity of realizing more flexible, reliable and accurate groundwater vulnerability maps. A comparison of groundwater vulnerability maps obtained through different approaches and developed in a time span of several years has demonstrated the relevance of the

  18. WHAT ARE AGRICULTURAL ECONOMICS PH.D. STUDENTS LEARNING ABOUT AGRIBUSINESS RESEARCH METHODS AND SUBJECT AREAS?

    OpenAIRE

    House, Lisa; Sterns, James A.

    2002-01-01

    This document contains the PowerPoint presentation given by the authors at the 2002 WCC-72 meetings, regarding what agricultural economics Ph.D students are learning about agribusiness research methods and subject areas.

  19. Application of stereological methods to estimate post-mortem brain surface area using 3T MRI

    DEFF Research Database (Denmark)

    Furlong, Carolyn; García-Fiñana, Marta; Puddephat, Michael

    2013-01-01

    The Cavalieri and Vertical Sections methods of design based stereology were applied in combination with 3 tesla (i.e. 3T) Magnetic Resonance Imaging (MRI) to estimate cortical and subcortical volume, area of the pial surface, area of the grey-white matter boundary, and thickness of the cerebral...

  20. Development of method for evaluating estimated inundation area by using river flood analysis based on multiple flood scenarios

    Science.gov (United States)

    Ono, T.; Takahashi, T.

    2017-12-01

    Non-structural mitigation measures such as flood hazard maps based on estimated inundation areas have become more important because heavy rains exceeding the design rainfall have occurred frequently in recent years. However, the conventional method may underestimate the area because the assumed locations of dike breach in river flood analysis are limited to cases exceeding the high-water level. The objective of this study is to consider the uncertainty in the estimated inundation area arising from differences in the location of dike breach in river flood analysis. This study proposed a multiple-flood-scenario approach that automatically sets multiple locations of dike breach in river flood analysis. The major premise of this method is that the location of a dike breach cannot be predicted in advance. The proposed method uses an interval of dike breach, i.e. the distance between dike breaches placed next to each other; multiple breach locations are set at every such interval. The 2D shallow water equations were adopted as the governing equations of the river flood analysis, solved with the leap-frog scheme on a staggered grid. The river flood analysis was verified against the 2015 Kinugawa river flooding, and the proposed multiple flood scenarios were applied to the Akutagawa river in Takatsuki city. The computations for the Akutagawa river showed, by comparing the maximum inundation depths of adjacent dike breaches, that the proposed method prevents underestimation of the estimated inundation area. Further, analyses of the spatial distribution of inundation class and maximum inundation depth at each measurement point identified the optimum interval of dike breach, which can evaluate the maximum inundation area using the minimum number of assumed breach locations.
In brief, this study found the optimum interval of dike breach in the Akutagawa river, which enabled estimated maximum inundation area
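
The scenario bookkeeping described above can be sketched as: place a breach every fixed interval along the dike, run one simulation per breach location, and keep the cell-wise maximum depth. The `toy_simulation` below is a hypothetical stand-in for the 2D shallow water solver.

```python
# Hedged sketch of the multiple-flood-scenario bookkeeping: place a dike
# breach every fixed interval, run one simulation per breach location, and
# keep the cell-wise maximum inundation depth. toy_simulation is a
# hypothetical stand-in for the 2D shallow water solver.
def breach_locations(dike_length, interval):
    return list(range(0, dike_length + 1, interval))

def envelope(depth_maps):
    """Cell-wise maximum over all scenarios (the estimated inundation extent)."""
    return [max(col) for col in zip(*depth_maps)]

def toy_simulation(breach_x, cells=11, spacing=100):
    # invented depth field: deeper near the breach, zero far away
    return [max(0.0, 3.0 - abs(breach_x - i * spacing) / 200.0) for i in range(cells)]

scenarios = [toy_simulation(x) for x in breach_locations(1000, 250)]
print(envelope(scenarios))  # the combined (maximum) inundation envelope
```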

  1. Ensemble approach combining multiple methods improves human transcription start site prediction

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-11-30

    Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques, and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.

  2. Prediction of Human Phenotype Ontology terms by means of hierarchical ensemble methods.

    Science.gov (United States)

    Notaro, Marco; Schubach, Max; Robinson, Peter N; Valentini, Giorgio

    2017-10-12

    The prediction of human gene-abnormal phenotype associations is a fundamental step toward the discovery of novel genes associated with human disorders, especially when no genes are known to be associated with a specific disease. In this context the Human Phenotype Ontology (HPO) provides a standard categorization of the abnormalities associated with human diseases. While the problem of the prediction of gene-disease associations has been widely investigated, the related problem of gene-phenotypic feature (i.e., HPO term) associations has been largely overlooked, even if for most human genes no HPO term associations are known and despite the increasing application of the HPO to relevant medical problems. Moreover most of the methods proposed in literature are not able to capture the hierarchical relationships between HPO terms, thus resulting in inconsistent and relatively inaccurate predictions. We present two hierarchical ensemble methods that we formally prove to provide biologically consistent predictions according to the hierarchical structure of the HPO. The modular structure of the proposed methods, that consists in a "flat" learning first step and a hierarchical combination of the predictions in the second step, allows the predictions of virtually any flat learning method to be enhanced. The experimental results show that hierarchical ensemble methods are able to predict novel associations between genes and abnormal phenotypes with results that are competitive with state-of-the-art algorithms and with a significant reduction of the computational complexity. Hierarchical ensembles are efficient computational methods that guarantee biologically meaningful predictions that obey the true path rule, and can be used as a tool to improve and make consistent the HPO terms predictions starting from virtually any flat learning method. The implementation of the proposed methods is available as an R package from the CRAN repository.
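
The hierarchical-consistency idea can be sketched with a simple top-down rule: cap each term's score at its parents' scores so the true path rule holds (the paper's ensemble rules are more elaborate). The toy ontology and scores are invented, and the code assumes terms are iterated parents-first.

```python
# Hedged sketch of the hierarchical-consistency idea: after a flat learner
# scores each HPO term, enforce the true path rule by capping a child's score
# at its parents' scores (a simple top-down variant; the paper's ensemble
# rules are more elaborate). Toy ontology and scores are invented; the code
# assumes the score dict iterates terms parents-first (topological order).
def make_consistent(parents, scores):
    fixed = {}
    for term in scores:
        cap = min((fixed[p] for p in parents.get(term, [])), default=1.0)
        fixed[term] = min(scores[term], cap)
    return fixed

parents = {"root": [], "A": ["root"], "B": ["A"]}
flat = {"root": 0.9, "A": 0.4, "B": 0.7}  # B > A violates the true path rule
print(make_consistent(parents, flat))      # B is capped at 0.4
```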

  3. Hybrid ATDL-gamma distribution model for predicting area source acid gas concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Jakeman, A J; Taylor, J A

    1985-01-01

    An air quality model is developed to predict the distribution of concentrations of acid gas in an urban airshed. The model is hybrid in character, combining reliable features of a deterministic ATDL-based model with statistical distributional approaches. The gamma distribution was identified from a range of distributional models as the best model. The paper shows that the assumptions of a previous hybrid model may be relaxed and presents a methodology for characterizing the uncertainty associated with model predictions. Results are demonstrated for the 98th-percentile predictions of 24-h average data over annual periods at six monitoring sites. This percentile relates to the World Health Organization goal for acid gas concentrations.
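
The distributional step described above can be sketched as a method-of-moments gamma fit (shape k = mean^2/variance, scale theta = variance/mean); the 98th percentile would then come from the gamma inverse CDF, e.g. `scipy.stats.gamma.ppf`. The concentration values below are invented.

```python
# Hedged sketch of the distributional step: fit a gamma distribution to 24-h
# average concentrations by the method of moments (shape k = mean^2/variance,
# scale theta = variance/mean). The percentile itself would come from the
# gamma inverse CDF (e.g. scipy.stats.gamma.ppf); the data are invented.
def fit_gamma_moments(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean * mean / var, var / mean  # shape k, scale theta

conc = [12.0, 8.0, 15.0, 10.0, 20.0, 9.0, 14.0, 12.0]
k, theta = fit_gamma_moments(conc)
print(round(k, 2), round(theta, 2))  # ≈ 12.02 1.04
```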

  4. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    International Nuclear Information System (INIS)

    Ernst, Floris; Schweikard, Achim

    2008-01-01

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)
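
The MULIN algorithms themselves are defined in the paper; as a hedged stand-in, the sketch below implements the standard normalized LMS one-step-ahead predictor that the abstract uses as a comparison baseline, on a synthetic breathing-like signal.

```python
import math

# The MULIN algorithms are defined in the paper itself; as a hedged stand-in,
# this sketch implements the standard normalized LMS (nLMS) one-step-ahead
# predictor used as a comparison baseline, on a synthetic breathing-like tone.
def nlms_predict(signal, order=4, mu=0.5, eps=1e-8):
    """Return one-step-ahead predictions for signal using normalized LMS."""
    w = [0.0] * order
    preds = []
    for t in range(order, len(signal)):
        x = signal[t - order:t]                      # most recent samples
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        preds.append(y_hat)
        e = signal[t] - y_hat                        # prediction error
        norm = sum(xi * xi for xi in x) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
    return preds

sig = [math.sin(0.2 * math.pi * t) for t in range(300)]  # period-10 tone
preds = nlms_predict(sig)
err = [abs(p - s) for p, s in zip(preds, sig[4:])]
print(sum(err[-50:]) / 50)  # small once the filter has adapted
```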

  5. Predictive probability methods for interim monitoring in clinical trials with longitudinal outcomes.

    Science.gov (United States)

    Zhou, Ming; Tang, Qi; Lang, Lixin; Xing, Jun; Tatsuoka, Kay

    2018-04-17

    In clinical research and development, interim monitoring is critical for better decision-making and minimizing the risk of exposing patients to possible ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from only completers may not be most efficient, and data from on-going subjects can be utilized to improve efficiency. On the other hand, leveraging information from on-going subjects could allow an interim analysis to be potentially conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed-form formulas for predictive probabilities, including Bayesian predictive probability, predictive power, and conditional power and also give closed-form solutions for predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss their analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than that using information from completers only. To illustrate their practical application for longitudinal data, we analyze 2 real data examples from clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
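
A minimal instance of the predictive probabilities discussed above, for a single-arm binary endpoint rather than the paper's longitudinal setting: the Bayesian predictive probability of finishing the trial with enough responders, computed from the beta-binomial distribution. Prior, interim data, and success threshold are invented.

```python
import math

# Hedged minimal instance of a Bayesian predictive probability, for a
# single-arm binary endpoint rather than the paper's longitudinal setting:
# the probability of finishing with enough responders, via the beta-binomial
# distribution. Prior, interim data, and success threshold are invented.
def beta_binom_pmf(k, m, a, b):
    """P(k successes in m future patients | Beta(a, b) posterior)."""
    return math.exp(
        math.lgamma(m + 1) - math.lgamma(k + 1) - math.lgamma(m - k + 1)
        + math.lgamma(a + k) + math.lgamma(b + m - k) - math.lgamma(a + b + m)
        + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    )

def predictive_prob(successes, n, m_future, needed_total, a0=1.0, b0=1.0):
    a, b = a0 + successes, b0 + n - successes  # posterior after n patients
    need = needed_total - successes            # responders still required
    return sum(beta_binom_pmf(k, m_future, a, b)
               for k in range(max(0, need), m_future + 1))

# interim look: 12 responders of 20; success means >= 25 responders of 50 total
print(round(predictive_prob(12, 20, 30, 25), 3))
```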

  6. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    Energy Technology Data Exchange (ETDEWEB)

    Ernst, Floris; Schweikard, Achim [University of Luebeck, Institute for Robotics and Cognitive Systems, Luebeck (Germany)

    2008-06-15

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)
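The abstract does not give MULIN's update equations, but the normalized LMS baseline it is benchmarked against is standard and easy to sketch for a 1-D respiratory trace; the tap count and step size below are arbitrary choices:

```python
import numpy as np

def nlms_predict(signal, horizon, taps=8, mu=0.5, eps=1e-6):
    """Normalized LMS forecasting of a 1-D respiratory trace.
    A length-`taps` linear filter maps the most recent samples onto the
    sample `horizon` steps ahead; the filter is updated once the true
    future sample becomes available."""
    w = np.zeros(taps)
    pred = np.zeros_like(signal)
    for t in range(taps, len(signal) - horizon):
        x = signal[t - taps:t][::-1]          # most recent sample first
        pred[t + horizon] = w @ x             # forecast made at time t
        e = signal[t + horizon] - w @ x       # error, known at time t + horizon
        w += mu * e * x / (eps + x @ x)       # normalized gradient step (0 < mu < 2)
    return pred
```

On a noiseless periodic trace the filter converges quickly; a 150 ms horizon corresponds to a few samples at typical tracking rates, though the exact sampling rate of the study's data is not stated in the abstract.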

  7. Assessment of anthropometric parameters including area of the psoas, area of the back muscle, and psoas-vertebra distance as indices for prediction of vertebral fracture

    International Nuclear Information System (INIS)

    Suzuki, Tamotsu; Morita, Masahumi; Mabuchi, Kiyoshi

    2005-01-01

We assessed some anthropometric parameters as indices for the prediction of vertebral compression fracture. We measured the area of the total cross section, area of the back muscle, area of the psoas, area of subcutaneous fat tissue, ratio of the right and left area of the psoas, psoas-vertebra distance, the mediolateral length of the back muscle, anteroposterior length of the back muscle, the mediolateral length of the psoas, and anteroposterior length of the psoas, on computed tomography images. Logistic regression analysis was performed in order to test the correlation between each anthropometric parameter and the incidence of fracture. The odds ratio corresponding to one standard deviation of each parameter was calculated. The ratio of center and anterior vertebral heights and the ratio of center and posterior vertebral heights were measured from the positioning image. The smaller value of these was defined as the vertebral height ratio value. Vertebral height ratio was used as the parameter directly related to vertebral fracture. The subjects for research were 25 women with vertebral compression fracture and 36 women without fracture. Vertebral height ratio had a significant correlation with area of the psoas (correlation coefficient, r=0.609, p<0.001), area of the back muscle (r=0.547, p<0.001), and the psoas-vertebra distance (r=-0.523, p<0.001) among the anthropometric parameters. The odds ratios of the area of the psoas (odds ratio, OR: 0.18; 95% confidence interval, CI: 0.08 to 0.43), area of the back muscle (OR: 0.13; 95% CI: 0.05 to 0.37), and the psoas-vertebra distance (OR: 3.01; 95% CI: 1.46 to 6.22) were high. The odds ratios of the mediolateral length of the psoas (OR: 0.34; 95% CI: 0.18 to 0.67) and the left-to-right area ratio of the psoas (OR: 0.41; 95% CI: 0.22 to 0.76) were rather high. However, the vertebral height ratio had no significant correlation with the left-to-right area ratio of the psoas.
It was considered that area of the psoas, area of the back

  8. Evaluation of Airborne Remote Sensing Techniques for Predicting the Distribution of Energetic Compounds on Impact Areas

    National Research Council Canada - National Science Library

    Graves, Mark R; Dove, Linda P; Jenkins, Thomas F; Bigl, Susan; Walsh, Marianne E; Hewitt, Alan D; Lambert, Dennis; Perron, Nancy; Ramsey, Charles; Gamey, Jeff; Beard, Les; Doll, William E; Magoun, Dale

    2007-01-01

    .... Remote sensing and geographic information system (GIS) technologies were utilized to assist in the development of enhanced sampling strategies to better predict the landscape-scale distribution of energetic compounds...

  9. Analysis of deep learning methods for blind protein contact prediction in CASP12.

    Science.gov (United States)

    Wang, Sheng; Sun, Siqi; Xu, Jinbo

    2018-03-01

    Here we present the results of protein contact prediction achieved in CASP12 by our RaptorX-Contact server, which is an early implementation of our deep learning method for contact prediction. On a set of 38 free-modeling target domains with a median family size of around 58 effective sequences, our server obtained an average top L/5 long- and medium-range contact accuracy of 47% and 44%, respectively (L = length). A complete implementation has an average accuracy of 59% and 57%, respectively. Our deep learning method formulates contact prediction as a pixel-level image labeling problem and simultaneously predicts all residue pairs of a protein using a combination of two deep residual neural networks, taking as input the residue conservation information, predicted secondary structure and solvent accessibility, contact potential, and coevolution information. Our approach differs from existing methods mainly in (1) formulating contact prediction as a pixel-level image labeling problem instead of an image-level classification problem; (2) simultaneously predicting all contacts of an individual protein to make effective use of contact occurrence patterns; and (3) integrating both one-dimensional and two-dimensional deep convolutional neural networks to effectively learn complex sequence-structure relationship including high-order residue correlation. This paper discusses the RaptorX-Contact pipeline, both contact prediction and contact-based folding results, and finally the strength and weakness of our method. © 2017 Wiley Periodicals, Inc.
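The top L/5 figure reported above is a standard CASP metric and can be computed directly from a predicted contact map. The 8 Å contact definition and the sequence-separation threshold of 24 for long-range pairs in this sketch follow common CASP conventions rather than anything stated in the abstract:

```python
import numpy as np

def top_k_contact_accuracy(prob, true_contacts, L, frac=5, min_sep=24):
    """Top-L/frac accuracy for long-range contacts (|i - j| >= min_sep).
    prob: L x L matrix of predicted contact probabilities;
    true_contacts: L x L boolean matrix of native contacts
    (conventionally Cb-Cb distance < 8 A)."""
    k = max(1, L // frac)
    # collect all long-range pairs with their predicted probability
    pairs = [(prob[i, j], i, j) for i in range(L) for j in range(i + min_sep, L)]
    pairs.sort(reverse=True)                      # highest probability first
    top = pairs[:k]
    return sum(true_contacts[i, j] for _, i, j in top) / k
```

Medium-range accuracy would restrict pairs to separations between 12 and 23; that upper bound is omitted here for brevity.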

  10. NetMHCpan, a method for MHC class I binding prediction beyond humans

    DEFF Research Database (Denmark)

    Hoof, Ilka; Peters, B; Sidney, J

    2009-01-01

    molecules. We show that the NetMHCpan-2.0 method can accurately predict binding to uncharacterized HLA molecules, including HLA-C and HLA-G. Moreover, NetMHCpan-2.0 is demonstrated to accurately predict peptide binding to chimpanzee and macaque MHC class I molecules. The power of NetMHCpan-2.0 to guide...

  11. A method of analyzing rectal surface area irradiated and rectal complications in prostate conformal radiotherapy

    International Nuclear Information System (INIS)

    Lu Yong; Song, Paul Y.; Li Shidong; Spelbring, Danny R.; Vijayakumar, Srinivasan; Haraf, Daniel J.; Chen, George T.Y.

    1995-01-01

    Purpose: To develop a method of analyzing rectal surface area irradiated and rectal complications in prostate conformal radiotherapy. Methods and Materials: Dose-surface histograms of the rectum, which state the rectal surface area irradiated to any given dose, were calculated for a group of 27 patients treated with a four-field box technique to a total (tumor minimum) dose ranging from 68 to 70 Gy. Occurrences of rectal toxicities as defined by the Radiation Therapy Oncology Group (RTOG) were recorded and examined in terms of dose and rectal surface area irradiated. For a specified end point of rectal complication, the complication probability was analyzed as a function of dose irradiated to a fixed rectal area, and as a function of area receiving a fixed dose. Lyman's model of normal tissue complication probability (NTCP) was used to fit the data. Results: The observed occurrences of rectal complications appear to depend on the rectal surface area irradiated to a given dose level. The patient distribution of each toxicity grade exhibits a maximum as a function of percentage surface area irradiated, and the maximum moves to higher values of percentage surface area as the toxicity grade increases. The dependence of the NTCP for the specified end point on dose and percentage surface area irradiated was fitted to Lyman's NTCP model with a set of parameters. The curvature of the NTCP as a function of the surface area suggests that the rectum is a parallel structured organ. Conclusions: The described method of analyzing rectal surface area irradiated yields interesting insight into understanding rectal complications in prostate conformal radiotherapy. Application of the method to a larger patient data set has the potential to facilitate the construction of a full dose-surface-complication relationship, which would be most useful in guiding clinical practice
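Lyman's NTCP model referenced above has a standard closed form: a normal CDF in a dose variable scaled by the irradiated volume (here, surface-area) fraction. The parameter values below are illustrative placeholders, not fitted values from this study:

```python
from math import erf, sqrt

def lyman_ntcp(dose, v, td50_1=80.0, m=0.15, n=0.1):
    """Lyman normal tissue complication probability for a uniform dose
    `dose` (Gy) delivered to a fractional volume/area v of the organ.
    td50_1 is the 50%-complication dose for full-organ irradiation,
    m sets the slope, n the volume effect; all three are assumed values."""
    td50_v = td50_1 * v ** (-n)                  # volume-effect scaling
    t = (dose - td50_v) / (m * td50_v)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))      # standard normal CDF
```

A strongly "parallel" organ corresponds to a large n, so that sparing part of the surface sharply lowers the effective dose variable; the paper's observation about the rectum is a statement about the fitted curvature in exactly this sense.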

  12. Vehicle navigation in populated areas using predictive control with environmental uncertainty handling

    Directory of Open Access Journals (Sweden)

    Skrzypczyk Krzysztof

    2017-06-01

This paper addresses the problem of navigating an autonomous vehicle using environmental dynamics prediction. The usefulness of the Game Against Nature formalism adapted to modelling environmental prediction uncertainty is discussed. The possibility of the control law synthesis on the basis of strategies against Nature is presented. The properties and effectiveness of the approach presented are verified by simulations carried out in MATLAB.

  13. A new general dynamic model predicting radionuclide concentrations and fluxes in coastal areas from readily accessible driving variables

    International Nuclear Information System (INIS)

    Haakanson, Lars

    2004-01-01

    This paper presents a general, process-based dynamic model for coastal areas for radionuclides (metals, organics and nutrients) from both single pulse fallout and continuous deposition. The model gives radionuclide concentrations in water (total, dissolved and particulate phases and concentrations in sediments and fish) for entire defined coastal areas. The model gives monthly variations. It accounts for inflow from tributaries, direct fallout to the coastal area, internal fluxes (sedimentation, resuspension, diffusion, burial, mixing and biouptake and retention in fish) and fluxes to and from the sea outside the defined coastal area and/or adjacent coastal areas. The fluxes of water and substances between the sea and the coastal area are differentiated into three categories of coast types: (i) areas where the water exchange is regulated by tidal effects; (ii) open coastal areas where the water exchange is regulated by coastal currents; and (iii) semi-enclosed archipelago coasts. The coastal model gives the fluxes to and from the following four abiotic compartments: surface water, deep water, ET areas (i.e., areas where fine sediment erosion and transport processes dominate the bottom dynamic conditions and resuspension appears) and A-areas (i.e., areas of continuous fine sediment accumulation). Criteria to define the boundaries for the given coastal area towards the sea, and to define whether a coastal area is open or closed are given in operational terms. The model is simple to apply since all driving variables may be readily accessed from maps and standard monitoring programs. The driving variables are: latitude, catchment area, mean annual precipitation, fallout and month of fallout and parameters expressing coastal size and form as determined from, e.g., digitized bathymetric maps using a GIS program. 
Selected results: the predictions of radionuclide concentrations in water and fish largely depend on two factors, the concentration in the sea outside the given
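The mass-balance structure described above (inflow, fallout, sedimentation, resuspension, burial, sea exchange, decay) can be caricatured as a two-compartment Euler step; all rate constants below are invented for illustration and are not the model's calibrated values:

```python
def coastal_step(state, p, dt=1.0 / 12.0):
    """One monthly Euler step of a minimal two-compartment sketch
    (water column, active sediments) in the spirit of the model.
    state: (water activity, sediment activity); p: rate constants
    per year, all illustrative."""
    water, sed = state
    d_water = (p["inflow"] + p["fallout"]                  # external sources
               + p["resuspension"] * sed                   # return from ET areas
               - (p["sedimentation"] + p["sea_exchange"] + p["decay"]) * water)
    d_sed = (p["sedimentation"] * water                    # settling to A-areas
             - (p["resuspension"] + p["burial"] + p["decay"]) * sed)
    return water + dt * d_water, sed + dt * d_sed
```

With sources switched off, each step removes activity through sea exchange, burial and decay while the sedimentation/resuspension terms merely transfer it between compartments.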

  14. Novel computational methods to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-12-01

Computational drug repurposing aims at finding new medical uses for existing drugs. The identification of novel drug-target interactions (DTIs) can be a useful part of such a task. Computational determination of DTIs is a convenient strategy for systematic screening of a large number of drugs in the attempt to identify new DTIs at low cost and with reasonable accuracy. This necessitates development of accurate computational methods that can help focus the follow-up experimental validation on a smaller number of highly likely targets for a drug. Although many methods have been proposed for computational DTI prediction, they suffer from high false-positive prediction rates or they do not predict the effect that drugs exert on targets in DTIs. In this report, first, we present a comprehensive review of the recent progress in the field of DTI prediction from data-centric and algorithm-centric perspectives. The aim is to provide a comprehensive review of computational methods for identifying DTIs, which could help in constructing more reliable methods. Then, we present DDR, an efficient method to predict the existence of DTIs. DDR achieves significantly more accurate results compared to the other state-of-the-art methods. As supported by independent evidence, we verified as correct 22 out of the top 25 DDR DTI predictions. This validation proves the practical utility of DDR, suggesting that DDR can be used as an efficient method to identify correct DTIs. Finally, we present the DDR-FE method, which predicts the effect types of a drug on its target. On different representative datasets, under various test setups, and using different performance measures, we show that DDR-FE achieves extremely good performance. Using blind test data, we verified as correct 2,300 out of 3,076 DTI effects predicted by DDR-FE. This suggests that DDR-FE can be used as an efficient method to identify correct effects of a drug on its target.

  15. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    Science.gov (United States)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis, and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques. We can note that clustering information plays an important role in solving the link prediction problem. In the previous literature, the node clustering coefficient appears frequently in many link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish the different clustering abilities of a node with respect to different node pairs. In this paper, we shift our focus from nodes to links, and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, especially achieving remarkable improvements on food web, hamster friendship and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
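One node-clustering-based score of the kind the paper improves on sums the clustering coefficients of the common neighbors of a candidate pair. The CCLP-style formulation below is an illustrative baseline, not the authors' ALC method:

```python
from itertools import combinations

def clustering_coefficient(adj, w):
    """Fraction of pairs of w's neighbors that are themselves linked.
    adj: dict mapping each node to its set of neighbors."""
    nbrs = adj[w]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

def cclp_scores(adj):
    """Score each unlinked pair by the summed clustering coefficients of
    its common neighbors; higher score = more likely future link."""
    nodes = sorted(adj)
    cc = {w: clustering_coefficient(adj, w) for w in nodes}
    scores = {}
    for u, v in combinations(nodes, 2):
        if v in adj[u]:
            continue                      # already linked, nothing to predict
        common = adj[u] & adj[v]
        scores[(u, v)] = sum(cc[w] for w in common)
    return scores
```

Top-L prediction then simply takes the L highest-scoring pairs; the ALC idea replaces the per-node coefficient `cc[w]` with a quantity specific to the link (w, u) or (w, v).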

  16. Evaluation of Airborne Remote Sensing Techniques for Predicting the Distribution of Energetic Compounds on Impact Areas

    National Research Council Canada - National Science Library

    Graves, Mark R; Dove, Linda P; Jenkins, Thomas F; Bigl, Susan; Walsh, Marianne E; Hewitt, Alan D; Lambert, Dennis; Perron, Nancy; Ramsey, Charles; Gamey, Jeff; Beard, Les; Doll, William E; Magoun, Dale

    2007-01-01

    .... These sampling approaches do not accurately account for the distribution of such contaminants over the landscape due to the distributed nature of explosive compound sources throughout impact areas...

  17. A method for uncertainty quantification in the life prediction of gas turbine components

    Energy Technology Data Exchange (ETDEWEB)

    Lodeby, K.; Isaksson, O.; Jaervstraat, N. [Volvo Aero Corporation, Trolhaettan (Sweden)

    1998-12-31

A failure in an aircraft jet engine can have severe consequences that cannot be accepted, and high requirements are therefore placed on engine reliability. Consequently, assessment of the reliability of life predictions used in design and maintenance is important. To assess the validity of the predicted life, a method was developed to quantify the contribution of different uncertainty sources to the total uncertainty in the life prediction. The method is a structured approach for uncertainty quantification that uses a generic description of the life prediction process. It is based on an approximate error propagation theory combined with a unified treatment of random and systematic errors. The result is an approximate statistical distribution for the predicted life. The method was applied to life predictions for three different jet engine components. The total uncertainty was of a reasonable order of magnitude, and a good qualitative picture of the distribution of the uncertainty contributions from the different sources was obtained. The relative importance of the uncertainty sources differs between the three components. It is also highly dependent on the methods and assumptions used in the life prediction. Advantages and disadvantages of this method are discussed. (orig.) 11 refs.
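Approximate error propagation of the kind the abstract describes can be sketched as a first-order sum of squared sensitivities over independent error sources; the life law and step size here are placeholders, not the paper's model:

```python
from math import sqrt

def propagate_uncertainty(f, params, sigmas, rel_step=1e-6):
    """First-order propagation of independent parameter uncertainties
    through a scalar life model f(params). Returns (nominal prediction,
    standard deviation of the prediction)."""
    nominal = f(params)
    var = 0.0
    for i, (p, s) in enumerate(zip(params, sigmas)):
        h = rel_step * (abs(p) if p else 1.0)
        bumped = list(params)
        bumped[i] = p + h
        dfdp = (f(bumped) - nominal) / h     # numerical sensitivity df/dp_i
        var += (dfdp * s) ** 2               # independent errors add in quadrature
    return nominal, sqrt(var)

# Hypothetical Basquin-style life law: cycles to failure from stress amplitude.
life = lambda p: p[0] * p[1] ** (-p[2])      # p = [C, stress, m], all illustrative
```

For example, `propagate_uncertainty(life, [1e12, 500.0, 3.0], [1e11, 25.0, 0.1])` returns the nominal life together with a standard deviation dominated by whichever parameter has the largest sensitivity-weighted error; a unified treatment of systematic errors, as in the paper, would add bias terms rather than only variances.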

  18. Advanced Materials Test Methods for Improved Life Prediction of Turbine Engine Components

    National Research Council Canada - National Science Library

    Stubbs, Jack

    2000-01-01

Phase I final report developed under SBIR contract for Topic # AF00-149, "Durability of Turbine Engine Materials/Advanced Material Test Methods for Improved Life Prediction of Turbine Engine Components...

  19. Prediction methods and databases within chemoinformatics: emphasis on drugs and drug candidates

    DEFF Research Database (Denmark)

    Jonsdottir, Svava Osk; Jorgensen, FS; Brunak, Søren

    2005-01-01

MOTIVATION: To gather information about available databases and chemoinformatics methods for prediction of properties relevant to the drug discovery and optimization process. RESULTS: We present an overview of the most important databases with 2-dimensional and 3-dimensional structural information about drugs and drug candidates, and of databases with relevant properties. Access to experimental data and numerical methods for selecting and utilizing these data is crucial for developing accurate predictive in silico models. Many interesting predictive methods for classifying the suitability of chemical compounds as potential drugs, as well as for predicting their physico-chemical and ADMET properties, have been proposed in recent years. These methods are discussed, and some possible future directions in this rapidly developing field are described.

  20. Mapping and predicting sinkholes by integration of remote sensing and spectroscopy methods

    Science.gov (United States)

    Goldshleger, N.; Basson, U.; Azaria, I.

    2013-08-01

The Dead Sea coastal area is exposed to the destructive process of sinkhole collapse. The increase in sinkhole activity in the last two decades has been substantial, resulting from the continuous decrease in the Dead Sea's level, with more than 1,000 sinkholes developing as a result of upper layer collapse. Large sinkholes can reach 25 m in diameter. They are concentrated mainly in clusters at several dozen sites with different characteristics. In this research, methods for mapping, monitoring and predicting sinkholes were developed using active and passive remote-sensing methods: a field spectrometer, geophysical ground penetration radar (GPR) and a frequency domain electromagnetic instrument (FDEM). The research was conducted in three stages: 1) literature review and data collection; 2) mapping regions abundant with sinkholes in various stages and regions vulnerable to sinkholes; 3) analyzing the data and translating it into coherent and accessible scientific information. Field spectrometry enabled a comparison between the spectral signatures of soil samples collected near active or progressing sinkholes and those collected in regions with no visual sign of sinkhole occurrence. FDEM and GPR investigations showed that electrical conductivity and soil moisture are higher in regions affected by sinkholes. Measurements taken at different time points over several seasons allowed monitoring the progress of an 'embryonic' sinkhole.

  1. Slope Stability Assessment Using Trigger Parameters and SINMAP Methods on Tamblingan-Buyan Ancient Mountain Area in Buleleng Regency, Bali

    Directory of Open Access Journals (Sweden)

    I Nengah Sinarta

    2017-10-01

The mapping of soil movement was examined by comparing an extension of the deterministic Soil Stability Index Mapping (SINMAP) method with an overlay method based on soil-movement trigger parameters. The SINMAP model used soil parameters in the form of the cohesion value (c), internal friction angle (φ), and hydraulic conductivity (ks) to predict soil movement based on the factor of safety (FS), while the indirect method used a literature review and field observations. The weightings of soil-movement trigger parameters in the assessments were based on natural physical aspects: (1) slope inclination = 30%; (2) rock weathering = 15%; (3) geological structure = 20%; (4) rainfall = 15%; (5) groundwater potential = 7%; (6) seismicity = 3%; and (7) vegetation = 10%. The research area was located in the Buleleng district, in particular in the ancient mountain area of Buyan-Tamblingan, in the Sukasada sub-district. The hazard mapping gave a high and very high hazard scale. The SINMAP model gave a validation accuracy of 14.29%, while the overlay method with seven trigger parameters produced an accuracy of 71.43%. Based on the analysis of the very high and high hazard classes and the validation of the landslide occurrence points, the deterministic method using soil parameters and water absorption gave a much lower accuracy than the overlay method based on soil-movement trigger parameters.
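The stated trigger-parameter weights sum to 100%, so the overlay assessment can be sketched as a weighted score per map cell. The 1-5 class-rating scale assumed below is illustrative; the abstract does not specify how each parameter is rated:

```python
WEIGHTS = {  # trigger-parameter weights as given in the study
    "slope_inclination": 0.30,
    "rock_weathering": 0.15,
    "geological_structure": 0.20,
    "rainfall": 0.15,
    "groundwater_potential": 0.07,
    "seismicity": 0.03,
    "vegetation": 0.10,
}

def hazard_score(ratings):
    """Weighted-overlay landslide hazard score for one map cell.
    `ratings` maps each trigger parameter to a hazard class rating
    (assumed 1 = very low ... 5 = very high)."""
    return sum(WEIGHTS[name] * ratings[name] for name in WEIGHTS)
```

Cells are then binned into hazard classes (e.g. very high for scores near 5), and the class map is validated against observed landslide points, as in the 71.43% figure above.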

  2. Application of the backstepping method to the prediction of increase or decrease of infected population.

    Science.gov (United States)

    Kuniya, Toshikazu; Sano, Hideki

    2016-05-10

In mathematical epidemiology, age-structured epidemic models have usually been formulated as boundary-value problems of partial differential equations. On the other hand, in engineering, the backstepping method has recently been developed and widely studied by many authors. Using the backstepping method, we obtained a boundary feedback control which plays the role of a threshold criterion for the prediction of increase or decrease of the newly infected population. Under the assumption that the period of infectiousness is the same for all infected individuals (that is, the recovery rate is given by the Dirac delta function multiplied by a sufficiently large positive constant), the prediction method simplifies to a comparison of the numbers of reported cases at the current and previous time steps. Our prediction method was applied to the reported cases per sentinel of influenza in Japan from 2006 to 2015 and its accuracy was 0.81 (404 correct predictions out of 500). This was higher than that of ARIMA models with different orders of the autoregressive part, differencing and moving-average process. In addition, a proposed method for the estimation of the number of reported cases, which is consistent with our prediction method, performed better than the best-fitted ARIMA model, ARIMA(1,1,0), in the sense of mean square error. Our prediction method based on the backstepping method can be simplified to the comparison of the numbers of reported cases at the current and previous time steps. In spite of its simplicity, it can provide a good prediction for the spread of influenza in Japan.
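The simplified rule described above — compare the current and previous weekly counts — can be written directly; the accuracy helper mirrors the paper's 404-out-of-500-style evaluation on any case series:

```python
def predict_directions(cases):
    """At each week t, predict that next week's count will increase
    iff this week's count exceeds last week's (the simplified rule
    obtained from the backstepping analysis)."""
    return [cases[t] > cases[t - 1] for t in range(1, len(cases) - 1)]

def direction_accuracy(cases):
    """Fraction of weeks where the predicted direction matched reality."""
    preds = predict_directions(cases)
    actual = [cases[t + 1] > cases[t] for t in range(1, len(cases) - 1)]
    return sum(p == a for p, a in zip(preds, actual)) / len(preds)
```

On a monotone epidemic curve the rule is always right; on a strictly alternating series it is always wrong, which makes clear why the 0.81 accuracy on real sentinel data is a nontrivial result.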

  3. Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik

    1993-01-01

Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed...

  4. NetMHCcons: a consensus method for the major histocompatibility complex class I predictions

    DEFF Research Database (Denmark)

    Karosiene, Edita; Lundegaard, Claus; Lund, Ole

    2012-01-01

    A key role in cell-mediated immunity is dedicated to the major histocompatibility complex (MHC) molecules that bind peptides for presentation on the cell surface. Several in silico methods capable of predicting peptide binding to MHC class I have been developed. The accuracy of these methods depe...... at www.cbs.dtu.dk/services/NetMHCcons, and allows the user in an automatic manner to obtain the most accurate predictions for any given MHC molecule....

  5. Pornographic information of Internet views detection method based on the connected areas

    Science.gov (United States)

    Wang, Huibai; Fan, Ajie

    2017-01-01

Nowadays, online porn video broadcasting and downloading are very popular. In view of the widespread phenomenon of Internet pornography, this paper proposes a new method of pornographic video detection based on connected areas. First, the video is decoded into a series of static images and skin color is detected on the extracted key frames. If the skin-color area reaches a certain threshold, the AdaBoost algorithm is used to detect the human face. Finally, the connectivity between the human face and the large skin-color area is judged to determine whether a sensitive area is detected. The experimental results show that the method can effectively filter out non-pornographic videos containing people wearing little clothing. This method can improve the efficiency and reduce the workload of detection.

  6. Measurement of the specific surface area of loose copper deposit by electrochemical methods

    Directory of Open Access Journals (Sweden)

    E. A. Dolmatova

    2016-07-01

In this work, the surface area of an electrode with a dispersed copper deposit obtained within 30 seconds was evaluated by chronopotentiometry (CPM) and impedance spectroscopy. In the CPM method, the electrode surface available for measurement depends on the value of the polarizing current. At high currents, the surface relief changes during the transition time, so the full surface of the loose deposit cannot be determined. The electrochemical impedance method is free of this shortcoming, since the measurements are carried out in an indifferent electrolyte in the absence of current. The area measured by impedance is tens of times higher than the value obtained by chronopotentiometry. It was found that deposits with a high specific surface area form from a solution containing sulfuric acid. Based on these data, it was concluded that the method of impedance spectroscopy can be used to measure in situ the surface area of dispersed copper deposits.

  7. Development and implementation of a geographical area categorisation method with targeted performance indicators for nationwide EMS in Finland.

    Science.gov (United States)

    Pappinen, Jukka; Laukkanen-Nevala, Päivi; Mäntyselkä, Pekka; Kurola, Jouni

    2018-05-15

In Finland, hospital districts (HD) are required by law to determine the level and availability of Emergency Medical Services (EMS) for each 1-km² area (cell) within their administrative area. The cells are currently categorised into five risk categories based on the predicted number of missions. Methodological defects and insufficient instructions have led to incomparability between EMS services. The aim of this study was to describe a new, nationwide method for categorising the cells, analyse EMS response time data and describe possible differences in mission profiles between the new risk category areas. National databases of EMS missions, population and buildings were combined with an existing nationwide 1-km² hexagon-shaped cell grid. The cells were categorised into four groups, based on the Finnish Environment Institute's (FEI) national definition of urban and rural areas, population and historical EMS mission density within each cell. The EMS mission profiles of the cell categories were compared using risk ratios with confidence intervals in 12 mission groups. In total, 87.3% of the population lives and 87.5% of missions took place in core or other urban areas, which covered only 4.7% of the HDs' surface area. Trauma mission incidence per 1000 inhabitants was higher in core urban areas (42.2) than in other urban (24.2) or dispersed settlement areas (24.6). The results were similar for non-trauma missions (134.8, 93.2 and 92.2, respectively). Each cell category had a characteristic mission profile. High-energy trauma missions and cardiac problems were more common in rural and uninhabited cells, while violence, intoxication and non-specific problems dominated in urban areas. The proposed area categories and grid-based data collection appear to be a useful method for evaluating EMS demand and availability in different parts of the country for statistical purposes. Due to a similar rural/urban area definition, the method might also be usable for

  8. Quantitative assessment and prediction of the contact area development during spherical tip indentation of glassy polymers.

    NARCIS (Netherlands)

    Pelletier, C.G.N.; Toonder, den J.M.J.; Govaert, L.E.; Hakiri, N.; Sakai, M.

    2008-01-01

    This paper describes the development of the contact area during indentation of polycarbonate. The contact area was measured in situ using an instrumented indentation microscope and compared with numerical simulations using an elasto-plastic constitutive model. The parameters in the model were

  9. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    Directory of Open Access Journals (Sweden)

    Nielsen Morten

    2009-07-01

Background: Estimation of the reliability of specific real-value predictions is nontrivial and the efficacy of this is often questionable. It is important to know if you can trust a given prediction, and therefore the best methods associate a prediction with a reliability score or index. For discrete qualitative predictions, the reliability is conventionally estimated as the difference between output scores of selected classes. Such an approach is not feasible for methods that predict a biological feature as a single real value rather than a classification. As a solution to this challenge, we have implemented a method that predicts the relative surface accessibility of an amino acid and simultaneously predicts the reliability of each prediction, in the form of a Z-score. Results: An ensemble of artificial neural networks has been trained on a set of experimentally solved protein structures to predict the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures, where reliabilities are obtained by post-processing the output. Conclusion: The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best publicly available method, Real-SPINE. Both methods associate a reliability score with the individual predictions. However, our implementation of reliability scores in the form of a Z-score is shown to be the more informative measure for discriminating good predictions from bad ones in the entire range from completely buried to fully exposed amino acids. This is evident when comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability.
For this subset, values of 0
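
The reliability-ranked evaluation described above is easy to reproduce in outline. The sketch below uses illustrative numbers, not the CB513 data (`pred`, `obs` and `z` are hypothetical): predictions are sorted by their reliability Z-score and Pearson's correlation is computed on the most reliable fraction.

```python
import math

def pearson(xs, ys):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def top_fraction_by_reliability(pred, obs, zscores, frac=0.2):
    """Keep the fraction of predictions with the highest reliability Z-score."""
    order = sorted(range(len(pred)), key=lambda i: zscores[i], reverse=True)
    k = max(2, int(len(pred) * frac))
    keep = order[:k]
    return [pred[i] for i in keep], [obs[i] for i in keep]

# Illustrative data: high-Z predictions track the observations closely.
obs = [0.1, 0.9, 0.4, 0.7, 0.2, 0.8, 0.5, 0.3, 0.6, 0.0]
pred = [0.12, 0.85, 0.10, 0.72, 0.60, 0.78, 0.48, 0.90, 0.58, 0.05]
z = [2.1, 2.3, 0.2, 1.9, 0.1, 2.0, 1.8, 0.3, 1.7, 2.2]

r_all = pearson(pred, obs)
p20, o20 = top_fraction_by_reliability(pred, obs, z, 0.2)
r_top = pearson(p20, o20)
```

With an informative reliability score, the correlation on the top fraction exceeds the overall correlation, which is the comparison the abstract describes.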

  10. The Use of Data Mining Methods to Predict the Result of Infertility Treatment Using the IVF ET Method

    Directory of Open Access Journals (Sweden)

    Malinowski Paweł

    2014-12-01

    Full Text Available The IVF ET method is a scientifically recognized infertility treatment method. The problem, however, is this method's unsatisfactory efficiency. This calls for a more thorough analysis of the information available in the treatment process, in order to detect the factors that have an effect on the results, as well as to effectively predict the result of treatment. Classical statistical methods have proven to be inadequate for this problem. Only the use of modern data mining methods gives hope for a more effective analysis of the collected data. This work provides an overview of the new methods used for the analysis of data on infertility treatment, and formulates a proposal for further directions of research into increasing the accuracy of predicting the result of the treatment process.

  11. Methods of developing core collections based on the predicted genotypic value of rice ( Oryza sativa L.).

    Science.gov (United States)

    Li, C T; Shi, C H; Wu, J G; Xu, H M; Zhang, H Z; Ren, Y L

    2004-04-01

    The selection of an appropriate sampling strategy and a clustering method is important in the construction of core collections based on predicted genotypic values in order to retain the greatest degree of genetic diversity of the initial collection. In this study, methods of developing rice core collections were evaluated based on the predicted genotypic values for 992 rice varieties with 13 quantitative traits. The genotypic values of the traits were predicted by the adjusted unbiased prediction (AUP) method. Based on the predicted genotypic values, Mahalanobis distances were calculated and employed to measure the genetic similarities among the rice varieties. Six hierarchical clustering methods, including the single linkage, median linkage, centroid, unweighted pair-group average, weighted pair-group average and flexible-beta methods, were combined with random, preferred and deviation sampling to develop 18 core collections of rice germplasm. The results show that the deviation sampling strategy in combination with the unweighted pair-group average method of hierarchical clustering retains the greatest degree of genetic diversity of the initial collection. The core collections sampled using predicted genotypic values had more genetic diversity than those based on phenotypic values.
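
The distance measure underlying this clustering can be illustrated for the two-trait case, where the 2x2 covariance matrix can be inverted by hand. The trait values below are made up for illustration, not the 992-variety rice data.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def covariance(xs, ys):
    """Sample covariance between two equal-length trait vectors."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def mahalanobis_2d(u, v, cov):
    """Mahalanobis distance between 2-D points u, v for a 2x2 covariance matrix."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))  # analytic 2x2 inverse
    dx, dy = u[0] - v[0], u[1] - v[1]
    # d^2 = [dx dy] * inv(Cov) * [dx dy]^T
    d2 = dx * (inv[0][0] * dx + inv[0][1] * dy) + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return math.sqrt(d2)

# Hypothetical predicted genotypic values for two traits across six varieties.
trait1 = [5.2, 6.1, 4.8, 7.0, 6.5, 5.9]
trait2 = [102.0, 110.0, 98.0, 120.0, 115.0, 108.0]
cov = ((covariance(trait1, trait1), covariance(trait1, trait2)),
       (covariance(trait2, trait1), covariance(trait2, trait2)))

varieties = list(zip(trait1, trait2))
d01 = mahalanobis_2d(varieties[0], varieties[1], cov)
d00 = mahalanobis_2d(varieties[0], varieties[0], cov)
```

A full pairwise distance matrix built this way is what the hierarchical clustering methods (single linkage, centroid, and so on) would consume.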

  12. Prediction of the solubility of selected pharmaceuticals in water and alcohols with a group contribution method

    International Nuclear Information System (INIS)

    Pelczarska, Aleksandra; Ramjugernath, Deresh; Rarey, Jurgen; Domańska, Urszula

    2013-01-01

    Highlights: ► The prediction of the solubility of pharmaceuticals in water and alcohols is presented. ► An improved group contribution method (UNIFAC) is proposed for 42 binary mixtures. ► Activity coefficients at infinite dilution were used in the model. ► A semi-predictive model with one experimental point is proposed. ► This model qualitatively describes the temperature dependency of pharmaceuticals. -- Abstract: An improved group contribution approach using activity coefficients at infinite dilution, which has been proposed by our group, was used for the prediction of the solubility of selected pharmaceuticals in water and alcohols [B. Moller, Activity of complex multifunctional organic compounds in common solvents, PhD Thesis, Chemical Engineering, University of KwaZulu-Natal, 2009]. The solubility of 16 different pharmaceuticals in water, ethanol and octan-1-ol was predicted over a fairly wide range of temperatures with this group contribution model. The predicted values, along with values computed with the Schroeder-van Laar equation, are compared to experimental results published by us previously for 42 binary mixtures. The predicted solubility values were lower than those from the experiments for most of the mixtures. In order to improve the prediction method, a semi-predictive calculation using one experimental solubility value was implemented. This one-point prediction has given acceptable results when compared to experimental values.
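
The Schroeder-van Laar comparison mentioned above rests on the ideal-solubility relation, which depends only on the melting properties of the solute. A minimal sketch of the simplified form, ln x = -(ΔHfus/R)(1/T - 1/Tm), neglecting the heat-capacity terms and using hypothetical melting data:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def ideal_solubility(delta_h_fus, t_melt, t):
    """Ideal mole-fraction solubility from the simplified Schroeder-van Laar
    equation: ln x = -(dHfus/R) * (1/T - 1/Tm)."""
    return math.exp(-delta_h_fus / R * (1.0 / t - 1.0 / t_melt))

# Hypothetical solute: enthalpy of fusion 25 kJ/mol, melting point 420 K.
x_37C = ideal_solubility(25_000.0, 420.0, 310.15)   # body temperature
x_at_melt = ideal_solubility(25_000.0, 420.0, 420.0)  # x -> 1 at the melting point
```

At the melting point the predicted mole fraction is exactly 1, and solubility rises with temperature, which is the qualitative temperature dependency the highlights refer to.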

  13. Method for Assessing the Integrated Risk of Soil Pollution in Industrial and Mining Gathering Areas

    Science.gov (United States)

    Guan, Yang; Shao, Chaofeng; Gu, Qingbao; Ju, Meiting; Zhang, Qian

    2015-01-01

    Industrial and mining activities are recognized as major sources of soil pollution. This study proposes an index system for evaluating the inherent risk level of polluting factories and introduces an integrated risk assessment method based on human health risk. As a case study, the health risk, polluting factories and integrated risks were analyzed in a typical industrial and mining gathering area in China, namely, Binhai New Area. The spatial distribution of the risk level was determined using a Geographic Information System. The results confirmed the following: (1) Human health risk in the study area is moderate to extreme, with heavy metals posing the greatest threat; (2) Polluting factories pose a moderate to extreme inherent risk in the study area. Such factories are concentrated in industrial and urban areas, but are irregularly distributed and also occupy agricultural land, showing a lack of proper planning and management; (3) The integrated risks of soil are moderate to high in the study area. PMID:26580644

  15. Estimation of debonded area in bearing babbitt metal by C-Scan method

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Gye-jo; Park, Sang-ki [Korea Electric Power Research Inst., Taejeon (Korea); Cha, Seok-ju [Korea South Eastern Power Corp., Seoul (Korea). GEN Sector; Park, Young-woo [Chungnam National Univ., Taejeon (Korea). Mechatronics

    2006-07-01

    The debonding area, which had a complex boundary, was imaged with an immersion technique, and the acoustic image was compared with the actual area. The amplitude information from the focused transducer can discriminate between a debonded boundary area and a sound interface of dissimilar metals. The irregular boundary shape and area were processed by histogram equalization; clustering and labelling then delineated the defect area clearly. Each pixel carries an ultrasonic intensity value and represents a position. The estimation error in measuring the debonding area was within 4% with the image processing technique. The validity of this immersion method and image equalizing technique was demonstrated in the inspection of power plant turbine thrust bearings. (orig.)

  16. Prediction of Aerosol Optical Depth in West Asia: Machine Learning Methods versus Numerical Models

    Science.gov (United States)

    Omid Nabavi, Seyed; Haimberger, Leopold; Abbasi, Reyhaneh; Samimi, Cyrus

    2017-04-01

    Dust-prone areas of West Asia are releasing increasingly large amounts of dust particles during warm months. Because of the lack of ground-based observations in the region, this phenomenon is mainly monitored through remotely sensed aerosol products. The recent development of mesoscale Numerical Models (NMs) has offered an unprecedented opportunity to predict dust emission, and subsequently Aerosol Optical Depth (AOD), at finer spatial and temporal resolutions. Nevertheless, significant uncertainties in input data and in the simulation of dust activation and transport limit the performance of numerical models in dust prediction. The presented study aims to evaluate whether machine-learning algorithms (MLAs), which require much less computational expense, can yield the same or even better performance than NMs. Deep Blue (DB) AOD, which is observed by satellites but also predicted by MLAs and NMs, is used for validation. We concentrate our evaluations on the dry Iraq plains, known as the main origin of the recently intensified dust storms in West Asia. Here we examine the performance of four MLAs: the Linear regression Model (LM), Support Vector Machine (SVM), Artificial Neural Network (ANN), and Multivariate Adaptive Regression Splines (MARS). The Weather Research and Forecasting model coupled to Chemistry (WRF-Chem) and the Dust REgional Atmosphere Model (DREAM) are included as NMs. The MACC aerosol re-analysis of the European Centre for Medium-range Weather Forecasts (ECMWF) is also included, although it has assimilated satellite-based AOD data. Using the Recursive Feature Elimination (RFE) method, nine environmental features including soil moisture and temperature, NDVI, dust source function, albedo, dust uplift potential, vertical velocity, precipitation and the 9-month SPEI drought index were selected for dust (AOD) modeling by the MLAs. During the feature selection process, we noticed that NDVI and SPEI are of the highest importance in the MLAs' predictions.
The data set was divided

  17. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we proposed a global variational inference-based method for approximately calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper-bound to the true predictive distribution. As the global minimum of this upper-bound exists, the problem is reduced to seeking an approximation to the true predictive distribution...

  18. Supplementary Material for: DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-01-01

    Abstract Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To date, many computational methods have been proposed for this purpose, but they suffer from the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top-ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery.
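
The path-based idea can be sketched on a toy graph. The scoring below is a simplified stand-in, not DASPfind's actual scoring function: each simple path from a drug to a target contributes the product of its edge weights, damped by path length, so direct interactions and short similarity detours dominate.

```python
def path_scores(graph, source, target, max_len=3, damping=2.0):
    """Score a source-target association by summing, over all simple paths of
    up to max_len edges, the product of edge weights damped by path length.
    A simplified stand-in for DASPfind-style path scoring."""
    total = 0.0
    stack = [(source, [source], 1.0)]
    while stack:
        node, path, weight = stack.pop()
        for nbr, w in graph.get(node, {}).items():
            if nbr in path:
                continue  # simple paths only: no revisited nodes
            new_w = weight * w
            if nbr == target:
                total += new_w / damping ** len(path)  # damp longer paths
            elif len(path) < max_len:
                stack.append((nbr, path + [nbr], new_w))
    return total

# Toy heterogeneous graph: drugs d1/d2, targets t1/t2; weights encode known
# interactions (1.0) and similarities (illustrative values).
graph = {
    "d1": {"t1": 1.0, "d2": 0.8},
    "d2": {"d1": 0.8, "t2": 1.0},
    "t1": {"d1": 1.0, "t2": 0.5},
    "t2": {"d2": 1.0, "t1": 0.5},
}
score_known = path_scores(graph, "d1", "t1")  # direct edge plus a long detour
score_novel = path_scores(graph, "d1", "t2")  # only indirect paths
```

Ranking all drug-target pairs by such a score is the kind of output a single-top-ranked-prediction evaluation would consume.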

  19. Prediction of intestinal absorption and blood-brain barrier penetration by computational methods.

    Science.gov (United States)

    Clark, D E

    2001-09-01

    This review surveys the computational methods that have been developed with the aim of identifying drug candidates likely to fail later on the road to market. The specifications for such computational methods are outlined, including factors such as speed, interpretability, robustness and accuracy. Then, computational filters aimed at predicting "drug-likeness" in a general sense are discussed before methods for the prediction of more specific properties--intestinal absorption and blood-brain barrier penetration--are reviewed. Directions for future research are discussed and, in concluding, the impact of these methods on the drug discovery process, both now and in the future, is briefly considered.
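
A classic example of the "drug-likeness" filters surveyed here is Lipinski's rule of five, which flags a compound as a likely poor oral absorber when it violates more than one of four simple property limits. A minimal sketch:

```python
def rule_of_five_violations(mol_weight, logp, h_donors, h_acceptors):
    """Count violations of Lipinski's 'rule of five': molecular weight <= 500,
    logP <= 5, H-bond donors <= 5, H-bond acceptors <= 10."""
    violations = 0
    if mol_weight > 500:
        violations += 1
    if logp > 5:
        violations += 1
    if h_donors > 5:
        violations += 1
    if h_acceptors > 10:
        violations += 1
    return violations

def looks_drug_like(mol_weight, logp, h_donors, h_acceptors):
    """Poor absorption is considered likely only above one violation."""
    return rule_of_five_violations(mol_weight, logp, h_donors, h_acceptors) <= 1

# An aspirin-like profile (MW ~180, logP ~1.2, 1 donor, 4 acceptors) passes;
# a large, lipophilic compound fails on several counts.
aspirin_ok = looks_drug_like(180.2, 1.2, 1, 4)
big_fails = looks_drug_like(720.0, 6.3, 6, 12)
```

Filters of this kind are fast and interpretable, which is exactly the trade-off against accuracy that the review's specifications discuss.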

  20. Reliability of CKD-EPI predictive equation in estimating chronic kidney disease prevalence in the Croatian endemic nephropathy area.

    Science.gov (United States)

    Fuček, Mirjana; Dika, Živka; Karanović, Sandra; Vuković Brinar, Ivana; Premužić, Vedran; Kos, Jelena; Cvitković, Ante; Mišić, Maja; Samardžić, Josip; Rogić, Dunja; Jelaković, Bojan

    2018-02-15

    Chronic kidney disease (CKD) is a significant public health problem, and it is not possible to precisely predict its progression to terminal renal failure. According to current guidelines, CKD stages are classified based on the estimated glomerular filtration rate (eGFR) and albuminuria. The aims of this study were to determine the reliability of a predictive equation in estimating CKD prevalence in Croatian areas with endemic nephropathy (EN), to compare the results with non-endemic areas, and to determine whether the prevalence of CKD stages 3-5 was increased in subjects with EN. A total of 1573 inhabitants of the Croatian Posavina rural area from 6 endemic and 3 non-endemic villages were enrolled. Participants were classified according to the modified criteria of the World Health Organization for EN. Estimated GFR was calculated using the Chronic Kidney Disease Epidemiology Collaboration equation (CKD-EPI). The results showed a very high CKD prevalence in the Croatian rural area (19%). CKD prevalence was significantly higher in EN than in non-EN villages, with the lowest eGFR values in the diseased subgroup. eGFR correlated significantly with the diagnosis of EN. Kidney function assessment using the CKD-EPI predictive equation proved to be a good marker for differentiating the study subgroups and remains one of the diagnostic criteria for EN.
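
The CKD-EPI creatinine equation (2009) used for eGFR estimation can be written compactly. A sketch (serum creatinine in mg/dL, age in years; the race coefficient of the original 2009 formulation is included as an optional flag):

```python
def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
    """Estimated GFR (mL/min/1.73 m^2) from the CKD-EPI 2009 creatinine
    equation: 141 * min(Scr/k, 1)^alpha * max(Scr/k, 1)^-1.209 * 0.993^Age,
    times 1.018 if female (and 1.159 if black, in the 2009 formulation)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# A 50-year-old woman with serum creatinine 0.7 mg/dL.
egfr = ckd_epi_egfr(0.7, 50, female=True)
stage_3_or_worse = egfr < 60  # the eGFR threshold for CKD stages 3-5
```

Applying this per participant and counting eGFR values below 60 is how a stages 3-5 prevalence figure like the one above is obtained.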

  1. A prediction method based on grey system theory in equipment condition based maintenance

    International Nuclear Information System (INIS)

    Yan, Shengyuan; Yan, Shengyuan; Zhang, Hongguo; Zhang, Zhijian; Peng, Minjun; Yang, Ming

    2007-01-01

    Grey prediction is a modeling method based on historical or present, known or indefinite information, which can forecast the development of the eigenvalues of the targeted equipment system and set up a model using limited information. In this paper, the postulates of grey system theory, including grey generating, the sorts of grey generating and the grey forecasting model, are introduced first. The concrete application process, which includes grey prediction modeling, grey prediction, error calculation, and the equal dimension and new information approach, is introduced second. The so-called 'Equal Dimension and New Information' (EDNI) technology of grey system theory is adopted in an application case, aiming at improving the accuracy of prediction without increasing the amount of calculation by replacing old data with new ones. The proposed method provides a new way to deal effectively with growing eigenvalue data in equal-distance, short-time-interval and real-time prediction. The method was verified by the vibration prediction of an induced draft fan of a boiler at the Yantai Power Station in China, and the results show that the proposed method based on grey system theory is simple and provides high prediction accuracy. It is therefore useful and significant for control and management in safe production. (authors)
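
The grey forecasting model at the core of such methods is typically GM(1,1). A compact sketch of fitting and one-step-ahead prediction on a toy series (not the fan vibration data): the accumulated series is modeled by dx1/dt + a*x1 = b, with a and b found by least squares.

```python
import math

def gm11_predict(x0, steps=1):
    """Fit a GM(1,1) grey model to the series x0 and forecast `steps` ahead."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    # background values: means of consecutive accumulated points
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least squares for x0(k) = -a*z(k) + b  (slope = -a, intercept = b)
    mz, my = sum(z) / len(z), sum(y) / len(y)
    slope = (sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
             / sum((zi - mz) ** 2 for zi in z))
    a, b = -slope, my - slope * mz

    def x1_hat(k):  # 0-based index into the accumulated series
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # forecast of the original series: differences of the accumulated fit
    return [x1_hat(n - 1 + s) - x1_hat(n - 2 + s) for s in range(1, steps + 1)]

# Toy near-geometric series (ratio 1.1); the next value of the generating
# process would be 146.41.
history = [100.0, 110.0, 121.0, 133.1]
forecast = gm11_predict(history, steps=1)[0]
```

The EDNI scheme described above corresponds to appending each new observation, dropping the oldest one, and refitting, so the model dimension stays constant while the information stays fresh.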

  2. Bayesian Predictive Inference of a Proportion Under a Twofold Small-Area Model

    Directory of Open Access Journals (Sweden)

    Nandram Balgobin

    2016-03-01

    Full Text Available We extend the twofold small-area model of Stukel and Rao (1997; 1999) to accommodate binary data. An example is the Third International Mathematics and Science Study (TIMSS), in which pass-fail data for mathematics of students from US schools (clusters) are available at the third grade by regions and communities (small areas). We compare the finite population proportions of these small areas. We present a hierarchical Bayesian model in which the first-stage binary responses have independent Bernoulli distributions, and each subsequent stage is modeled using a beta distribution, which is parameterized by its mean and a correlation coefficient. This twofold small-area model has an intracluster correlation at the first stage and an intercluster correlation at the second stage. The final-stage mean and all correlations are assumed to be noninformative independent random variables. We show how to infer the finite population proportion of each area. We have applied our models to synthetic TIMSS data to show that the twofold model is preferred over a onefold small-area model that ignores the clustering within areas. We further compare these models using a simulation study, which shows that the intracluster correlation is particularly important.
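
The pooling effect of a hierarchical prior can be illustrated with a much simpler empirical-Bayes sketch: each area's raw proportion is shrunk toward the overall proportion, and small areas are shrunk the most. This is a one-level simplification of the twofold model, applied to synthetic counts rather than TIMSS data.

```python
def shrunken_proportions(successes, totals, prior_strength=10.0):
    """Shrink each area's raw proportion toward the overall proportion using a
    Beta(prior_strength*p, prior_strength*(1-p)) prior centered at the overall
    rate p. A one-level simplification of a hierarchical small-area model."""
    overall = sum(successes) / sum(totals)
    a0 = prior_strength * overall
    b0 = prior_strength * (1.0 - overall)
    # posterior mean of a Beta-Bernoulli model per area
    return [(a0 + s) / (a0 + b0 + n) for s, n in zip(successes, totals)]

# Synthetic pass counts for four small areas of very different sizes.
passes = [9, 40, 3, 180]
sizes = [10, 80, 12, 300]
est = shrunken_proportions(passes, sizes)
raw = [s / n for s, n in zip(passes, sizes)]
overall = sum(passes) / sum(sizes)
```

Each estimate lies between the area's raw rate and the overall rate, and the tiny 10-student area moves much further toward the pooled value than the 300-student area, which is the qualitative behavior the full Bayesian model delivers with properly inferred correlations.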

  3. The Methods for Diagnosing the Attractiveness of Ecological Entrepreneurship in Rural Areas

    Directory of Open Access Journals (Sweden)

    Shuliak Bogdan V.

    2018-03-01

    Full Text Available The study is aimed at substantiating the methods for diagnosing the attractiveness of ecological entrepreneurship in rural areas. The article defines the objectives of diagnosing the attractiveness of ecological entrepreneurship in rural areas. It is determined that the methods for diagnosing the attractiveness of environmentally oriented entrepreneurial activity should take into account its effectiveness in the context of economic, ecological, and social components; the current status, dynamics and tendencies of development of basic indicators of attractiveness; and the relationship between the actual and target values of such indicators. The system of methods expedient for use in the process of diagnosing is defined as follows: methods of correlation analysis (substantiation of the purposes of diagnostics); coefficient analysis and fuzzy logic methods (estimation of the actual levels of attractiveness indicators); regression analysis and mathematical-statistical methods (estimation of tendencies, building of forecasts); and cluster analysis, ranking, rationing, and integral estimation (comparative analysis of the estimation results).

  4. Method for simulating predictive control of building systems operation in the early stages of building design

    DEFF Research Database (Denmark)

    Petersen, Steffen; Svendsen, Svend

    2011-01-01

    A method for simulating predictive control of building systems operation in the early stages of building design is presented. The method uses building simulation based on weather forecasts to predict whether there is a future heating or cooling requirement. This information enables the thermal control systems of the building to respond proactively to keep the operational temperature within the thermal comfort range with the minimum use of energy. The method is implemented in an existing building simulation tool designed to inform decisions in the early stages of building design through parametric analysis. This enables building designers to predict the performance of the method and include it as a part of the solution space. The method furthermore facilitates the task of configuring appropriate building systems control schemes in the tool, and it eliminates time-consuming manual...

  5. Study on model current predictive control method of PV grid- connected inverters systems with voltage sag

    Science.gov (United States)

    Jin, N.; Yang, F.; Shang, S. Y.; Tao, T.; Liu, J. S.

    2016-08-01

    To address the limitations of the low voltage ride through (LVRT) technology of traditional photovoltaic inverters, this paper proposes an LVRT control method based on model current predictive control (MCPC). This method can effectively improve the photovoltaic inverter's output characteristics and response speed. In the MCPC method designed for the photovoltaic grid-connected inverter, the sum of the absolute values of the errors between the predicted and the given currents is adopted as the cost function. According to the MCPC, the optimal space voltage vector is selected. The photovoltaic inverter automatically switches between two control modes, prioritizing active or reactive power control according to the operating state, which effectively improves the inverter's LVRT capability. The simulation and experimental results prove that the proposed method is correct and effective.
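
The vector-selection step can be sketched with a one-step current prediction for a simple RL branch (a single-phase stand-in for the full space-vector set; all parameter values are illustrative): each candidate voltage is fed through the prediction model, and the candidate minimizing the absolute current error is applied.

```python
def predict_current(i_now, v_applied, r=0.1, l=0.01, ts=1e-4):
    """One-step Euler prediction of an RL branch current:
    i(k+1) = i(k) + Ts/L * (v - R*i(k))."""
    return i_now + ts / l * (v_applied - r * i_now)

def best_vector(i_now, i_ref, candidate_voltages):
    """Pick the candidate minimizing the MCPC-style cost |i_ref - i_predicted|."""
    return min(candidate_voltages,
               key=lambda v: abs(i_ref - predict_current(i_now, v)))

# Simplified candidate set standing in for the space voltage vectors.
vectors = [-300.0, -150.0, 0.0, 150.0, 300.0]
i_now, i_ref = 10.0, 12.0
v_opt = best_vector(i_now, i_ref, vectors)
err_opt = abs(i_ref - predict_current(i_now, v_opt))
err_zero = abs(i_ref - predict_current(i_now, 0.0))
```

In the real inverter the candidates are the discrete space voltage vectors and the prediction model covers both current components, but the minimize-the-cost selection loop has this shape.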

  6. A Prediction Method of Airport Noise Based on Hybrid Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Tao XU

    2014-05-01

    Full Text Available Using monitoring history data to build and train a prediction model for airport noise has become a common approach in recent years. However, single models built in different ways vary in storage requirements, efficiency and accuracy. In order to predict the noise accurately in the complex environment around an airport, this paper presents a prediction method based on hybrid ensemble learning. The proposed method ensembles three algorithms: an artificial neural network as an active learner, nearest neighbor as a passive learner and nonlinear regression as a synthesized learner. The experimental results show that the three learners can meet forecasting demands in online, near-line and off-line settings, respectively, and the accuracy of prediction is improved by integrating these three learners' results.

  7. A study on the fatigue life prediction of tire belt-layers using probabilistic method

    International Nuclear Information System (INIS)

    Lee, Dong Woo; Park, Jong Sang; Lee, Tae Won; Kim, Seong Rae; Sung, Ki Deug; Huh, Sun Chul

    2013-01-01

    Tire belt separation failure is caused by internal cracks generated in the belt layers and by their growth, and belt failure seriously affects tire endurance. Therefore, to improve tire endurance, it is necessary to analyze tire crack growth behavior and predict fatigue life. Generally, the prediction of tire endurance is performed experimentally using a tire test machine, but such experiments take much cost and time. In this paper, to predict tire fatigue life, we applied a deterministic fracture mechanics approach based on finite element analysis. A probabilistic analysis method based on statistics using Monte Carlo simulation is also presented. Both methods include a global-local finite element analysis to provide the detail necessary to model an internal crack explicitly and calculate the J-integral for tire life prediction.
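
The deterministic fracture-mechanics part can be sketched as a Paris-law crack-growth integration, da/dN = C(ΔK)^m with ΔK = Δσ·Y·√(πa). The constants below are illustrative, not tire-belt values:

```python
import math

def paris_life(a0, ac, d_sigma, c=1e-12, m=3.0, y=1.0, steps=10000):
    """Cycles to grow a crack from a0 to ac under Paris' law, integrated
    numerically: dN = da / (C * (d_sigma * Y * sqrt(pi*a))**m)."""
    da = (ac - a0) / steps
    cycles = 0.0
    a = a0
    for _ in range(steps):
        dk = d_sigma * y * math.sqrt(math.pi * (a + 0.5 * da))  # midpoint rule
        cycles += da / (c * dk ** m)
        a += da
    return cycles

# Crack growing from 0.5 mm to 5 mm under stress ranges of 100 and 150 MPa
# (units kept consistent: MPa and meters give dK in MPa*sqrt(m)).
life_low = paris_life(0.0005, 0.005, 100.0)
life_high = paris_life(0.0005, 0.005, 150.0)
```

The probabilistic variant would draw C, m and the initial crack size from distributions and repeat this integration in a Monte Carlo loop; note that for fixed geometry the life scales exactly as Δσ^(-m).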

  8. A summary of methods of predicting reliability life of nuclear equipment with small samples

    International Nuclear Information System (INIS)

    Liao Weixian

    2000-03-01

    Some nuclear equipment is manufactured in small batches, e.g., 1-3 sets. Its service life may be very difficult to determine experimentally for reasons of economy and technology. A method combining theoretical analysis with material tests to predict the life of equipment is put forward, based on the fact that equipment consists of parts or elements made of different materials. The whole life of an equipment part consists of the crack forming life (i.e., the fatigue life or the damage accumulation life) and the crack extension life. Methods of predicting machine life are systematically summarized, with emphasis on those which use theoretical analysis to substitute for large-scale prototype experiments. Meanwhile, methods and steps of predicting reliability life are described, taking into consideration the randomness of various variables and parameters in engineering. Finally, the latest advances and trends in machine life prediction are discussed

  9. A noise level prediction method based on electro-mechanical frequency response function for capacitors.

    Science.gov (United States)

    Zhu, Lingyu; Ji, Shengchang; Shen, Qi; Liu, Yuan; Li, Jinyu; Liu, Hao

    2013-01-01

    The capacitors in high-voltage direct-current (HVDC) converter stations radiate a great deal of audible noise, which can exceed 100 dB. Existing noise level prediction methods are not satisfactory. In this paper, a new noise level prediction method is proposed based on a frequency response function considering both the electrical and mechanical characteristics of capacitors. The electro-mechanical frequency response function (EMFRF) is defined as the frequency-domain quotient of the vibration response and the squared capacitor voltage, and it is obtained from an impulse current experiment. Under given excitations, the vibration response of the capacitor tank is the product of the EMFRF and the square of the given capacitor voltage in the frequency domain, and the radiated audible noise is calculated by structure-acoustic coupling formulas. The noise level under the same excitations was also measured in the laboratory, and the results are compared with the prediction. The comparison proves that the noise prediction method is effective.

  10. Left atrial low-voltage areas predict atrial fibrillation recurrence after catheter ablation in patients with paroxysmal atrial fibrillation.

    Science.gov (United States)

    Masuda, Masaharu; Fujita, Masashi; Iida, Osamu; Okamoto, Shin; Ishihara, Takayuki; Nanto, Kiyonori; Kanda, Takashi; Tsujimura, Takuya; Matsuda, Yasuhiro; Okuno, Shota; Ohashi, Takuya; Tsuji, Aki; Mano, Toshiaki

    2018-04-15

    Association between the presence of left atrial low-voltage areas and atrial fibrillation (AF) recurrence after pulmonary vein isolation (PVI) has been shown mainly in persistent AF patients. We sought to compare the AF recurrence rate in paroxysmal AF patients with and without left atrial low-voltage areas. This prospective observational study included 147 consecutive patients undergoing initial ablation for paroxysmal AF. Voltage mapping was performed after PVI during sinus rhythm, and low-voltage areas were defined as regions where the bipolar peak-to-peak voltage fell below a predefined cutoff. Low-voltage areas after PVI were observed in 22 (15%) patients. Patients with low-voltage areas were significantly older (72±6 vs. 66±10 years), and AF recurrence was more frequent in patients with low-voltage areas than in those without (36% vs. 6%). Low-voltage areas were independently associated with AF recurrence even after adjustment for the other related factors (hazard ratio, 5.89; 95% confidence interval, 2.16 to 16.0, p=0.001). The presence of left atrial low-voltage areas after PVI predicts AF recurrence in patients with paroxysmal AF as well as in patients with persistent AF. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Location estimation method of steam leak in pipeline using leakage area analysis

    International Nuclear Information System (INIS)

    Kim, Se Oh; Jeon, Hyeong Seop; Son, Ki Sung; Park, Jong Won

    2016-01-01

    It is important to have a pipeline leak-detection system that determines the presence of a leak and quickly identifies its location. Current leak detection methods use acoustic emission sensors, microphone arrays, and camera images. Recently, many researchers have been focusing on using cameras for detecting leaks. The advantage of this method is that it can survey a wide area and monitor a pipeline over a long distance. However, conventional methods using camera monitoring are unable to pinpoint an exact leak location. In this paper, we propose a method of detecting leak locations using leak-detection results combined with multi-frame analysis. The proposed method is verified by experiment.
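
The multi-frame analysis step can be sketched as frame differencing followed by a centroid of the persistently changed region. This is a simplified illustration, not the authors' algorithm; synthetic 2-D intensity grids stand in for camera frames.

```python
def leak_centroid(reference, frames, threshold):
    """Average the absolute frame-to-reference differences over all frames and
    return the intensity-weighted centroid of pixels whose mean change
    exceeds the threshold (None if nothing exceeds it)."""
    rows, cols = len(reference), len(reference[0])
    acc = [[0.0] * cols for _ in range(rows)]
    for frame in frames:
        for r in range(rows):
            for c in range(cols):
                acc[r][c] += abs(frame[r][c] - reference[r][c]) / len(frames)
    wsum = rsum = csum = 0.0
    for r in range(rows):
        for c in range(cols):
            if acc[r][c] > threshold:
                wsum += acc[r][c]
                rsum += r * acc[r][c]
                csum += c * acc[r][c]
    if wsum == 0.0:
        return None
    return rsum / wsum, csum / wsum

# Synthetic 8x8 frames: a steam plume flickers around pixel (2, 5).
ref = [[0.0] * 8 for _ in range(8)]
frames = []
for k in range(5):
    f = [row[:] for row in ref]
    f[2][5] = 10.0 + k   # persistent bright spot
    f[3][5] = 4.0        # weaker fringe of the plume
    frames.append(f)
centroid = leak_centroid(ref, frames, threshold=1.0)
```

Averaging over many frames is what suppresses one-off glints and keeps only the persistent change that a real leak would produce.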

  12. Location estimation method of steam leak in pipeline using leakage area analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Se Oh; Jeon, Hyeong Seop; Son, Ki Sung [Sae An Engineering Corp., Seoul (Korea, Republic of); Park, Jong Won [Dept. of Information Communications Engineering, Chungnam National University, Daejeon (Korea, Republic of)

    2016-10-15

    It is important to have a pipeline leak-detection system that determines the presence of a leak and quickly identifies its location. Current leak detection methods use acoustic emission sensors, microphone arrays, and camera images. Recently, many researchers have been focusing on using cameras for detecting leaks. The advantage of this method is that it can survey a wide area and monitor a pipeline over a long distance. However, conventional methods using camera monitoring are unable to pinpoint an exact leak location. In this paper, we propose a method of detecting leak locations using leak-detection results combined with multi-frame analysis. The proposed method is verified by experiment.

  13. A comparison of radiosity with current methods of sound level prediction in commercial spaces

    Science.gov (United States)

    Beamer, C. Walter, IV; Muehleisen, Ralph T.

    2002-11-01

    The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specularly reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]
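
The energy balance behind the radiosity method, B_i = E_i + ρ_i Σ_j F_ij B_j for diffusely reflecting patches, can be solved by simple fixed-point iteration. A toy three-patch sketch with illustrative emissions, reflectances and form factors:

```python
def solve_radiosity(emission, reflectance, form_factors, iters=200):
    """Fixed-point solve of B_i = E_i + rho_i * sum_j F_ij * B_j, the energy
    balance system underlying radiosity-type methods. Converges when the
    reflectances are below 1 and the form-factor rows sum to at most 1."""
    n = len(emission)
    b = list(emission)
    for _ in range(iters):
        b = [emission[i] + reflectance[i] * sum(form_factors[i][j] * b[j]
                                                for j in range(n))
             for i in range(n)]
    return b

# Three-patch toy room: patch 0 carries the source energy; patches 1 and 2
# are symmetric receivers. Rows of F sum to 1 (a closed enclosure).
E = [1.0, 0.0, 0.0]
rho = [0.3, 0.5, 0.5]
F = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
B = solve_radiosity(E, rho, F)
```

At convergence each patch's radiosity satisfies its own balance equation, and the symmetric patches carry identical energy, which is the self-consistency a direct linear solve of the same system would give.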

  14. An ensemble method for predicting subnuclear localizations from primary protein structures.

    Directory of Open Access Journals (Sweden)

    Guo Sheng Han

    Full Text Available BACKGROUND: Predicting protein subnuclear localization is a challenging problem. Some previous works based on non-sequence information including Gene Ontology annotations and kernel fusion have respective limitations. The aim of this work is twofold: one is to propose a novel individual feature extraction method; another is to develop an ensemble method to improve prediction performance using comprehensive information represented in the form of high dimensional feature vector obtained by 11 feature extraction methods. METHODOLOGY/PRINCIPAL FINDINGS: A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It only considers those feature extraction methods based on amino acid classifications and physicochemical properties. In order to speed up our system, an automatic search method for the kernel parameter is used. The prediction performance of our method is evaluated on four datasets: Lei dataset, multi-localization dataset, SNL9 dataset and a new independent dataset. The overall accuracy of prediction for 6 localizations on Lei dataset is 75.2% and that for 9 localizations on SNL9 dataset is 72.1% in the leave-one-out cross validation, 71.7% for the multi-localization dataset and 69.8% for the new independent dataset, respectively. Comparisons with existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of our classification model are further confirmed by permutation analysis. CONCLUSIONS: It can be concluded that our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method.

  15. Evaluating polymer degradation with complex mixtures using a simplified surface area method.

    Science.gov (United States)

    Steele, Kandace M; Pelham, Todd; Phalen, Robert N

    2017-09-01

    Chemical-resistant gloves, designed to protect workers from chemical hazards, are made from a variety of polymer materials such as plastic, rubber, and synthetic rubber. One material does not provide protection against all chemicals, thus proper polymer selection is critical. Standardized tests, such as chemical degradation tests, are used to aid in the selection process. The current methods of degradation rating based on changes in weight or tensile properties can be expensive, and data often do not exist for complex chemical mixtures. There are hundreds of thousands of chemical products on the market that do not have chemical resistance data for polymer selection. The method described in this study provides an inexpensive alternative to gravimetric analysis. This method uses surface area change to evaluate degradation of a polymer material. Degradation tests for 5 polymer types against 50 complex mixtures were conducted using both gravimetric and surface area methods. The percent change data were compared between the two methods. The resulting regression line was y = 0.48x + 0.019, in units of percent, and the Pearson correlation coefficient was r = 0.9537 (p ≤ 0.05), which indicated a strong correlation between percent weight change and percent surface area change. On average, the percent change for surface area was about half that of the weight change. Using this information, an equivalent rating system was developed for determining the chemical degradation of polymer gloves using surface area.
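Assuming the reported regression (y = 0.48x + 0.019, with y the percent surface-area change and x the percent weight change), a surface-area measurement can be converted into an equivalent weight-change rating by inverting the line. The sketch below is an illustration of that reading; the sample dimensions are hypothetical:

```python
def percent_change(before, after):
    """Percent change between two measurements (e.g. glove swatch area)."""
    return 100.0 * (after - before) / before

def equivalent_weight_change(surface_area_change):
    """Estimate percent weight change from percent surface-area change by
    inverting the study's regression y = 0.48*x + 0.019 (both in percent)."""
    return (surface_area_change - 0.019) / 0.48

# Hypothetical sample: a glove swatch swells from 10.00 to 11.00 cm^2.
sa_change = percent_change(10.00, 11.00)            # 10% surface-area change
est_weight_change = equivalent_weight_change(sa_change)
```

Consistent with the abstract's observation that surface-area change is roughly half the weight change, a 10% area change maps to an estimated weight change of about 20.8%.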

  16. KFC2: a knowledge-based hot spot prediction method based on interface solvation, atomic density, and plasticity features.

    Science.gov (United States)

    Zhu, Xiaolei; Mitchell, Julie C

    2011-09-01

    Hot spots constitute a small fraction of protein-protein interface residues, yet they account for a large fraction of the binding affinity. Based on our previous method (KFC), we present two new methods (KFC2a and KFC2b) that outperform other methods at hot spot prediction. A number of improvements were made in developing these new methods. First, we created a training data set that contained a similar number of hot spot and non-hot spot residues. In addition, we generated 47 different features, and different numbers of features were used to train the models to avoid over-fitting. Finally, two feature combinations were selected: One (used in KFC2a) is composed of eight features that are mainly related to solvent accessible surface area and local plasticity; the other (KFC2b) is composed of seven features, only two of which are identical to those used in KFC2a. The two models were built using support vector machines (SVM). The two KFC2 models were then tested on a mixed independent test set, and compared with other methods such as Robetta, FOLDEF, HotPoint, MINERVA, and KFC. KFC2a showed the highest predictive accuracy for hot spot residues (True Positive Rate: TPR = 0.85); however, the false positive rate was somewhat higher than for other models. KFC2b showed the best predictive accuracy for hot spot residues (True Positive Rate: TPR = 0.62) among all methods other than KFC2a, and the False Positive Rate (FPR = 0.15) was comparable with other highly predictive methods. Copyright © 2011 Wiley-Liss, Inc.

  17. An ensemble method to predict target genes and pathways in uveal melanoma

    Directory of Open Access Journals (Sweden)

    Wei Chao

    2018-04-01

    Full Text Available This work proposes to predict target genes and pathways for uveal melanoma (UM) based on an ensemble method and pathway analyses. Methods: The ensemble method integrated a correlation method (Pearson correlation coefficient, PCC), a causal inference method (IDA) and a regression method (Lasso) utilizing the Borda count election method. Subsequently, to validate the performance of the PIL method, comparisons between the confirmed database and predicted miRNA targets were performed. Ultimately, pathway enrichment analysis was conducted on target genes in the top 1,000 miRNA-mRNA interactions to identify target pathways for UM patients. Results: Thirty-eight of the predicted interactions matched the confirmed interactions, indicating that the ensemble method is a suitable and feasible approach to predict miRNA targets. We obtained 50 seed miRNA-mRNA interactions of UM patients and extracted target genes from these interactions, such as ASPG, BSDC1 and C4BP. The 601 target genes in the top 1,000 miRNA-mRNA interactions were enriched in 12 target pathways, of which Phototransduction was the most significant. Conclusion: The target genes and pathways might provide a new way to reveal the molecular mechanism of UM and aid targeted treatment and prevention of this malignant tumor.
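The Borda count election step used to merge the three methods' target rankings can be sketched as follows. The gene names and rankings are hypothetical, and the scoring (n points for first place down to 1 for last) is one common Borda variant, not necessarily the paper's exact formulation:

```python
def borda_combine(rankings):
    """Combine several ranked candidate lists (best first) by Borda count:
    a candidate at position p in a list of n earns n - p points."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for position, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + (n - position)
    # Sort by total score, highest first; ties broken alphabetically.
    return sorted(scores, key=lambda c: (-scores[c], c))

# Hypothetical target rankings for one miRNA from the three methods:
pcc   = ["geneA", "geneB", "geneC"]
ida   = ["geneB", "geneA", "geneC"]
lasso = ["geneA", "geneC", "geneB"]
consensus = borda_combine([pcc, ida, lasso])
```

Here geneA accumulates 8 points, geneB 6 and geneC 4, so the consensus ranking places geneA first even though IDA alone preferred geneB.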

  18. An acoustic method for predicting relative strengths of cohesive sediment deposits

    Science.gov (United States)

    Reed, A. H.; Sanders, W. M.

    2017-12-01

    Cohesive sediment dynamics are fundamentally determined by sediment mineralogy, organic matter composition, ionic strength of water, and currents. These factors work to bind the cohesive sediments and to determine depositional rates. Once deposited, the sediments exhibit a nonlinear response to stress and develop increases in shear strength. Shear strength is critically important in resuspension, transport, creep, and failure predictions. Typically, shear strength is determined by point measurements, either indirectly with free-fall penetrometers or directly on cores with a shear vane. These values are then used to interpolate over larger areas. Remote determination of these properties would provide continuous coverage, but it has proven difficult with sonar systems. Recently, findings from an acoustic study on cohesive sediments in a laboratory setting suggest that cohesive sediments may be differentiated using parametric acoustics; this method pulses two primary frequencies into the sediment, and the resultant difference frequency is used to determine the degree of acoustic nonlinearity within the sediment. In this study, two marine clay species, kaolinite and montmorillonite, and two biopolymers, guar gum and xanthan gum, were mixed to make nine different samples. The samples were evaluated in a parametric acoustic measurement tank, and the quadratic nonlinearity coefficient (beta) was determined and correlated with the cation exchange capacity (CEC), an indicator of shear strength. The results indicate that increased acoustic nonlinearity correlates with increased CEC. These laboratory measurements suggest that this correlation may be used to evaluate geotechnical properties of cohesive sediments and may provide a means to predict sediment weakness in subaqueous environments.

  19. A channel-by-channel method of reducing the errors associated with peak area integration

    International Nuclear Information System (INIS)

    Luedeke, T.P.; Tripard, G.E.

    1996-01-01

    A new method of reducing the errors associated with peak area integration has been developed. This method utilizes the signal content of each channel as an estimate of the overall peak area. These individual estimates are then weighted according to the precision with which each estimate is known, producing an overall area estimate. Experimental measurements were performed on a small peak sitting on a large background, and the results were compared with those obtained from a commercial software program. The results showed a marked decrease in the spread of results around the true value (obtained by counting for a long period of time) and a reduction in the statistical uncertainty associated with the peak area. (orig.)
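The abstract does not spell out the weighting formula, but a natural reading is an inverse-variance weighted average of per-channel area estimates under Poisson counting statistics. The sketch below is an assumption-laden illustration of that idea, with an invented normalized peak shape; it is not the paper's exact algorithm:

```python
import numpy as np

def weighted_peak_area(counts, background, shape):
    """Combine per-channel peak-area estimates by inverse-variance weighting.

    counts:     observed counts in each channel of the peak region
    background: estimated background counts per channel
    shape:      assumed normalized peak shape (channel fractions summing to 1)
    """
    counts = np.asarray(counts, float)
    background = np.asarray(background, float)
    shape = np.asarray(shape, float)
    net = counts - background
    estimates = net / shape                         # one area estimate per channel
    variances = (counts + background) / shape**2    # Poisson counting errors
    weights = 1.0 / variances
    area = np.sum(weights * estimates) / np.sum(weights)
    uncertainty = np.sqrt(1.0 / np.sum(weights))
    return area, uncertainty

# Invented example: true area 400 split 25/50/25 over 3 channels, flat background.
area, sigma = weighted_peak_area([150, 250, 150], [50, 50, 50], [0.25, 0.50, 0.25])
```

Channels where the peak fraction is large (and the relative counting error small) dominate the average, which is what reduces the spread relative to plain summation.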

  20. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    Science.gov (United States)

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at confined levels of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods, and the effectiveness of the models was specified and compared. Results: The results show that the computational method is a usable method for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density, meaning that this model cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is eliminated at higher densities, which makes the problem nonlinear. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as ANN is a valuable method for this purpose.

  1. The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network

    Science.gov (United States)

    Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.

    2017-05-01

    The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is the key problem of precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-treated based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by the IGS (International Global Navigation Satellite System Service) to realize the short-term prediction experiment, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, the GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-term SCB prediction and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.

  2. Predicting human splicing branchpoints by combining sequence-derived features and multi-label learning methods.

    Science.gov (United States)

    Zhang, Wen; Zhu, Xiaopeng; Fu, Yu; Tsuji, Junko; Weng, Zhiping

    2017-12-01

    Alternative splicing is a critical process in gene expression that removes introns and joins exons, and splicing branchpoints are indicators of alternative splicing. Wet experiments have identified a great number of human splicing branchpoints, but many branchpoints are still unknown. In order to guide wet experiments, we develop computational methods to predict human splicing branchpoints. Considering the fact that an intron may have multiple branchpoints, we formulate branchpoint prediction as a multi-label learning problem and attempt to predict branchpoint sites from intron sequences. First, we investigate a variety of intron sequence-derived features, such as the sparse profile, dinucleotide profile, position weight matrix profile, Markov motif profile and polypyrimidine tract profile. Second, we consider several multi-label learning methods: partial least squares regression, canonical correlation analysis and regularized canonical correlation analysis, and use them as the basic classification engines. Third, we propose two ensemble learning schemes which integrate different features and different classifiers to build ensemble learning systems for branchpoint prediction. One is a genetic algorithm-based weighted average ensemble method; the other is a logistic regression-based ensemble method. In the computational experiments, the two ensemble learning methods outperform benchmark branchpoint prediction methods and produce high-accuracy results on the benchmark dataset.

  3. SGC method for predicting the standard enthalpy of formation of pure compounds from their molecular structures

    International Nuclear Information System (INIS)

    Albahri, Tareq A.; Aljasmi, Abdulla F.

    2013-01-01

    Highlights: • ΔH°f is predicted from the molecular structure of the compounds alone. • The ANN-SGC model predicts ΔH°f with a correlation coefficient of 0.99. • The ANN-MNLR model predicts ΔH°f with a correlation coefficient of 0.90. • A better definition of the atom-type molecular groups is presented. • The method is better than others in terms of combined simplicity, accuracy and generality. - Abstract: A theoretical method for predicting the standard enthalpy of formation of pure compounds from various chemical families is presented. Back-propagation artificial neural networks were used to investigate several structural group contribution (SGC) methods available in the literature. The networks were used to probe the structural groups that have significant contributions to the overall enthalpy of formation of pure compounds and to arrive at the set of groups that can best represent the enthalpy of formation for about 584 substances. The 51 atom-type structural groups listed provide better definitions of group contributions than others in the literature. The proposed method can predict the standard enthalpy of formation of pure compounds with an AAD of 11.38 kJ/mol and a correlation coefficient of 0.9934 from only their molecular structure. The results are further compared with those of the traditional SGC method based on MNLR as well as other methods in the literature.

  4. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231

  5. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction.

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M

    2013-04-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.

  6. Functional connectivity between somatosensory and motor brain areas predicts individual differences in motor learning by observing.

    Science.gov (United States)

    McGregor, Heather R; Gribble, Paul L

    2017-08-01

    Action observation can facilitate the acquisition of novel motor skills; however, there is considerable individual variability in the extent to which observation promotes motor learning. Here we tested the hypothesis that individual differences in brain function or structure can predict subsequent observation-related gains in motor learning. Subjects underwent an anatomical MRI scan and resting-state fMRI scans to assess preobservation gray matter volume and preobservation resting-state functional connectivity (FC), respectively. On the following day, subjects observed a video of a tutor adapting her reaches to a novel force field. After observation, subjects performed reaches in a force field as a behavioral assessment of gains in motor learning resulting from observation. We found that individual differences in resting-state FC, but not gray matter volume, predicted postobservation gains in motor learning. Preobservation resting-state FC between left primary somatosensory cortex and bilateral dorsal premotor cortex, primary motor cortex, and primary somatosensory cortex and left superior parietal lobule was positively correlated with behavioral measures of postobservation motor learning. Sensory-motor resting-state FC can thus predict the extent to which observation will promote subsequent motor learning. NEW & NOTEWORTHY We show that individual differences in preobservation brain function can predict subsequent observation-related gains in motor learning. Preobservation resting-state functional connectivity within a sensory-motor network may be used as a biomarker for the extent to which observation promotes motor learning. This kind of information may be useful if observation is to be used as a way to boost neuroplasticity and sensory-motor recovery for patients undergoing rehabilitation for diseases that impair movement such as stroke. Copyright © 2017 the American Physiological Society.

  7. Characteristic and Prediction of Carbon Monoxide Concentration using Time Series Analysis in Selected Urban Area in Malaysia

    Directory of Open Access Journals (Sweden)

    Abdul Hamid Hazrul

    2017-01-01

    Full Text Available Carbon monoxide (CO) is a poisonous, colorless, odourless and tasteless gas. The main source of carbon monoxide is motor vehicles, and carbon monoxide levels in residential areas closely reflect the traffic density. Prediction of carbon monoxide is important to give an early warning to sufferers of respiratory problems and can also help the related authorities to be better prepared to prevent and overcome the problem. This research was carried out using secondary data from the Department of Environment Malaysia from 2013 to 2014. The main objectives of this research are to understand the characteristics of CO concentration and to find the most suitable time series model to predict the CO concentration in Bachang, Melaka and Kuala Terengganu. Based on the lowest AIC value and several error measures, the results show that ARMA(1,1) is the most appropriate model to predict the CO concentration level in Bachang, Melaka, while ARMA(1,2) is the most suitable model with the smallest error to predict the CO concentration level for the residential area in Kuala Terengganu.
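An ARMA(1,1) one-step-ahead forecast can be sketched with a simple recursion over the innovations. The coefficients and series below are placeholders, not the values fitted to the Malaysian data:

```python
def arma11_one_step(series, phi, theta, mu=0.0):
    """One-step-ahead forecasts from an ARMA(1,1) model:
       x_t = mu + phi*(x_{t-1} - mu) + eps_t + theta*eps_{t-1},
    with innovations eps_t estimated as the forecast residuals."""
    eps_prev = 0.0
    forecasts = []
    for t, x in enumerate(series):
        x_prev = series[t - 1] if t > 0 else mu
        f = mu + phi * (x_prev - mu) + theta * eps_prev
        forecasts.append(f)
        eps_prev = x - f        # residual becomes the next lagged innovation
    return forecasts

# Hypothetical hourly CO readings (ppm) with placeholder coefficients:
forecasts = arma11_one_step([3.1, 2.8, 3.0, 3.4], phi=0.6, theta=0.3, mu=3.0)
```

In practice the coefficients would come from a fitting package (e.g. maximum likelihood in statsmodels) and model order would be chosen by AIC, as in the study.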

  8. Comparison of selected methods of prediction of wine exports and imports

    Directory of Open Access Journals (Sweden)

    Radka Šperková

    2008-01-01

    Full Text Available For the prediction of future events, there exist a number of methods usable in managerial practice. The decision on which of them should be used in a particular situation depends not only on the amount and quality of input information, but also on subjective managerial judgement. This paper performs a practical application and subsequent comparison of the results of two selected methods: a statistical method and a deductive method. Both methods were used for predicting wine exports and imports in (from) the Czech Republic. The prediction was made in 2003 and related to the economic years 2003/2004, 2004/2005, 2005/2006, and 2006/2007, within which it was compared with the real values of the given indicators. Within the deductive method, the most important factors of the external environment were characterized, including what the authors consider the most important influence: the integration of the Czech Republic into the EU on 1 May 2004. By contrast, the statistical method of time-series analysis did not account for the integration, which follows from its principle: statistics only calculates from past data and cannot incorporate the influence of irregular future conditions such as EU integration. As a result, the prediction based on the deductive method was more optimistic and closer to the real development in the given field.

  9. Effectiveness of the cervical vertebral maturation method to predict postpeak circumpubertal growth of craniofacial structures.

    NARCIS (Netherlands)

    Fudalej, P.S.; Bollen, A.M.

    2010-01-01

    INTRODUCTION: Our aim was to assess the effectiveness of the cervical vertebral maturation (CVM) method to predict circumpubertal craniofacial growth in the postpeak period. METHODS: The CVM stage was determined in 176 subjects (51 adolescent boys and 125 adolescent girls) on cephalograms taken at the

  10. Statistical Analysis of a Method to Predict Drug-Polymer Miscibility

    DEFF Research Database (Denmark)

    Knopp, Matthias Manne; Olesen, Niels Erik; Huang, Yanbin

    2016-01-01

    In this study, a method proposed to predict drug-polymer miscibility from differential scanning calorimetry measurements was subjected to statistical analysis. The method is relatively fast and inexpensive and has gained popularity as a result of the increasing interest in the formulation of drug...... as provided in this study. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association J Pharm Sci....

  11. Deep learning versus traditional machine learning methods for aggregated energy demand prediction

    NARCIS (Netherlands)

    Paterakis, N.G.; Mocanu, E.; Gibescu, M.; Stappers, B.; van Alst, W.

    2018-01-01

    In this paper the more advanced, in comparison with traditional machine learning approaches, deep learning methods are explored with the purpose of accurately predicting the aggregated energy consumption. Despite the fact that a wide range of machine learning methods have been applied to

  12. Some new results on correlation-preserving factor scores prediction methods

    NARCIS (Netherlands)

    Ten Berge, J.M.F.; Krijnen, W.P.; Wansbeek, T.J.; Shapiro, A.

    1999-01-01

    Anderson and Rubin and McDonald have proposed a correlation-preserving method of factor scores prediction which minimizes the trace of a residual covariance matrix for variables. Green has proposed a correlation-preserving method which minimizes the trace of a residual covariance matrix for factors.

  13. A Simple Microsoft Excel Method to Predict Antibiotic Outbreaks and Underutilization.

    Science.gov (United States)

    Miglis, Cristina; Rhodes, Nathaniel J; Avedissian, Sean N; Zembower, Teresa R; Postelnick, Michael; Wunderink, Richard G; Sutton, Sarah H; Scheetz, Marc H

    2017-07-01

    Benchmarking strategies are needed to promote the appropriate use of antibiotics. We have adapted a simple regressive method in Microsoft Excel that is easily implementable and creates predictive indices. This method trends consumption over time and can identify periods of over- and underuse at the hospital level. Infect Control Hosp Epidemiol 2017;38:860-862.
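A minimal sketch of the regressive trending idea: the article works in Microsoft Excel, but the same SLOPE/INTERCEPT-style fit can be reimplemented in a few lines. The consumption figures below are hypothetical, not hospital data from the study:

```python
import statistics

def linear_trend(y):
    """Least-squares trend over consecutive reporting periods,
    equivalent to Excel's SLOPE() and INTERCEPT() on (period, usage)."""
    x = list(range(len(y)))
    xbar, ybar = statistics.fmean(x), statistics.fmean(y)
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    intercept = ybar - slope * xbar
    return slope, intercept

# Hypothetical antibiotic consumption (DDDs per 1,000 patient-days):
usage = [100, 104, 98, 102, 130]
slope, intercept = linear_trend(usage)
predicted = [intercept + slope * t for t in range(len(usage))]
# Large positive residuals flag potential overuse; large negative, underuse.
residuals = [obs - pred for obs, pred in zip(usage, predicted)]
```

A hospital would then flag periods whose residuals exceed some threshold (e.g. a multiple of the residual standard deviation) as candidate outbreaks of over- or underuse.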

  14. State of the art and trends of radiometric methods for measuring the mass per unit area

    International Nuclear Information System (INIS)

    Bernhardt, R.

    1984-01-01

    The determination of the mass per unit area by means of transmission or backscattering methods is one of the traditional radioisotope applications. Microelectronics have contributed substantially to the noticeable progress achieved in the development of radiometric instruments for mass per unit area measurements. The use of microcomputers led both to a reliable solution of the main problem of processing the measured data - the correlation of the mass per unit area value with the detector signal under nonlinear calibration conditions - and to a considerable increase in the efficiency of the measuring equipment.
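For transmission gauges, the nonlinear calibration mentioned here is typically an exponential attenuation law. Assuming Beer-Lambert behavior, I = I0·exp(-μ·m), the mass per unit area m can be recovered from the detector signal as follows (the symbols and numeric values are illustrative, not from the paper):

```python
import math

def mass_per_unit_area(I, I0, mu):
    """Invert the exponential transmission law I = I0 * exp(-mu * m)
    to recover the mass per unit area m (e.g. in g/cm^2).

    I:  transmitted intensity measured by the detector
    I0: unattenuated intensity (empty-gauge reading)
    mu: mass attenuation coefficient (cm^2/g) from calibration
    """
    return math.log(I0 / I) / mu

# Illustrative reading: signal attenuated to ~37% of I0 with mu = 0.2 cm^2/g.
m = mass_per_unit_area(0.37, 1.0, 0.2)
```

This inversion is exactly the kind of nonlinear detector-signal-to-mass correlation that the microcomputer in such an instrument evaluates in real time.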

  15. Reliable B cell epitope predictions: impacts of method development and improved benchmarking

    DEFF Research Database (Denmark)

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole

    2012-01-01

    biomedical applications such as rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping...... evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that, given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performance, suggesting these tools to be a powerful asset in rational epitope discovery. The updated version

  16. A computational method to predict fluid-structure interaction of pressure relief valves

    Energy Technology Data Exchange (ETDEWEB)

    Kang, S. K.; Lee, D. H.; Park, S. K.; Hong, S. R. [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    An effective CFD (computational fluid dynamics) method to predict important performance parameters, such as blowdown and chattering, for pressure relief valves in NPPs is provided in the present study. To calculate the valve motion, a 6DOF (six degrees of freedom) model is used. A chimera overset grid method is utilized in this study to eliminate the grid-remeshing problem when the disk moves. Further, CFD-Fastran, developed by CFD-RC for compressible flow analysis, is applied to a 1' safety valve. The prediction results confirm the applicability of the presented method.

  17. Why choose Random Forest to predict rare species distribution with few samples in large undersampled areas? Three Asian crane species models provide supporting evidence

    Directory of Open Access Journals (Sweden)

    Chunrong Mi

    2017-01-01

    Full Text Available Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the above four models. Commonly used model performance metrics (Area under ROC (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation, (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid

  18. Why choose Random Forest to predict rare species distribution with few samples in large undersampled areas? Three Asian crane species models provide supporting evidence.

    Science.gov (United States)

    Mi, Chunrong; Huettmann, Falk; Guo, Yumin; Han, Xuesong; Wen, Lijia

    2017-01-01

    Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the above four models. Commonly used model performance metrics (area under the ROC curve (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found that Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation, (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid
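
    The ensemble step this abstract describes (averaging the predicted probabilities of several models, then scoring with AUC) is simple enough to sketch. Below is a minimal, self-contained illustration with fabricated numbers; the four model outputs, the site labels and the rank-based AUC helper are stand-ins, not the study's data or code.

    ```python
    # Sketch of the ensemble step: average the habitat-suitability
    # probabilities of several SDMs into one forecast, then score it with AUC.
    # The four "model" outputs below are toy numbers, not real SDM predictions.

    def ensemble_mean(prob_lists):
        """Average per-site probabilities across models."""
        return [sum(ps) / len(ps) for ps in zip(*prob_lists)]

    def auc(scores, labels):
        """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Toy per-site probabilities from four hypothetical models
    # (stand-ins for TreeNet, Random Forest, CART and Maxent outputs).
    model_probs = [
        [0.9, 0.8, 0.3, 0.2],
        [0.8, 0.7, 0.4, 0.1],
        [0.7, 0.9, 0.2, 0.3],
        [0.9, 0.6, 0.3, 0.2],
    ]
    labels = [1, 1, 0, 0]      # 1 = presence, 0 = absence at each site

    ens = ensemble_mean(model_probs)
    print(ens)                 # averaged suitability per site
    print(auc(ens, labels))    # 1.0 on this toy data: perfect ranking
    ```

    On real data the labels would come from the independent testing sets (satellite tracking and literature records), and the averaged probabilities from the fitted models.
    
    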

  19. Double Separation Method for Translation of the Infrared Information into a Visible Area

    Directory of Open Access Journals (Sweden)

    Ivana Žiljak

    2009-06-01

    Full Text Available Information visualization refers to the wavelength area ranging from 400 to 700 nm. Areas in lower wavelengths, ranging from 100 to 400 nm, are translated into the visual area with the goal of protecting information so that it is visible only with instruments adapted for the ultraviolet area. Our recent research work refers to the infrared wavelength areas above the visible spectrum, up to 1000 nm. The scientific contribution of this paper is in setting out the double separation method for printing with CMYK printing inks, with the goal of producing graphic information detectable in the infrared area only. An algorithm has been created for making visual basics in the overall visible spectrum containing material that responds in the infrared section. This allows planning of areas in all coloring types for one and the same document that contains a secure piece of information. The system is based on a double transition transformation, recognizing the visible RGB information as CMYK in the same document. Secure information is recognized with the help of instruments in the set wavelength range. Most of the experiments have been carried out by analyzing the same set of RGB records. Each sample in the set was a test unit coming from another source containing different IR components. Thus an innovative method of color mixing has been established in which colors appear in daylight and, separately, according to IR light programming. A new IR cryptography is proposed, as shown in the experimental work.

  20. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in prediction of time series is proposed. Apart from the conventional criterion of minimizing RMS error, the method also minimizes the error on the distribution of singularities, evaluated through local Hoelder estimates and their probability density spectrum. Predictions of two simulated and one real time series have been made using kernel principal component regression (KPCR), and the model parameters of KPCR have been selected employing the proposed as well as the conventional method. Results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)
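
    The selection idea above — score each candidate model by prediction error plus a penalty for failing to reproduce the roughness of the series — can be sketched as follows. This is a hedged toy: the Hoelder-spectrum machinery of the paper is replaced by a crude stand-in (the distribution of first differences), and the "model" is a trailing moving average rather than KPCR.

    ```python
    # Hedged sketch: pick model hyperparameters not only by RMS prediction
    # error but also by how well the prediction reproduces the *roughness*
    # of the series. The singularity-spectrum error of the paper is replaced
    # here by a quantile match on absolute first differences.

    import math

    def rmse(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

    def roughness_mismatch(a, b):
        """Stand-in for the singularity-spectrum error: compare sorted
        absolute first differences of the two series."""
        da = sorted(abs(x - y) for x, y in zip(a[1:], a))
        db = sorted(abs(x - y) for x, y in zip(b[1:], b))
        return sum(abs(x - y) for x, y in zip(da, db)) / len(da)

    def moving_average(series, k):
        """Toy 'model': k-point trailing moving-average one-step predictor."""
        return [sum(series[max(0, i - k):i]) / max(1, min(i, k)) if i else series[0]
                for i in range(len(series))]

    # A smooth oscillation with a sharp alternating component.
    series = [math.sin(0.3 * i) + 0.05 * ((-1) ** i) for i in range(200)]

    best = min(
        range(1, 11),
        key=lambda k: rmse(moving_average(series, k), series)
                      + roughness_mismatch(moving_average(series, k), series),
    )
    print("selected window:", best)
    ```

    The combined criterion favors predictors that track sharp changes, which is the qualitative behavior the letter reports for its Hoelder-based criterion.
    
    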

  1. Improving Allergen Prediction in Main Crops Using a Weighted Integrative Method.

    Science.gov (United States)

    Li, Jing; Wang, Jing; Li, Jing

    2017-12-01

    As a public health problem, food allergy is frequently caused by food allergy proteins, which trigger a type-I hypersensitivity reaction in the immune system of atopic individuals. The food allergens in our daily lives come mainly from crops, including rice, wheat, soybean and maize. However, allergens in these main crops are far from fully uncovered. Although some bioinformatics tools or methods for predicting the potential allergenicity of proteins have been proposed, each method has its limitations. In this paper, we built a novel algorithm, PREALw, which integrates PREAL, the FAO/WHO criteria and a motif-based method through a weighted average score, to combine the advantages of the different methods. Our results illustrate that PREALw performs significantly better in allergen prediction for these crops. This integrative allergen prediction algorithm could be useful for critical food safety matters. PREALw can be accessed at http://lilab.life.sjtu.edu.cn:8080/prealw.
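
    The weighted-average integration described above is easy to illustrate. The sketch below is a hypothetical stand-in: the component scores, the weights, and the decision threshold are all invented for illustration and are not the published PREALw parameters.

    ```python
    # Hedged sketch of a weighted integrative predictor: combine the scores
    # of three component allergenicity methods by a weighted average.
    # Scores and weights below are made up; the real PREALw weights differ.

    def weighted_score(scores, weights):
        """Weighted average of per-method scores in [0, 1]."""
        assert len(scores) == len(weights)
        return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

    # Scores from three hypothetical component methods for one protein:
    # (PREAL-like ML score, FAO/WHO sequence-identity criterion, motif match)
    scores = [0.82, 1.0, 0.40]
    weights = [0.5, 0.3, 0.2]   # assumed weights, e.g. tuned on validation data

    s = weighted_score(scores, weights)
    print(round(s, 3))
    label = "allergen" if s >= 0.5 else "non-allergen"
    print(label)
    ```

    Weighting lets a strong, well-validated component dominate while weaker signals still contribute, which is the stated motivation for integrating the three methods.
    
    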

  2. Incrementally Detecting Change Types of Spatial Area Object: A Hierarchical Matching Method Considering Change Process

    Directory of Open Access Journals (Sweden)

    Yanhui Wang

    2018-01-01

    Full Text Available Detecting and extracting the change types of spatial area objects can track area objects' spatiotemporal change patterns and provide a change backtracking mechanism for incrementally updating spatial datasets. To address the high complexity of existing detection methods, the high redundancy rate of detection factors, and the low degree of automation during the incremental update process, we consider the change process of area objects in an integrated way and propose a hierarchical matching method to detect the nine types of changes of area objects, while minimizing the complexity of the algorithm and the redundancy rate of detection factors. We illustrate in detail the identification, extraction, and database entry of change types, and how we achieve a close connection and organic coupling of incremental information extraction and object type-of-change detection so as to characterize the whole change process. The experimental results show that this method can successfully detect incremental information about area objects in practical applications, with an overall accuracy above 90%, much higher than the existing weighted matching method, making it feasible and applicable. It helps establish the correspondence between new-version and old-version objects, and facilitates linked update processing and quality control of spatial data.
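
    One core ingredient of such type-of-change detection — classifying the relation between an old and a new footprint from area-overlap ratios — can be sketched with axis-aligned rectangles. The thresholds and type names below are illustrative only; they are not the paper's nine-type taxonomy, and real footprints would be polygons.

    ```python
    # Hedged sketch: classify old-vs-new footprint relations by overlap.
    # Rectangles are (x1, y1, x2, y2); thresholds and names are illustrative.

    def area(r):
        (x1, y1, x2, y2) = r
        return max(0.0, x2 - x1) * max(0.0, y2 - y1)

    def intersection(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

    def change_type(old, new, tol=0.95):
        """Very coarse type-of-change decision from overlap ratios."""
        inter = intersection(old, new)
        if inter is None:
            return "disappear/appear"      # no spatial relation
        ov = area(inter)
        r_old, r_new = ov / area(old), ov / area(new)
        if r_old >= tol and r_new >= tol:
            return "unchanged"
        if r_old >= tol:
            return "expansion"             # old almost fully inside new
        if r_new >= tol:
            return "contraction"           # new almost fully inside old
        return "modified"

    print(change_type((0, 0, 10, 10), (0, 0, 10, 10)))    # unchanged
    print(change_type((0, 0, 10, 10), (0, 0, 14, 10)))    # expansion
    print(change_type((0, 0, 10, 10), (20, 20, 30, 30)))  # disappear/appear
    ```

    A hierarchical method would apply such geometric tests in stages, pruning candidate matches before finer attribute comparison, which keeps the detection-factor redundancy low.
    
    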

  3. Automaticity and localisation of concurrents predicts colour area activity in grapheme-colour synaesthesia.

    Science.gov (United States)

    Gould van Praag, Cassandra D; Garfinkel, Sarah; Ward, Jamie; Bor, Daniel; Seth, Anil K

    2016-07-29

    In grapheme-colour synaesthesia (GCS), the presentation of letters or numbers induces an additional 'concurrent' experience of colour. Early functional MRI (fMRI) investigations of GCS reported activation in colour-selective area V4 during the concurrent experience. However, others have failed to replicate this key finding. We reasoned that individual differences in synaesthetic phenomenology might explain this inconsistency in the literature. To test this hypothesis, we examined fMRI BOLD responses in a group of grapheme-colour synaesthetes (n=20) and matched controls (n=20) while characterising the individual phenomenology of the synaesthetes along dimensions of 'automaticity' and 'localisation'. We used an independent functional localiser to identify colour-selective areas in both groups. Activations in these areas were then assessed during achromatic synaesthesia-inducing and non-inducing conditions; we also explored whole-brain activations, where we sought to replicate the existing literature regarding synaesthesia effects. Controls showed no significant activations in the contrast of inducing > non-inducing synaesthetic stimuli, in colour-selective ROIs or at the whole-brain level. In the synaesthete group, we correlated activation within colour-selective ROIs with individual differences in phenomenology using the Coloured Letters and Numbers (CLaN) questionnaire, which measures, amongst other attributes, the subjective automaticity/attention in synaesthetic concurrents and their spatial localisation. Supporting our hypothesis, we found significant correlations between individual measures of synaesthetic phenomenology and BOLD responses in colour-selective areas when contrasting inducing against non-inducing stimuli. Specifically, left-hemisphere colour area responses were stronger for synaesthetes scoring high on phenomenological localisation and automaticity/attention, while right-hemisphere colour area responses showed a relationship with localisation

  4. Weibull statistics effective area and volume in the ball-on-ring testing method

    DEFF Research Database (Denmark)

    Frandsen, Henrik Lund

    2014-01-01

    The ball-on-ring method is, together with other biaxial bending methods, often used for measuring the strength of plates of brittle materials, because machining defects are remote from the high stresses causing the failure of the specimens. In order to scale the measured Weibull strength... to geometries relevant for the application of the material, the effective area or volume of the test specimen must be evaluated. In this work, analytical expressions for the effective area and volume of the ball-on-ring test specimen are derived. In the derivation, the multiaxial stress field has been accounted...
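
    The effective-area concept behind this record can be illustrated numerically. In Weibull statistics with failure probability P_f = 1 - exp(-(sigma_max/sigma_0)^m * A_eff), the effective area is the integral of (sigma/sigma_max)^m over the stressed surface. The sketch below uses a made-up linearly decaying axisymmetric stress profile, not the actual ball-on-ring stress field, which is what the paper derives analytically.

    ```python
    # Hedged sketch of the Weibull effective-area idea:
    # A_eff = integral of (sigma(x)/sigma_max)^m over the surface.
    # The decaying stress profile is a toy, not the ball-on-ring field.

    import math

    def effective_area(stress, r_max, m, n=2000):
        """A_eff for an axisymmetric surface stress field sigma(r), r in
        [0, r_max], by midpoint integration of (sigma/sigma_max)^m * 2*pi*r dr."""
        sigma_max = max(stress(i * r_max / n) for i in range(n + 1))
        dr = r_max / n
        total = 0.0
        for i in range(n):
            r = (i + 0.5) * dr
            total += (stress(r) / sigma_max) ** m * 2 * math.pi * r * dr
        return total

    # Uniform stress -> A_eff equals the full disc area, for any m.
    uniform = lambda r: 100.0
    a_uni = effective_area(uniform, r_max=1.0, m=10)
    print(a_uni, math.pi)  # ~pi

    # Linearly decaying field: a higher Weibull modulus m gives a smaller
    # A_eff, because only the most highly stressed region matters.
    decay = lambda r: 100.0 * (1.0 - 0.9 * r)
    print(effective_area(decay, 1.0, m=5) > effective_area(decay, 1.0, m=20))
    ```

    This is why effective area, not nominal specimen area, must be used when scaling ball-on-ring strengths to component geometries.
    
    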

  5. Predictive equations for dimensions and leaf area of coastal Southern California street trees

    Science.gov (United States)

    P.J. Peper; E.G. McPherson; S.M. Mori

    2001-01-01

    Tree height, crown height, crown width, diameter at breast height (dbh), and leaf area were measured for 16 species of commonly planted street trees in the coastal southern California city of Santa Monica, USA. The randomly sampled trees were planted from 1 to 44 years ago. Using number of years after planting or dbh as explanatory variables, mean values of dbh, tree...
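
    The predictive equations such street-tree studies report are typically power laws fitted in log-log space (e.g. leaf area as a function of dbh). The sketch below shows that fitting procedure on fabricated numbers; the sample values are invented for illustration and are not the Santa Monica measurements.

    ```python
    # Hedged sketch: fit leaf_area = a * dbh^b by linear least squares on
    # log-transformed data. All sample numbers below are fabricated.

    import math

    def fit_power_law(x, y):
        """Fit y = a * x^b by linear least squares in log-log space."""
        lx = [math.log(v) for v in x]
        ly = [math.log(v) for v in y]
        n = len(x)
        mx, my = sum(lx) / n, sum(ly) / n
        b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
            sum((u - mx) ** 2 for u in lx)
        a = math.exp(my - b * mx)
        return a, b

    dbh = [5, 10, 15, 20, 30, 40]           # cm, fabricated
    leaf_area = [8, 30, 65, 110, 240, 400]  # m^2, fabricated

    a, b = fit_power_law(dbh, leaf_area)
    print(round(a, 3), round(b, 3))
    print(round(a * 25 ** b, 1))  # predicted leaf area for a 25 cm dbh tree
    ```

    With years-since-planting as the explanatory variable instead of dbh, the same fitting machinery applies; the studies' contribution is the measured coefficients per species.
    
    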

  6. Equations for predicting diameter, height, crown width, and leaf area of San Joaquin Valley street trees

    Science.gov (United States)

    P.J. Peper; E.G. McPherson; S.M. Mori

    2001-01-01

    Although the modeling of energy-use reduction, air pollution uptake, rainfall interception, and microclimate modification associated with urban trees depends on data relating diameter at breast height (dbh), crown height, crown diameter, and leaf area to tree age or dbh, scant information is available for common municipal tree species. In this study, tree height,...

  7. Climate Prediction Center (CPC)Area-averaged 850-hPa Western Pacific Trade Wind Anomalies

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is one of the CPC's Monthly Atmospheric and SST Indices. It is the 850-hPa trade wind anomalies averaged over the area 5°N-5°S, 135°E-180° (western equatorial...

  8. Climate Prediction Center (CPC)Area-averaged 850-hPa Eastern Pacific Trade Wind Anomalies

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is one of the CPC's Monthly Atmospheric and SST Indices. It is the 850-hPa trade wind anomalies averaged over the area 5°N-5°S, 135°W-120°W (eastern...

  9. Climate Prediction Center (CPC)Area-averaged 850-hPa Central Pacific Trade Wind Anomalies

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is one of the CPC's Monthly Atmospheric and SST Indices. It is the 850-hPa trade wind anomalies averaged over the area 5°N-5°S, 175°W-140°W (central...

  10. Olson method for locating and calculating the extent of transmural ischemic areas at risk of infarction.

    Science.gov (United States)

    Olson, Charles W; Wagner, Galen S; Terkelsen, Christian Juhl; Stickney, Ronald; Lim, Tobin; Pahlm, Olle; Estes, E Harvey

    2014-01-01

    The purpose of this study is to present a new and improved method for translating the electrocardiographic changes of acute myocardial ischemia into a display which reflects the location and extent of the ischemic area and the associated culprit coronary artery. This method could be automated to present a graphic image of the ischemic area in a manner understandable by all levels of caregivers, from emergency transport personnel to the consulting cardiologist. Current methods for the ECG diagnosis of ST-elevation myocardial infarction (STEMI) are criteria-driven, complex, and beyond the interpretive capability of many caregivers. New methods are needed to accurately diagnose the presence of acute transmural myocardial ischemia in order to shorten a patient's clinical "door to balloon" time. The proposed new method could potentially provide the information needed to accomplish this objective. The new method improves the precision of diagnosis and quantification of ischemia by normalizing the ST-segment inputs from the standard 12-lead ECG and transforming these into a three-dimensional vector representation of the ischemia at the electrical center of the heart. The myocardial areas likely to be involved in this ischemia are separately analyzed to assess the probability that they contributed to the event. The source of the ischemia is revealed as a specific region of the heart, along with the likely location of the associated culprit coronary artery. Seventy 12-lead ECGs from subjects with known single-artery occlusion in one of the three main coronary arteries were selected to test this new method. Graphic plots of the distribution of ischemia as indicated by the method are consistent with the known occlusions. The analysis of the distribution of ischemic areas in the myocardium reveals that the relationships between leads with either ST elevation or ST depression provide critical information, improving on the current method. Copyright © 2014 Elsevier Inc. All rights
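
    The vector idea at the heart of such methods — combining per-lead ST deviations into a single resultant vector — can be sketched in the frontal plane, using leads I and aVF as approximately orthogonal axes. This is a textbook simplification for illustration only; the paper builds a full three-dimensional vector at the heart's electrical center from all normalized 12-lead inputs.

    ```python
    # Hedged sketch: resultant frontal-plane ST vector from two limb leads.
    # Lead I is taken as the horizontal axis, aVF as the vertical (inferior)
    # axis; this is a standard simplification, not the paper's transform.

    import math

    def frontal_st_vector(st_I, st_aVF):
        """Magnitude (same units as input, e.g. mm) and axis (degrees,
        0 deg = leftward along lead I, positive toward +aVF/inferior)."""
        mag = math.hypot(st_I, st_aVF)
        axis = math.degrees(math.atan2(st_aVF, st_I))
        return mag, axis

    # Toy ST deviations (mm): the resultant points mostly inferiorly,
    # as might be seen with an inferior ischemic region.
    mag, axis = frontal_st_vector(st_I=0.5, st_aVF=2.0)
    print(round(mag, 2), round(axis, 1))
    ```

    Mapping such a resultant vector onto myocardial regions is what lets the method display the likely ischemic area and infer the culprit artery.
    
    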

  11. Prediction of protein post-translational modifications: main trends and methods

    Science.gov (United States)

    Sobolev, B. N.; Veselovsky, A. V.; Poroikov, V. V.

    2014-02-01

    The review summarizes the main trends in the development of methods for the prediction of protein post-translational modifications (PTMs) by considering the three most common types of PTMs: phosphorylation, acetylation and glycosylation. Considerable attention is given to general characteristics of the regulatory interactions associated with PTMs. Different approaches to the prediction of PTMs are analyzed. Most of the methods are based only on the analysis of the neighbouring sequence environment of modification sites. The related software is characterized by relatively low accuracy of PTM predictions, which may be due both to the incompleteness of training data and to the features of PTM regulation. Advantages and limitations of the phylogenetic approach are considered. The prediction of PTMs using data on regulatory interactions, including the modular organization of interacting proteins, is a promising field, provided that more carefully selected training data are used. The bibliography includes 145 references.
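
    The window-based approach the review says most predictors share — scoring each candidate site from the residues in a fixed window around it — can be sketched as follows. The residue weights, window size and threshold below are invented toy values, not a trained model from any of the reviewed tools.

    ```python
    # Hedged sketch of window-based PTM site scoring: each candidate S/T/Y
    # is scored from its sequence neighbourhood. Weights are toy values.

    # Toy position-independent log-odds weights for a phosphosite context:
    # proline-directed kinases favour nearby P; basic residues (R/K) help.
    WEIGHTS = {"P": 1.2, "R": 0.6, "K": 0.6, "D": -0.4, "E": -0.4}

    def score_site(seq, i, half_window=3):
        """Score residue i (must be S/T/Y) from its +/- half_window context."""
        if seq[i] not in "STY":
            return None
        window = seq[max(0, i - half_window): i] + seq[i + 1: i + 1 + half_window]
        return sum(WEIGHTS.get(aa, 0.0) for aa in window)

    seq = "MKRSPTEDLS"
    sites = {i: score_site(seq, i) for i, aa in enumerate(seq) if aa in "STY"}
    print(sites)
    predicted = [i for i, s in sites.items() if s > 1.0]  # toy threshold
    print(predicted)
    ```

    The review's point is precisely that such local-context scoring, however weighted, misses regulatory information (kinase-substrate interactions, modular domain organization), which motivates the interaction-aware approaches it surveys.
    
    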

  12. Application of Method of Variation to Analyze and Predict Human Induced Modifications of Water Resource Systems

    Science.gov (United States)

    Dessu, S. B.; Melesse, A. M.; Mahadev, B.; McClain, M.

    2010-12-01

    Water resource systems have often relied on gravitational surface and subsurface flows because of their practicality in hydrological modeling and prediction. Activities such as inter- and intra-basin water transfer, the use of small pumps, and the construction of micro-ponds challenge the tradition of natural rivers as the water resource management unit. Precipitation, on the contrary, is barely affected by topography, and plot-level harvesting in wet regions can be more manageable than diverting from rivers. It is therefore instructive to attend to systems where precipitation drives the dynamics while the internal mechanics comprises the spectrum of human activity and decision in a network of plots. The trade-in volume and path of harvested precipitation depend on the water balance, the energy balance and the kinematics of supply and demand. The method of variation can be used to understand and predict the implications of local excess-precipitation harvest and exchange for the natural water system. A system model was developed using the variational form of the Euler-Bernoulli equation for the Kenyan Mara River basin. Satellite-derived digital elevation models, precipitation estimates, and surface properties such as fractional impervious surface area are used to estimate the available water resource. Four management conditions are imposed in the model: gravitational flow, open water extraction, and high water-use investment upstream and downstream, respectively. According to the model, the first management condition maintains the basin status quo, while open-source management could induce externalities. The high water market upstream in the third management condition offers more than 50% of the basin-wide total revenue to the upper third section of the basin and thus may promote more harvesting. The open-source and upstream exploitation conditions suggest a potential drop in water availability downstream. The model exposed the latent potential of the economic gradient to reconfigure the flow network along the direction where the

  13. EP BASED PSO METHOD FOR SOLVING PROFIT BASED MULTI AREA UNIT COMMITMENT PROBLEM