WorldWideScience

Sample records for model predictions compared

  1. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing... on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...

  2. Comparing Sediment Yield Predictions from Different Hydrologic Modeling Schemes

    Science.gov (United States)

    Dahl, T. A.; Kendall, A. D.; Hyndman, D. W.

    2015-12-01

    Sediment yield, or the delivery of sediment from the landscape to a river, is a difficult process to accurately model. It is primarily a function of hydrology and climate, but influenced by landcover and the underlying soils. These additional factors make it much more difficult to accurately model than water flow alone. It is not intuitive what impact different hydrologic modeling schemes may have on the prediction of sediment yield. Here, two implementations of the Modified Universal Soil Loss Equation (MUSLE) are compared to examine the effects of hydrologic model choice. Both the Soil and Water Assessment Tool (SWAT) and the Landscape Hydrology Model (LHM) utilize the MUSLE for calculating sediment yield. SWAT is a lumped parameter hydrologic model developed by the USDA, which is commonly used for predicting sediment yield. LHM is a fully distributed hydrologic model developed primarily for integrated surface and groundwater studies at the watershed to regional scale. SWAT and LHM models were developed and tested for two large, adjacent watersheds in the Great Lakes region: the Maumee River and the St. Joseph River. The models were run using a variety of single model and ensemble downscaled climate change scenarios from the Coupled Model Intercomparison Project Phase 5 (CMIP5). The initial results of this comparison are discussed here.

  3. Dinucleotide controlled null models for comparative RNA gene prediction

    Directory of Open Access Journals (Sweden)

    Gesell Tanja

    2008-05-01

    Background: Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is a need for dinucleotide-preserving control strategies to assess the significance of such predictions. While randomization algorithms for single sequences have existed for many years, the problem has remained challenging for multiple alignments, and no algorithm is currently available. Results: We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance-based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm, giving a new variant of a thermodynamic structure-based RNA gene finding program that is not biased by the dinucleotide content. Conclusion: SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine-learning-based programs, or as a standalone RNA gene finding program. Other applications in comparative genomics that require...
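    The property such a null model must preserve is easy to verify independently of SISSIz itself: a dinucleotide-controlled shuffle leaves the overlapping dinucleotide counts of a sequence unchanged (for alignments, approximately so on average). A minimal checker, as a sketch:

```python
from collections import Counter

def dinucleotide_counts(seq):
    """Overlapping dinucleotide counts of a sequence; a dinucleotide-
    preserving randomization must leave these unchanged."""
    return Counter(seq[i:i + 2] for i in range(len(seq) - 1))

def preserves_dinucleotides(original, shuffled):
    """True if the shuffle kept every dinucleotide's count intact."""
    return dinucleotide_counts(original) == dinucleotide_counts(shuffled)
```

A mononucleotide shuffle typically fails this check, which is exactly the bias Babak et al. identified in thermodynamic-model-based predictors.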

  4. Locating Pleistocene Refugia: Comparing Phylogeographic and Ecological Niche Model Predictions

    Science.gov (United States)

    2007-07-01

    ...research groups [21,42–46], support the idea that the bioclimatic variables used in our ENM predictions (see Materials and Methods) are of importance to the... calibrating the downscaled LGM climate data to actual observed climate conditions. ENMs were based on the 19 bioclimatic variables in the WorldClim...

  5. COMPARING FINANCIAL DISTRESS PREDICTION MODELS BEFORE AND DURING RECESSION

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2011-02-01

    The purpose of this paper is to design three separate financial distress prediction models that track changes in the relative importance of financial ratios across three consecutive years. The models were based on financial data from 2000 privately-owned small and medium-sized enterprises in Croatia from 2006 to 2009, and were developed by means of logistic regression. Macroeconomic conditions as well as market dynamics changed over this period: financial ratios that were less important in one period became more important in the next. The composition of the model built on 2006 data changed in the following years, indicating which financial ratios become more important during an economic downturn. This also helps to explain the behavior of small and medium-sized enterprises in the pre-recession and recession periods.
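    The scoring step of such a model is the standard logistic form: a distress probability from a weighted sum of financial ratios. A sketch follows; the coefficient values are placeholders for illustration, not the paper's estimates:

```python
from math import exp

def distress_probability(ratios, coefficients, intercept):
    """Logistic-regression score: P(distress) = 1 / (1 + e^-z),
    where z = b0 + sum(b_i * x_i) over the firm's financial ratios."""
    z = intercept + sum(b * x for b, x in zip(coefficients, ratios))
    return 1.0 / (1.0 + exp(-z))

# Hypothetical two-ratio model (liquidity, leverage); coefficients invented.
p = distress_probability([1.4, 0.6], coefficients=[-2.0, 3.0], intercept=0.1)
```

Tracking which coefficients dominate in the 2006, 2007 and 2008 models is what reveals the shift in ratio importance the abstract describes.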

  6. Predicting the fate of biodiversity using species' distribution models: enhancing model comparability and repeatability.

    Directory of Open Access Journals (Sweden)

    Genoveva Rodríguez-Castañeda

    Species distribution modeling (SDM) is an increasingly important tool to predict the geographic distribution of species. Even though many problems associated with this method have been highlighted and solutions have been proposed, little has been done to increase comparability among studies. We reviewed recent publications applying SDMs and found that seventy-nine percent failed to report methods that ensure comparability among studies, such as disclosing the maximum probability range produced by the models and reporting on the number of species occurrences used. We modeled six species of Falco from northern Europe and demonstrate that model results are altered by (1) spatial bias in species' occurrence data, (2) differences in the geographic extent of the environmental data, and (3) the effects of transformation of model output to presence/absence data when applying thresholds. Depending on the modeling decisions, forecasts of the future geographic distribution of Falco ranged from range contraction in 80% of the species to no net loss in any species, with the best model predicting no net loss of habitat in Northern Europe. The fact that predictions of range changes in response to climate change in published studies may be influenced by decisions in the modeling process seriously hampers the possibility of making sound management recommendations. Thus, each of the decisions made in generating SDMs should be reported and evaluated to ensure conclusions and policies are based on the biology and ecology of the species being modeled.

  7. Predicting the fate of biodiversity using species' distribution models: enhancing model comparability and repeatability.

    Science.gov (United States)

    Rodríguez-Castañeda, Genoveva; Hof, Anouschka R; Jansson, Roland; Harding, Larisa E

    2012-01-01

    Species distribution modeling (SDM) is an increasingly important tool to predict the geographic distribution of species. Even though many problems associated with this method have been highlighted and solutions have been proposed, little has been done to increase comparability among studies. We reviewed recent publications applying SDMs and found that seventy-nine percent failed to report methods that ensure comparability among studies, such as disclosing the maximum probability range produced by the models and reporting on the number of species occurrences used. We modeled six species of Falco from northern Europe and demonstrate that model results are altered by (1) spatial bias in species' occurrence data, (2) differences in the geographic extent of the environmental data, and (3) the effects of transformation of model output to presence/absence data when applying thresholds. Depending on the modeling decisions, forecasts of the future geographic distribution of Falco ranged from range contraction in 80% of the species to no net loss in any species, with the best model predicting no net loss of habitat in Northern Europe. The fact that predictions of range changes in response to climate change in published studies may be influenced by decisions in the modeling process seriously hampers the possibility of making sound management recommendations. Thus, each of the decisions made in generating SDMs should be reported and evaluated to ensure conclusions and policies are based on the biology and ecology of the species being modeled.
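    The third effect, the threshold applied when binarising continuous SDM output, can change a range forecast on its own. A toy illustration with invented suitability values:

```python
def predicted_range_size(suitability, threshold):
    """Number of grid cells predicted 'present' after thresholding a
    continuous SDM suitability surface to presence/absence."""
    return sum(1 for s in suitability if s >= threshold)

# Hypothetical suitability scores for six grid cells under a future climate.
cells = [0.1, 0.35, 0.4, 0.55, 0.7, 0.9]
lenient = predicted_range_size(cells, 0.3)  # lenient threshold keeps 5 cells
strict = predicted_range_size(cells, 0.5)   # stricter threshold keeps 3 cells
```

The same model output thus supports either a modest or a severe range-contraction headline, which is why the abstract insists the thresholding decision be reported.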

  8. Comparing predictions made by a prediction model, clinical score, and physicians: pediatric asthma exacerbations in the emergency department.

    Science.gov (United States)

    Farion, K J; Wilk, S; Michalowski, W; O'Sullivan, D; Sayyad-Shirabad, J

    2013-01-01

    Asthma exacerbations are one of the most common medical reasons for children to be brought to the hospital emergency department (ED). Various prediction models have been proposed to support diagnosis of exacerbations and evaluation of their severity. First, to evaluate prediction models constructed from data using machine learning techniques and to select the best performing model. Second, to compare predictions from the selected model with predictions from the Pediatric Respiratory Assessment Measure (PRAM) score, and predictions made by ED physicians. A two-phase study conducted in the ED of an academic pediatric hospital. In phase 1, data collected prospectively using paper forms were used to construct and evaluate five prediction models, and the best performing model was selected. In phase 2, data collected prospectively using a mobile system were used to compare the predictions of the selected prediction model with those from PRAM and ED physicians. Area under the receiver operating characteristic curve and accuracy in phase 1; accuracy, sensitivity, specificity, positive and negative predictive values in phase 2. In phase 1, prediction models were derived from a data set of 240 patients and evaluated using 10-fold cross validation. A naive Bayes (NB) model demonstrated the best performance and was selected for phase 2. Evaluation in phase 2 was conducted on data from 82 patients. Predictions made by the NB model were less accurate than the PRAM score and physicians (accuracy of 70.7%, 73.2% and 78.0%, respectively); however, according to McNemar's test it is not possible to conclude that the differences between predictions are statistically significant. Both the PRAM score and the NB model were less accurate than physicians. The NB model can handle incomplete patient data and as such may complement the PRAM score. However, further research is required to improve its accuracy.
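    The significance claim rests on McNemar's test, which considers only the discordant cases: those one predictor got right and the other got wrong. A sketch of the continuity-corrected statistic (compared against a chi-square distribution with 1 degree of freedom) over per-case correctness flags:

```python
def mcnemar_statistic(correct_a, correct_b):
    """Continuity-corrected McNemar chi-square for two classifiers judged
    on the same cases. Inputs are per-case booleans: True if correct."""
    b = sum(1 for x, y in zip(correct_a, correct_b) if x and not y)  # A right, B wrong
    c = sum(1 for x, y in zip(correct_a, correct_b) if not x and y)  # B right, A wrong
    if b + c == 0:
        return 0.0  # no discordant cases, no evidence of a difference
    return max(abs(b - c) - 1, 0) ** 2 / (b + c)
```

With only 82 patients and a few accuracy points of separation, the discordant counts are small, which is why the test cannot declare the NB/PRAM/physician differences significant.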

  9. Comparing predictive validity of four ballistic swing phase models of human walking.

    Science.gov (United States)

    Selles, R W; Bussmann, J B; Wagenaar, R C; Stam, H J

    2001-09-01

    It is unclear to what extent ballistic walking models can be used to qualitatively predict the swing phase at comfortable walking speed. Different study findings regarding the accuracy of the predictions of the swing phase kinematics may have been caused by differences in (1) kinematic input, (2) model characteristics (e.g. the number of segments), and (3) evaluation criteria. In the present study, the predictive validity of four ballistic swing phase models was evaluated and compared, that is, (1) the ballistic walking model as originally introduced by Mochon and McMahon, (2) an extended version of this model in which heel-off of the stance leg is added, (3) a double pendulum model, consisting of a two-segment swing leg with a prescribed hip trajectory, and (4) a shank pendulum model consisting of a shank and rigidly attached foot with a prescribed knee trajectory. The predictive validity was evaluated by comparing the outcome of the model simulations with experimentally derived swing phase kinematics of six healthy subjects. In all models, statistically significant differences were found between model output and experimental data. All models underestimated swing time and step length. In addition, statistically significant differences were found between the output of the different models. The present study shows that although qualitative similarities exist between the ballistic models and normal gait at comfortable walking speed, these models cannot adequately predict swing phase kinematics.

  10. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  11. Human experts' and a fuzzy model's predictions of outcomes of scoliosis treatment: a comparative analysis.

    Science.gov (United States)

    Chalmers, Eric; Pedrycz, Witold; Lou, Edmond

    2015-03-01

    Brace treatment is the most commonly used nonsurgical treatment for adolescents with idiopathic scoliosis. However, brace treatment is not always successful, and the factors influencing its success are not completely clear. This makes treatment outcome difficult to predict. A computer model that can accurately predict treatment outcomes could potentially provide valuable treatment recommendations. This paper describes a fuzzy system that includes a prediction model and a decision support engine. The model was constructed using conditional fuzzy c-means clustering to discover patterns in retrospective patient data. The model's ability to predict treatment outcome was compared to the ability of eight scoliosis experts. The model and experts each predicted treatment outcome retrospectively for 28 braced patients, and these predictions were compared to the actual outcomes. The model outperformed all but one expert individually and performed similarly to the experts as a group. These results suggest that the fuzzy model is capable of providing meaningful treatment recommendations. This study offers the first model for this application whose performance has been shown to be at or above the human expert level.
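    The clustering behind the model is fuzzy c-means, which assigns each patient a graded membership in every cluster rather than a hard label. The standard membership update is short enough to sketch (ordinary FCM with fuzzifier m; the paper's conditional variant adds a context weighting not shown here):

```python
def fcm_memberships(distances, m=2.0):
    """Fuzzy c-means membership of one point given its distances to the
    c cluster centres: u_i = 1 / sum_j (d_i / d_j)^(2/(m-1))."""
    if any(d == 0.0 for d in distances):  # point coincides with a centre
        return [1.0 if d == 0.0 else 0.0 for d in distances]
    return [1.0 / sum((di / dj) ** (2.0 / (m - 1)) for dj in distances)
            for di in distances]
```

A patient equidistant from two prototypes gets memberships of 0.5 each; a prediction can then blend the outcomes associated with each cluster in proportion to membership.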

  12. Comparative analysis of modified PMV models and SET models to predict human thermal sensation in naturally ventilated buildings

    DEFF Research Database (Denmark)

    Gao, Jie; Wang, Yi; Wargocki, Pawel

    2015-01-01

    In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in field study. These prediction models were developed on the basis of the original PMV/SET models and consider the influence of occupants' expectations and human adaptive functions, including the extended PMV/SET models and the adaptive PMV/SET models. The results showed that when the indoor air velocity ranged from 0 to 0.2 m/s and from 0.2 to 0.8 m... the difference between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human...

  13. Material Models Used to Predict Spring-in of Composite Elements: a Comparative Study

    Science.gov (United States)

    Galińska, Anna

    2017-02-01

    Several approaches have been developed so far for modelling the process-induced deformations of composite parts. The most universal and most frequently used is FEM modelling, within which several material models have been used to describe composite behaviour. In the present work, two of the most popular material models, elastic and CHILE (cure hardening instantaneous linear elastic), are used to model the spring-in deformations of composite specimens and a structure fragment. The elastic model is more efficient, whereas the CHILE model is considered more accurate. The results of the models are compared with each other and with the measured deformations of the real composite parts. The comparison shows that both models predict the deformations reasonably well and that there is little difference between their results. This leads to the conclusion that using the simpler elastic model is a valid engineering practice.

  14. Comparing discrete fracture and continuum models to predict contaminant transport in fractured porous media.

    Science.gov (United States)

    Blessent, Daniela; Jørgensen, Peter R; Therrien, René

    2014-01-01

    We used the FRAC3Dvs numerical model (Therrien and Sudicky 1996) to compare the dual-porosity (DP), equivalent porous medium (EPM), and discrete fracture matrix diffusion (DFMD) conceptual models to predict field-scale contaminant transport in a fractured clayey till aquitard. The simulations show that the DP, EPM, and DFMD models could be equally well calibrated to reproduce contaminant breakthrough in the till aquitard for a base case. In contrast, when groundwater velocity and degradation rates are modified with respect to the base case, the DP method simulated contaminant concentrations up to three orders of magnitude different from those calculated by the DFMD model. In previous simulations of well-characterized column experiments, the DFMD method reproduced observed changes in solute transport for a range of flow and transport conditions comparable to those of the field-scale simulations, while the DP and EPM models required extensive recalibration to avoid high magnitude errors in predicted mass transport. The lack of robustness with respect to variable flow and transport conditions suggests that DP models and effective porosity EPM models have limitations for predicting cause-effect relationships in environmental planning. The study underlines the importance of obtaining well-characterized experimental data for further studies and evaluation of model key process descriptions and model suitability. © 2013, National Groundwater Association.

  15. Enhancing prediction power of chemometric models through manipulation of the fed spectrophotometric data: A comparative study

    Science.gov (United States)

    Saad, Ahmed S.; Hamdy, Abdallah M.; Salama, Fathy M.; Abdelkawy, Mohamed

    2016-10-01

    The effect of data manipulation in the preprocessing step preceding construction of chemometric models was assessed. The same set of UV spectral data was used to construct PLS and PCR models both directly and after mathematical manipulation according to the well-known first and second derivative, ratio spectra, and first and second derivative of the ratio spectra spectrophotometric methods; the optimal working wavelength ranges were carefully selected for each model before construction. Unexpectedly, the number of latent variables used for model construction varied among the different methods. The prediction power of the different models was compared using a validation set of 8 mixtures prepared according to a multilevel multifactor design, and results were statistically compared using a two-way ANOVA test. Root mean squared error of prediction (RMSEP) was used for further comparison of predictability among the constructed models. Although no significant difference was found between results obtained using Partial Least Squares (PLS) and Principal Component Regression (PCR) models, the discrepancies among results were attributed to variation in the discrimination power of the adopted spectrophotometric methods on the spectral data.
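    RMSEP, the comparison metric used here, is simply the root mean squared error between reference and predicted concentrations over the validation mixtures:

```python
from math import sqrt

def rmsep(reference, predicted):
    """Root mean squared error of prediction over a validation set:
    RMSEP = sqrt( sum_i (y_i - yhat_i)^2 / n )."""
    n = len(reference)
    return sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)
```

Computing one RMSEP per model (PLS vs. PCR, with and without each spectral manipulation) gives the single-number comparison the abstract describes.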

  16. Comparing flow duration curve and rainfall-runoff modelling for predicting daily runoff in ungauged catchments

    Science.gov (United States)

    Zhang, Yongqiang; Vaze, Jai; Chiew, Francis H. S.; Li, Ming

    2015-06-01

    Predicting daily runoff time series in ungauged catchments is both important and challenging. For the last few decades, the rainfall-runoff (RR) modelling approach has been the method of choice. Very few studies reported in the literature attempt to use the flow duration curve (FDC) to predict daily runoff time series. This study comprehensively compares the two approaches using an extensive dataset (228 catchments) for a large region of south-eastern Australia and provides guidelines for choosing the suitable method. For each approach we used the nearest neighbour method and two weightings - a 5-donor simple mathematical average (SA) and a 5-donor inverse-distance weighting (5-IDW) - to predict daily runoff time series. The results show that 5-IDW was noticeably better than a single donor for predicting daily runoff time series, especially for the FDC approach. The RR modelling approach calibrated against daily runoff outperformed the FDC approach for predicting high flows. The FDC approach was better at predicting medium to low flows in traditional calibration against the Nash-Sutcliffe-Efficiency or Root Mean Square Error, but when calibrated against a low flow objective function, both the FDC and rainfall-runoff models performed equally well in simulating the low flows. These results indicate that both methods can be further improved to simulate daily hydrographs describing the range of flow metrics in ungauged catchments. Further studies should be carried out to improve the accuracy of the predicted FDC in ungauged catchments, including improving the FDC model structure and parameter fitting.
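    The 5-IDW transfer can be sketched as follows: runoff simulated for the five nearest gauged donor catchments is combined with weights proportional to inverse distance. Details such as the distance metric and donor selection are simplified in this sketch:

```python
def idw_transfer(donor_runoff, donor_distances):
    """Inverse-distance-weighted daily runoff for an ungauged catchment.

    donor_runoff    : list of donor daily-runoff series (equal length)
    donor_distances : distance from the ungauged catchment to each donor
    """
    weights = [1.0 / d for d in donor_distances]
    total = sum(weights)
    days = len(donor_runoff[0])
    return [sum(w * series[t] for w, series in zip(weights, donor_runoff)) / total
            for t in range(days)]
```

With equal distances this reduces to the simple average (SA); with unequal distances the nearest donor dominates, which is why 5-IDW tends to beat both a single donor and SA.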

  17. Considerations for comparing radiation-induced chromosome aberration data with predictions from biophysical models

    Science.gov (United States)

    Wu, H.; Furusawa, Y.; George, K.; Kawata, T.; Cucinotta, F.

    Biophysical models addressing the formation of radiation-induced chromosome aberrations are usually based on the assumption that chromosome aberrations are formed by DNA double strand break (DSB) misrejoining, via either the homologous or the non-homologous repair pathway. However, comparing chromosome aberration data with model predictions is not always straightforward. In this paper we discuss some of the aspects that must be considered to make these comparisons meaningful. Firstly, biophysical models are usually applied to DSB rejoining and misrejoining in the G0/G1 phase of the cell cycle, while most chromosome aberration data reported in the literature are analyzed in metaphase. Since cells must progress through the cell cycle check points in order to reach mitosis, model predictions that differ from the metaphase chromosome analysis may actually agree with the aberration data in chromosomes collected in interphase. Secondly, high-LET radiation generally produces more complex aberrations involving exchanges between three or more DSBs. While some models have successfully provided quantitative predictions of high-LET radiation induced complex aberrations in human lymphocytes, applying such models to other cell types requires special considerations due to the lack of geometric symmetry of the nucleus. Chromosome aberration data for non-spherical human fibroblast cells bombarded from various directions by high-LET charged particles will be presented, and their implication on physical modeling will be discussed.

  18. A comparative study of two prediction models for brain tumor progression

    Science.gov (United States)

    Zhou, Deqi; Tran, Loc; Wang, Jihong; Li, Jiang

    2015-03-01

    MR diffusion tensor imaging (DTI) technique together with traditional T1 or T2 weighted MRI scans supplies rich information sources for brain cancer diagnoses. These images form large-scale, high-dimensional data sets. Due to the fact that significant correlations exist among these images, we assume low-dimensional geometry data structures (manifolds) are embedded in the high-dimensional space. Those manifolds might be hidden from radiologists because it is challenging for human experts to interpret high-dimensional data. Identification of the manifold is a critical step for successfully analyzing multimodal MR images. We have developed various manifold learning algorithms (Tran et al. 2011; Tran et al. 2013) for medical image analysis. This paper presents a comparative study of an incremental manifold learning scheme (Tran et al. 2013) versus the deep learning model (Hinton et al. 2006) in the application of brain tumor progression prediction. The incremental manifold learning is a variant of manifold learning algorithm to handle large-scale datasets in which a representative subset of original data is sampled first to construct a manifold skeleton and remaining data points are then inserted into the skeleton by following their local geometry. The incremental manifold learning algorithm aims at mitigating the computational burden associated with traditional manifold learning methods for large-scale datasets. Deep learning is a recently developed multilayer perceptron model that has achieved state-of-the-art performances in many applications. A recent technique named "Dropout" can further boost the deep model by preventing weight coadaptation to avoid over-fitting (Hinton et al. 2012). We applied the two models on multiple MRI scans from four brain tumor patients to predict tumor progression and compared the performances of the two models in terms of average prediction accuracy, sensitivity, specificity and precision. The quantitative performance metrics were...

  19. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    BACKGROUND: Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement on which techniques produce maximally predictive models, yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address what effect the choice of learning technique, feature set, or cross-validation approach has on finding and discriminating among predictive models. PRINCIPAL FINDINGS: Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs), and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy as well as across different kinds and levels of model cross-validation. CONCLUSIONS: The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features...

  20. Model-based Comparative Prediction of Transcription-Factor Binding Motifs in Anabolic Responses in Bone

    Institute of Scientific and Technical Information of China (English)

    Andy B. Chen; Kazunori Hamamura; Guohua Wang; Weirong Xing; Subburaman Mohan; Hiroki Yokota; Yunlong Liu

    2007-01-01

    Understanding the regulatory mechanism that controls the alteration of global gene expression patterns continues to be a challenging task in computational biology. We previously developed an ant algorithm, a biologically-inspired computational technique for microarray data, and predicted putative transcription-factor binding motifs (TFBMs) through mimicking interactive behaviors of natural ants. Here we extended the algorithm into a set of web-based software, Ant Modeler, and applied it to investigate the transcriptional mechanism underlying bone formation. Mechanical loading and administration of bone morphogenic proteins (BMPs) are two known treatments to strengthen bone. We addressed a question: Is there any TFBM that stimulates both "anabolic responses of mechanical loading" and "BMP-mediated osteogenic signaling"? Although there is no significant overlap among genes in the two responses, a comparative model-based analysis suggests that the two independent osteogenic processes employ common TFBMs, such as a stress responsive element and a motif for peroxisome proliferator-activated receptor (PPAR). The post-modeling in vitro analysis using mouse osteoblast cells supported involvements of the predicted TFBMs such as PPAR, Ikaros 3, and LMO2 in response to mechanical loading. Taken together, the results would be useful to derive a set of testable hypotheses and examine the role of specific regulators in complex transcriptional control of bone formation.

  1. Comparing Spatial Predictions

    KAUST Repository

    Hering, Amanda S.

    2011-11-01

Under a general loss function, we develop a hypothesis test to determine whether a significant difference in the spatial predictions produced by two competing models exists on average across the entire spatial domain of interest. The null hypothesis is that of no difference, and a spatial loss differential is created based on the observed data, the two sets of predictions, and the loss function chosen by the researcher. The test assumes only isotropy and short-range spatial dependence of the loss differential but does allow it to be non-Gaussian, non-zero-mean, and spatially correlated. Constant and nonconstant spatial trends in the loss differential are treated in two separate cases. Monte Carlo simulations illustrate the size and power properties of this test, and an example based on daily average wind speeds in Oklahoma is used for illustration. Supplemental results are available online. © 2011 American Statistical Association and the American Society for Quality.
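The core construction, a pointwise loss differential between two prediction sets tested against a zero mean, can be sketched as follows. The data here are synthetic, and the naive i.i.d. t-test stands in for the paper's variance estimate, which is made robust to spatial correlation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 500
truth = rng.normal(0, 1, n)             # observed field values
pred_a = truth + rng.normal(0, 0.5, n)  # model A predictions
pred_b = truth + rng.normal(0, 0.8, n)  # model B predictions (noisier)

# Spatial loss differential under squared-error loss:
# d_i = g(e_a_i) - g(e_b_i), here g(e) = e^2
d = (pred_a - truth) ** 2 - (pred_b - truth) ** 2

# Test H0: E[d] = 0. The paper replaces this i.i.d. variance estimate
# with one that accounts for short-range spatial dependence of d.
t, p = stats.ttest_1samp(d, 0.0)
print(f"mean loss differential {d.mean():.3f}, p = {p:.3g}")
```

A significantly negative mean differential indicates model A predicts better on average over the domain under the chosen loss.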

  2. COMPARATIVE MODELLING AND LIGAND BINDING SITE PREDICTION OF A FAMILY 43 GLYCOSIDE HYDROLASE FROM Clostridium thermocellum

    Directory of Open Access Journals (Sweden)

    Shadab Ahmed

    2012-06-01

The phylogenetic analysis of Clostridium thermocellum family 43 glycoside hydrolase (CtGH43) showed a close evolutionary relation with carbohydrate binding family 6 proteins from C. cellulolyticum, C. papyrosolvens, C. cellulyticum, and A. cellulyticum. Comparative modeling of CtGH43 was performed based on crystal structures with PDB IDs 3C7F, 1YIF, 1YRZ, 2EXH and 1WL7. The structure having the lowest MODELLER objective function was selected. The three-dimensional structure revealed a typical 5-fold beta-propeller architecture. Energy minimization and validation of the predicted model with VERIFY 3D indicated acceptability of the proposed atomic structure. The Ramachandran plot analysis by RAMPAGE confirmed that family 43 glycoside hydrolase (CtGH43) contains little or negligible segments of helices. It also showed that out of 301 residues, 267 (89.3%) were in the most favoured region, 23 (7.7%) were in the allowed region and 9 (3.0%) were in the outlier region. IUPred analysis of CtGH43 showed no disordered region. Active site analysis showed the presence of two Asp and one Glu, assumed to form a catalytic triad. This study gives us information about the three-dimensional structure and reaffirms that it has the same core 5-fold beta-propeller architecture and so probably the same inverting mechanism of action, with formation of the above-mentioned catalytic triad for catalysis of polysaccharides.

  3. A comparative study of slope failure prediction using logistic regression, support vector machine and least square support vector machine models

    Science.gov (United States)

    Zhou, Lim Yi; Shan, Fam Pei; Shimizu, Kunio; Imoto, Tomoaki; Lateh, Habibah; Peng, Koay Swee

    2017-08-01

A comparative study of logistic regression, support vector machine (SVM) and least square support vector machine (LSSVM) models was carried out to predict slope failure (landslides) along the East-West Highway (Gerik-Jeli). The effects of the two monsoon seasons (southwest and northeast) that occur in Malaysia are considered in this study. Two factors related to the occurrence of slope failure are included: rainfall and underground water. For each method, two predictive models were constructed, namely the SOUTHWEST and NORTHEAST models. Based on the results obtained from the logistic regression models, two factors (rainfall and underground water level) contribute to the occurrence of slope failure. The accuracies of the three statistical models for the two monsoon seasons were verified using Relative Operating Characteristic (ROC) curves. The validation results showed that all models produced predictions of high accuracy. For the SVM and LSSVM results, the models using the RBF kernel showed better prediction than the models using the linear kernel. The comparative results showed that, for the SOUTHWEST models, the three statistical models have relatively similar performance, whereas for the NORTHEAST models, logistic regression has the best predictive efficiency and the SVM model the second best.
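A minimal sketch of this kind of model comparison, using scikit-learn in place of the authors' tools and a synthetic nonlinear stand-in for the rainfall/groundwater data (LSSVM is omitted, as scikit-learn does not provide it):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Hypothetical two-feature problem where failure occurs in a nonlinear
# region of feature space (not the paper's data).
X = rng.normal(0, 1, size=(600, 2))
y = (np.hypot(X[:, 0], X[:, 1]) < 1.0).astype(int)  # 1 = slope failure

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic": LogisticRegression().fit(Xtr, ytr),
    "svm-linear": SVC(kernel="linear", probability=True).fit(Xtr, ytr),
    "svm-rbf": SVC(kernel="rbf", probability=True).fit(Xtr, ytr),
}
# Verify each model by the area under its ROC curve on held-out data
auc = {name: roc_auc_score(yte, m.predict_proba(Xte)[:, 1])
       for name, m in models.items()}
print(auc)
```

On a nonlinear boundary like this one, the RBF-kernel SVM clearly outperforms the linear kernel, consistent with the paper's observation.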

  4. Comparative Research on Prediction Model of China’s Urban-rural Residents’ Income Gap

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

Using data on China's urban-rural residents' income gap from 1978 to 2010, this paper studies the application of several kinds of models to predicting China's urban-rural residents' income gap. Through empirical analysis, we establish an ARIMA prediction model, a grey prediction model and a quadratic-polynomial prediction model and compare their accuracy. The results show that the quadratic-polynomial prediction model has an excellent fitting effect. Using the quadratic-polynomial prediction model, this paper predicts the trend of China's urban-rural residents' income gap from 2011 to 2013; the predicted income gap between urban and rural residents in China for 2011 to 2013 is 14 173.20, 15 212.92 and 16 289.67 yuan respectively. Finally, on the basis of this analysis, corresponding countermeasures are put forward in order to provide a scientific basis for planning and policy formulation: first, strengthen the government's function of public service, coordinate resources, and strive to provide an equal opportunity of development for all members of society, so as to promote people's welfare and social equality; second, break up industrial monopolies and bridge the income gap between employees in monopoly industries and general industries; last but not least, support, encourage and call on the government to establish a social relief fund, adjust residents' income distribution from the non-governmental perspective, and endeavor to raise the income level of the low-income class.
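The quadratic-polynomial model is the simplest of the three to reproduce; a sketch on a hypothetical income-gap series (the numbers are invented, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical urban-rural income-gap series in yuan, 1978-2010,
# standing in for the paper's data
years = np.arange(1978, 2011)
t = years - 1978
gap = 200 + 8 * t + 11.5 * t ** 2 + rng.normal(0, 150, years.size)

# Quadratic-polynomial trend model: gap ~ c2*t^2 + c1*t + c0
coeffs = np.polyfit(t, gap, deg=2)   # highest-degree coefficient first
trend = np.poly1d(coeffs)

# Out-of-sample extrapolation for 2011-2013, as the paper does
forecast = {yr: float(trend(yr - 1978)) for yr in (2011, 2012, 2013)}
print(forecast)
```

With a positive fitted curvature, the extrapolated gap keeps widening year on year, which is the qualitative behavior the paper reports.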

  5. Comparative study of model prediction of diffuse nutrient losses in response to changes in agricultural practices

    NARCIS (Netherlands)

    Vagstad, N.; French, H.K.; Andersen, H.E.; Groenendijk, P.; Siderius, C.

    2009-01-01

    This article presents a comparative study of modelled changes in nutrient losses from two European catchments caused by modifications in agricultural practices. The purpose was not to compare the actual models used, but rather to assess the uncertainties a manager may be faced with after receiving d

  6. Hierarchical Linear Models for Energy Prediction using Inertial Sensors: A Comparative Study for Treadmill Walking.

    Science.gov (United States)

    Vathsangam, Harshvardhan; Emken, B Adar; Schroeder, E Todd; Spruijt-Metz, Donna; Sukhatme, Gaurav S

    2013-12-01

Walking is a commonly available activity for maintaining a healthy lifestyle. Accurately tracking and measuring the calories expended during walking can improve user feedback and intervention measures. Inertial sensors are a promising measurement tool for this purpose. An important aspect of mapping inertial sensor data to energy expenditure is the question of normalizing across physiological parameters. Common approaches such as weight scaling require validation for each new population. An alternative is to use a hierarchical approach to model subject-specific parameters at one level and cross-subject parameters connected by physiological variables at a higher level. In this paper, we evaluate an inertial sensor-based hierarchical model to measure energy expenditure across a target population. We first determine the optimal movement and physiological feature set to represent the data. Periodicity-based features are more accurate (p < …) … hierarchical model with a subject-specific regression model and weight-exponent-scaled models. Subject-specific models perform significantly better (p < …) than … models at all exponent scales, whereas the hierarchical model performed worse than both. However, using an informed prior from the hierarchical model produces similar errors to using a subject-specific model with large amounts of training data (p < …). … hierarchical modeling is a promising technique for generalized energy expenditure prediction across a target population in a clinical setting.

  7. The Prediction of Consumer Buying Intentions: A Comparative Study of the Predictive Efficacy of Two Attitudinal Models. Faculty Working Paper No. 234.

    Science.gov (United States)

    Bhagat, Rabi S.; And Others

    The role of attitudes in the conduct of buyer behavior is examined in the context of two competitive models of attitude structure and attitude-behavior relationship. Specifically, the objectives of the study were to compare the Fishbein and Sheth models on the criteria of predictive as well as cross validities. Data on both the models were…

  8. Model predictions of metal speciation in freshwaters compared to measurements by in situ techniques.

    NARCIS (Netherlands)

    Unsworth, Emily R; Warnken, Kent W; Zhang, Hao; Davison, William; Black, Frank; Buffle, Jacques; Cao, Jun; Cleven, Rob; Galceran, Josep; Gunkel, Peggy; Kalis, Erwin; Kistler, David; Leeuwen, Herman P van; Martin, Michel; Noël, Stéphane; Nur, Yusuf; Odzak, Niksa; Puy, Jaume; Riemsdijk, Willem van; Sigg, Laura; Temminghoff, Erwin; Tercier-Waeber, Mary-Lou; Toepperwien, Stefanie; Town, Raewyn M; Weng, Liping; Xue, Hanbin

    2006-01-01

    Measurements of trace metal species in situ in a softwater river, a hardwater lake, and a hardwater stream were compared to the equilibrium distribution of species calculated using two models, WHAM 6, incorporating humic ion binding model VI and visual MINTEQ incorporating NICA-Donnan. Diffusive gra

  9. Comparative Study of Artificial Neural Network and ARIMA Models in Predicting Exchange Rate

    Directory of Open Access Journals (Sweden)

    karamollah Bagherifard

    2012-11-01

The capital market, as an organized market, plays an effective role in mobilizing financial resources for the growth and economic development of countries, and in many countries finance firms are now responsible for providing the required credit. In the stock market, shareholders always seek the highest return, so stock price prediction is important to them. Since the stock market is a nonlinear system subject to political, economic and psychological conditions, it is difficult to predict stock prices correctly. Thus, in the present study artificial intelligence and ARIMA methods were used to predict stock prices. Multilayer perceptron neural networks and radial basis functions are the two methods used in this research. Evaluation, selection and exponential smoothing methods were compared to a random walk. The results showed that the AI-based methods used in predicting stock performance are more accurate. Of the two artificial intelligence methods, the method based on radial basis functions is able to estimate future stock prices with higher accuracy.

  10. Model predictions of copper speciation in coastal water compared to measurements by analytical voltammetry.

    Science.gov (United States)

    Ndungu, Kuria

    2012-07-17

Trace metal toxicity to aquatic biota is highly dependent on the metal's chemical speciation. Accordingly, metal speciation is being incorporated into water quality criteria and toxicity regulations using the Biotic Ligand Model (BLM), but there are currently no BLMs for biota in marine and estuarine waters. In this study, I compare copper speciation measurements in a typical coastal water made using competitive ligand exchange-adsorptive cathodic stripping voltammetry (CLE-ACSV) to model calculations using Visual MINTEQ. Both Visual MINTEQ and BLM use similar programs to model copper interactions with dissolved organic matter (DOM), i.e., the Stockholm Humic Model (SHM) and the Windermere Humic Aqueous Model (WHAM), respectively. The total dissolved (14). The modeled [Cu2+] could be fitted to the experimental values better after the conditional stability constant for copper binding to fulvic acid (FA) complexes in DOM in the SHM was adjusted to account for a higher concentration of strong Cu-binding sites in FA.

  11. Comparing the Predictions of two Mixed Neutralino Dark Matter Models with the Recent CDMS II Candidate Events

    CERN Document Server

    Roy, D P

    2010-01-01

    We consider two optimally mixed neutralino dark matter models, based on nonuniversal gaugino masses, which were recently proposed by us to achieve WMAP compatible relic density over a large part of the MSSM parameter space. We compare the resulting predictions for the spin-independent DM scattering cross-section with the recent CDMS II data, assuming the possibility of the two reported candidate events being signal events. For one model the predicted cross-section agrees with the putative signal over a small part of the parameter space, while for the other the agreement holds over the entire WMAP compatible parameter space of the model.

  12. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    Science.gov (United States)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.

  13. Comparative analysis of regression and artificial neural network models for wind speed prediction

    Science.gov (United States)

    Bilgili, Mehmet; Sahin, Besir

    2010-11-01

In this study, wind speed was modeled by linear regression (LR), nonlinear regression (NLR) and artificial neural network (ANN) methods. A three-layer feedforward artificial neural network structure was constructed and a backpropagation algorithm was used for the training of the ANNs. To obtain a successful simulation, the correlation coefficients between all of the meteorological variables (wind speed, ambient temperature, atmospheric pressure, relative humidity and rainfall) were first calculated, taking two variables in turn for each calculation. All independent variables were added to the simple regression model. Then, the method of stepwise multiple regression was applied for the selection of the "best" regression equation (model). Thus, the best independent variables were selected for the LR and NLR models and also used in the input layer of the ANN. The results obtained by all methods were compared to each other. Finally, the ANN method was found to provide better performance than the LR and NLR methods.
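The first two steps of this procedure, screening predictors by pairwise correlation and then comparing linear against nonlinear regression, can be sketched on synthetic data (the meteorological values and the functional form below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
# Hypothetical meteorological predictors (the study uses temperature,
# pressure, humidity and rainfall measured alongside wind speed)
temp = rng.normal(20, 5, n)
pressure = rng.normal(1010, 6, n)
humidity = rng.uniform(30, 90, n)
wind = 3 + 0.004 * (1015 - pressure) ** 2 + 0.05 * temp + rng.normal(0, 0.3, n)

# Step 1: screen predictors by pairwise correlation with wind speed
for name, x in [("temp", temp), ("pressure", pressure), ("humidity", humidity)]:
    print(name, round(np.corrcoef(x, wind)[0, 1], 2))

# Step 2: linear vs nonlinear (quadratic) regression on the key predictor
lin = np.poly1d(np.polyfit(pressure, wind, 1))
quad = np.poly1d(np.polyfit(pressure, wind, 2))
rmse = lambda m: float(np.sqrt(np.mean((wind - m(pressure)) ** 2)))
print("RMSE linear:", round(rmse(lin), 3), " quadratic:", round(rmse(quad), 3))
```

Because the synthetic wind speed depends quadratically on pressure, the NLR fit has a lower in-sample RMSE than the LR fit, illustrating why the study moves beyond simple linear regression.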

  14. Comparing the performance of 11 crop simulation models in predicting yield response to nitrogen fertilization

    DEFF Research Database (Denmark)

    Salo, T J; Palosuo, T; Kersebaum, K C

    2016-01-01

…, Finland. This is the largest standardized crop model inter-comparison under different levels of N supply to date. The models were calibrated using data from 2002 and 2008, of which 2008 included six N rates ranging from 0 to 150 kg N/ha. Calibration data consisted of weather, soil, phenology, leaf area index (LAI) and yield observations. The models were then tested against new data for 2009 and their performance was assessed and compared for both the two calibration years and the test year. For the calibration period, root mean square error between measurements and simulated grain dry matter yields … mineralization as a function of soil temperature and moisture. Furthermore, specific weather-event impacts, such as low temperatures after emergence in 2009, tending to enhance tillering, and a high precipitation event just before harvest in 2008, causing possible yield penalties, were not captured by any …

  15. A comparative analysis of hazard models for predicting debris flows in Madison County, VA

    Science.gov (United States)

    Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.

    2001-01-01

During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km²) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is the preferred model of the two to evaluate slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane, and consequently root strength and tree surcharge had negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
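Iverson's transient model is beyond a short sketch, but the infinite-slope factor-of-safety calculation on which such regional stability models are typically built can be illustrated. The strength parameters below are hypothetical, chosen to sit within the field ranges reported above (cohesion under 2 kPa, failure depths 0.5-3 m, slopes 17°-41°):

```python
import numpy as np

def infinite_slope_fs(c, phi_deg, theta_deg, z, m, gamma=20.0, gamma_w=9.81):
    """Factor of safety of an infinite slope.

    c: cohesion (kPa); phi_deg: friction angle; theta_deg: slope angle;
    z: failure-plane depth (m); m: fraction of z below the water table;
    gamma, gamma_w: unit weights of soil and water (kN/m^3).
    """
    phi, theta = np.radians(phi_deg), np.radians(theta_deg)
    resisting = c + (gamma - m * gamma_w) * z * np.cos(theta) ** 2 * np.tan(phi)
    driving = gamma * z * np.sin(theta) * np.cos(theta)
    return resisting / driving

# Hypothetical colluvium (c = 2 kPa, phi = 30 deg) on a 30 degree slope,
# failure plane at 1.5 m, for rising water-table positions
for m in (0.0, 0.5, 1.0):
    fs = infinite_slope_fs(2.0, 30, 30, 1.5, m)
    print(f"water-table ratio {m:.1f}: FS = {fs:.2f}")
```

The factor of safety drops below 1 as the water table rises to the surface, which is the mechanism all three compared models invoke: the storm's pore-pressure rise, not a loss of root strength, triggers failure.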

  16. A New Framework to Compare Mass-Flux Schemes Within the AROME Numerical Weather Prediction Model

    Science.gov (United States)

    Riette, Sébastien; Lac, Christine

    2016-08-01

    In the Application of Research to Operations at Mesoscale (AROME) numerical weather forecast model used in operations at Météo-France, five mass-flux schemes are available to parametrize shallow convection at kilometre resolution. All but one are based on the eddy-diffusivity-mass-flux approach, and differ in entrainment/detrainment, the updraft vertical velocity equation and the closure assumption. The fifth is based on a more classical mass-flux approach. Screen-level scores obtained with these schemes show few discrepancies and are not sufficient to highlight behaviour differences. Here, we describe and use a new experimental framework, able to compare and discriminate among different schemes. For a year, daily forecast experiments were conducted over small domains centred on the five French metropolitan radio-sounding locations. Cloud base, planetary boundary-layer height and normalized vertical profiles of specific humidity, potential temperature, wind speed and cloud condensate were compared with observations, and with each other. The framework allowed the behaviour of the different schemes in and above the boundary layer to be characterized. In particular, the impact of the entrainment/detrainment formulation, closure assumption and cloud scheme were clearly visible. Differences mainly concerned the transport intensity thus allowing schemes to be separated into two groups, with stronger or weaker updrafts. In the AROME model (with all interactions and the possible existence of compensating errors), evaluation diagnostics gave the advantage to the first group.

  17. A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part I: model planning

    Directory of Open Access Journals (Sweden)

    Biagioli Bonizella

    2007-11-01

Abstract Background Different methods have recently been proposed for predicting morbidity in intensive care units (ICU). The aim of the present study was to critically review a number of approaches for developing models capable of estimating the probability of morbidity in the ICU after heart surgery. The study is divided into two parts. In this first part, popular models used to estimate the probability of class membership are grouped into distinct categories according to their underlying mathematical principles. Modelling techniques and intrinsic strengths and weaknesses of each model are analysed and discussed from a theoretical point of view, in consideration of clinical applications. Methods Models based on Bayes rule, the k-nearest neighbour algorithm, logistic regression, scoring systems and artificial neural networks are investigated. Key issues for model design are described. The mathematical treatment of some aspects of model structure is also included for readers interested in developing models, though a full understanding of the mathematical relationships is not necessary if the reader is only interested in perceiving the practical meaning of model assumptions, weaknesses and strengths from a user point of view. Results Scoring systems are very attractive due to their simplicity of use, although this may undermine their predictive capacity. Logistic regression models are trustworthy tools, although they suffer from the principal limitations of most regression procedures. Bayesian models seem to be a good compromise between complexity and predictive performance, but model recalibration is generally necessary. k-nearest neighbour may be a valid non-parametric technique, though computational cost and the need for large data storage are major weaknesses of this approach. Artificial neural networks have intrinsic advantages with respect to common statistical models, though the training process may be problematical. Conclusion Knowledge of model

  18. Evaluating and comparing the ability to predict the bankruptcy prediction models of Zavgren and Springate in companies accepted in Tehran Stock Exchange

    Directory of Open Access Journals (Sweden)

    Ghodratollah Talebnia

    2016-07-01

Recent bankruptcies of large companies at the international level and the volatility of securities in Iran have highlighted the necessity of evaluating the financial strength of companies. One such evaluation tool is the use of bankruptcy prediction models, which estimate the future condition of companies. The aim of this research is to present the theoretical foundations and compare the results of two models, Zavgren (1985) and Springate (1978), in Iran's stock market, using both the main and adjusted coefficients, based on the statistical techniques of logit and Multiple Discriminant Analysis (MDA). The data were gathered and tested from 2009 to 2013. The results indicated that the adjusted Springate model was more efficient than the other models in the bankruptcy year.
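For reference, the Springate (1978) score compared above is a four-ratio discriminant function. A minimal sketch using its commonly cited coefficients and the 0.862 distress cut-off follows; the firm figures are hypothetical, not from the study:

```python
def springate_score(wc, ebit, ebt, sales, total_assets, current_liab):
    """Springate (1978) S-score with its commonly cited coefficients;
    S below 0.862 classifies the firm as distressed."""
    a = wc / total_assets     # working capital / total assets
    b = ebit / total_assets   # earnings before interest and taxes / total assets
    c = ebt / current_liab    # earnings before taxes / current liabilities
    d = sales / total_assets  # sales / total assets
    return 1.03 * a + 3.07 * b + 0.66 * c + 0.4 * d

# Hypothetical healthy vs distressed firms (figures in arbitrary units)
healthy = springate_score(wc=300, ebit=250, ebt=220, sales=1500,
                          total_assets=2000, current_liab=400)
distressed = springate_score(wc=-50, ebit=20, ebt=-30, sales=600,
                             total_assets=2000, current_liab=700)
print(round(healthy, 3), round(distressed, 3))
```

The adjusted-coefficient variant examined in the paper re-estimates these weights on Iranian firms via MDA, keeping the same four ratios.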

  19. Prediction Model of Cutting Parameters for Turning High Strength Steel Grade-H: Comparative Study of Regression Model versus ANFIS

    Directory of Open Access Journals (Sweden)

    Adel T. Abbas

    2017-01-01

Grade-H high strength steel is used in the manufacturing of many civilian and military products. The procedures for manufacturing these parts involve several turning operations. The key factors in the manufacturing of these parts are the accuracy, surface roughness (Ra), and material removal rate (MRR). The production line for these parts contains many CNC turning machines to achieve good accuracy and repeatability. The manufacturing engineer should achieve the surface roughness value required by the design drawing from the first trial (otherwise these parts will be rejected), while also keeping an eye on the maximum metal removal rate. The rejection of these parts at any processing stage represents a huge problem for any factory, because the processing and raw material of these parts are very expensive. In this paper an artificial neural network was used to predict the surface roughness for different cutting parameters in CNC turning operations. These parameters were investigated to obtain the minimum surface roughness. In addition, a mathematical model for surface roughness was obtained from the experimental data using a regression analysis method. The experimental data are then compared with both the regression analysis results and ANFIS (Adaptive Network-based Fuzzy Inference System) estimations.

  20. Molybdate transport in a chemically complex aquifer: Field measurements compared with solute-transport model predictions

    Science.gov (United States)

    Stollenwerk, K.G.

    1998-01-01

    A natural-gradient tracer test was conducted in an unconfined sand and gravel aquifer on Cape Cod, Massachusetts. Molybdate was included in the injectate to study the effects of variable groundwater chemistry on its aqueous distribution and to evaluate the reliability of laboratory experiments for identifying and quantifying reactions that control the transport of reactive solutes in groundwater. Transport of molybdate in this aquifer was controlled by adsorption. The amount adsorbed varied with aqueous chemistry that changed with depth as freshwater recharge mixed with a plume of sewage-contaminated groundwater. Molybdate adsorption was strongest near the water table where pH (5.7) and the concentration of the competing solutes phosphate (2.3 micromolar) and sulfate (86 micromolar) were low. Adsorption of molybdate decreased with depth as pH increased to 6.5, phosphate increased to 40 micromolar, and sulfate increased to 340 micromolar. A one-site diffuse-layer surface-complexation model and a two-site diffuse-layer surface-complexation model were used to simulate adsorption. Reactions and equilibrium constants for both models were determined in laboratory experiments and used in the reactive-transport model PHAST to simulate the two-dimensional transport of molybdate during the tracer test. No geochemical parameters were adjusted in the simulation to improve the fit between model and field data. Both models simulated the travel distance of the molybdate cloud to within 10% during the 2-year tracer test; however, the two-site diffuse-layer model more accurately simulated the molybdate concentration distribution within the cloud.

  1. Protein Models Comparator

    CERN Document Server

    Widera, Paweł

    2011-01-01

The process of comparing computer-generated protein structural models is an important element of protein structure prediction. It has many uses, including model quality evaluation, selection of the final models from a large set of candidates, or optimisation of parameters of energy functions used in template-free modelling and refinement. Although many protein comparison methods are available online on numerous web servers, their ability to handle large-scale model comparison is often very limited. Most of the servers offer only a single pairwise structural comparison, and they usually do not provide a model-specific comparison with a fixed alignment between the models. To bridge the gap between protein and model structure comparison we have developed the Protein Models Comparator (pm-cmp). To be able to deliver scalability on demand and handle large comparison experiments, pm-cmp was implemented "in the cloud". Protein Models Comparator is a scalable web application for a fast distributed comp...

  2. Modeling the Zeeman effect in high altitude SSMIS channels for numerical weather prediction profiles: comparing a fast model and a line-by-line model

    Directory of Open Access Journals (Sweden)

    R. Larsson

    2015-10-01

We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19–22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. For the same channel, there is a 1.2 K average difference between the fast model and the sensor measurement, with a 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. For the same channel, there is a 1.3 K average difference between the fast model and the sensor measurement, with a 2.4 K standard deviation. We consider the relatively small model differences a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (below 0.2 K) and smaller standard deviations (below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation relative to the fast model increases to almost 2 K due to viewing-geometry dependencies causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to the limited altitude range of the numerical weather prediction profiles. We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels to better

  3. Dopamine transporter comparative molecular modeling and binding site prediction using the LeuT(Aa) leucine transporter as a template.

    Science.gov (United States)

    Indarte, Martín; Madura, Jeffry D; Surratt, Christopher K

    2008-02-15

    Pharmacological and behavioral studies indicate that binding of cocaine and the amphetamines by the dopamine transporter (DAT) protein is principally responsible for initiating the euphoria and addiction associated with these drugs. The lack of an X-ray crystal structure for the DAT or any other member of the neurotransmitter:sodium symporter (NSS) family has hindered understanding of psychostimulant recognition at the atomic level; structural information has been obtained largely from mutagenesis and biophysical studies. The recent publication of a crystal structure for the bacterial leucine transporter LeuT(Aa), a distantly related NSS family homolog, provides for the first time a template for three-dimensional comparative modeling of NSS proteins. A novel computational modeling approach using the capabilities of the Molecular Operating Environment program MOE 2005.06 in conjunction with other comparative modeling servers generated the LeuT(Aa)-directed DAT model. Probable dopamine and amphetamine binding sites were identified within the DAT model using multiple docking approaches. Binding sites for the substrate ligands (dopamine and amphetamine) overlapped substantially with the analogous region of the LeuT(Aa) crystal structure for the substrate leucine. The docking predictions implicated DAT side chains known to be critical for high affinity ligand binding and suggest novel mutagenesis targets in elucidating discrete substrate and inhibitor binding sites. The DAT model may guide DAT ligand QSAR studies, and rational design of novel DAT-binding therapeutics.

  4. In silico Structural Prediction of E6 and E7 Proteins of Human Papillomavirus Strains by Comparative Modeling

    Directory of Open Access Journals (Sweden)

    Satish Kumar

    2012-07-01

    Full Text Available More than 200 different types of Human papillomavirus (HPV) have been identified, of which 40 are transmitted extensively through sexual contact, affecting the genital tract. HPV strains have been etiologically linked to vaginal, vulvar, penile, anal, oral and cervical cancer (99.7%) as a result of mutations leading to cell transformation due to interference of the E6 and E7 oncoproteins with the p53 and pRB tumor suppressor genes, respectively, besides other cellular proteins. As structures of the E6 and E7 proteins are not available, a simultaneous structural analysis of the E6 and E7 proteins of 50 different HPV strains was carried out in detail for prediction and validation, using bioinformatics tools. E6 and E7 protein sequences were retrieved in FASTA format from NCBI and their structures predicted by comparative modeling using the modeller9v6 software. Most of the HPV strains showed good stereochemistry, with residues in the most favored regions, when subjected to PROCHECK analysis, and subsequently each protein was validated using the ProSA-web tool. The work carried out on comparing and exploring the structural variations in these oncogenic proteins might help in genome-based drug and vaccine design, beyond their limitations.

  5. Predicting human papillomavirus vaccine uptake in young adult women: comparing the health belief model and theory of planned behavior.

    Science.gov (United States)

    Gerend, Mary A; Shepherd, Janet E

    2012-10-01

    Although theories of health behavior have guided thousands of studies, relatively few studies have compared these theories against one another. The purpose of the current study was to compare two classic theories of health behavior-the Health Belief Model (HBM) and the Theory of Planned Behavior (TPB)-in their prediction of human papillomavirus (HPV) vaccination. After watching a gain-framed, loss-framed, or control video, women (N = 739) ages 18-26 completed a survey assessing HBM and TPB constructs. HPV vaccine uptake was assessed 10 months later. Although the message framing intervention had no effect on vaccine uptake, support was observed for both the TPB and HBM. Nevertheless, the TPB consistently outperformed the HBM. Key predictors of uptake included subjective norms, self-efficacy, and vaccine cost. Despite the observed advantage of the TPB, findings revealed considerable overlap between the two theories and highlighted the importance of proximal versus distal predictors of health behavior.
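    Theory comparisons like the HBM-versus-TPB contrast above are typically run by fitting one logistic regression per theory and comparing discrimination. A minimal sketch with scikit-learn; the construct scores and the data-generating process are invented for illustration and are not from the study:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 739  # sample size borrowed from the abstract

    # Hypothetical standardized construct scores for each theory
    hbm = rng.normal(size=(n, 3))   # e.g. susceptibility, severity, barriers
    tpb = rng.normal(size=(n, 3))   # e.g. attitude, subjective norms, self-efficacy

    # Simulate vaccine uptake driven mostly by the TPB-like constructs
    logit = 1.2 * tpb[:, 1] + 0.9 * tpb[:, 2] + 0.3 * hbm[:, 0] - 0.5
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    auc = {}
    for name, X in [("HBM", hbm), ("TPB", tpb)]:
        model = LogisticRegression().fit(X, y)
        auc[name] = roc_auc_score(y, model.predict_proba(X)[:, 1])

    print(auc)  # with this data-generating process, TPB discriminates better
    ```

    In practice the two models would be compared on held-out data, or with AUC confidence intervals, rather than in-sample as in this toy version.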

  6. Comparing different approach and avoidance models of learning and personality in the prediction of work, university, and leadership outcomes.

    Science.gov (United States)

    Jackson, Chris J; Hobman, Elizabeth V; Jimmieson, Nerina L; Martin, Robin

    2009-05-01

    Jackson (2005) developed a hybrid model of personality and learning, known as the learning styles profiler (LSP), which was designed to span the biological, socio-cognitive, and experiential research foci of personality and learning research. The hybrid model argues that functional and dysfunctional learning outcomes can be best understood in terms of how cognitions and experiences control, discipline, and re-express the biologically based scale of sensation-seeking. In two studies with part-time workers undertaking tertiary education (N = 137 and 58), established models of approach and avoidance from each of the three research foci were compared with Jackson's hybrid model in their prediction of leadership, work, and university outcomes using self-report and supervisor ratings. Results showed that the hybrid model was generally optimal and, as hypothesized, that goal orientation mediated the effect of sensation-seeking on outcomes (work performance, university performance, leader behaviours, and counterproductive work behaviour). Our studies suggest that the hybrid model has considerable promise as a predictor of work and educational outcomes as well as dysfunctional outcomes.

  7. How to develop, validate, and compare clinical prediction models involving radiological parameters: Study design and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Han, Kyung Hwa; Choi, Byoung Wook [Dept. of Radiology, and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul (Korea, Republic of); Song, Ki Jun [Dept. of Biostatistics and Medical Informatics, Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2016-06-15

    Clinical prediction models are developed to calculate estimates of the probability of the presence/occurrence or future course of a particular prognostic or diagnostic outcome from multiple clinical or non-clinical parameters. Radiologic imaging techniques are being developed for accurate detection and early diagnosis of disease, which will eventually affect patient outcomes. Hence, results obtained by radiological means, especially diagnostic imaging, are frequently incorporated into a clinical prediction model as important predictive parameters, and the performance of the prediction model may improve in both diagnostic and prognostic settings. This article explains in a conceptual manner the overall process of developing and validating a clinical prediction model involving radiological parameters in relation to the study design and statistical methods. Collection of a raw dataset; selection of an appropriate statistical model; predictor selection; evaluation of model performance using a calibration plot, Hosmer-Lemeshow test and c-index; internal and external validation; comparison of different models using c-index, net reclassification improvement, and integrated discrimination improvement; and a method to create an easy-to-use prediction score system will be addressed. This article may serve as a practical methodological reference for clinical researchers.
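    The c-index mentioned above is the concordance probability: the chance that, for a random (event, non-event) pair, the event case received the higher predicted risk. A small illustrative implementation, not taken from the article:

    ```python
    from itertools import product

    def c_index(y, risk):
        """Concordance: fraction of (event, non-event) pairs in which the
        event case received the higher predicted risk; ties count 1/2."""
        pairs = [(ri, rj) for (yi, ri), (yj, rj) in product(zip(y, risk), repeat=2)
                 if yi == 1 and yj == 0]
        conc = sum(1.0 if ri > rj else 0.5 if ri == rj else 0.0 for ri, rj in pairs)
        return conc / len(pairs)

    # Toy example: two events (y=1) and two non-events (y=0)
    print(c_index([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
    ```

    A c-index of 0.5 corresponds to chance discrimination and 1.0 to perfect ranking; the same quantity equals the ROC AUC for binary outcomes.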

  8. Comparing 1D, 2D and 3D models for predicting root water uptake at the plant scale

    Science.gov (United States)

    de Willigen, Peter; van Dam, Jos; Heinen, Marius; Javaux, Mathieu

    2010-05-01

    Numerous modeling approaches exist to simulate soil water extraction by plant roots. They differ mainly in dimensionality (from 1-D to 3-D) and in the degree of detail in the root geometry. One-dimensional models consider 1-D root length density profiles, assume a uniform horizontal soil water distribution, and are very efficient in computation time. At the other extreme, very detailed 3-D approaches, which explicitly consider the root architecture and root water flow, may need more computation power and time. In between these two extreme cases, other approaches exist which may be more accurate and less computationally demanding. Our objective is to compare different modeling approaches and check how their implicit or explicit simplifications or assumptions affect the root water uptake (RWU) predictions. Four models were included in our comparison, all based on the Richards equation. The first is a 1-D model solving the Richards equation (SWAP) with the Feddes (1978) approach for RWU. The second is also based on SWAP but with root water uptake defined by the microscopic approach developed by de Jong van Lier (2008). The third, FUSSIM, solves the Richards equation in 2-D based on a 2-D distribution of root length density (RLD). The fourth, R-SWMS, is a 3-D model simulating water flow in the soil and in the roots based on a complete root architecture description. A 45-day maize root system was generated in 3-D and simplified into 2-D or 1-D RLD distributions. We simulated a constant uptake rate for 30 days with a 1-day rainfall at day 15 in three different soil types. We compared relative water uptake versus relative root length density profiles, and actual transpiration time series. On the one hand, the general trends of cumulative transpiration with time for the three soils were relatively similar for all models. On the other hand, some features, such as hydraulic lift, are simulated by both the FUSSIM and R-SWMS models while the other models do not
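    The Feddes (1978) approach cited above reduces potential root water uptake by a piecewise-linear stress function of soil pressure head. A minimal sketch; the threshold heads h1–h4 are illustrative values, not taken from the article:

    ```python
    def feddes_alpha(h, h1=-0.1, h2=-0.25, h3=-5.0, h4=-80.0):
        """Dimensionless uptake-reduction factor for pressure head h (m).
        Uptake is zero above h1 (anoxia) and below h4 (wilting), optimal
        between h2 and h3, and linearly reduced in between."""
        if h >= h1 or h <= h4:
            return 0.0
        if h2 >= h >= h3:   # heads are negative, so h2 > h3
            return 1.0
        if h > h2:          # between h1 and h2: anoxia ramp
            return (h1 - h) / (h1 - h2)
        return (h - h4) / (h3 - h4)  # between h3 and h4: drought ramp

    print(feddes_alpha(-1.0))   # optimal range → 1.0
    print(feddes_alpha(-80.0))  # at the wilting point → 0.0
    ```

    The actual RWU sink in a model like SWAP multiplies this factor by the potential uptake distributed over the root length density profile.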

  9. Comparing large-scale hydrological model predictions with observed streamflow in the Pacific Northwest: effects of climate and groundwater

    Science.gov (United States)

    Mohammad Safeeq; Guillaume S. Mauger; Gordon E. Grant; Ivan Arismendi; Alan F. Hamlet; Se-Yeun Lee

    2014-01-01

    Assessing uncertainties in hydrologic models can improve accuracy in predicting future streamflow. Here, simulated streamflows using the Variable Infiltration Capacity (VIC) model at coarse (1/16°) and fine (1/120°) spatial resolutions were evaluated against observed streamflows from 217 watersheds. In...

  10. Soil erosion model predictions using parent material/soil texture-based parameters compared to using site-specific parameters

    Science.gov (United States)

    R. B. Foltz; W. J. Elliot; N. S. Wagenbrenner

    2011-01-01

    Forested areas disturbed by access roads produce large amounts of sediment. One method to predict erosion and, hence, manage forest roads is the use of physically based soil erosion models. A perceived advantage of a physically based model is that it can be parameterized at one location and applied at another location with similar soil texture or geological parent...

  11. Predicting human papillomavirus vaccine uptake in young adult women: Comparing the Health Belief Model and Theory of Planned Behavior

    Science.gov (United States)

    Gerend, Mary A.; Shepherd, Janet E.

    2012-01-01

    Background Although theories of health behavior have guided thousands of studies, relatively few studies have compared these theories against one another. Purpose The purpose of the current study was to compare two classic theories of health behavior—the Health Belief Model (HBM) and the Theory of Planned Behavior (TPB)—in their prediction of human papillomavirus (HPV) vaccination. Methods After watching a gain-framed, loss-framed, or control video, women (N=739) ages 18–26 completed a survey assessing HBM and TPB constructs. HPV vaccine uptake was assessed ten months later. Results Although the message framing intervention had no effect on vaccine uptake, support was observed for both the TPB and HBM. Nevertheless, the TPB consistently outperformed the HBM. Key predictors of uptake included subjective norms, self-efficacy, and vaccine cost. Conclusions Despite the observed advantage of the TPB, findings revealed considerable overlap between the two theories and highlighted the importance of proximal versus distal predictors of health behavior. PMID:22547155

  12. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture.

    Science.gov (United States)

    Vallejo, Roger L; Leeds, Timothy D; Gao, Guangtu; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Fragomeni, Breno O; Wiens, Gregory D; Palti, Yniv

    2017-02-01

    Previously, we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative that enables exploitation of within-family genetic variation. We compared three GS models [single-step genomic best linear unbiased prediction (ssGBLUP), weighted ssGBLUP (wssGBLUP), and BayesB] to predict genomic-enabled breeding values (GEBV) for BCWD resistance in a commercial rainbow trout population, and compared the accuracy of GEBV to traditional estimates of breeding values (EBV) from a pedigree-based BLUP (P-BLUP) model. We also assessed the impact of sampling design on the accuracy of GEBV predictions. For these comparisons, we used BCWD survival phenotypes recorded on 7893 fish from 102 families, of which 1473 fish from 50 families had genotypes [57 K single nucleotide polymorphism (SNP) array]. Naïve siblings of the training fish (n = 930 testing fish) were genotyped to predict their GEBV and mated to produce 138 progeny testing families. In the following generation, 9968 progeny were phenotyped to empirically assess the accuracy of GEBV predictions made on their non-phenotyped parents. The accuracy of GEBV from all tested GS models was substantially higher than that of the P-BLUP model EBV. The highest increase in accuracy relative to the P-BLUP model was achieved with BayesB (97.2 to 108.8%), followed by wssGBLUP at iterations 2 (94.4 to 97.1%) and 3 (88.9 to 91.2%) and ssGBLUP (83.3 to 85.3%). Reducing the training sample size to n = ~1000 had no negative impact on the accuracy (0.67 to 0.72), but with n = ~500 the accuracy dropped to 0.53 to 0.61 if the training and testing fish were full-sibs, and even substantially lower, to 0.22 to 0.25, when they were not full-sibs. Using progeny performance data, we showed that the accuracy of genomic predictions is substantially higher
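    As a toy illustration of genomic prediction of the kind compared above, one can simulate SNP genotypes and estimate breeding values with ridge regression, a simple analogue of GBLUP. All sizes, effect distributions, and the heritability here are invented, and accuracy is measured as the correlation between predicted and true breeding values:

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(42)
    n, p = 600, 300                      # individuals, SNP markers (toy sizes)
    X = rng.binomial(2, 0.5, size=(n, p)).astype(float)  # genotypes coded 0/1/2
    beta = rng.normal(0, 0.1, size=p)    # true marker effects
    tbv = X @ beta                       # true breeding values
    y = tbv + rng.normal(0, tbv.std(), size=n)  # phenotypes, heritability ~0.5

    train, test = slice(0, 400), slice(400, None)
    model = Ridge(alpha=100.0).fit(X[train], y[train])
    gebv = model.predict(X[test])        # genomic-enabled breeding values

    accuracy = np.corrcoef(gebv, tbv[test])[0, 1]
    print(f"prediction accuracy r = {accuracy:.2f}")
    ```

    A pedigree-based model would assign the same EBV to all full-sibs; the marker-based predictor differentiates sibs, which is the within-family variation the abstract refers to.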

  13. Sequence-based prediction of protein-binding sites in DNA: comparative study of two SVM models.

    Science.gov (United States)

    Park, Byungkyu; Im, Jinyong; Tuvshinjargal, Narankhuu; Lee, Wook; Han, Kyungsook

    2014-11-01

    As many structures of protein-DNA complexes have become known in the past years, several computational methods have been developed to predict DNA-binding sites in proteins. However, the inverse problem (i.e., predicting protein-binding sites in DNA) has received much less attention. One reason is that the differences between the interaction propensities of nucleotides are much smaller than those between amino acids. Another reason is that DNA exhibits less diverse sequence patterns than protein. Therefore, predicting protein-binding DNA nucleotides is much harder than predicting DNA-binding amino acids. We computed the interaction propensity (IP) of nucleotide triplets with amino acids using an extensive dataset of protein-DNA complexes, and developed two support vector machine (SVM) models that predict protein-binding nucleotides from sequence data alone. One SVM model predicts protein-binding nucleotides using DNA sequence data alone, and the other predicts protein-binding nucleotides using both DNA and protein sequences. In a 10-fold cross-validation with 1519 DNA sequences, the SVM model that uses DNA sequence data only predicted protein-binding nucleotides with an accuracy of 67.0%, an F-measure of 67.1%, and a Matthews correlation coefficient (MCC) of 0.340. With an independent dataset of 181 DNAs that were not used in training, it achieved an accuracy of 66.2%, an F-measure of 66.3% and an MCC of 0.324. The other SVM model, which uses both DNA and protein sequences, achieved an accuracy of 69.6%, an F-measure of 69.6%, and an MCC of 0.383 in a 10-fold cross-validation with 1519 DNA sequences and 859 protein sequences. With an independent dataset of 181 DNAs and 143 proteins, it showed an accuracy of 67.3%, an F-measure of 66.5% and an MCC of 0.329. Both in cross-validation and independent testing, the second SVM model, which used both DNA and protein sequence data, showed better performance than the first model, which used DNA sequence data only. To the best of
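    The evaluation loop described above (k-fold cross-validated accuracy, F-measure, and MCC for an SVM classifier) can be sketched with scikit-learn. The features here are random stand-ins for the nucleotide-triplet interaction propensities used in the paper:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_predict
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

    # Synthetic stand-in for per-nucleotide feature vectors (e.g. triplet IPs)
    X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                               random_state=0)

    # 10-fold cross-validated predictions from an RBF-kernel SVM
    pred = cross_val_predict(SVC(kernel="rbf", C=1.0), X, y, cv=10)

    print(f"accuracy  = {accuracy_score(y, pred):.3f}")
    print(f"F-measure = {f1_score(y, pred):.3f}")
    print(f"MCC       = {matthews_corrcoef(y, pred):.3f}")
    ```

    MCC is the metric of choice in the abstract because, unlike raw accuracy, it stays near zero for uninformative classifiers even on imbalanced data.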

  14. Comparative assessment for future prediction of urban water environment using WEAP model: A case study of Kathmandu, Manila and Jakarta

    Science.gov (United States)

    Kumar, Pankaj; Yoshifumi, Masago; Ammar, Rafieiemam; Mishra, Binaya; Fukushi, Ken

    2017-04-01

    Uncontrolled release of pollutants, increasingly extreme weather, rapid urbanization and poor governance pose a serious threat to sustainable water resource management in developing urban spaces. Considering that half of the world's mega-cities are in Asia and the Pacific, where 1.7 billion people lack access to improved water and sanitation, water security through proper management is both an increasing concern and a critical need. This research work gives a brief glimpse of the predicted future water environment in the Bagmati, Pasig and Ciliwung rivers in three cities, viz. Kathmandu, Manila and Jakarta, respectively. A hydrological model is used here to foresee the collective impacts of rapid population growth due to urbanization, as well as climate change, on unmet demand and water quality in the near future, by 2030. All three rivers are a major source of water for different uses, viz. domestic, industrial, agricultural and recreational, but uncontrolled withdrawal and sewage disposal have caused deterioration of the water environment in the recent past. The Water Evaluation and Planning (WEAP) model was used to simulate future river water pollution scenarios using four indicators: Dissolved Oxygen (DO), Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD) and Nitrate (NO3). The simulated water quality and unmet demand for the year 2030, when compared with the reference year, clearly indicate that not only does water quality deteriorate but unmet demand also increases over time. This suggests that current initiatives and policies for water resource management are not sufficient, and hence immediate and inclusive action through transdisciplinary research is needed.
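    Not the WEAP internals, but the DO–BOD coupling such river-quality simulations represent is classically captured by the Streeter–Phelps oxygen-sag equation. A hedged sketch with invented coefficients:

    ```python
    import math

    def oxygen_deficit(t, L0=10.0, D0=2.0, kd=0.35, ka=0.7):
        """Streeter-Phelps DO deficit (mg/L) at travel time t (days) for
        initial BOD L0, initial deficit D0, deoxygenation rate kd and
        reaeration rate ka (1/day). All parameter values are illustrative."""
        return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
               + D0 * math.exp(-ka * t)

    # The critical (maximum) deficit occurs where deoxygenation and
    # reaeration balance; locate it on a fine time grid
    deficits = [(t / 10, oxygen_deficit(t / 10)) for t in range(0, 101)]
    t_crit, d_crit = max(deficits, key=lambda p: p[1])
    print(f"maximum DO deficit ≈ {d_crit:.2f} mg/L at t ≈ {t_crit:.1f} d")
    ```

    The sag point is where a BOD load hits DO hardest downstream of a discharge, which is why DO and BOD appear together among the abstract's four indicators.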

  15. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks

    DEFF Research Database (Denmark)

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie

    2015-01-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used ... psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort.

  16. Comparative Analysis of Local Control Prediction Using Different Biophysical Models for Non-Small Cell Lung Cancer Patients Undergoing Stereotactic Body Radiotherapy

    Directory of Open Access Journals (Sweden)

    Bao-Tian Huang

    2017-01-01

    Full Text Available Purpose. The consistency of local control (LC) predictions from different biophysical models for stereotactic body radiotherapy (SBRT) treatment of lung cancer is unclear. This study aims to compare the results calculated from different models using treatment planning data. Materials and Methods. Treatment plans were designed for 17 patients diagnosed with primary non-small cell lung cancer (NSCLC) using 5 different fraction schemes. The Martel model, the Ohri model, and the Tai model were used to predict the 2-year LC value. The Gucken model, the Santiago model, and the Tai model were employed to estimate the 3-year LC data. Results. We found that the employed models yielded markedly different LC predictions, except for the Gucken and Santiago models, which exhibited quite similar 3-year LC data. The predicted 2-year and 3-year LC values in the different models were associated not only with the dose normalization but also with the employed fraction schemes. The greatest difference predicted by different models was up to 15.0%. Conclusions. Our results show that different biophysical models influence the LC prediction and that the difference is correlated not only with the dose normalization but also with the employed fraction schemes.
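    Fraction-scheme effects like those above are commonly summarized via the linear-quadratic biologically effective dose, BED = n·d·(1 + d/(α/β)). A quick sketch; the schemes and α/β = 10 Gy are illustrative, not taken from this study:

    ```python
    def bed(n_fractions, dose_per_fraction, alpha_beta=10.0):
        """Linear-quadratic biologically effective dose in Gy."""
        return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

    # Three lung-SBRT-style fraction schemes (illustrative)
    for n, d in [(3, 20.0), (4, 12.0), (5, 10.0)]:
        print(f"{n} x {d:g} Gy -> BED10 = {bed(n, d):.1f} Gy")
    ```

    Schemes with the same physical dose can differ widely in BED, which is one reason LC predictions diverge when models handle fractionation differently.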

  17. Somatic growth of mussels Mytilus edulis in field studies compared to predictions using BEG, DEB, and SFG models

    Science.gov (United States)

    Larsen, Poul S.; Filgueira, Ramón; Riisgård, Hans Ulrik

    2014-04-01

    Prediction of somatic growth of blue mussels, Mytilus edulis, based on the data from 2 field-growth studies of mussels in suspended net-bags in Danish waters was made by 3 models: the bioenergetic growth (BEG), the dynamic energy budget (DEB), and the scope for growth (SFG). Here, the standard BEG model has been expanded to include the temperature dependence of filtration rate and respiration and an ad hoc modification to ensure a smooth transition to zero ingestion as chlorophyll a (chl a) concentration approaches zero, both guided by published data. The first 21-day field study was conducted at nearly constant environmental conditions with a mean chl a concentration of C = 2.7 μg L−1, and the observed monotonous growth in the dry weight of soft parts was best predicted by DEB while BEG and SFG models produced lower growth. The second 165-day field study was affected by large variations in chl a and temperature, and the observed growth varied accordingly, but nevertheless, DEB and SFG predicted monotonous growth in good agreement with the mean pattern while BEG mimicked the field data in response to observed changes in chl a concentration and temperature. The general features of the models were that DEB produced the best average predictions, SFG mostly underestimated growth, whereas only BEG was sensitive to variations in chl a concentration and temperature. DEB and SFG models rely on the calibration of the half-saturation coefficient to optimize the food ingestion function term to that of observed growth, and BEG is independent of observed actual growth as its predictions solely rely on the time history of the local chl a concentration and temperature.
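    The half-saturation food-ingestion term mentioned above is typically a saturating (Michaelis–Menten / Holling type II) function of chl a concentration. A hedged sketch; the parameter values are invented, with k_half being the coefficient that DEB/SFG models calibrate against observed growth:

    ```python
    def ingestion_rate(chl, i_max=1.0, k_half=1.5):
        """Saturating ingestion rate as a function of chl a (ug/L).
        i_max: maximum ingestion rate (arbitrary units); k_half:
        half-saturation coefficient (ug/L)."""
        return i_max * chl / (k_half + chl)

    print(ingestion_rate(1.5))   # at chl = k_half the rate is i_max / 2 → 0.5
    print(ingestion_rate(15.0))  # well above k_half, approaches i_max
    ```

    The smooth transition to zero ingestion at vanishing chl a that the expanded BEG model adds is exactly the low-concentration limb of a curve of this shape.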

  18. Somatic growth of mussels Mytilus edulis in field studies compared to predictions using BEG, DEB, and SFG models

    DEFF Research Database (Denmark)

    Larsen, Poul Scheel; Filgueira, Ramón; Riisgård, Hans Ulrik

    2014-01-01

    Prediction of somatic growth of blue mussels, Mytilus edulis, based on the data from 2 field-growth studies of mussels in suspended net-bags in Danish waters was made by 3 models: the bioenergetic growth (BEG), the dynamic energy budget (DEB), and the scope for growth (SFG). Here, the standard BEG model has been expanded to include the temperature dependence of filtration rate and respiration and an ad hoc modification to ensure a smooth transition to zero ingestion as chlorophyll a (chl a) concentration approaches zero, both guided by published data. The first 21-day field study was conducted at nearly constant environmental conditions with a mean chl a concentration of C = 2.7 μg L−1, and the observed monotonous growth in the dry weight of soft parts was best predicted by DEB while BEG and SFG models produced lower growth. The second 165-day field study was affected by large variations in chl

  19. Staging of liver fibrosis in chronic hepatitis B patients with a composite predictive model:A comparative study

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    AIM: To evaluate the efficacy of 6 noninvasive liver fibrosis models and to identify the most valuable model for the prediction of liver fibrosis stage in chronic hepatitis B (CHB) patients. METHODS: Seventy-eight CHB patients were consecutively enrolled in this study. Liver biopsy was performed and blood serum was obtained at admission. Histological diagnosis was made according to the METAVIR system. Significant fibrosis was defined as a stage score ≥ 2, severe fibrosis as a stage score ≥ 3. The diagnostic accuracy of ...

  20. When should we expect early bursts of trait evolution in comparative data? Predictions from an evolutionary food web model.

    Science.gov (United States)

    Ingram, T; Harmon, L J; Shurin, J B

    2012-09-01

    Conceptual models of adaptive radiation predict that competitive interactions among species will result in an early burst of speciation and trait evolution followed by a slowdown in diversification rates. Empirical studies often show early accumulation of lineages in phylogenetic trees, but usually fail to detect early bursts of phenotypic evolution. We use an evolutionary simulation model to assemble food webs through adaptive radiation, and examine patterns in the resulting phylogenetic trees and species' traits (body size and trophic position). We find that when foraging trade-offs result in food webs where all species occupy integer trophic levels, lineage diversity and trait disparity are concentrated early in the tree, consistent with the early burst model. In contrast, in food webs in which many omnivorous species feed at multiple trophic levels, high levels of turnover of species' identities and traits tend to eliminate the early burst signal. These results suggest testable predictions about how the niche structure of ecological communities may be reflected by macroevolutionary patterns.
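    Under a constant-rate Brownian model, expected trait disparity grows linearly in time, whereas under an early-burst model the rate decays exponentially, so disparity is front-loaded early in the clade's history. A small sketch of the expected-variance curves; the parameter values are illustrative:

    ```python
    import math

    def expected_disparity(t, sigma2=1.0, r=0.0):
        """Expected trait variance accumulated by time t under Brownian
        motion with rate sigma2 * exp(r * t); r = 0 is constant-rate BM,
        r < 0 an early burst (integral of the rate from 0 to t)."""
        if r == 0.0:
            return sigma2 * t
        return sigma2 * (math.exp(r * t) - 1.0) / r

    T = 10.0
    for label, r in [("constant-rate BM", 0.0), ("early burst", -0.2)]:
        half = expected_disparity(T / 2, r=r) / expected_disparity(T, r=r)
        print(f"{label}: fraction of final disparity reached by T/2 = {half:.2f}")
    ```

    Empirical tests of the early-burst hypothesis compare exactly these signatures: a front-loaded disparity-through-time curve versus the linear accumulation expected under constant-rate Brownian motion.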

  1. Wave modelling as a proxy for seagrass ecological modelling: Comparing fetch and process-based predictions for a bay and reef lagoon

    Science.gov (United States)

    Callaghan, David P.; Leon, Javier X.; Saunders, Megan I.

    2015-02-01

    The distribution, abundance, behaviour, and morphology of marine species is affected by spatial variability in the wave environment. Maps of wave metrics (e.g. significant wave height Hs, peak energy wave period Tp, and benthic wave orbital velocity URMS) are therefore useful for predictive ecological models of marine species and ecosystems. A number of techniques are available to generate maps of wave metrics, with varying levels of complexity in terms of input data requirements, operator knowledge, and computation time. Relatively simple "fetch-based" models are generated using geographic information system (GIS) layers of bathymetry and dominant wind speed and direction. More complex, but computationally expensive, "process-based" models are generated using numerical models such as the Simulating Waves Nearshore (SWAN) model. We generated maps of wave metrics based on both fetch-based and process-based models and asked whether predictive performance in models of benthic marine habitats differed. Predictive models of seagrass distribution for Moreton Bay, Southeast Queensland, and Lizard Island, Great Barrier Reef, Australia, were generated using maps based on each type of wave model. For Lizard Island, performance of the process-based wave maps was significantly better for describing the presence of seagrass, based on Hs, Tp, and URMS. Conversely, for the predictive model of seagrass in Moreton Bay, based on benthic light availability and Hs, there was no difference in performance using the maps of the different wave metrics. For predictive models where wave metrics are the dominant factor determining ecological processes it is recommended that process-based models be used. Our results suggest that for models where wave metrics provide secondarily useful information, either fetch- or process-based models may be equally useful.
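    The benthic wave orbital velocity referenced above follows from linear wave theory once the dispersion relation is solved for the wavenumber. A hedged sketch using Newton iteration; the sea-state inputs are illustrative:

    ```python
    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def wavenumber(T, h):
        """Solve the linear dispersion relation w^2 = g*k*tanh(k*h) for k."""
        w = 2 * math.pi / T
        k = w * w / G                      # deep-water initial guess
        for _ in range(50):                # Newton iteration
            f = G * k * math.tanh(k * h) - w * w
            df = G * (math.tanh(k * h) + k * h / math.cosh(k * h) ** 2)
            k -= f / df
        return k

    def orbital_velocity_amplitude(Hs, T, h):
        """Near-bed orbital velocity amplitude, Ub = pi*Hs / (T*sinh(k*h))."""
        k = wavenumber(T, h)
        return math.pi * Hs / (T * math.sinh(k * h))

    # Illustrative sea state: Hs = 1 m, T = 8 s, depth 10 m
    print(f"{orbital_velocity_amplitude(1.0, 8.0, 10.0):.2f} m/s")
    ```

    Fetch-based and process-based wave models differ in how they estimate Hs and T at each grid cell; the conversion to near-bed orbital velocity is the same linear-theory step in both.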

  2. Comparative Protein Structure Modeling Using MODELLER.

    Science.gov (United States)

    Webb, Benjamin; Sali, Andrej

    2016-06-20

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. © 2016 by John Wiley & Sons, Inc.

  3. Comparing translational population-PBPK modelling of brain microdialysis with bottom-up prediction of brain-to-plasma distribution in rat and human.

    Science.gov (United States)

    Ball, Kathryn; Bouzom, François; Scherrmann, Jean-Michel; Walther, Bernard; Declèves, Xavier

    2014-11-01

    The prediction of brain extracellular fluid (ECF) concentrations in human is a potentially valuable asset during drug development as it can provide the pharmacokinetic input for pharmacokinetic-pharmacodynamic models. This study aimed to compare two translational modelling approaches that can be applied at the preclinical stage of development in order to simulate human brain ECF concentrations. A population-PBPK model of the central nervous system was developed based on brain microdialysis data, and the model parameters were translated to their corresponding human values to simulate ECF and brain tissue concentration profiles. In parallel, the PBPK modelling software Simcyp was used to simulate human brain tissue concentrations, via the bottom-up prediction of brain tissue distribution using two different sets of mechanistic tissue composition-based equations. The population-PBPK and bottom-up approaches gave similar predictions of total brain concentrations in both rat and human, while only the population-PBPK model was capable of accurately simulating the rat ECF concentrations. The choice of PBPK model must therefore depend on the purpose of the modelling exercise, the in vitro and in vivo data available and knowledge of the mechanisms governing the membrane permeability and distribution of the drug.

  4. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
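    Wind-power availability studies of this kind typically combine a fitted wind-speed distribution (often Weibull) with the cubic power relation P = ½ρAv³Cp. A small sketch; all parameter values are illustrative, not the Goldstone figures:

    ```python
    import numpy as np

    def wind_power(v, rho=1.225, area=100.0, cp=0.4):
        """Power (W) extracted from wind speed v (m/s) for rotor area (m^2),
        air density rho (kg/m^3), and power coefficient cp."""
        return 0.5 * rho * area * cp * v ** 3

    # Hourly wind-speed samples from a Weibull distribution (shape k = 2 is
    # the Rayleigh special case; the scale factor is invented)
    rng = np.random.default_rng(1)
    speeds = 7.0 * rng.weibull(2.0, size=24 * 365)

    mean_power_kw = wind_power(speeds).mean() / 1e3
    print(f"wind_power(10) = {wind_power(10.0):.0f} W")
    print(f"estimated mean power ≈ {mean_power_kw:.1f} kW")
    ```

    Because power scales with v³, the mean power over a year of sampled speeds is well above the power at the mean speed, which is why sampled or correlated speed series matter for realistic yield estimates.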

  5. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts, such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. Understanding is still at an early stage regarding how this so-called 'machine intelligence' will evolve, and therefore how current, relatively sophisticated predictive models will respond to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity, not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  6. A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part II: an illustrative example

    Directory of Open Access Journals (Sweden)

    Giomarelli Pierpaolo

    2007-11-01

    Background: Popular predictive models for estimating morbidity probability after heart surgery are compared critically in a unitary framework. The study is divided into two parts. In the first part, modelling techniques and the intrinsic strengths and weaknesses of different approaches were discussed from a theoretical point of view. In this second part, the performances of the same models are evaluated in an illustrative example. Methods: Eight models were developed: Bayes linear and quadratic models, a k-nearest neighbour model, a logistic regression model, the Higgins and direct scoring systems, and two feed-forward artificial neural networks with one and two layers. Cardiovascular, respiratory, neurological, renal, infectious and hemorrhagic complications were defined as morbidity. Training and testing sets of 545 cases each were used. The optimal set of predictors was chosen from a collection of 78 preoperative, intraoperative and postoperative variables by a stepwise procedure. Discrimination and calibration were evaluated by the area under the receiver operating characteristic curve and the Hosmer-Lemeshow goodness-of-fit test, respectively. Results: The scoring systems and the logistic regression model required the largest set of predictors, while the Bayesian and k-nearest neighbour models were much more parsimonious. On the testing data, all models showed acceptable discrimination capacities; however, the Bayes quadratic model, using only three predictors, provided the best performance. All models showed satisfactory generalization ability: again the Bayes quadratic model exhibited the best generalization, while the artificial neural networks and scoring systems gave the worst results. Finally, poor calibration was obtained with the scoring systems, the k-nearest neighbour model and the artificial neural networks, while the Bayes (after recalibration) and logistic regression models gave adequate results. Conclusion: Although all the predictive models showed acceptable …
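    The two performance criteria used in this record, discrimination via the area under the ROC curve and calibration via the Hosmer-Lemeshow statistic, can be computed from scratch. A minimal sketch; the 545 predicted probabilities and outcomes below are simulated (from a well-calibrated model), not the study's data.

```python
import numpy as np

def roc_auc(y, p):
    """AUC as the Mann-Whitney probability that a random positive case
    receives a higher predicted risk than a random negative case."""
    pos, neg = p[y == 1], p[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow chi-square over deciles of predicted risk (calibration)."""
    chi2 = 0.0
    for idx in np.array_split(np.argsort(p), groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return chi2

# Simulated testing set of 545 cases, with outcomes drawn from the
# predicted probabilities themselves (i.e. a well-calibrated model).
rng = np.random.default_rng(0)
p = rng.uniform(size=545)
y = (rng.uniform(size=545) < p).astype(int)
print(round(roc_auc(y, p), 3), round(hosmer_lemeshow(y, p), 1))
```

    Because the outcomes are generated from the probabilities, the AUC lands near its theoretical value for this setup and the Hosmer-Lemeshow statistic stays small, indicating good calibration.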

  7. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    … paper, we will present an introduction to the theory and application of MPC with Matlab codes written to … model predictive control, linear systems, discrete-time systems, … and then compute very rapidly for this open-loop control …
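    A minimal sketch of the receding-horizon idea this record introduces, in Python rather than Matlab: the finite-horizon open-loop problem for a linear discrete-time system is condensed into an unconstrained least-squares problem, solved, and only the first input is applied before re-solving. The double-integrator dynamics, horizon and weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# x+ = A x + B u for a discrete double integrator; N-step horizon, weights Q, R.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
N, Q, R = 10, np.eye(2), 0.1 * np.eye(1)

# Condensed prediction matrices: stacked states X = Phi x0 + Gamma U.
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        Gamma[2 * i:2 * i + 2, j:j + 1] = np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
H = Gamma.T @ Qbar @ Gamma + Rbar      # Hessian of the condensed cost

x = np.array([[5.0], [0.0]])
for t in range(30):
    # Solve the open-loop optimal control problem over the horizon ...
    U = np.linalg.solve(H, -Gamma.T @ Qbar @ Phi @ x)
    # ... apply only the first move, then re-solve at the next sample.
    x = A @ x + B @ U[:1]
print(float(np.linalg.norm(x)))        # state driven toward the origin
```

    With input or state constraints, the same H and Gamma matrices would feed a quadratic-programming solver instead of the plain linear solve.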

  8. Predicting dynamics and rheology of blood flow: A comparative study of multiscale and low-dimensional models of red blood cells

    Science.gov (United States)

    Pan, Wenxiao; Fedosov, Dmitry A.; Caswell, Bruce; Karniadakis, George Em

    2011-01-01

    We compare the predictive capability of two mathematical models for red blood cells (RBCs), focusing on blood flow in capillaries and arterioles. Both RBC models, as well as their corresponding blood flows, are based on the dissipative particle dynamics (DPD) method, a coarse-grained molecular dynamics approach. The first model employs a multiscale description of the RBC (MS-RBC), with its membrane represented by hundreds or even thousands of DPD particles connected by springs into a triangular network, in combination with out-of-plane elastic bending resistance. Extra dissipation within the network accounts for membrane viscosity, while the characteristic biconcave RBC shape is achieved by imposition of constraints for constant membrane area and constant cell volume. The second model is based on a low-dimensional description (LD-RBC) constructed as a closed torus-like ring of only 10 large DPD colloidal particles. They are connected into a ring by worm-like chain (WLC) springs combined with bending resistance. The LD-RBC model can be fitted to represent the entire range of nonlinear elastic deformations as measured by optical tweezers for healthy RBCs and for RBCs infected in malaria. MS-RBC suspensions model the dynamics and rheology of blood flow accurately for any vessel size, but this approach is computationally expensive for vessel diameters above 100 microns. Surprisingly, the much more economical suspensions of LD-RBCs also capture the blood flow dynamics and rheology accurately, except for small vessels of size comparable to the RBC diameter. In particular, the LD-RBC suspensions are shown to properly capture the experimental data for the apparent viscosity of blood and its cell-free layer (CFL) in tube flow. Taken together, these findings suggest a hierarchical approach in modeling blood flow in the arterial tree, whereby the MS-RBC model should be employed for capillaries and arterioles below 100 microns, the LD-RBC model for arterioles, and the continuum description for arteries.
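    The WLC springs mentioned in this record are commonly written in the Marko-Siggia interpolation form. A sketch; the persistence length, temperature factor kT and contour length below are illustrative SI values, not the paper's fitted spring parameters.

```python
import numpy as np

def wlc_force(x, L_max, persistence=1.0e-8, kT=4.1e-21):
    """Marko-Siggia worm-like-chain force for extension x of a spring with
    contour length L_max; zero at rest, diverging as x approaches L_max."""
    r = x / L_max                       # fractional extension, 0 <= r < 1
    return (kT / persistence) * (1.0 / (4.0 * (1.0 - r) ** 2) - 0.25 + r)

L = 2.0e-7                              # assumed spring contour length, m
ext = np.array([0.0, 0.5 * L, 0.9 * L])
print(wlc_force(ext, L))                # force stiffens sharply near full extension
```

    The divergence near full extension is what keeps the spring network (and hence the membrane) from overstretching in the coarse-grained simulation.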

  9. Armodafinil and modafinil in patients with excessive sleepiness associated with shift work disorder: a pharmacokinetic/pharmacodynamic model for predicting and comparing their concentration-effect relationships.

    Science.gov (United States)

    Darwish, Mona; Bond, Mary; Ezzet, Farkad

    2012-09-01

    Armodafinil, the longer lasting R-isomer of racemic modafinil, improves wakefulness in patients with excessive sleepiness associated with shift work disorder (SWD). Pharmacokinetic studies suggest that armodafinil achieves higher plasma concentrations than modafinil late in a dose interval following equal oral doses. Pooled Multiple Sleep Latency Test (MSLT) data from 2 randomized, double-blind, placebo-controlled trials in 463 patients with SWD, 1 with armodafinil 150 mg/d and 1 with modafinil 200 mg/d (both administered around 2200 h before night shifts), were used to build a pharmacokinetic/pharmacodynamic model. Predicted plasma drug concentrations were obtained by developing and applying a population pharmacokinetic model using nonlinear mixed-effects modeling. Armodafinil 200 mg produced a plasma concentration above the EC50 (4.6 µg/mL) for 9 hours, whereas modafinil 200 mg did not exceed the EC50. Consequently, armodafinil produced greater increases in predicted placebo-subtracted MSLT times of 0.5-1 minute (up to 10 hours after dosing) compared with modafinil. On a milligram-to-milligram basis, armodafinil 200 mg consistently increased wakefulness more than modafinil 200 mg, including times late in the 8-hour shift.
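    The kind of concentration-effect relationship described here can be illustrated with an Emax link on top of a one-compartment oral PK profile. A sketch only: ka, ke, dose/V and Emax below are hypothetical stand-ins, with only the EC50 of 4.6 µg/mL taken from the abstract.

```python
import numpy as np

# Hypothetical one-compartment oral PK (Bateman function) with an Emax
# pharmacodynamic link; all parameters except EC50 are illustrative.
ka, ke, dose_over_V = 1.2, 0.05, 12.0       # 1/h, 1/h, ug/mL
EC50, Emax = 4.6, 3.0                       # ug/mL, max effect (min of MSLT gain)

t = np.arange(0, 14.25, 0.25)               # hours after a 22:00 dose
conc = dose_over_V * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))
effect = Emax * conc / (EC50 + conc)        # Emax model (Hill coefficient 1)

# Time above EC50, as used in the abstract to compare the two drugs.
hours_above = 0.25 * np.sum(conc > EC50)
print(hours_above)
```

    Comparing two such simulated profiles at their respective parameter sets is exactly how "hours above EC50" summaries like the one quoted above are obtained.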

  10. A Comparative Analysis of Reynolds-Averaged Navier-Stokes Model Predictions for Rayleigh-Taylor Instability and Mixing with Constant and Complex Accelerations

    Science.gov (United States)

    Schilling, Oleg

    2016-11-01

    Two-, three- and four-equation, single-velocity, multicomponent Reynolds-averaged Navier-Stokes (RANS) models, based on the turbulent kinetic energy dissipation rate or lengthscale, are used to simulate At = 0.5 Rayleigh-Taylor turbulent mixing with constant and complex accelerations. The constant acceleration case is inspired by the Cabot and Cook (2006) DNS, and the complex acceleration cases are inspired by the unstable/stable and unstable/neutral cases simulated using DNS (Livescu, Wei & Petersen 2011) and the unstable/stable/unstable case simulated using ILES (Ramaprabhu, Karkhanis & Lawrie 2013). The four-equation models couple equations for the mass flux a and the negative density-specific volume correlation b to the K-ε or K-L equations, while the three-equation models use a two-fluid algebraic closure for b. The lengthscale-based models are also applied with no buoyancy production in the L equation to explore the consequences of neglecting this term. Predicted mixing widths, turbulence statistics, fields, and turbulent transport equation budgets are compared among these models to identify similarities and differences in the turbulence production, dissipation and diffusion physics represented by the closures used in these models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  11. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nominal …

  12. Nominal Model Predictive Control

    OpenAIRE

    Grüne, Lars

    2014-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nominal …

  13. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  14. Prediction of bending moment resistance of screw connected joints in plywood members using regression models and comparison with commercial medium density fiberboard (MDF) and particleboard

    Directory of Open Access Journals (Sweden)

    Sadegh Maleki

    2014-11-01

    This study aimed at predicting the bending moment resistance of screwed joints (coarse and fine thread) in plywood using regression models. The thickness of the member was 19 mm, and results were compared with medium density fiberboard (MDF) and particleboard of 18 mm thickness. Two types of screws were used: coarse and fine thread drywall screws with nominal diameters of 6, 8 and 10 mm and lengths of 3.5, 4 and 5 cm respectively, and sheet metal screws with diameters of 8 and 10 mm and a length of 4 cm. The results showed that the bending moment resistance of the screwed joints increased with screw diameter and penetration depth. Screw length was found to have a larger influence on bending moment resistance than screw diameter. Bending moment resistance with coarse thread drywall screws was higher than with fine thread drywall screws. The highest bending moment resistance (71.76 N.m) was observed in joints made with coarse screws 5 mm in diameter with 28 mm depth of penetration. The lowest bending moment resistance (12.08 N.m) was observed in joints with fine screws of 3.5 mm diameter and 9 mm penetration. Furthermore, bending moment resistance in plywood was higher than in medium density fiberboard (MDF) and particleboard. Finally, it was found that the ultimate bending moment resistance of a plywood joint can be predicted by the formula Wc = 0.189×D^0.726×P^0.577 for coarse thread drywall screws and Wf = 0.086×D^0.942×P^0.704 for fine ones, where D is the diameter and P the penetration depth. Analysis of variance of the experimental and predicted data showed that the developed models provide a fair approximation of the actual experimental measurements.
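    A power-law model of the form W = a·D^b·P^c, as reported in this record, can be fitted by taking logarithms and running ordinary least squares: ln W = ln a + b ln D + c ln P. A sketch; the (D, P) design points are made up, and the responses are generated from the coarse-thread coefficients quoted above rather than measured.

```python
import numpy as np

# Hypothetical design points: screw diameter D and penetration depth P in mm.
D = np.array([6.0, 8.0, 10.0, 6.0, 8.0, 10.0])
P = np.array([9.0, 18.0, 28.0, 28.0, 9.0, 18.0])
W = 0.189 * D**0.726 * P**0.577        # responses generated from the paper's model

# Log-linear least squares: columns are [1, ln D, ln P].
X = np.column_stack([np.ones_like(D), np.log(D), np.log(P)])
coef, *_ = np.linalg.lstsq(X, np.log(W), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(round(a, 3), round(b, 3), round(c, 3))   # prints 0.189 0.726 0.577
```

    With noisy experimental data the same regression yields estimated coefficients plus residuals for the analysis of variance mentioned in the abstract.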

  15. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  16. Pedestrian Path Prediction with Recursive Bayesian Filters: A Comparative Study

    NARCIS (Netherlands)

    Schneider, N.; Gavrila, D.M.

    2013-01-01

    In the context of intelligent vehicles, we perform a comparative study on recursive Bayesian filters for pedestrian path prediction at short time horizons (< 2 s). We consider Extended Kalman Filters (EKF) based on single dynamical models and Interacting Multiple Models (IMM) combining several such models.

  17. Pedestrian Path Prediction with Recursive Bayesian Filters: A Comparative Study

    NARCIS (Netherlands)

    Schneider, N.; Gavrila, D.M.

    2013-01-01

    In the context of intelligent vehicles, we perform a comparative study on recursive Bayesian filters for pedestrian path prediction at short time horizons (< 2 s). We consider Extended Kalman Filters (EKF) based on single dynamical models and Interacting Multiple Models (IMM) combining several such models.

  18. Predicting dynamics and rheology of blood flow: A comparative study of multiscale and low-dimensional models of red blood cells.

    Science.gov (United States)

    Pan, Wenxiao; Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em

    2011-09-01

    We compare the predictive capability of two mathematical models for red blood cells (RBCs) focusing on blood flow in capillaries and arterioles. Both RBC models as well as their corresponding blood flows are based on the dissipative particle dynamics (DPD) method, a coarse-grained molecular dynamics approach. The first model employs a multiscale description of the RBC (MS-RBC), with its membrane represented by hundreds or even thousands of DPD-particles connected by springs into a triangular network in combination with out-of-plane elastic bending resistance. Extra dissipation within the network accounts for membrane viscosity, while the characteristic biconcave RBC shape is achieved by imposition of constraints for constant membrane area and constant cell volume. The second model is based on a low-dimensional description (LD-RBC) constructed as a closed torus-like ring of only 10 large DPD colloidal particles. They are connected into a ring by worm-like chain (WLC) springs combined with bending resistance. The LD-RBC model can be fitted to represent the entire range of nonlinear elastic deformations as measured by optical-tweezers for healthy and for infected RBCs in malaria. MS-RBC suspensions model the dynamics and rheology of blood flow accurately for any vessel size, but this approach is computationally expensive for vessel diameters above 100 μm. Surprisingly, the much more economical suspensions of LD-RBCs also capture the blood flow dynamics and rheology accurately except for small-size vessels comparable to RBC diameter. In particular, the LD-RBC suspensions are shown to properly capture the experimental data for the apparent viscosity of blood and its cell-free layer (CFL) in tube flow.
    Taken together, these findings suggest a hierarchical approach in modeling blood flow in the arterial tree, whereby the MS-RBC model should be employed for capillaries and arterioles below 100 μm, the LD-RBC model for arterioles, and the continuum description for arteries.

  19. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  20. A prospective study comparing the predictions of doctors versus models for treatment outcome of lung cancer patients: A step toward individualized care and shared decision making

    NARCIS (Netherlands)

    C. Oberije (Cary); G.I. Nalbantov (Georgi); A.T. den Dekker (Alexander); L. Boersma (Liesbeth); J.H. Borger (Jacques); B. Reymen (Bart); A. van Baardwijk (Angela); R. Wanders (Rinus); D.K.M. de Ruysscher (Dirk); E.W. Steyerberg (Ewout); A.M.C. Dingemans (Anne-Marie); P. Lambin (Philippe)

    2014-01-01

    Background: Decision Support Systems, based on statistical prediction models, have the potential to change the way medicine is practiced, but their application is currently hampered by the astonishing lack of impact studies. Showing the theoretical benefit of using these models could …

  1. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Background/Aim: The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screening of populations at risk. Identifying individuals at high risk should allow targeted screening and follow-up of those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for the identification and differentiation of individuals at risk. Methods: This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identifying patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer-Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. A melanoma risk score (MRS) based on the outcome of the LR model was presented. Results: The LR model showed that the following risk factors were associated with melanoma: use of sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those who sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi: OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (1 to 10 dysplastic naevi: OR = 2.672; 95% CI 1.572-4.540; more than 10: OR = 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi.
    Red hair, phototype I and large congenital naevi were …

  2. Comparing root architectural models

    Science.gov (United States)

    Schnepf, Andrea; Javaux, Mathieu; Vanderborght, Jan

    2017-04-01

    Plant roots play an important role in several soil processes (Gregory 2006). Root architecture development determines the sites in soil where roots provide input of carbon and energy and take up water and solutes. However, root architecture is difficult to determine experimentally when grown in opaque soil. Thus, root architectural models have been widely used and further developed into functional-structural models that are able to simulate the fate of water and solutes in the soil-root system (Dunbabin et al. 2013). Still, a systematic comparison of the different root architectural models is missing. In this work, we focus on discrete root architecture models in which roots are described by connected line segments. These models differ (a) in their model concepts, such as describing the distance between branches by a prescribed distance (inter-nodal distance) or by a prescribed time interval, and (b) in the implementation of the same concept, such as the time step size, the spatial discretization along the root axes, or the way stochasticity of parameters such as root growth direction, growth rate, branch spacing and branching angles is treated. Based on the example of two such different root models, the root growth module of R-SWMS and RootBox, we show the impact of these differences on the simulated root architecture and on aggregated information computed from these detailed simulation results, taking into account the stochastic nature of those models. References: Dunbabin, V.M., Postma, J.A., Schnepf, A., Pagès, L., Javaux, M., Wu, L., Leitner, D., Chen, Y.L., Rengel, Z., Diggle, A.J. (2013) Modelling root-soil interactions using three-dimensional models of root growth, architecture and function. Plant and Soil, 372 (1-2), pp. 93-124. Gregory (2006) Roots, rhizosphere and soil: the route to a better understanding of soil science? European Journal of Soil Science 57: 2-12.

  3. The Best Prediction Model for Trauma Outcomes of the Current Korean Population: a Comparative Study of Three Injury Severity Scoring Systems

    Directory of Open Access Journals (Sweden)

    Kyoungwon Jung

    2016-08-01

    Background: Injury severity scoring systems that quantify and predict trauma outcomes have not been established in Korea. This study was designed to determine the best system for use in the Korean trauma population. Methods: We collected and analyzed the data from trauma patients admitted to our institution from January 2010 to December 2014. The Injury Severity Score (ISS), Revised Trauma Score (RTS), and Trauma and Injury Severity Score (TRISS) were calculated based on the data from the enrolled patients. The area under the receiver operating characteristic (ROC) curve (AUC) for the prediction ability of each scoring system was obtained, and a pairwise comparison of ROC curves was performed. Additionally, cut-off values were estimated to predict mortality, and the corresponding accuracy, positive predictive value, and negative predictive value were obtained. Results: A total of 7,120 trauma patients (6,668 blunt and 452 penetrating injuries) were enrolled in this study. The AUCs of the ISS, RTS, and TRISS were 0.866, 0.894, and 0.942, respectively, and the prediction ability of the TRISS was significantly better than that of the others (p < 0.001). The cut-off value of the TRISS was 0.9082, with a sensitivity of 81.9% and specificity of 92.0%; mortality was predicted with an accuracy of 91.2%, and its positive predictive value was the highest at 46.8%. Conclusions: The results of our study are based on data from one institution and suggest that the TRISS is the best prediction model of trauma outcomes in the current Korean population. Further study is needed with more data from multiple centers in Korea.
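    The cut-off statistics reported in this record (sensitivity, specificity, accuracy, PPV) follow from a 2×2 confusion table once a threshold is placed on the probability-of-survival score. A sketch; the synthetic TRISS-like scores, Beta distributions and 5% mortality rate are assumptions for illustration, not the study cohort.

```python
import numpy as np

def confusion_stats(died, ps, cutoff):
    """Sensitivity, specificity, accuracy and PPV when a probability of
    survival (Ps) below the cutoff predicts mortality (TRISS convention)."""
    pred_death = ps < cutoff
    tp = np.sum(pred_death & died)      # predicted death, died
    fn = np.sum(~pred_death & died)     # predicted survival, died
    tn = np.sum(~pred_death & ~died)    # predicted survival, survived
    fp = np.sum(pred_death & ~died)     # predicted death, survived
    return (tp / (tp + fn), tn / (tn + fp),
            (tp + tn) / len(died), tp / (tp + fp))

# Synthetic cohort: ~5% mortality, survivors clustered at high Ps.
rng = np.random.default_rng(1)
died = rng.uniform(size=7120) < 0.05
ps = np.where(died, rng.beta(2, 2, 7120), rng.beta(20, 1, 7120))
sens, spec, acc, ppv = confusion_stats(died, ps, 0.9082)
print(round(sens, 3), round(spec, 3), round(acc, 3), round(ppv, 3))
```

    Note how the PPV stays modest even with good sensitivity and specificity: with ~5% mortality, most patients flagged at the cutoff are still survivors.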

  4. A comparative study on improved Arrhenius-type and artificial neural network models to predict high-temperature flow behaviors in 20MnNiMo alloy.

    Science.gov (United States)

    Quan, Guo-zheng; Yu, Chun-tang; Liu, Ying-ying; Xia, Yu-feng

    2014-01-01

    The stress-strain data of 20MnNiMo alloy were collected from a series of hot compressions on a Gleeble-1500 thermal-mechanical simulator in the temperature range of 1173-1473 K and strain rate range of 0.01-10 s^-1. Based on the experimental data, an improved Arrhenius-type constitutive model and an artificial neural network (ANN) model were established to predict the high-temperature flow stress of as-cast 20MnNiMo alloy. The accuracy and reliability of the improved Arrhenius-type model and the trained ANN model were further evaluated in terms of the correlation coefficient (R), the average absolute relative error (AARE), and the relative error (η). For the former, R and AARE were found to be 0.9954 and 5.26%, respectively; for the latter, 0.9997 and 1.02%. The relative errors (η) of the improved Arrhenius-type model and the ANN model were in the ranges of -39.99% to 35.05% and -3.77% to 16.74%, respectively. For the former, only 16.3% of the test data set possesses η-values within ±1%, while for the latter more than 79% does. The results indicate that the ANN model has a higher predictive ability than the improved Arrhenius-type constitutive model.
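    The two evaluation metrics used in this record, R and AARE, are simple to compute from measured and predicted values. A sketch; the flow-stress numbers below are toy values, not the study's measurements.

```python
import numpy as np

def correlation_r(measured, predicted):
    """Pearson correlation coefficient R between measured and predicted stress."""
    m = measured - measured.mean()
    p = predicted - predicted.mean()
    return np.sum(m * p) / np.sqrt(np.sum(m**2) * np.sum(p**2))

def aare(measured, predicted):
    """Average absolute relative error, in percent."""
    return 100.0 * np.mean(np.abs((measured - predicted) / measured))

# Toy flow-stress values in MPa, for illustration only.
measured = np.array([120.0, 150.0, 180.0, 210.0, 240.0])
predicted = np.array([118.0, 155.0, 176.0, 214.0, 235.0])
print(round(correlation_r(measured, predicted), 4))
print(round(aare(measured, predicted), 2))
```

    R measures how well the trend is tracked, while AARE penalises point-by-point deviations, which is why the two are reported together in the abstract.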

  5. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbours found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested on predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
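    The adaptive local-model idea in this record, predicting from dynamical neighbours in a reconstructed phase space, can be sketched in a few lines. The embedding dimension, delay and neighbour count are illustrative choices, and a sampled sine wave stands in for an observed surge record.

```python
import numpy as np

def embed(series, dim, tau):
    """Delay-coordinate embedding: row i is [x(i+(dim-1)tau), ..., x(i+tau), x(i)]."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack(
        [series[(dim - 1 - j) * tau:(dim - 1 - j) * tau + n] for j in range(dim)])

def local_predict(series, dim=3, tau=2, k=5):
    """Predict the next value as the average one-step evolution of the
    k nearest dynamical neighbours in the reconstructed phase space."""
    X = embed(series, dim, tau)
    query, history = X[-1], X[:-1]
    dists = np.linalg.norm(history - query, axis=1)
    nn = np.argsort(dists)[:k]
    # Neighbour row i corresponds to time index i + (dim-1)*tau, so its
    # one-step-ahead observation is series[i + (dim-1)*tau + 1].
    return np.mean(series[nn + (dim - 1) * tau + 1])

x = np.sin(np.linspace(0, 40, 400))     # stand-in for a water-level record
print(round(local_predict(x), 3))
```

    For multi-step forecasts the predicted value is appended to the series and the procedure repeated, which is the direct vs. iterated distinction the abstract mentions.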

  6. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest … computational resources. The identification method is suitable for predictive control.

  7. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour features …

  8. Comparing nonsynergistic gamma models with interaction models to predict growth of emetic Bacillus cereus when using combinations of pH and individual undissociated acids as growth-limiting factors.

    Science.gov (United States)

    Biesta-Peters, Elisabeth G; Reij, Martine W; Gorris, Leon G M; Zwietering, Marcel H

    2010-09-01

    A combination of multiple hurdles to limit microbial growth is frequently applied in foods to achieve an overall level of protection. Quantification of hurdle technology aims at identifying synergistic or multiplicative effects and is still being developed. The gamma hypothesis states that inhibitory environmental factors aiming at limiting microbial growth rates combine in a multiplicative manner rather than synergistically. Its validity was tested here with respect to the use of pH and various concentrations of undissociated acids, i.e., acetic, lactic, propionic, and formic acids, to control growth of Bacillus cereus in brain heart infusion broth. The key growth parameter considered was the maximum specific growth rate, μmax, as observed by determination of optical density. A variety of models from the literature describing the effects of various pH values and undissociated acid concentrations on μmax were fitted to experimental data sets and compared based on a predefined set of selection criteria, and the best models were selected. The cardinal model developed by Rosso (for pH dependency) and the model developed by Luong (for undissociated acid) were found to provide the best fit and were combined in a gamma model with good predictive performance. The introduction of synergy factors into the models was not able to improve the quality of the prediction. On the contrary, inclusion of synergy factors led to an overestimation of the growth boundary, with the inherent possibility of leading to underestimation of the risk under the conditions tested in this research.
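    A sketch of the multiplicative gamma hypothesis using the two model forms this record selects: Rosso's cardinal pH model and a Luong-type undissociated-acid term. All cardinal values, the MIC and the shape exponent below are illustrative assumptions, not the fitted B. cereus parameters.

```python
def gamma_pH(pH, pH_min=4.3, pH_opt=7.0, pH_max=9.5):
    """Rosso cardinal pH model: 1 at pH_opt, 0 at the growth limits."""
    if not pH_min < pH < pH_max:
        return 0.0
    num = (pH - pH_min) * (pH - pH_max)
    return num / (num - (pH - pH_opt) ** 2)

def gamma_acid(c, mic=20.0, alpha=1.0):
    """Luong-type inhibition by undissociated acid; c and MIC in mM (assumed)."""
    return max(0.0, 1.0 - c / mic) ** alpha

def mu_max(pH, c, mu_opt=2.0):
    """Gamma hypothesis: individual factors multiply, with no synergy term."""
    return mu_opt * gamma_pH(pH) * gamma_acid(c)

print(mu_max(7.0, 0.0))    # optimal conditions: returns mu_opt = 2.0
print(mu_max(5.5, 10.0))   # both hurdles reduce growth multiplicatively
```

    Testing the hypothesis then amounts to checking whether observed growth rates under combined hurdles match this product, or whether an extra synergy factor is needed.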

  9. How to Establish Clinical Prediction Models

    Directory of Open Access Journals (Sweden)

    Yong-ho Lee

    2016-03-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.

  10. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish …

  11. Comparing models of Red Knot population dynamics

    Science.gov (United States)

    McGowan, Conor

    2015-01-01

    Predictive population modeling contributes to our basic scientific understanding of population dynamics, but can also inform management decisions by evaluating alternative actions in virtual environments. Quantitative models mathematically reflect scientific hypotheses about how a system functions. In Delaware Bay, mid-Atlantic Coast, USA, to more effectively manage horseshoe crab (Limulus polyphemus) harvests and protect Red Knot (Calidris canutus rufa) populations, models are used to compare harvest actions and predict the impacts on crab and knot populations. Management has been chiefly driven by the core hypothesis that horseshoe crab egg abundance governs the survival and reproduction of migrating Red Knots that stopover in the Bay during spring migration. However, recently, hypotheses proposing that knot dynamics are governed by cyclical lemming dynamics garnered some support in data analyses. In this paper, I present alternative models of Red Knot population dynamics to reflect alternative hypotheses. Using 2 models with different lemming population cycle lengths and 2 models with different horseshoe crab effects, I project the knot population into the future under environmental stochasticity and parametric uncertainty with each model. I then compare each model's predictions to 10 yr of population monitoring from Delaware Bay. Using Bayes' theorem and model weight updating, models can accrue weight or support for one or another hypothesis of population dynamics. With 4 models of Red Knot population dynamics and only 10 yr of data, no hypothesis clearly predicted population count data better than another. The collapsed lemming cycle model performed best, accruing ~35% of the model weight, followed closely by the horseshoe crab egg abundance model, which accrued ~30% of the weight. The models that predicted no decline or stable populations (i.e. the 4-yr lemming cycle model and the weak horseshoe crab effect model) were the most weakly supported.
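    The model-weight updating described in this record follows directly from Bayes' theorem: each model's weight is multiplied by the likelihood it assigns to the observed count and the weights are renormalised. A sketch; the Gaussian observation model, the counts, and the fixed per-model predictions are illustrative simplifications, not the study's projections.

```python
import numpy as np

def update_weights(weights, predictions, observed, sd=2000.0):
    """One Bayes update: posterior weight ∝ prior weight × likelihood of the
    observed count under each model (Gaussian observation error assumed)."""
    lik = np.exp(-0.5 * ((observed - predictions) / sd) ** 2)
    post = weights * lik
    return post / post.sum()

models = ["4-yr lemming", "collapsed lemming", "strong crab", "weak crab"]
w = np.full(4, 0.25)                             # equal prior weights
predicted_counts = np.array([45000.0, 32000.0, 34000.0, 44000.0])
for observed in [36000.0, 33000.0, 31000.0]:     # three years of survey counts
    w = update_weights(w, predicted_counts, observed)
print(dict(zip(models, np.round(w, 3))))
```

    Models whose predictions sit closest to the observed counts accrue weight year by year, which is exactly how support shifts between the lemming and horseshoe crab hypotheses in the abstract.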

  12. Measures to summarize and compare the predictive capacity of markers.

    Science.gov (United States)

    Gu, Wen; Pepe, Margaret

    2009-10-01

    The predictive capacity of a marker in a population can be described using the population distribution of risk (Huang et al. 2007; Pepe et al. 2008a; Stern 2008). Virtually all standard statistical summaries of predictability and discrimination can be derived from it (Gail and Pfeiffer 2005). The goal of this paper is to develop methods for making inference about risk prediction markers using summary measures derived from the risk distribution. We describe some new clinically motivated summary measures and give new interpretations to some existing statistical measures. Methods for estimating these summary measures are described along with distribution theory that facilitates construction of confidence intervals from data. We show how markers and, more generally, how risk prediction models, can be compared using clinically relevant measures of predictability. The methods are illustrated by application to markers of lung function and nutritional status for predicting subsequent onset of major pulmonary infection in children suffering from cystic fibrosis. Simulation studies show that methods for inference are valid for use in practice.
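
The idea of summarizing a marker's predictive capacity through the population distribution of risk can be sketched as follows. The threshold and risk values are hypothetical toy data; the paper's actual estimators and confidence-interval machinery are more involved:

```python
def high_risk_fraction(risks, threshold):
    """Proportion of the population whose predicted risk exceeds a
    clinically motivated threshold -- one summary of the risk distribution."""
    return sum(r > threshold for r in risks) / len(risks)

def mean_risk_above(risks, threshold):
    """Average risk among the flagged high-risk group."""
    flagged = [r for r in risks if r > threshold]
    return sum(flagged) / len(flagged) if flagged else 0.0

# Toy population of predicted risks (probabilities of the event).
population_risks = [0.02, 0.05, 0.08, 0.15, 0.30, 0.45, 0.70]
print(high_risk_fraction(population_risks, 0.20))
print(round(mean_risk_above(population_risks, 0.20), 3))
```

Two risk prediction models can then be compared by evaluating such clinically relevant summaries on the risk distribution each one induces.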

  13. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  14. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  15. Modelling, controlling, predicting blackouts

    CERN Document Server

    Wang, Chengwei; Baptista, Murilo S

    2016-01-01

The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, wreaking havoc on human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...

  16. Melanoma Risk Prediction Models

    Science.gov (United States)

Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk, allowing for earlier or more frequent screening and counseling on behavioral changes to decrease risk.

  17. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production......The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations...

  18. Comparing different dynamic stall models

    Energy Technology Data Exchange (ETDEWEB)

    Holierhoek, J.G. [Unit Wind Energy, Energy research Centre of the Netherlands, ZG, Petten (Netherlands); De Vaal, J.B.; Van Zuijlen, A.H.; Bijl, H. [Aerospace Engineering, Delft University of Technology, Delft (Netherlands)

    2012-07-16

The dynamic stall phenomenon and its importance for load calculations and aeroelastic simulations are well known. Different models exist to capture the effect of dynamic stall; however, a systematic comparison is still lacking. To investigate whether any one model performs better than the others, three models were used to simulate the Ohio State University measurements and a set of data from the National Aeronautics and Space Administration Ames experimental study of dynamic stall, and the results were compared. These measurements were taken at conditions and for aerofoils that are typical for wind turbines, and the results are publicly available. The three selected dynamic stall models are the ONERA model, the Beddoes-Leishman model and the Snel model. The simulations show that there are still significant differences between measurements and models and that no model is significantly better than the others in all cases. Especially in the deep stall regime, the accuracy of each of the dynamic stall models is limited.

  19. Internal rib structure can be predicted using mathematical models: An anatomic study comparing the chest to a shell dome with application to understanding fractures.

    Science.gov (United States)

    Casha, Aaron R; Camilleri, Liberato; Manché, Alexander; Gatt, Ruben; Attard, Daphne; Gauci, Marilyn; Camilleri-Podesta, Marie-Therese; Mcdonald, Stuart; Grima, Joseph N

    2015-11-01

The human rib cage resembles a masonry dome in shape. Masonry domes have a particular construction that mimics their stress distribution. Rib cortical thickness and bone density were analyzed to determine whether the morphology of the rib cage is sufficiently similar to a shell dome for internal rib structure to be predicted mathematically. A finite element analysis (FEA) simulation was used to measure stresses on the internal and external surfaces of a chest-shaped dome. Inner and outer rib cortical thickness and bone density were measured in the mid-axillary lines of seven cadaveric rib cages using computerized tomography scanning. Paired t tests and Pearson correlation were used to relate cortical thickness and bone density to stress. FEA modeling showed that the stress was 82% higher on the internal than the external surface, with a gradual decrease in internal and external wall stresses from the base to the apex. The inner cortex was significantly more radio-dense at each rib level. The internal anatomical features of ribs, including the inner and outer cortical thicknesses and bone densities, are similar to the stress distribution in dome-shaped structures modeled using FEA computer simulations of a thick-walled dome pressure vessel. Fixation of rib fractures should include the stronger internal cortex. © 2015 Wiley Periodicals, Inc.

  20. Comparative analysis of QSAR models for predicting pK(a) of organic oxygen acids and nitrogen bases from molecular structure.

    Science.gov (United States)

    Yu, Haiying; Kühne, Ralph; Ebert, Ralf-Uwe; Schüürmann, Gerrit

    2010-11-22

For 1143 organic compounds comprising 580 oxygen acids and 563 nitrogen bases whose experimental pK(a) values span more than 17 units (from -5.00 to 12.23), the pK(a) prediction performances of ACD, SPARC, and two calibrations of a semiempirical quantum chemical (QC) AM1 approach have been analyzed. The overall root-mean-square errors (rms) for the acids are 0.41, 0.58 (0.42 without ortho-substituted phenols with intramolecular H-bonding), and 0.55, and for the bases are 0.65, 0.70, 1.17, and 1.27 for ACD, SPARC, and both QC methods, respectively. Method-specific performances are discussed in detail for six acid subsets (phenols and aromatic and aliphatic carboxylic acids with different substitution patterns) and nine base subsets (anilines; primary, secondary and tertiary amines; meta/para-substituted and ortho-substituted pyridines; pyrimidines; imidazoles; and quinolines). The results demonstrate an overall better performance for acids than for bases but also a substantial variation across subsets. For the overall best-performing ACD, rms ranges from 0.12 to 1.11 and 0.40 to 1.21 pK(a) units for the acid and base subsets, respectively. With regard to the squared correlation coefficient r², the results are 0.86 to 0.96 (acids) and 0.79 to 0.95 (bases) for ACD, 0.77 to 0.95 (acids) and 0.85 to 0.97 (bases) for SPARC, and 0.64 to 0.87 (acids) and 0.43 to 0.83 (bases) for the QC methods, respectively. Attention is paid to structural and method-specific causes of observed pitfalls. The significant subset dependence of the prediction performances suggests a consensus modeling approach.

  1. Predictive toxicology of cobalt ferrite nanoparticles: comparative in-vitro study of different cellular models using methods of knowledge discovery from data.

    Science.gov (United States)

    Horev-Azaria, Limor; Baldi, Giovanni; Beno, Delila; Bonacchi, Daniel; Golla-Schindler, Ute; Kirkpatrick, James C; Kolle, Susanne; Landsiedel, Robert; Maimon, Oded; Marche, Patrice N; Ponti, Jessica; Romano, Roni; Rossi, François; Sommer, Dieter; Uboldi, Chiara; Unger, Ronald E; Villiers, Christian; Korenstein, Rafi

    2013-07-29

Cobalt-ferrite nanoparticles (Co-Fe NPs) are attractive for nanotechnology-based therapies. Thus, exploring their effect on the viability of seven different cell lines representing different organs of the human body is highly important. The toxicological effects of Co-Fe NPs were studied by in-vitro exposure of A549 and NCIH441 cell lines (lung), precision-cut lung slices from rat, HepG2 cell line (liver), MDCK cell line (kidney), Caco-2 TC7 cell line (intestine), TK6 (lymphoblasts) and primary mouse dendritic cells. Toxicity was examined following exposure to Co-Fe NPs in the concentration range of 0.05-1.2 mM for 24 and 72 h, using Alamar blue, MTT and neutral red assays. Changes in oxidative stress were determined by a dichlorodihydrofluorescein diacetate based assay. Data analysis and predictive modeling of the obtained data sets were executed by employing methods of Knowledge Discovery from Data with emphasis on a decision tree model (J48). Different dose-response curves of cell viability were obtained for each of the seven cell lines upon exposure to Co-Fe NPs. An increase of oxidative stress was induced by Co-Fe NPs and found to be dependent on the cell type. A high linear correlation (R² = 0.97) was found between the toxicity of Co-Fe NPs and the extent of ROS generation following exposure to Co-Fe NPs. The algorithm we applied to model the observed toxicity belongs to a type of supervised classifier. The decision tree model yielded the following order with decreasing ranking parameter: NP concentration (as the most influential parameter), cell type (with the following hierarchy of cell sensitivity towards viability decrease: TK6 > Lung slices > NCIH441 > Caco-2 = MDCK > A549 > HepG2 = Dendritic) and time of exposure, where the highest-ranking parameter (NP concentration) provides the highest information gain with respect to toxicity. The validity of the chosen decision tree model J48 was established by yielding a higher accuracy than that

  2. Comparing Three Data Mining Methods to Predict Kidney Transplant Survival

    Science.gov (United States)

    Shahmoradi, Leila; Langarizadeh, Mostafa; Pourmand, Gholamreza; fard, Ziba Aghsaei; Borhani, Alireza

    2016-01-01

Introduction: One of the most important post-transplant complications is rejection. Survival analysis is one of the areas of medical prognosis, and data mining, as an effective approach, has the capacity to analyze and estimate outcomes in advance by discovering appropriate models in data. The present study aims at comparing the effectiveness of the C5.0 algorithm, neural networks and C&RTree for predicting kidney transplant survival before transplant. Method: To detect factors effective in predicting transplant survival, an information needs analysis was performed via a researcher-made questionnaire. A checklist was prepared and data from 513 kidney disease patient files were extracted from the Sina Urology Research Center. Following the CRISP methodology for data mining, IBM SPSS Modeler 14.2, the C5.0 and C&RTree algorithms and a neural network were used. Results: Body Mass Index (BMI), cause of renal dysfunction and duration of dialysis were evaluated in all three models as the most effective factors in transplant survival. The C5.0 algorithm, with the highest validity (96.77%), ranked first in estimating kidney transplant survival in patients, followed by the C&RTree (83.7%) and neural network (79.5%) models. Conclusion: Among the three models, the C5.0 algorithm was the top model with high validity, which confirms its strength in predicting survival. The most effective kidney transplant survival factors were detected in this study; therefore, duration of transplant survival (in years) can be determined considering the regulations set for a new sample with specific characteristics. PMID:28163356

  4. Incorporating the Johnson-Cook Constitutive Model and a Soft Computational Approach for Predicting the High-Temperature Flow Behavior of Sn-5Sb Solder Alloy: A Comparative Study for Processing Map Development

    Science.gov (United States)

    Vafaeenezhad, H.; Seyedein, S. H.; Aboutalebi, M. R.; Eivani, A. R.

    2016-09-01

    The high-temperature flow behavior of Sn-5Sb lead-free solder alloy has been investigated using isothermal hot compression experiments at 298 K to 400 K and strain rate between 0.0005 s-1 and 0.01 s-1. The flow stress under these test conditions was modeled using constitutive equations based on the Johnson-Cook (J-C) model and an artificial neural network (ANN). Three input factors, i.e., temperature, strain rate, and true strain, were incorporated into the network, and the flow stress was considered as the system output. One hidden layer was adopted in the simulations. Furthermore, a comparative study was carried out on the potential of the two proposed models to characterize the high-temperature flow behavior of this alloy. The capability of the models was assessed by comparing the simulation predictions using a correlation coefficient (R 2). The stresses predicted by both models presented good agreement with experimental results. In addition, it was found that the ANN model could predict the high-temperature deformation more precisely over the whole temperature and strain rate ranges. However, this is strongly dependent on the availability of extensive, high-quality data and characteristic variables.
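
The Johnson-Cook constitutive form referenced above multiplies a strain-hardening term, a strain-rate term, and a thermal-softening term. A sketch of the standard J-C equation follows; the parameter values are hypothetical placeholders, not the constants fitted for Sn-5Sb in the paper:

```python
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A, B, n, C, m,
                        ref_rate, T_ref, T_melt):
    """Johnson-Cook flow stress: (A + B*eps^n) strain hardening,
    (1 + C*ln(rate/ref_rate)) rate sensitivity, and (1 - T*^m)
    thermal softening with homologous temperature T*."""
    hardening = A + B * strain ** n
    rate_term = 1.0 + C * math.log(strain_rate / ref_rate)
    T_star = (T - T_ref) / (T_melt - T_ref)
    softening = 1.0 - T_star ** m
    return hardening * rate_term * softening

# Hypothetical parameters, evaluated inside the test ranges reported in
# the abstract (298-400 K, 0.0005-0.01 1/s).
sigma = johnson_cook_stress(strain=0.2, strain_rate=0.01, T=350.0,
                            A=30.0, B=15.0, n=0.4, C=0.05, m=1.5,
                            ref_rate=0.0005, T_ref=298.0, T_melt=505.0)
print(round(sigma, 2))
```

Unlike the ANN, which learns the stress surface directly from data, this closed form constrains the predicted temperature and rate dependence to the three factored terms.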

  6. A Comparative Study on Johnson Cook, Modified Zerilli-Armstrong and Arrhenius-Type Constitutive Models to Predict High-Temperature Flow Behavior of Ti-6Al-4V Alloy in α + β Phase

    Science.gov (United States)

    Cai, Jun; Wang, Kuaishe; Han, Yingying

    2016-03-01

    True stress and true strain values obtained from isothermal compression tests over a wide temperature range from 1,073 to 1,323 K and a strain rate range from 0.001 to 1 s-1 were employed to establish the constitutive equations based on Johnson Cook, modified Zerilli-Armstrong (ZA) and strain-compensated Arrhenius-type models, respectively, to predict the high-temperature flow behavior of Ti-6Al-4V alloy in α + β phase. Furthermore, a comparative study has been made on the capability of the three models to represent the elevated temperature flow behavior of Ti-6Al-4V alloy. Suitability of the three models was evaluated by comparing both the correlation coefficient R and the average absolute relative error (AARE). The results showed that the Johnson Cook model is inadequate to provide good description of flow behavior of Ti-6Al-4V alloy in α + β phase domain, while the predicted values of modified ZA model and the strain-compensated Arrhenius-type model could agree well with the experimental values except under some deformation conditions. Meanwhile, the modified ZA model could track the deformation behavior more accurately than other model throughout the entire temperature and strain rate range.
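
The two suitability metrics used above, the correlation coefficient R and the average absolute relative error (AARE), can be computed as in this generic sketch (the stress values are illustrative toy numbers, not the Ti-6Al-4V measurements):

```python
import math

def correlation_r(measured, predicted):
    """Pearson correlation coefficient between measured and predicted values."""
    n = len(measured)
    mm = sum(measured) / n
    mp = sum(predicted) / n
    cov = sum((a - mm) * (b - mp) for a, b in zip(measured, predicted))
    sm = math.sqrt(sum((a - mm) ** 2 for a in measured))
    sp = math.sqrt(sum((b - mp) ** 2 for b in predicted))
    return cov / (sm * sp)

def aare(measured, predicted):
    """Average absolute relative error, in percent."""
    return 100.0 / len(measured) * sum(
        abs((a - b) / a) for a, b in zip(measured, predicted))

# Toy flow-stress data (MPa) for one model's predictions.
meas = [120.0, 150.0, 180.0, 210.0]
pred = [118.0, 155.0, 176.0, 215.0]
print(round(correlation_r(meas, pred), 4), round(aare(meas, pred), 2))
```

Ranking candidate constitutive models by both metrics, as the abstract does, guards against a high R masking a systematic bias that AARE would expose.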

  7. Predictive models of forest dynamics.

    Science.gov (United States)

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  8. Mining Education Data to Predict Student's Retention: A comparative Study

    CERN Document Server

    Yadav, Surjeet Kumar; Pal, Saurabh

    2012-01-01

The main objective of higher education is to provide quality education to students. One way to achieve the highest level of quality in a higher education system is by discovering knowledge that predicts student enrolment in a course. This paper presents a data mining project to generate predictive models for student retention management. Given new records of incoming students, these predictive models can produce short, accurate prediction lists identifying the students most likely to need support from the student retention program. This paper examines the quality of the predictive models generated by the machine learning algorithms. The results show that some of the machine learning algorithms are able to establish effective predictive models from the existing student retention data.

  9. Precision Plate Plan View Pattern Predictive Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun

    2011-01-01

According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head and tail end predictive models was found and corrected. According to the numerical simulation results of 120 different conditions, a precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing plates rolled with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.

  10. Comparative genomics boosts target prediction for bacterial small RNAs.

    Science.gov (United States)

    Wright, Patrick R; Richter, Andreas S; Papenfort, Kai; Mann, Martin; Vogel, Jörg; Hess, Wolfgang R; Backofen, Rolf; Georg, Jens

    2013-09-10

    Small RNAs (sRNAs) constitute a large and heterogeneous class of bacterial gene expression regulators. Much like eukaryotic microRNAs, these sRNAs typically target multiple mRNAs through short seed pairing, thereby acting as global posttranscriptional regulators. In some bacteria, evidence for hundreds to possibly more than 1,000 different sRNAs has been obtained by transcriptome sequencing. However, the experimental identification of possible targets and, therefore, their confirmation as functional regulators of gene expression has remained laborious. Here, we present a strategy that integrates phylogenetic information to predict sRNA targets at the genomic scale and reconstructs regulatory networks upon functional enrichment and network analysis (CopraRNA, for Comparative Prediction Algorithm for sRNA Targets). Furthermore, CopraRNA precisely predicts the sRNA domains for target recognition and interaction. When applied to several model sRNAs, CopraRNA revealed additional targets and functions for the sRNAs CyaR, FnrS, RybB, RyhB, SgrS, and Spot42. Moreover, the mRNAs gdhA, lrp, marA, nagZ, ptsI, sdhA, and yobF-cspC were suggested as regulatory hubs targeted by up to seven different sRNAs. The verification of many previously undetected targets by CopraRNA, even for extensively investigated sRNAs, demonstrates its advantages and shows that CopraRNA-based analyses can compete with experimental target prediction approaches. A Web interface allows high-confidence target prediction and efficient classification of bacterial sRNAs.
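
The short seed pairing by which sRNAs recognize target mRNAs can be illustrated with a toy search for perfect seed complementarity. The sequences below are invented, and CopraRNA itself additionally scores hybridization energetics and phylogenetic conservation rather than exact matches:

```python
# Watson-Crick complements for RNA bases.
COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def revcomp(rna):
    """Reverse complement of an RNA sequence (5'->3')."""
    return "".join(COMP[b] for b in reversed(rna))

def seed_target_sites(srna_seed, mrna):
    """Positions in the mRNA that base-pair perfectly with the sRNA seed:
    a toy version of the short seed-pairing step in sRNA target search."""
    site = revcomp(srna_seed)
    return [i for i in range(len(mrna) - len(site) + 1)
            if mrna[i:i + len(site)] == site]

seed = "ACCUGA"                    # hypothetical 6-nt sRNA seed (5'->3')
mrna = "GGUCAGGUUCAGGUACCUGA"      # hypothetical mRNA fragment
print(seed_target_sites(seed, mrna))
```

Because seeds are short, such exact matches occur often by chance; this is why comparative evidence across genomes, as in CopraRNA, is needed to separate functional sites from noise.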

  11. Comparing Predictions and Outcomes : Theory and Application to Income Changes

    NARCIS (Netherlands)

    Das, J.W.M.; Dominitz, J.; van Soest, A.H.O.

    1997-01-01

    Household surveys often elicit respondents' intentions or predictions of future outcomes. The survey questions may ask respondents to choose among a selection of (ordered) response categories. If panel data or repeated cross-sections are available, predictions may be compared with realized outcomes.

  12. A comprehensive comparison of comparative RNA structure prediction approaches

    DEFF Research Database (Denmark)

    Gardner, P. P.; Giegerich, R.

    2004-01-01

    Background An increasing number of researchers have released novel RNA structure analysis and prediction algorithms for comparative approaches to structure prediction. Yet, independent benchmarking of these algorithms is rarely performed as is now common practice for protein-folding, gene-finding...

  13. Genetic network models: a comparative study

    Science.gov (United States)

    van Someren, Eugene P.; Wessels, Lodewyk F. A.; Reinders, Marcel J. T.

    2001-06-01

    Currently, the need arises for tools capable of unraveling the functionality of genes based on the analysis of microarray measurements. Modeling genetic interactions by means of genetic network models provides a methodology to infer functional relationships between genes. Although a wide variety of different models have been introduced so far, it remains, in general, unclear what the strengths and weaknesses of each of these approaches are and where these models overlap and differ. This paper compares different genetic modeling approaches that attempt to extract the gene regulation matrix from expression data. A taxonomy of continuous genetic network models is proposed and the following important characteristics are suggested and employed to compare the models: inferential power; predictive power; robustness; consistency; stability and computational cost. Where possible, synthetic time series data are employed to investigate some of these properties. The comparison shows that although genetic network modeling might provide valuable information regarding genetic interactions, current models show disappointing results on simple artificial problems. For now, the simplest models are favored because they generalize better, but more complex models will probably prevail once their bias is more thoroughly understood and their variance is better controlled.

  14. Predictions by the multimedia environmental fate model SimpleBox compared to field data: Intermedia concentration ratios of two phthalate esters

    NARCIS (Netherlands)

    Struijs J; Peijnenburg WJGM; ECO

    2003-01-01

The multimedia environmental fate model SimpleBox is applied to compute steady-state concentration ratios with the aim to harmonize environmental quality objectives for air, water, sediment and soil. In 1995 the Dutch Health Council recommended validation of the model. Several activities were initiated

  15. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

In recent decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity of five conditional heteroskedasticity models, considering eight different statistical probability distributions, using the Model Confidence Set procedure. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models are highly homogeneous in their predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions used.

  16. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    Science.gov (United States)

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Intertidal beach slope predictions compared to field data

    NARCIS (Netherlands)

    Madsen, A.J.; Plant, N.G.

    2001-01-01

    This paper presents a test of a very simple model for predicting beach slope changes. The model assumes that these changes are a function of both the incident wave conditions and the beach slope itself. Following other studies, we hypothesized that the beach slope evolves towards an equilibrium
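
The hypothesis that the beach slope evolves towards a wave-determined equilibrium can be sketched as a first-order relaxation, d(beta)/dt = k * (beta_eq - beta). The functional form, rate constant, and values below are assumptions for illustration, not the paper's calibrated model:

```python
def step_slope(beta, beta_eq, k, dt):
    """One explicit Euler step of d(beta)/dt = k * (beta_eq - beta):
    the intertidal slope relaxes toward its wave-determined equilibrium."""
    return beta + k * (beta_eq - beta) * dt

beta = 0.10          # current intertidal slope (hypothetical)
beta_eq = 0.06       # hypothetical equilibrium slope for given waves
k, dt = 0.2, 1.0     # relaxation rate (1/day) and time step (days)

history = [beta]
for _ in range(30):
    beta = step_slope(beta, beta_eq, k, dt)
    history.append(beta)
print(round(history[-1], 4))
```

In a fuller model, beta_eq and k would themselves depend on the incident wave conditions, so the slope chases a moving equilibrium rather than the fixed one used here.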

  19. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-03-01

A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.

  20. Risk assessment models in genetics clinic for array comparative genomic hybridization: Clinical information can be used to predict the likelihood of an abnormal result in patients.

    Science.gov (United States)

    Marano, Rachel M; Mercurio, Laura; Kanter, Rebecca; Doyle, Richard; Abuelo, Dianne; Morrow, Eric M; Shur, Natasha

    2013-03-01

Array comparative genomic hybridization (aCGH) testing can diagnose chromosomal microdeletions and duplications too small to be detected by conventional cytogenetic techniques. We need to consider which patients are more likely to receive a diagnosis from aCGH testing versus patients who have a lower likelihood and may benefit from broader genome-wide scanning. We retrospectively reviewed the charts of a population of 200 patients, 117 boys and 83 girls, who underwent aCGH testing in the Genetics Clinic at Rhode Island Hospital between 1 January 2008 and 31 December 2010. Data collected included sex, age at initial clinical presentation, aCGH result, history of seizures, autism, dysmorphic features, global developmental delay/intellectual disability, hypotonia and failure to thrive. aCGH analysis revealed abnormal results in 34 patients (17%) and variants of unknown significance in 24 (12%). Patients with three or more clinical diagnoses had a 25.0% incidence of abnormal aCGH findings, while patients with two or fewer clinical diagnoses had a 12.5% incidence of abnormal aCGH findings. Currently, we quote families a 10-30% chance of obtaining a diagnosis with aCGH testing. With increased clinical complexity, patients have an increased probability of having an abnormal aCGH result. With this, we can provide individualized risk estimates for each patient.

  1. PREDICT : model for prediction of survival in localized prostate cancer

    NARCIS (Netherlands)

    Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco

    2016-01-01

    Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed.Methods: From 1989 to 2008, 3383 patients were treated with I

  2. Statistical assessment of predictive modeling uncertainty

    Science.gov (United States)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto- and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area extending from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities, taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values with better statistical significance, and may help identify the best-fitting geophysical models more sharply.
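The effect of adding a model covariance to the χ2 statistic can be illustrated with toy numbers (the residuals and covariances below are invented for illustration; in the paper they come from fitted covariance functions and the GPS solution):

```python
import numpy as np

# Including the model covariance C_model alongside the data covariance
# C_data lowers the observed chi-squared for the same residual vector,
# since (C_data + C_model) dominates C_data in the positive-definite order.
rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, size=5)          # residuals: model minus GPS estimate
C_data = 0.5 * np.eye(5)                  # covariance of the GPS estimates
C_model = 0.8 * np.eye(5)                 # covariance of the model predictions

chi2_data_only = r @ np.linalg.solve(C_data, r)
chi2_combined = r @ np.linalg.solve(C_data + C_model, r)
assert chi2_combined < chi2_data_only     # combined covariance lowers chi^2
```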

  3. Evolution and physics in comparative protein structure modeling.

    Science.gov (United States)

    Fiser, András; Feig, Michael; Brooks, Charles L; Sali, Andrej

    2002-06-01

    From a physical perspective, the native structure of a protein is a consequence of physical forces acting on the protein and solvent atoms during the folding process. From a biological perspective, the native structure of proteins is a result of evolution over millions of years. Correspondingly, there are two types of protein structure prediction methods, de novo prediction and comparative modeling. We review comparative protein structure modeling and discuss the incorporation of physical considerations into the modeling process. A good starting point for achieving this aim is provided by comparative modeling by satisfaction of spatial restraints. Incorporation of physical considerations is illustrated by an inclusion of solvation effects into the modeling of loops.

  4. Minimalist models for proteins: a comparative analysis.

    Science.gov (United States)

    Tozzini, Valentina

    2010-08-01

    The last decade has witnessed a renewed interest in coarse-grained (CG) models for biopolymers, stimulated in part by the needs of modern molecular biology, which deals with nano- to micro-sized biomolecular systems and timescales longer than a microsecond. This combination of size and timescale is hard to access with atomistic simulations. Coarse graining the system is a route for overcoming these limits, but the ways of practically implementing it are many and varied, making the landscape of CG models vast and complex. In this paper, CG models are reviewed and their features, applications and performances compared. The analysis, restricted to proteins, focuses on minimalist models, namely those that reduce the number of degrees of freedom to a minimum without losing the ability to explicitly describe secondary structures. This class includes models using a single interacting center (bead), or a few, for each amino acid. Several issues emerge from this analysis. The difficulty in building these models lies in the need to combine transferability/predictive power with the capability of accurately reproducing structures. It is shown that these aspects can be optimized by carefully choosing the force field (FF) terms and functional forms, and by combining different parameterization procedures. In addition, in spite of the variety of minimalist models, regularities can be found in the parameter values and FF terms. These are outlined and schematically presented with the aid of a generic phase diagram of the polypeptide in parameter space and could, hopefully, serve as guidelines for developing minimalist models that incorporate the maximum possible level of predictive power and structural accuracy.

  5. Predictive Modeling of Cardiac Ischemia

    Science.gov (United States)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  6. ENSO Prediction using Vector Autoregressive Models

    Science.gov (United States)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al, 2012 BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are VAR(1) models, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e., 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows the model advancing one month at a time to perform at least as well for a 6-month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993 J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150-year cross-validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
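A VAR(L) fit reduces to least squares on lagged state vectors. The sketch below (synthetic data and toy dynamics, not the authors' EMR-equivalent implementation) stacks L prior states into one regressor vector and fits the lag coefficient matrices in one shot:

```python
import numpy as np

# Minimal VAR(L) fit by least squares on lagged states.
rng = np.random.default_rng(1)
T, n, L = 500, 3, 4                       # months, state dimension, lags
A_true = 0.8 * np.eye(n)                  # toy dynamics for synthetic data
x = np.zeros((T, n))
for t in range(1, T):
    x[t] = x[t - 1] @ A_true.T + 0.1 * rng.normal(size=n)

# Design matrix: row for time t holds [x_{t-1}, ..., x_{t-L}]
X = np.hstack([x[L - k - 1:T - k - 1] for k in range(L)])
Y = x[L:]
B, *_ = np.linalg.lstsq(X, Y, rcond=None) # stacked lag coefficient matrices

pred = X @ B                              # one-step-ahead predictions
skill = 1 - np.var(Y - pred) / np.var(Y)  # fraction of variance explained
```

Longer-lead forecasts are then obtained by advancing this one-step model month by month, as described above.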

  7. Numerical weather prediction model tuning via ensemble prediction system

    Science.gov (United States)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes for sub-grid-scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model's tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results from a tuning exercise with a top-end global NWP model are presented.
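The two-step loop described above, drawing ensemble parameters from a proposal and feeding back their relative merits, can be caricatured with a scalar parameter and a Gaussian likelihood. This is only a toy sketch of the idea, not the authors' EPPES implementation:

```python
import numpy as np

# Toy EPPES-like loop: each "ensemble member" runs with a parameter drawn
# from a proposal distribution; the proposal is re-weighted toward values
# that score a higher likelihood against verifying observations.
rng = np.random.default_rng(0)
theta_true = 2.0                           # "unknown" model parameter
mean, var = 0.0, 4.0                       # initial proposal distribution

for cycle in range(30):
    thetas = rng.normal(mean, np.sqrt(var), size=50)   # ensemble draws
    obs = theta_true + 0.1 * rng.normal(size=50)       # verifying observations
    loglik = -0.5 * (obs - thetas) ** 2 / 0.1 ** 2     # Gaussian likelihood
    w = np.exp(loglik - loglik.max())
    w /= w.sum()
    mean = np.sum(w * thetas)                          # feed back the merits
    var = max(np.sum(w * (thetas - mean) ** 2), 0.02)  # keep some exploration
```

The proposal mean drifts toward the true parameter value as the cycles accumulate, mirroring how EPPES detects wrongly specified parameters on-line.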

  8. Prediction of peptide bonding affinity: kernel methods for nonlinear modeling

    CERN Document Server

    Bergeron, Charles; Sundling, C Matthew; Krein, Michael; Katt, Bill; Sukumar, Nagamani; Breneman, Curt M; Bennett, Kristin P

    2011-01-01

    This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.

  9. A Comparative Study of Three Machine Learning Methods for Software Fault Prediction

    Institute of Scientific and Technical Information of China (English)

    WANG Qi; ZHU Jie; YU Bo

    2005-01-01

    The contribution of this paper is a comparison of three popular machine learning methods for software fault prediction: classification trees, neural networks, and case-based reasoning. First, three different classifiers are built based on these three approaches. Second, the three classifiers use the same product metrics as predictor variables to identify fault-prone components. Third, the prediction results are compared on two aspects: how good the prediction capabilities of these models are, and how well the models support understanding of the process represented by the data.
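A comparison of this shape is straightforward to reproduce on synthetic stand-ins for the product metrics. In the sketch below (invented data; k-nearest neighbours stands in for case-based reasoning, a common approximation), the three classifiers are trained on identical predictors and scored on a held-out set:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic "product metrics" and fault labels (illustrative only).
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "classification tree": DecisionTreeClassifier(random_state=0),
    "neural network": MLPClassifier(max_iter=2000, random_state=0),
    "case-based reasoning (k-NN)": KNeighborsClassifier(),
}
# Same predictors for every classifier; compare held-out accuracy.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
```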

  10. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    Science.gov (United States)

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

    Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.

  11. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...

  12. Pressure prediction model for compression garment design.

    Science.gov (United States)

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

    Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burning scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garments' manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose more design procedures in clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
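The Laplace's-law structure of such a model can be sketched as a small function. The formula and units below are assumptions for illustration (the paper's equation additionally handles bias angle, reduction factor, and nonlinear moduli): fabric tension per unit width T = E·ε·(A/width) on a cylinder of radius r = circumference/(2π) gives pressure P = T/r.

```python
import math

def predicted_pressure(circumference_m, area_m2, width_m, strain,
                       youngs_modulus_pa):
    """Sketch of a Laplace's-law garment pressure estimate (assumed form,
    not the paper's exact equation)."""
    tension_per_width = youngs_modulus_pa * strain * (area_m2 / width_m)
    radius = circumference_m / (2 * math.pi)
    return tension_per_width / radius      # pressure in Pa

# Halving the circumference (a smaller limb) doubles the predicted pressure,
# as Laplace's law requires. All inputs are hypothetical values.
p_arm = predicted_pressure(0.30, 5e-7, 0.05, 0.20, 5e6)
p_small = predicted_pressure(0.15, 5e-7, 0.05, 0.20, 5e6)
```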

  13. Predictive Model Assessment for Count Data

    Science.gov (United States)

    2007-09-05

    critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany, 1998–2002. We consider a recent suggestion by Baker and... [Figure 5 shows boxplots of various scores for the patent data count regressions; Table 1 lists four predictive models for larynx cancer counts in Germany, 1998–2002.]

  14. Strontium-90 Biokinetics from Simulated Wound Intakes in Non-human Primates Compared with Combined Model Predictions from National Council on Radiation Protection and Measurements Report 156 and International Commission on Radiological Protection Publication 67.

    Science.gov (United States)

    Allen, Mark B; Brey, Richard R; Gesell, Thomas; Derryberry, Dewayne; Poudel, Deepesh

    2016-01-01

    The goal of this study was to evaluate the predictive capabilities of the National Council on Radiation Protection and Measurements (NCRP) wound model coupled to the International Commission on Radiological Protection (ICRP) systemic model for 90Sr-contaminated wounds using non-human primate data. Studies were conducted on 13 macaque (Macaca mulatta) monkeys, each receiving one-time intramuscular injections of 90Sr solution. Urine and feces samples were collected up to 28 d post-injection and analyzed for 90Sr activity. Integrated Modules for Bioassay Analysis (IMBA) software was configured with default NCRP and ICRP model transfer coefficients to calculate predicted 90Sr intake via the wound based on the radioactivity measured in bioassay samples. The default parameters of the combined models produced adequate fits of the bioassay data, but maximum likelihood predictions of intake were overestimated by a factor of 1.0 to 2.9 when bioassay data were used as predictors. Skeletal retention was also over-predicted, suggesting an underestimation of the excretion fraction. Bayesian statistics and Monte Carlo sampling were applied using IMBA to vary the default parameters, producing updated transfer coefficients for individual monkeys that improved model fit and predicted intake and skeletal retention. The geometric means of the optimized transfer rates for the 11 cases were computed, and these optimized sample population parameters were tested on two independent monkey cases and on the 11 monkeys from which the optimized parameters were derived. The optimized model parameters did not improve the model fit in most cases, and the predicted skeletal activity produced improvements in three of the 11 cases. The optimized parameters improved the predicted intake in all cases but still over-predicted the intake by an average of 50%. The results suggest that the modified transfer rates were not always an improvement over the default NCRP and ICRP model values.

  15. Scaling and predictability in stock markets: a comparative study.

    Science.gov (United States)

    Zhang, Huishu; Wei, Jianrong; Huang, Jiping

    2014-01-01

    Most people who invest in stock markets want to be rich; thus, many technical methods have been created to beat the market. If one knows the predictability of the price series in different markets, it is easier to perform technical analysis, at least to some extent. Here we use one of the most basic sold-and-bought trading strategies to establish the profit landscape, and then calculate the parameters that characterize the strength of predictability. According to the analysis of scaling of the profit landscape, we find that Chinese individual stocks are harder to predict than US ones, and that individual stocks are harder to predict than indexes in both the Chinese and US stock markets. Since the Chinese (US) stock market is representative of emerging (developed) markets, our comparative study of the markets of these two countries is of potential value not only for conducting technical analysis, but also for understanding the physical mechanisms of different kinds of markets in terms of scaling.
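A basic sold-and-bought strategy of the kind used to build such a profit landscape can be sketched as a threshold rule. The rule and parameters below are an assumed, generic version (buy after a p% drop, sell after a q% rise), not necessarily the strategy the authors used; scanning (p, q) over a grid would map out one profit landscape:

```python
import numpy as np

def strategy_profit(prices, p=0.02, q=0.02):
    """Profit of a buy-after-p%-drop, sell-after-q%-rise threshold strategy,
    starting from one unit of cash (illustrative sketch)."""
    cash, shares, last_trade = 1.0, 0.0, prices[0]
    for price in prices[1:]:
        if shares == 0 and price <= last_trade * (1 - p):
            shares, cash, last_trade = cash / price, 0.0, price   # buy
        elif shares > 0 and price >= last_trade * (1 + q):
            cash, shares, last_trade = shares * price, 0.0, price # sell
    return cash + shares * prices[-1] - 1.0

# Synthetic geometric random-walk price series (no real market data).
rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(0.01 * rng.normal(size=1000)))
profit = strategy_profit(prices)
```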

  16. Scaling and predictability in stock markets: a comparative study.

    Directory of Open Access Journals (Sweden)

    Huishu Zhang

    Most people who invest in stock markets want to be rich; thus, many technical methods have been created to beat the market. If one knows the predictability of the price series in different markets, it is easier to perform technical analysis, at least to some extent. Here we use one of the most basic sold-and-bought trading strategies to establish the profit landscape, and then calculate the parameters that characterize the strength of predictability. According to the analysis of scaling of the profit landscape, we find that Chinese individual stocks are harder to predict than US ones, and that individual stocks are harder to predict than indexes in both the Chinese and US stock markets. Since the Chinese (US) stock market is representative of emerging (developed) markets, our comparative study of the markets of these two countries is of potential value not only for conducting technical analysis, but also for understanding the physical mechanisms of different kinds of markets in terms of scaling.

  17. Nonlinear chaotic model for predicting storm surges

    NARCIS (Netherlands)

    Siek, M.; Solomatine, D.P.

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.

  18. Evaluation of Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.

  19. Predictive modeling by the cerebellum improves proprioception.

    Science.gov (United States)

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  20. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...

  1. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided only marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
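The structure of the parsimonious self-report model (demographics, self-reported balance problems, prior falls, scored by AUC on a held-out set) can be sketched on synthetic data. All variables and coefficients below are invented for illustration; they are not the NHATS data or the paper's fitted model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for the self-reported predictors.
rng = np.random.default_rng(0)
n = 2000
age = rng.normal(75, 6, n)
balance_problem = rng.binomial(1, 0.3, n)
prior_fall = rng.binomial(1, 0.25, n)
logit = -4 + 0.03 * age + 0.8 * balance_problem + 1.1 * prior_fall
fall = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # simulated 1-year falls

# Fit on a development set, score AUC on a validation set.
X = np.column_stack([age, balance_problem, prior_fall])
model = LogisticRegression(max_iter=1000).fit(X[:1500], fall[:1500])
auc = roc_auc_score(fall[1500:], model.predict_proba(X[1500:])[:, 1])
```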

  3. Comparative Analysis of Dayside Reconnection Models in Global Magnetosphere Simulations

    CERN Document Server

    Komar, C M; Cassak, P A

    2015-01-01

    We test and compare a number of existing models predicting the location of magnetic reconnection at Earth's dayside magnetopause for various solar wind conditions. We employ robust image processing techniques to determine the locations where each model predicts reconnection to occur. The predictions are then compared to the magnetic separators, the magnetic field lines separating different magnetic topologies. The predictions are tested in distinct high-resolution global magnetohydrodynamic simulations with interplanetary magnetic field (IMF) clock angles ranging from 30 to 165 degrees, performed using the three-dimensional Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code with a uniform resistivity, although the described techniques can be applied generally to any self-consistent magnetosphere code. Additional simulations are carried out to test the location models' dependence on IMF strength and dipole tilt. We find that most of the models match large portions of the magnetic separators wh...

  4. Predicting Fault-Prone Modules: A Comparative Study

    Science.gov (United States)

    Jia, Hao; Shu, Fengdi; Yang, Ye; Wang, Qing

    Offshore and outsourced software development is a rapidly increasing trend in global software business environment. Predicting fault-prone modules in outsourced software product may allow both parties to establish mutually satisfactory, cost-effective testing strategies and product acceptance criteria, especially in iterative transitions. In this paper, based on industrial software releases data, we conduct an empirical study to compare ten classifiers over eight sets of code attributes, and provide recommendations to aid both the client and vendor to assess the products’ quality through defect prediction. Overall, a generally high accuracy is observed, which confirms the usefulness of the metric-based classification. Furthermore, two classification techniques, Random Forest and Bayesian Belief Network, outperform the others in terms of predictive accuracy; in more detail, the former is the most cost-effective and the latter is of the lowest fault-prone module escaping rate. Our study also concludes that code metrics including size, traditional complexity, and object-oriented complexity perform fairly well.

  5. Evaluating and comparing algorithms for respiratory motion prediction

    Science.gov (United States)

    Ernst, F.; Dürichen, R.; Schlaefer, A.; Schweikard, A.

    2013-06-01

    In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm—which is one of the algorithms currently used in the CyberKnife—is outperformed by all other methods. This is especially true in the case of the wLMS, the SVRpred, and the MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, the wLMS algorithm in more than 84% of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient
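The nLMS baseline discussed above is short enough to state directly. The sketch below is a generic normalized-LMS one-step-ahead predictor on an idealized respiratory trace; the filter length M, step size mu, and the sinusoidal signal are illustrative assumptions, not the CyberKnife's settings or clinical data:

```python
import numpy as np

def nlms_predict(signal, M=8, mu=0.5, eps=1e-6):
    """Normalized LMS one-step-ahead prediction (illustrative parameters)."""
    w = np.zeros(M)
    preds = np.zeros_like(signal)
    for t in range(M, len(signal)):
        x = signal[t - M:t][::-1]          # last M samples, newest first
        preds[t] = w @ x                   # predict the next sample
        e = signal[t] - preds[t]           # prediction error
        w += mu * e * x / (eps + x @ x)    # normalized gradient update
    return preds

t = np.arange(2000)
breathing = np.sin(2 * np.pi * t / 100)    # idealized respiratory trace
pred = nlms_predict(breathing)
# Relative RMS error after the filter has converged (cf. the paper's metric).
rms_rel = (np.sqrt(np.mean((breathing[500:] - pred[500:]) ** 2))
           / np.sqrt(np.mean(breathing[500:] ** 2)))
```

On real, irregular breathing traces this baseline degrades sharply, which is why the wLMS, MULIN, and SVRpred methods outperform it in the evaluation above.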

  6. Data Mining Applications: A comparative Study for Predicting Student's performance

    CERN Document Server

    Yadav, Surjeet Kumar; Pal, Saurabh

    2012-01-01

    Knowledge Discovery and Data Mining (KDD) is a multidisciplinary area focusing on methodologies for extracting useful knowledge from data, and there are several useful KDD tools for extracting this knowledge. This knowledge can be used to increase the quality of education, yet educational institutions do not generally apply any knowledge discovery process to these data. Data mining can be used for decision making in educational systems. A decision tree classifier is one of the most widely used supervised learning methods for data exploration, based on the divide-and-conquer technique. This paper discusses the use of decision trees in educational data mining. Decision tree algorithms are applied to students' past performance data to generate a model, which can then be used to predict students' performance. It helps in identifying dropouts and students who need special attention early, and allows the teacher to provide appropriate advising/counseling.
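The decision-tree workflow described above, training on past-performance attributes and validating the resulting model, can be sketched as follows. The features are synthetic stand-ins (the paper uses real student records):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-ins for past-performance attributes and pass/fail labels.
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           random_state=42)

# A shallow tree keeps the divide-and-conquer rules interpretable for
# teachers; cross-validation estimates how well it predicts new students.
tree = DecisionTreeClassifier(max_depth=4, random_state=42)
scores = cross_val_score(tree, X, y, cv=5)
```

The fitted tree's rules (e.g. via `sklearn.tree.export_text`) are what make this approach attractive for flagging at-risk students early.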

  7. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p

  8. Childhood asthma prediction models: a systematic review.

    Science.gov (United States)

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  9. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN), and Markov chain (MC), are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
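
    The Markov chain (MC) approach mentioned above projects a distribution over discrete condition states forward using a transition matrix estimated from inspections. A toy sketch with a hypothetical three-state matrix (the probabilities are illustrative, not calibrated to any survey data):

```python
def mc_step(dist, P):
    """One transition of a discrete Markov chain: row vector times matrix."""
    return [sum(dist[i] * P[i][j] for i in range(len(dist)))
            for j in range(len(P[0]))]

# Hypothetical condition states (good, fair, poor); each row sums to 1.
P = [
    [0.80, 0.15, 0.05],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],   # 'poor' is absorbing until maintenance resets it
]

dist = [1.0, 0.0, 0.0]    # all sections start in 'good'
for _ in range(5):        # project five inspection cycles ahead
    dist = mc_step(dist, P)
```

    After five cycles the model gives the expected share of the network in each condition state, which is exactly the kind of output an agency uses to budget maintenance when quantitative physical data are limited.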

  10. PREDICTING THE INTENTION TO USE INTERNET – A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Slaven Brumec

    2006-06-01

    This article focuses on an application of the Triandis Model to researching Internet usage and the intention to use the Internet. Unlike other TAM-based studies undertaken to date, the Triandis Model offers a sociological account of the interaction between various factors, particularly attitude, intention, and behavior. Structural Equation Modeling was used to assess the impact these factors have on the intention to use the Internet, in accordance with the relationships posited by the Triandis Model. The survey was administered to Croatian undergraduate students and employed individuals. The survey results are compared to the results of a similar survey carried out by two universities in Hong Kong.

  11. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters, and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs, and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
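
    The decomposition of the "uncertain" criterion into a squared-bias term plus a variance term can be illustrated with a toy Monte Carlo in which the prediction varies with parameter uncertainty. The distribution and its parameters below are invented for illustration only.

```python
import random

random.seed(0)

def msep_uncertain(y_true, mu, sigma, draws=200_000):
    """Monte Carlo estimate of E[(y - Yhat)^2] when the prediction Yhat
    varies with parameter uncertainty; here Yhat ~ Normal(mu, sigma).
    The estimate decomposes as squared bias + prediction variance."""
    preds = [random.gauss(mu, sigma) for _ in range(draws)]
    mean = sum(preds) / draws
    var = sum((p - mean) ** 2 for p in preds) / draws
    bias2 = (y_true - mean) ** 2
    return bias2 + var

# Analytic value here is (10 - 9)^2 + 2^2 = 5.
est = msep_uncertain(y_true=10.0, mu=9.0, sigma=2.0)
```

    Averaging over the parameter distribution (rather than fixing it) is what distinguishes the two criteria compared in the abstract: with sigma set to zero, the estimate collapses to the fixed-model squared error.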

  12. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  13. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation between the goals of predictive modeling and causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.

  14. Comparing the Discrete and Continuous Logistic Models

    Science.gov (United States)

    Gordon, Sheldon P.

    2008-01-01

    The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)
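
    The contrast between the two models can be reproduced in a few lines: iterate the logistic difference equation and evaluate the closed-form solution of the logistic differential equation at the same times. The initial value, growth rate, and carrying capacity below are arbitrary.

```python
import math

def discrete_logistic(p0, r, K, steps):
    """Iterate the logistic difference equation p_{n+1} = p_n + r*p_n*(1 - p_n/K)."""
    p = p0
    out = [p]
    for _ in range(steps):
        p = p + r * p * (1 - p / K)
        out.append(p)
    return out

def continuous_logistic(p0, r, K, t):
    """Closed-form solution of dP/dt = r*P*(1 - P/K)."""
    A = (K - p0) / p0
    return K / (1 + A * math.exp(-r * t))

# Both start at 10 with r = 0.5 and K = 100; compare at integer times.
disc = discrete_logistic(10, 0.5, 100, 10)
cont = [continuous_logistic(10, 0.5, 100, t) for t in range(11)]
```

    Both trajectories rise toward the carrying capacity, but the discrete iterates do not lie on the continuous curve, which is the kind of discrepancy the article explores (and which grows dramatic for larger r, where the difference equation becomes chaotic).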

  16. Predicting RNA secondary structure by the comparative approach: how to select the homologous sequences

    Directory of Open Access Journals (Sweden)

    Tahi Fariza

    2007-11-01

    Background: The secondary structure of an RNA must be known before the relationship between its structure and function can be determined. One way to predict the secondary structure of an RNA is to identify covarying residues that maintain the pairings (Watson-Crick, wobble, and non-canonical pairings). This "comparative approach" consists of identifying mutations from homologous sequence alignments. The sequences must covary enough for compensatory mutations to be revealed, but comparison is difficult if they are too different. Thus the choice of homologous sequences is critical. While many possible combinations of homologous sequences may be used for prediction, only a few will give good structure predictions. This can be due to poor alignment quality in stems or to the variability of certain sequences. This problem of sequence selection is currently unsolved. Results: This paper describes an algorithm, SSCA, which measures the suitability of sequences for the comparative approach. It is based on evolutionary models with structure constraints, particularly those on sequence variations and stem alignment. We propose three models, based on different constraints on sequence alignments. We show the results of the SSCA algorithm for predicting the secondary structure of several RNAs. SSCA enabled us to choose sets of homologous sequences that gave better predictions than arbitrarily chosen sets of homologous sequences. Conclusion: SSCA is an algorithm for selecting combinations of RNA homologous sequences suitable for secondary structure predictions with the comparative approach.

  17. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...

  18. Worthing Physiological Score vs Revised Trauma Score in Outcome Prediction of Trauma patients; a Comparative Study

    Science.gov (United States)

    Nakhjavan-Shahraki, Babak; Yousefifard, Mahmoud; Hajighanbari, Mohammad Javad; Karimi, Parviz; Baikpour, Masoud; Mirzay Razaz, Jalaledin; Yaseri, Mehdi; Shahsavari, Kavous; Mahdizadeh, Fatemeh; Hosseini, Mostafa

    2017-01-01

    Introduction: Awareness about the outcome of trauma patients in the emergency department (ED) has become a topic of interest. Accordingly, the present study aimed to compare the revised trauma score (RTS) and the Worthing physiological scoring system (WPSS) in predicting in-hospital mortality and poor outcome of trauma patients. Methods: In this comparative study, trauma patients brought to five EDs in different cities of Iran during 2016 were included. After data collection, discriminatory power and calibration of the models were assessed and compared using STATA 11. Results: 2148 patients with a mean age of 39.50±17.27 years were included (75.56% males). The AUC of the RTS and WPSS models for prediction of mortality were 0.86 (95% CI: 0.82-0.90) and 0.91 (95% CI: 0.87-0.94), respectively (p=0.006). RTS had a sensitivity of 71.54 (95% CI: 62.59-79.13) and a specificity of 97.38 (95% CI: 96.56-98.01) in prediction of mortality. These measures for the WPSS were 87.80 (95% CI: 80.38-92.78) and 83.45 (95% CI: 81.75-85.04), respectively. The AUC of RTS and WPSS in predicting poor outcome were 0.81 (95% CI: 0.77-0.85) and 0.89 (95% CI: 0.85-0.92), respectively (p<0.0001). Conclusion: The findings showed a higher prognostic value for the WPSS model in predicting mortality and severe disabilities in trauma patients compared to the RTS model. Both models had good overall performance in prediction of mortality and poor outcome. PMID:28286838
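
    AUCs like those reported above can be computed directly from risk scores and outcomes via the rank-sum (Mann-Whitney) identity: the AUC is the probability that a randomly chosen case outranks a randomly chosen non-case. The scores and outcomes below are invented for illustration.

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity.
    `labels` are 1 for the event (e.g., in-hospital death), 0 otherwise;
    higher scores are assumed to indicate higher risk."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical triage scores for six patients (1 = died).
labels  = [1, 1, 0, 0, 0, 0]
score_a = [9, 7, 8, 3, 2, 1]   # one discordant case/non-case pair
score_b = [9, 8, 7, 3, 2, 1]   # ranks every case above every non-case
```

    Comparing two correlated AUCs on the same patients, as the study does, additionally requires a paired test such as DeLong's method; the snippet shows only the point estimates.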

  19. Two criteria for evaluating risk prediction models.

    Science.gov (United States)

    Pfeiffer, R M; Gail, M H

    2011-09-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion, the proportion needed to follow up, PNF(p), is the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
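
    Both criteria can be computed directly from a vector of predicted risks and case indicators. A sketch under the stated definitions, with an invented ten-person cohort:

```python
def pcf(risks, is_case, q):
    """Proportion of cases captured when following the fraction q of the
    population at highest predicted risk."""
    order = sorted(range(len(risks)), key=lambda i: -risks[i])
    n_follow = round(q * len(risks))
    captured = sum(is_case[i] for i in order[:n_follow])
    return captured / sum(is_case)

def pnf(risks, is_case, p):
    """Smallest fraction of the highest-risk population that must be
    followed so that a proportion p of cases is captured."""
    order = sorted(range(len(risks)), key=lambda i: -risks[i])
    total = sum(is_case)
    captured = 0
    for k, i in enumerate(order, start=1):
        captured += is_case[i]
        if captured >= p * total:
            return k / len(risks)
    return 1.0

# Hypothetical cohort: ten people, three of whom become cases.
risks   = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
is_case = [1,   1,   0,   0,   1,   0,   0,   0,   0,   0]
```

    In this toy cohort, following the top 20% of the population captures two of the three cases, while capturing every case requires following half the population, which is the trade-off the Lorenz-curve connection formalizes.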

  20. Massive Predictive Modeling using Oracle R Enterprise

    CERN Document Server

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  1. Partition Model-Based 99mTc-MAA SPECT/CT Predictive Dosimetry Compared with 90Y TOF PET/CT Posttreatment Dosimetry in Radioembolization of Hepatocellular Carcinoma: A Quantitative Agreement Comparison.

    Science.gov (United States)

    Gnesin, Silvano; Canetti, Laurent; Adib, Salim; Cherbuin, Nicolas; Silva Monteiro, Marina; Bize, Pierre; Denys, Alban; Prior, John O; Baechler, Sebastien; Boubaker, Ariane

    2016-11-01

    (90)Y-microsphere selective internal radiation therapy (SIRT) is a valuable treatment in unresectable hepatocellular carcinoma (HCC). Partition-model predictive dosimetry relies on differential tumor-to-nontumor perfusion evaluated on pretreatment (99m)Tc-macroaggregated albumin (MAA) SPECT/CT. The aim of this study was to evaluate agreement between the predictive dosimetry of (99m)Tc-MAA SPECT/CT and posttreatment dosimetry based on (90)Y time-of-flight (TOF) PET/CT. We compared the (99m)Tc-MAA SPECT/CT results for 27 treatment sessions (25 HCC patients, 41 tumors) with (90)Y SIRT (7 glass spheres, 20 resin spheres) and the posttreatment (90)Y TOF PET/CT results. Three-dimensional voxelized dose maps were computed from the (99m)Tc-MAA SPECT/CT and (90)Y TOF PET/CT data. Mean absorbed dose ([Formula: see text]) was evaluated to compute the predicted-to-actual dose ratio ([Formula: see text]) in tumor volumes (TVs) and nontumor volumes (NTVs) for glass and resin spheres. The Lin concordance ([Formula: see text]) was used to measure accuracy ([Formula: see text]) and precision (ρ). Administered activity ranged from 0.8 to 1.9 GBq for glass spheres and from 0.6 to 3.4 GBq for resin spheres, and the respective TVs ranged from 2 to 125 mL and from 6 to 1,828 mL. The mean dose [Formula: see text] was 240 Gy for glass and 122 Gy for resin in TVs and 72 Gy for glass and 47 Gy for resin in NTVs. [Formula: see text] was 1.46 ± 0.58 (0.65-2.53) for glass and 1.16 ± 0.41 (0.54-2.54) for resin, and the respective values for [Formula: see text] were 0.88 ± 0.15 (0.56-1.00) and 0.86 ± 0.2 (0.58-1.35). DR variability was substantially lower in NTVs than in TVs. The Lin concordance between [Formula: see text] and [Formula: see text] (resin) was significantly better for tumors larger than 150 mL than for tumors 150 mL or smaller ([Formula: see text] = 0.93 and [Formula: see text] = 0.95 vs. [Formula: see text] = 0.57 and [Formula: see text] = 0.93; P < 0.05). In (90)Y

  2. Comparative Study of Different Methods for the Prediction of Drug-Polymer Solubility

    DEFF Research Database (Denmark)

    Knopp, Matthias Manne; Tajber, Lidia; Tian, Yiwei;

    2015-01-01

    In this study, a comparison of different methods to predict drug–polymer solubility was carried out on binary systems consisting of five model drugs (paracetamol, chloramphenicol, celecoxib, indomethacin, and felodipine) and polyvinylpyrrolidone/vinyl acetate copolymers (PVP/VA) of different monomer weight ratios. The drug–polymer solubility at 25 °C was predicted using the Flory–Huggins model, from data obtained at elevated temperature using thermal analysis methods based on the recrystallization of a supersaturated amorphous solid dispersion and two variations of the melting point depression method. These predictions were compared with the solubility in the low molecular weight liquid analogues of the PVP/VA copolymer (N-vinylpyrrolidone and vinyl acetate). The predicted solubilities at 25 °C varied considerably depending on the method used. However, the three thermal analysis methods...

  3. Compensatory versus noncompensatory models for predicting consumer preferences

    Directory of Open Access Journals (Sweden)

    Anja Dieckmann

    2009-04-01

    Standard preference models in consumer research assume that people weigh and add all attributes of the available options to derive a decision, while there is growing evidence for the use of simplifying heuristics. Recently, a greedoid algorithm has been developed (Yee, Dahan, Hauser and Orlin, 2007; Kohli and Jedidi, 2007) to model lexicographic heuristics from preference data. We compare the predictive accuracies of the greedoid approach and standard conjoint analysis in an online study with a rating and a ranking task. The lexicographic model derived from the greedoid algorithm was better at predicting ranking than rating data, but overall it achieved lower predictive accuracy on hold-out data than the compensatory model estimated by conjoint analysis. However, a considerable minority of participants was better predicted by lexicographic strategies. We conclude that the new algorithm will not replace standard tools for analyzing preferences, but can boost the study of situational and individual differences in preferential choice processes.
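
    The two competing decision rules are easy to contrast in code: a compensatory weighted-additive score versus a noncompensatory lexicographic comparison. The options, attributes, and weights below are invented for illustration; this is not the greedoid estimation procedure itself, only the two choice rules it distinguishes.

```python
def weighted_additive(option, weights):
    """Compensatory rule: weigh and add all attribute values."""
    return sum(weights[a] * v for a, v in option.items())

def lexicographic(options, attr_order):
    """Noncompensatory heuristic: compare on the most important attribute,
    breaking ties with the next attribute, and so on."""
    return max(options, key=lambda o: tuple(o[a] for a in attr_order))

# Two hypothetical products rated 0-10 on price (higher = cheaper) and quality.
options = [
    {"price": 9, "quality": 2},   # cheap but low quality
    {"price": 6, "quality": 8},   # moderate price, high quality
]
weights = {"price": 0.5, "quality": 0.5}

best_add = max(options, key=lambda o: weighted_additive(o, weights))
best_lex = lexicographic(options, ["price", "quality"])
```

    The rules pick different options here: the additive rule lets high quality compensate for a worse price, while the lexicographic rule never trades off the top-ranked attribute, which is exactly the behavioral difference the study measures.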

  4. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Background: Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient's class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods: We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results: A public breast cancer dataset was used to compare several performance metrics using the five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions: (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in

  17. The regional prediction model of PM10 concentrations for Turkey

    Science.gov (United States)

    Güler, Nevin; Güneri İşçi, Öznur

    2016-11-01

    This study aims to build a regional model for weekly PM10 concentrations measured at air pollution monitoring stations in Turkey. There are seven geographical regions in Turkey and numerous monitoring stations in each region. Fitting a model conventionally for each monitoring station requires a lot of labor and time, and prediction quality may degrade when the number of measurements obtained from a monitoring station is small. Besides, prediction models obtained this way only reflect the air pollutant behavior of a small area. This study uses the Fuzzy C-Auto Regressive Model (FCARM) to find a prediction model that reflects the regional behavior of weekly PM10 concentrations. The advantage of FCARM is its ability to simultaneously consider PM10 concentrations measured at the monitoring stations in a specified region. Besides, it works even if the numbers of measurements obtained from the monitoring stations are different or small. In order to evaluate the performance of FCARM, it was executed for all regions in Turkey and the prediction results were compared to statistical autoregressive (AR) models fitted for each station separately. According to the Mean Absolute Percentage Error (MAPE) criterion, it is observed that FCARM provides better predictions with fewer models.

  18. A Course in... Model Predictive Control.

    Science.gov (United States)

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  19. Comparative dynamics in a health investment model.

    Science.gov (United States)

    Eisenring, C

    1999-10-01

    The method of comparative dynamics fully exploits the inter-temporal structure of optimal control models. I derive comparative dynamic results in a simplified demand for health model. The effect of a change in the depreciation rate on the optimal paths for health capital and investment in health is studied by use of a phase diagram.

  20. Equivalency and unbiasedness of grey prediction models

    Institute of Scientific and Technical Information of China (English)

    Bo Zeng; Chuan Li; Guo Chen; Xianjun Long

    2015-01-01

    In order to deeply research the structure discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x(0)(k) + a*z(1)(k) = b have identical model structure and simulation precision. Moreover, unbiased simulation of a homogeneous exponential sequence can be accomplished. However, the models derived from dx(1)/dt + a*x(1) = b are only close to those derived from x(0)(k) + a*z(1)(k) = b provided that |a| satisfies |a| < 0.1; nor can they achieve unbiased simulation of a homogeneous exponential sequence. The above conclusions are proved and verified through some theorems and examples.
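
    The unbiasedness claim for models derived strictly from x(0)(k) + a*z(1)(k) = b can be checked numerically: fit a and b by least squares on a homogeneous exponential sequence and confirm that the strictly derived recursion reproduces it. This is a sketch of the standard GM(1,1) construction (AGO accumulation and the consecutive-mean z series), not code from the paper; the sequence parameters are arbitrary.

```python
from itertools import accumulate

def gm11_fit(x0):
    """Fit the grey model x0(k) + a*z1(k) = b by least squares.
    x1 is the accumulated (AGO) series, z1 its consecutive means."""
    x1 = list(accumulate(x0))
    z = [(x1[k] + x1[k - 1]) / 2 for k in range(1, len(x0))]
    y = x0[1:]
    m = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    slope = (m * szy - sz * sy) / (m * szz - sz * sz)  # OLS: y = slope*z + intercept
    a = -slope
    b = (sy - slope * sz) / m
    return a, b

def gm11_simulate(x0, a, b):
    """Reproduce the series via the strictly derived recursion
    x0(k) = (b - a*x1(k-1)) / (1 + a/2)."""
    out, x1prev = [x0[0]], x0[0]
    for _ in range(1, len(x0)):
        nxt = (b - a * x1prev) / (1 + a / 2)
        out.append(nxt)
        x1prev += nxt
    return out

# Homogeneous exponential sequence c*q^k: the fit reproduces it exactly
# (up to floating point), with a = -2(q-1)/(q+1) and b = 2cq/(q+1).
c, q = 2.0, 1.05
x0 = [c * q ** k for k in range(1, 13)]
a, b = gm11_fit(x0)
sim = gm11_simulate(x0, a, b)
```

    Replacing the recursion with the solution of the whitening equation dx(1)/dt + a*x(1) = b breaks this exactness, which is the discrepancy the abstract quantifies via the |a| < 0.1 condition.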

  1. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  2. Loss Given Default Modelling: Comparative Analysis

    OpenAIRE

    Yashkir, Olga; Yashkir, Yuriy

    2013-01-01

    In this study we investigated several of the most popular Loss Given Default (LGD) models (LSM, Tobit, Three-Tiered Tobit, Beta Regression, Inflated Beta Regression, Censored Gamma Regression) in order to compare their performance. We show that for a given input data set, the quality of the model calibration depends mainly on the proper choice (and availability) of explanatory variables (model factors), but not on the fitting model. Model factors were chosen based on the amplitude of their correlati...

  3. Hybrid modeling and prediction of dynamical systems

    Science.gov (United States)

    Lloyd, Alun L.; Flores, Kevin B.

    2017-01-01

    Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642
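    The Lorenz-63 system used above as a test bed is straightforward to reproduce; a minimal forward-Euler sketch (illustrative only, not the authors' hybrid scheme, whose nonparametric components are not reproduced here):

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (classical parameters)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

# integrate a single trajectory from a standard initial condition
state = np.array([1.0, 1.0, 1.0])
traj = [state]
for _ in range(1000):
    state = lorenz63_step(state)
    traj.append(state)
traj = np.array(traj)
```

    In a hybrid setup, one of the three equations would be replaced by a data-driven surrogate while the others remain mechanistic.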

  4. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.

  5. Property predictions using microstructural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen's University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)

    2005-07-15

    Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc© are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.

  6. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. In recent years, however, improvements to prediction methods have been modest: traditional statistical methods suffer from low precision and poor interpretability, so they can neither theoretically guarantee the generalization ability of the prediction model nor explain the model effectively. Therefore, combining theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large volumes of cargo and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, extending logistics requirements prediction from purely statistical methods into the new area of spatial and regional economics.

  7. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant carries marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance, providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model, the Char Burnout Model (ChB), that also uses detailed information about char morphology in its prediction. The model can take data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data shows that including char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  8. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    There are now many business process modelling techniques. This article investigates the differences among them: for each technique, we explain its definition and structure. The paper presents a comparative analysis of some popular business process modelling techniques, based on two criteria: notation and how the technique works when implemented in Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and serves as a basis for evaluating further modelling techniques.

  9. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more...... these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons....

  10. Comparing measured and modeled firn compaction rates in Greenland

    Science.gov (United States)

    Stevens, C.; MacFerrin, M. J.; Waddington, E. D.; Vo, H.; Yoon, M.

    2015-12-01

    Quantifying the mass balance of the Greenland and Antarctic ice sheets using satellite and/or airborne altimetry requires a firn-densification model to correct for firn-air content and transient firn-thickness changes. We have developed the Community Firn Model (CFM) that allows users to run firn-densification physics from a suite of published models. Here, we use the CFM to compare model-predicted firn depth-density profiles and compaction rates with observed profiles and compaction rates collected from a network of in situ strain gauges at eight different sites in Greenland. Additionally, we use regional-climate-model output to force the CFM and compare the depth-density profiles and compaction rates predicted by the different models. Many of the models were developed using a steady-state assumption and were tuned for the dry-snow zone. Our results demonstrate the challenges of using these models to simulate firn density in Greenland's expanding wet firn and percolation zones, and they help quantify the uncertainty in firn-density model predictions. Next-generation firn models are incorporating more physics (e.g. meltwater percolation and grain growth), and field measurements are essential to inform continuing development of these new models.

  11. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore the prediction of the states is given as the solution to the ODEs and hence assumed...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...

  12. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... to substantial differences in predictions. HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model. SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method, in which an arbitrary time-dependent concentration field is represented...

  13. A thermodynamic model to predict wax formation in petroleum fluids

    Energy Technology Data Exchange (ETDEWEB)

    Coutinho, J.A.P. [Universidade de Aveiro (Portugal). Dept. de Quimica. Centro de Investigacao em Quimica]. E-mail: jcoutinho@dq.ua.pt; Pauly, J.; Daridon, J.L. [Universite de Pau et des Pays de l' Adour, Pau (France). Lab. des Fluides Complexes

    2001-12-01

    Some years ago the authors proposed a model for the non-ideality of the solid phase, based on the Predictive Local Composition concept. This was first applied to the Wilson equation and later extended to the NRTL and UNIQUAC models. Predictive UNIQUAC proved to be extraordinarily successful in predicting the behaviour of both model and real hydrocarbon fluids at low temperatures. This work illustrates the ability of Predictive UNIQUAC in the description of the low temperature behaviour of petroleum fluids. It will be shown that by using Predictive UNIQUAC in the description of the solid phase non-ideality, a complete prediction of the low temperature behaviour of synthetic paraffin solutions, fuels and crude oils is achieved. The composition of both liquid and solid phases, the amount of crystals formed and the cloud points are predicted within the accuracy of the experimental data. The extension of Predictive UNIQUAC to high pressures, by coupling it with an EOS/G^E model based on the SRK EOS used with the LCVM mixing rule, is proposed and predictions of phase envelopes for live oils are compared with experimental data. (author)

  14. A THERMODYNAMIC MODEL TO PREDICT WAX FORMATION IN PETROLEUM FLUIDS

    Directory of Open Access Journals (Sweden)

    J.A.P. Coutinho

    2001-12-01

    Full Text Available Some years ago the authors proposed a model for the non-ideality of the solid phase, based on the Predictive Local Composition concept. This was first applied to the Wilson equation and later extended to the NRTL and UNIQUAC models. Predictive UNIQUAC proved to be extraordinarily successful in predicting the behaviour of both model and real hydrocarbon fluids at low temperatures. This work illustrates the ability of Predictive UNIQUAC in the description of the low temperature behaviour of petroleum fluids. It will be shown that by using Predictive UNIQUAC in the description of the solid phase non-ideality, a complete prediction of the low temperature behaviour of synthetic paraffin solutions, fuels and crude oils is achieved. The composition of both liquid and solid phases, the amount of crystals formed and the cloud points are predicted within the accuracy of the experimental data. The extension of Predictive UNIQUAC to high pressures, by coupling it with an EOS/G^E model based on the SRK EOS used with the LCVM mixing rule, is proposed and predictions of phase envelopes for live oils are compared with experimental data.

  15. Major models used in comparative management

    OpenAIRE

    Ioan Constantin Dima; Codruta Dura

    2001-01-01

    Comparative management literature emphasizes the following models: the Farmer-Richman model (based on the assumption that the environment is the main factor whose influence upon management is decisive); the Rosalie Tung model (using the following variables: environment, or extra-organisational, variables; intra-organisational variables; personal and result variables); and the Child model (including the three determinative domains - contingency, culture and economic system - treated as items objectively connecte...

  16. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  17. Model Predictive Control of a Wave Energy Converter

    DEFF Research Database (Denmark)

    Andersen, Palle; Pedersen, Tom Søndergård; Nielsen, Kirsten Mølgaard

    2015-01-01

    In this paper reactive control and Model Predictive Control (MPC) for a Wave Energy Converter (WEC) are compared. The analysis is based on a WEC from Wave Star A/S designed as a point absorber. The model predictive controller uses wave models based on the dominating sea states combined with a model...... connecting undisturbed wave sequences to sequences of torque. Losses in the conversion from mechanical to electrical power are taken into account in two ways. Conventional reactive controllers are tuned for each sea state with the assumption that the converter has the same efficiency back and forth. MPC...

  18. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurately assessing business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful...... studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point-of-view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing it against the Support Vector Machines (SVM) and the Logistic Regression (LR......). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...

  19. Modelling Chemical Reasoning to Predict Reactions

    CERN Document Server

    Segler, Marwin H S

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...

  20. Predictive modeling for EBPC in EBDW

    Science.gov (United States)

    Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent

    2009-10-01

    We demonstrate a flow for e-beam proximity correction (EBPC) to e-beam direct write (EBDW) wafer manufacturing processes, demonstrating a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, over e-beam model fitting, proximity effect correction (PEC), and verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies, with respect to short and long range effects.

  1. Comparative study of Financial Time Series Prediction by Artificial Neural Network with Gradient Descent Learning

    CERN Document Server

    Ghosh, Arka

    2011-01-01

    Financial forecasting is an example of a signal processing problem that is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity, yet fast forecasting of stock market prices is very important for strategic business planning. The present study aims to develop a comparative predictive model using a feedforward multilayer artificial neural network and a recurrent time-delay neural network for financial time-series prediction. The study was developed with the help of a historical stock-price dataset made available by Google Finance. To develop the prediction model, the backpropagation method with gradient descent learning was implemented. The neural net trained with this algorithm is found to be a skillful predictor for non-stationary, noisy financial time series.

  2. Comparative study to predict dipeptidyl peptidase IV inhibitory activity of β-amino amide scaffold

    Directory of Open Access Journals (Sweden)

    S Patil

    2015-01-01

    Full Text Available A comparative study was performed on 34 β-amino amide derivatives as dipeptidyl peptidase IV inhibitors in order to determine the structural requirements that enhance antidiabetic activity. Hologram quantitative structure-activity relationship models utilized specialized fragment fingerprints (hologram length 353) and showed good predictivity, with cross-validated q2 and conventional r2 values of 0.971 and 0.971, respectively. The models were validated and optimized with a test set of eight compounds and gave satisfactory predictive ability. Hologram quantitative structure-activity relationship maps were helpful in identifying the structural features of the ligands that contribute positively or negatively to activity. The information obtained from the maps could be used effectively as a guiding tool for further structural modification and the synthesis of new potent antidiabetic agents.

  3. Predictive error analysis for a water resource management model

    Science.gov (United States)

    Gallagher, Mark; Doherty, John

    2007-02-01

    In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  4. Models for short term malaria prediction in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Galappaththy Gawrie NL

    2008-05-01

    Full Text Available Background: Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods: Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall was assessed for its ability to improve prediction of selected (seasonal) ARIMA models. Results: The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion: The heterogeneity of malaria patterns in Sri Lanka requires regionally specific prediction models. Prediction error was large, at a minimum of 22% (for one of the districts) for one-month-ahead predictions. The modest improvement made in short-term prediction by adding rainfall as a covariate to these models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed.
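    Of the candidate forecasters compared above, the exponentially weighted moving average is the simplest; a minimal sketch (the case counts are hypothetical, not the Sri Lankan data, and the smoothing constant is an arbitrary choice):

```python
def ewma_forecast(cases, alpha=0.3, horizon=4):
    """1- to `horizon`-month-ahead forecasts from a simple EWMA model:
    every future month is forecast at the current smoothed level."""
    level = cases[0]
    for y in cases[1:]:
        level = alpha * y + (1 - alpha) * level  # update smoothed level
    return [level] * horizon

# hypothetical monthly district case counts
cases = [120, 95, 130, 160, 140, 110, 90, 100]
forecasts = ewma_forecast(cases, alpha=0.3, horizon=4)
```

    Unlike (seasonal) ARIMA models, an EWMA forecast is flat across the horizon, which is one reason the better-performing model varied by district.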

  5. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  6. A Predictive Model of High Shear Thrombus Growth.

    Science.gov (United States)

    Mehrabadi, Marmar; Casa, Lauren D C; Aidun, Cyrus K; Ku, David N

    2016-08-01

    The ability to predict the timescale of thrombotic occlusion in stenotic vessels may improve patient risk assessment for thrombotic events. In blood contacting devices, thrombosis predictions can lead to improved designs to minimize thrombotic risks. We have developed and validated a model of high shear thrombosis based on empirical correlations between thrombus growth and shear rate. A mathematical model was developed to predict the growth of thrombus based on the hemodynamic shear rate. The model predicts thrombus deposition based on initial geometric and fluid mechanic conditions, which are updated throughout the simulation to reflect the changing lumen dimensions. The model was validated by comparing predictions against actual thrombus growth in six separate in vitro experiments: stenotic glass capillary tubes (diameter = 345 µm) at three shear rates, the PFA-100® system, two microfluidic channel dimensions (heights = 300 and 82 µm), and a stenotic aortic graft (diameter = 5.5 mm). Comparison of the predicted occlusion times to experimental results shows excellent agreement. The model is also applied to a clinical angiography image to illustrate the time course of thrombosis in a stenotic carotid artery after plaque cap rupture. Our model can accurately predict thrombotic occlusion time over a wide range of hemodynamic conditions.

  7. Personalized Predictive Modeling and Risk Factor Identification using Patient Similarity.

    Science.gov (United States)

    Ng, Kenney; Sun, Jimeng; Hu, Jianying; Wang, Fei

    2015-01-01

    Personalized predictive models are customized for an individual patient and trained using information from similar patients. Compared to global models trained on all patients, they have the potential to produce more accurate risk scores and capture more relevant risk factors for individual patients. This paper presents an approach for building personalized predictive models and generating personalized risk factor profiles. A locally supervised metric learning (LSML) similarity measure is trained for diabetes onset and used to find clinically similar patients. Personalized risk profiles are created by analyzing the parameters of the trained personalized logistic regression models. A 15,000-patient data set, derived from electronic health records, is used to evaluate the approach. The predictive results show that the personalized models can outperform the global model. Cluster analysis of the risk profiles shows groups of patients with similar risk factors, differences in the top risk factors for different groups of patients, and differences between the individual and global risk factors.

  8. Is it Worth Comparing Different Bankruptcy Models?

    Directory of Open Access Journals (Sweden)

    Miroslava Dolejšová

    2015-01-01

    Full Text Available The aim of this paper is to compare the performance of small enterprises in the Zlín and Olomouc Regions. These enterprises were assessed using the Altman Z-Score model, the IN05 model, the Zmijewski model and the Springate model. The batch selected for this analysis included 16 enterprises from the Zlín Region and 16 enterprises from the Olomouc Region. Financial statements subjected to the analysis are from 2006 and 2010. The statistical data analysis was performed using the one-sample z-test for proportions and the paired t-test. The outcomes of the evaluation run using the Altman Z-Score model, the IN05 model and the Springate model revealed the enterprises to be financially sound, but the Zmijewski model identified them as being insolvent. The one-sample z-test for proportions confirmed that at least 80% of these enterprises show a sound financial condition. A comparison of all models has emphasized the substantial difference produced by the Zmijewski model. The paired t-test showed that the financial performance of small enterprises had remained the same during the years involved. It is recommended that small enterprises assess their financial performance using two different bankruptcy models. They may wish to combine the Zmijewski model with any bankruptcy model (the Altman Z-Score model, the IN05 model or the Springate model) to ensure a proper method of analysis.
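
    The four models compared are all weighted sums of financial ratios mapped to risk zones. As a concrete instance, here is the original (1968) public-manufacturer Altman Z-Score with its commonly quoted coefficients and cut-offs — a sketch for orientation, not a substitute for the variants (IN05, Zmijewski, Springate) used in the paper:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    """Original Altman Z-Score: five accounting ratios with fixed weights."""
    x1 = working_capital / total_assets      # liquidity
    x2 = retained_earnings / total_assets    # cumulative profitability
    x3 = ebit / total_assets                 # operating efficiency
    x4 = market_equity / total_liabilities   # leverage
    x5 = sales / total_assets                # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def z_zone(z):
    """Commonly quoted cut-offs: distress below 1.81, safe above 2.99."""
    if z > 2.99:
        return "safe"
    if z < 1.81:
        return "distress"
    return "grey"
```

The disagreement reported above — Zmijewski flagging as insolvent firms the other models call sound — comes down to the different ratios and weights each model plugs into this same weighted-sum-plus-cutoff pattern.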

  9. Genetic models of homosexuality: generating testable predictions

    OpenAIRE

    Gavrilets, Sergey; Rice, William R.

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...

  10. Doses from aquatic pathways in CSA-N288.1: deterministic and stochastic predictions compared

    Energy Technology Data Exchange (ETDEWEB)

    Chouhan, S.L.; Davis, P

    2002-04-01

    The conservatism and uncertainty in the Canadian Standards Association (CSA) model for calculating derived release limits (DRLs) for aquatic emissions of radionuclides from nuclear facilities were investigated. The model was run deterministically using the recommended default values for its parameters, and its predictions were compared with the distributed doses obtained by running the model stochastically. Probability density functions (PDFs) for the model parameters for the stochastic runs were constructed using data reported in the literature and results from experimental work done by AECL. The default values recommended for the CSA model for some parameters were found to be lower than the central values of the PDFs in about half of the cases. Doses (ingestion, groundshine and immersion) calculated as the median of 400 stochastic runs were higher than the deterministic doses predicted using the CSA default values of the parameters for more than half (85 out of the 163) of the cases. Thus, the CSA model is not conservative for calculating DRLs for aquatic radionuclide emissions, as it was intended to be. The output of the stochastic runs was used to determine the uncertainty in the CSA model predictions. The uncertainty in the total dose was high, with the 95% confidence interval exceeding an order of magnitude for all radionuclides. A sensitivity study revealed that total ingestion doses to adults predicted by the CSA model are sensitive primarily to water intake rates, bioaccumulation factors for fish and marine biota, dietary intakes of fish and marine biota, the fraction of consumed food arising from contaminated sources, the irrigation rate, occupancy factors and the sediment solid/liquid distribution coefficient. To improve DRL models, further research into aquatic exposure pathways should concentrate on reducing the uncertainty in these parameters. The PDFs given here can be used by other modellers to test and improve their models and to ensure that DRLs
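
    The deterministic-versus-stochastic comparison reduces to: run the dose equation once with defaults, then many times with parameters drawn from their PDFs, and compare the default answer with the median of the resulting distribution. The sketch below does this for a toy ingestion pathway; the lognormal medians, spreads and the dose equation itself are illustrative stand-ins, not CSA-N288.1 values:

```python
import math
import random
import statistics

def ingestion_dose(water_intake, concentration, dose_coeff):
    """Toy ingestion dose (Sv/a): annual water intake x activity
    concentration x dose conversion factor."""
    return water_intake * concentration * dose_coeff

def stochastic_doses(n_runs=400, seed=1):
    """Draw each parameter from a lognormal PDF (hypothetical medians and
    spreads), mirroring the 400 stochastic runs described above."""
    rng = random.Random(seed)
    return [
        ingestion_dose(
            rng.lognormvariate(math.log(600.0), 0.3),  # intake, L/a
            rng.lognormvariate(math.log(1.0), 0.5),    # concentration, Bq/L
            rng.lognormvariate(math.log(1e-8), 0.2),   # dose coefficient, Sv/Bq
        )
        for _ in range(n_runs)
    ]

# Deterministic run with a default intake set below the PDF median -- the
# situation the study found for about half of the CSA default parameters.
deterministic = ingestion_dose(500.0, 1.0, 1e-8)
stochastic_median = statistics.median(stochastic_doses())
```

When the defaults sit below the PDF centres, the stochastic median lands above the deterministic dose, which is exactly the non-conservatism reported for the CSA model.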

  11. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  12. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. The good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  13. An evaluation of mathematical models for predicting skin permeability.

    Science.gov (United States)

    Lian, Guoping; Chen, Longjian; Han, Lujia

    2008-01-01

    A number of mathematical models have been proposed for predicting skin permeability, most of them empirical and only a few deterministic. Early empirical models use simple lipophilicity parameters. The recent trend is to use more complicated molecular structure descriptors. There has been much debate on which models best predict skin permeability. This article evaluates various mathematical models using a comprehensive experimental dataset of skin permeability for 124 chemical compounds compiled from various sources. Of the seven models compared, the deterministic model of Mitragotri gives the best prediction. The simple quantitative structure permeability relationships (QSPR) model of Potts and Guy gives the second best prediction. The two models have many features in common. Both assume the lipid matrix as the pathway of transdermal permeation. Both use octanol-water partition coefficient and molecular size. Even the mathematical formulae are similar. All other empirical QSPR models that use more complicated molecular structure descriptors fail to provide satisfactory prediction. The molecular structure descriptors in the more complicated QSPR models are empirically related to skin permeation. The mechanism by which these descriptors affect transdermal permeation is not clear. Mathematically it is an ill-defined approach to use many collinearly related parameters rather than fewer independent parameters in multi-linear regression.
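
    The Potts and Guy relationship mentioned above is a two-parameter linear regression on exactly the quantities the text names: octanol-water partition coefficient and molecular size. The sketch below uses the commonly quoted coefficients (log kp in cm/h); check them against the original regression before relying on the numbers:

```python
def log_kp_potts_guy(log_kow, molecular_weight):
    """Potts-Guy QSPR: permeability rises with lipophilicity (log Kow) and
    falls with molecular size. Coefficients as commonly quoted in the
    literature -- verify against the original paper before reuse."""
    return 0.71 * log_kow - 0.0061 * molecular_weight - 2.72
```

The two signs carry the physical content: a more lipophilic permeant partitions more readily into the lipid matrix, while a larger one diffuses through it more slowly.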

  14. Catalytic cracking models developed for predictive control purposes

    Directory of Open Access Journals (Sweden)

    Dag Ljungqvist

    1993-04-01

    Full Text Available The paper deals with state-space modeling issues in the context of model-predictive control, with application to catalytic cracking. Emphasis is placed on model establishment, verification and online adjustment. Both the Fluid Catalytic Cracking (FCC) and the Residual Catalytic Cracking (RCC) units are discussed. Catalytic cracking units involve complex interactive processes which are difficult to operate and control in an economically optimal way. The strong nonlinearities of the FCC process mean that the control calculation should be based on a nonlinear model with the relevant constraints included. However, the model can be simple compared to the complexity of the catalytic cracking plant. Model validity is ensured by a robust online model adjustment strategy. Model-predictive control schemes based on linear convolution models have been successfully applied to the supervisory dynamic control of catalytic cracking units, and the control can be further improved by the SSPC scheme.

  15. Comparing repetition-based melody segmentation models

    NARCIS (Netherlands)

    Rodríguez López, M.E.; de Haas, Bas; Volk, Anja

    2014-01-01

    This paper reports on a comparative study of computational melody segmentation models based on repetition detection. For the comparison we implemented five repetition-based segmentation models, and subsequently evaluated their capacity to automatically find melodic phrase boundaries in a corpus of 2

  16. Predictive model for segmented poly(urea)

    Directory of Open Access Journals (Sweden)

    Frankl P.

    2012-08-01

    Full Text Available Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is by mechanical activation of the glass transition. In order to enable design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM) – a mean field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the equation of state and constitutive model predict the response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.

  17. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions and thus, they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared with the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  18. A Multistep Chaotic Model for Municipal Solid Waste Generation Prediction.

    Science.gov (United States)

    Song, Jingwei; He, Jiaying

    2014-08-01

    In this study, a univariate local chaotic model is proposed to make one-step and multistep forecasts for daily municipal solid waste (MSW) generation in Seattle, Washington. For MSW generation prediction with long history data, this forecasting model was created based on a nonlinear dynamic method called phase-space reconstruction. Compared with other nonlinear predictive models, such as artificial neural network (ANN) and partial least squares-support vector machine (PLS-SVM), and a commonly used linear seasonal autoregressive integrated moving average (sARIMA) model, this method has demonstrated better prediction accuracy from 1-step ahead prediction to 14-step ahead prediction assessed by both mean absolute percentage error (MAPE) and root mean square error (RMSE). Max error, MAPE, and RMSE show that chaotic models were more reliable than the other three models. As chaotic models do not involve random walk, their performance does not vary while ANN and PLS-SVM make different forecasts in each trial. Moreover, this chaotic model was less time consuming than ANN and PLS-SVM models.
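
    Phase-space reconstruction plus local prediction, reduced to its smallest form: embed the series in delay coordinates, find the historical state nearest to the present one, and read off that state's successor. The embedding dimension and delay below are illustrative choices, and the error metrics are the MAPE and RMSE used in the comparison:

```python
import math

def embed(series, m, tau):
    """Delay-coordinate vectors (x_t, x_{t-tau}, ..., x_{t-(m-1)tau})."""
    start = (m - 1) * tau
    return [tuple(series[t - j * tau] for j in range(m))
            for t in range(start, len(series))]

def local_forecast(series, m=3, tau=1):
    """One-step local (nearest-neighbour) forecast in reconstructed phase
    space: find the past state closest to the current one and return its
    successor."""
    vectors = embed(series, m, tau)
    current = vectors[-1]
    best, best_d = None, float("inf")
    for i, v in enumerate(vectors[:-1]):
        d = math.dist(v, current)
        if d < best_d:
            best_d, best = d, i
    return series[(m - 1) * tau + best + 1]  # successor of the matched state

def mape(actual, predicted):
    return 100.0 / len(actual) * sum(abs((a - p) / a)
                                     for a, p in zip(actual, predicted))

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))
```

Because the forecast is a deterministic lookup in the reconstructed attractor, repeated runs give the same answer — the reproducibility advantage over ANN and PLS-SVM noted above.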

  19. Preoperative prediction model of outcome after cholecystectomy for symptomatic gallstones

    DEFF Research Database (Denmark)

    Borly, L; Anderson, I B; Bardram, Linda

    1999-01-01

    BACKGROUND: After cholecystectomy for symptomatic gallstone disease 20%-30% of the patients continue to have abdominal pain. The aim of this study was to investigate whether preoperative variables could predict the symptomatic outcome after cholecystectomy. METHODS: One hundred and two patients...... and sonography evaluated gallbladder motility, gallstones, and gallbladder volume. Preoperative variables in patients with or without postcholecystectomy pain were compared statistically, and significant variables were combined in a logistic regression model to predict the postoperative outcome. RESULTS: Eighty...

  20. Comparing predictions of nitrogen and greenhouse gas fluxes in response to changes in livestock, land cover and land management using models at a national, European and global scale

    NARCIS (Netherlands)

    Vries, de W.; Kros, J.; Voogd, J.C.H.; Lesschen, J.P.; Stehfest, E.; Bouwman, A.F.

    2009-01-01

    In this study we compared three relatively simple process-based models, developed for the national scale (INITIATOR2), European scale (MITERRA) and global scale (IMAGE). A comparison was made of NH3, N2O, NOx and CH4 emissions, while making a distinction between housing systems, grazing and manure/

  1. Hidden Markov models for prediction of protein features

    DEFF Research Database (Denmark)

    Bystroff, Christopher; Krogh, Anders

    2008-01-01

    Hidden Markov Models (HMMs) are an extremely versatile statistical representation that can be used to model any set of one-dimensional discrete symbol data. HMMs can model protein sequences in many ways, depending on what features of the protein are represented by the Markov states. For protein...... structure prediction, states have been chosen to represent either homologous sequence positions, local or secondary structure types, or transmembrane locality. The resulting models can be used to predict common ancestry, secondary or local structure, or membrane topology by applying one of the two standard...... algorithms for comparing a sequence to a model. In this chapter, we review those algorithms and discuss how HMMs have been constructed and refined for the purpose of protein structure prediction....

  2. Experimental study on prediction model for maximum rebound ratio

    Institute of Scientific and Technical Information of China (English)

    LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong

    2007-01-01

    The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to and slightly above the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints subjected to a two-dimensional compressional wave at the boundary of a tunnel or borehole.

  3. Comparing various multi-component global heliosphere models

    CERN Document Server

    Müller, H -R; Heerikhuisen, J; Izmodenov, V V; Scherer, K; Alexashov, D; Fahr, H -J

    2008-01-01

    Modeling of the global heliosphere seeks to investigate the interaction of the solar wind with the partially ionized local interstellar medium. Models that treat neutral hydrogen self-consistently and in great detail, together with the plasma, but that neglect magnetic fields, constitute a sub-category within global heliospheric models. There are several different modeling strategies used for this sub-category in the literature. Differences and commonalities in the modeling results from different strategies are pointed out. Plasma-only models and fully self-consistent models from four research groups, for which the neutral species is modeled with either one, three, or four fluids, or else kinetically, are run with the same boundary parameters and equations. They are compared to each other with respect to the locations of key heliospheric boundary locations and with respect to the neutral hydrogen content throughout the heliosphere. In many respects, the models' predictions are similar. In particular, the loca...

  4. A comparative study of S/MAR prediction tools

    Directory of Open Access Journals (Sweden)

    Koentges Georgy

    2007-03-01

    Full Text Available Abstract Background S/MARs are regions of the DNA that are attached to the nuclear matrix. These regions are known to affect substantially the expression of genes. The computer prediction of S/MARs is a highly significant task which could contribute to our understanding of chromatin organisation in eukaryotic cells, the number and distribution of boundary elements, and the understanding of gene regulation in eukaryotic cells. However, while a number of S/MAR predictors have been proposed, their accuracy has so far not come under scrutiny. Results We have selected S/MARs with sufficient experimental evidence and used these to evaluate existing methods of S/MAR prediction. Our main results are: 1. all existing methods have little predictive power, 2. a simple rule based on AT-percentage is generally competitive with other methods, 3. in practice, the different methods will usually identify different sub-sequences as S/MARs, 4. more research on the H-Rule would be valuable. Conclusion A new insight is needed to design a method which will predict S/MARs well. Our data, including the control data, has been deposited as additional material and this may help later researchers test new predictors.
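
    Finding 2 above — that a simple rule based on AT-percentage is generally competitive — is easy to make concrete: slide a window along the sequence and flag windows whose AT content clears a cutoff. The window length, step and threshold below are illustrative choices, not values from the paper:

```python
def at_fraction(seq):
    """Fraction of A and T bases in a sequence."""
    seq = seq.upper()
    return sum(seq.count(b) for b in "AT") / len(seq)

def predict_smar_windows(sequence, window=300, step=50, threshold=0.70):
    """Flag window start positions whose AT fraction meets the threshold,
    a minimal stand-in for an AT-percentage S/MAR predictor."""
    hits = []
    for start in range(0, len(sequence) - window + 1, step):
        if at_fraction(sequence[start:start + window]) >= threshold:
            hits.append(start)
    return hits
```

That a baseline this crude is competitive with published predictors is the paper's point: the existing methods add little over raw AT richness.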

  5. Comparative Analysis of Data Mining Techniques for Malaysian Rainfall Prediction

    Directory of Open Access Journals (Sweden)

    Suhaila Zainudin

    2016-12-01

    Full Text Available Climate change prediction analyses the behaviours of weather for a specific time. Rainfall forecasting is a climate change task where specific features such as humidity and wind will be used to predict rainfall in specific locations. Rainfall prediction can be achieved using classification task under Data Mining. Different techniques lead to different performances depending on rainfall data representation, including representation for long-term (monthly) patterns and short-term (daily) patterns. Selecting an appropriate technique for a specific duration of rainfall is a challenging task. This study analyses multiple classifiers such as Naïve Bayes, Support Vector Machine, Decision Tree, Neural Network and Random Forest for rainfall prediction using Malaysian data. The dataset has been collected from multiple stations in Selangor, Malaysia. Several pre-processing tasks have been applied in order to resolve missing values and eliminate noise. The experimental results show that with small training data (10% of 1581 instances) Random Forest correctly classified 1043 instances. This is the strength of an ensemble of trees in Random Forest, where a group of classifiers can jointly beat a single classifier.

  6. Kindergarten Prediction of Reading Skills: A Longitudinal Comparative Analysis

    Science.gov (United States)

    Schatschneider, Christopher; Fletcher, Jack M.; Francis, David J.; Carlson, Coleen D.; Foorman, Barbara R.

    2004-01-01

    There is considerable focus in public policy on screening children for reading difficulties. Sixty years of research have not resolved questions of what constructs assessed in kindergarten best predict subsequent reading outcomes. This study assessed the relative importance of multiple measures obtained in a kindergarten sample for the prediction…

  7. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit tests and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
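
    The workhorse here is logistic regression on borrower features. As a self-contained sketch — plain batch gradient descent on toy one-feature data, with nothing taken from the Taiwan dataset:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Plain batch gradient descent on the logistic log-likelihood.
    Returns (weights, bias)."""
    n_feat = len(xs[0])
    w, b = [0.0] * n_feat, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n_feat, 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            for j in range(n_feat):
                gw[j] += err * x[j]
            gb += err
        w = [wi - lr * gj / len(xs) for wi, gj in zip(w, gw)]
        b -= lr * gb / len(xs)
    return w, b

def predict_default(w, b, x):
    """Predicted probability of default for one borrower's feature vector."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

In the paper's setting, `x` would hold the TCRI score and the micro- and macroeconomic covariates, and model comparison proceeds by refitting with and without each group of variables.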

  8. Calibrated predictions for multivariate competing risks models.

    Science.gov (United States)

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.
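
    The calibration failure the simulation study exposes can be seen in closed form with two competing exponential hazards: the naive "1 minus Kaplan-Meier" quantity, which treats the competing event as independent censoring, always exceeds the true cumulative incidence. A sketch (constant hazards are an illustrative simplification):

```python
import math

def cif_cause1(lam1, lam2, t):
    """True cumulative incidence of cause 1 under competing exponential
    hazards: lam1/(lam1+lam2) * (1 - exp(-(lam1+lam2) t))."""
    tot = lam1 + lam2
    return lam1 / tot * (1.0 - math.exp(-tot * t))

def naive_cause1(lam1, t):
    """'1 minus Kaplan-Meier' quantity that treats the competing event as
    independent censoring; it ignores that cause 2 removes subjects from
    the risk set, so it overstates the event probability."""
    return 1.0 - math.exp(-lam1 * t)
```

With lam1 = lam2 = 0.1 and t = 10, the naive estimate gives about 0.63 where the true cumulative incidence is about 0.43 — precisely the overestimation of expected events reported above.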

  9. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more...... of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate...... these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons....

  10. Multithreaded comparative RNA secondary structure prediction using stochastic context-free grammars

    Directory of Open Access Journals (Sweden)

    Værum Morten

    2011-04-01

    Full Text Available Abstract Background The prediction of the structure of large RNAs remains a particular challenge in bioinformatics, due to the computational complexity and low levels of accuracy of state-of-the-art algorithms. The pfold model couples a stochastic context-free grammar to phylogenetic analysis for a high accuracy in predictions, but the time complexity of the algorithm and underflow errors have prevented its use for long alignments. Here we present PPfold, a multithreaded version of pfold, which is capable of predicting the structure of large RNA alignments accurately on practical timescales. Results We have distributed both the phylogenetic calculations and the inside-outside algorithm in PPfold, resulting in a significant reduction of runtime on multicore machines. We have addressed the floating-point underflow problems of pfold by implementing an extended-exponent datatype, enabling PPfold to be used for large-scale RNA structure predictions. We have also improved the user interface and portability: alongside standalone executable and Java source code of the program, PPfold is also available as a free plugin to the CLC Workbenches. We have evaluated the accuracy of PPfold using BRaliBase I tests, and demonstrated its practical use by predicting the secondary structure of an alignment of 24 complete HIV-1 genomes in 65 minutes on an 8-core machine and identifying several known structural elements in the prediction. Conclusions PPfold is the first parallelized comparative RNA structure prediction algorithm to date. Based on the pfold model, PPfold is capable of fast, high-quality predictions of large RNA secondary structures, such as the genomes of RNA viruses or long genomic transcripts. The techniques used in the parallelization of this algorithm may be of general applicability to other bioinformatics algorithms.
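
    The underflow fix can be illustrated with a tiny extended-exponent number: keep a normalized fraction and carry the binary exponent in a separate integer, so products of many small probabilities (the inside-outside recursions multiply thousands of them) never hit the double-precision floor. This is an illustrative reimplementation of the idea, not PPfold's Java datatype:

```python
import math

class ExtFloat:
    """Probability with an explicit extra exponent: value = frac * 2**exp,
    with frac kept in [0.5, 1) by math.frexp. Products of many tiny
    probabilities stay representable where a plain double underflows."""
    __slots__ = ("frac", "exp")

    def __init__(self, value=0.0):
        if value == 0.0:
            self.frac, self.exp = 0.0, 0
        else:
            self.frac, self.exp = math.frexp(value)

    def __mul__(self, other):
        out = ExtFloat()
        if self.frac and other.frac:
            m, e = math.frexp(self.frac * other.frac)  # renormalize
            out.frac, out.exp = m, e + self.exp + other.exp
        return out

    def log(self):
        """Natural log of the represented value, usable far below 1e-308."""
        return math.log(self.frac) + self.exp * math.log(2.0)
```

Multiplying `ExtFloat(1e-300)` by itself twice yields a well-defined value of 1e-900, whereas the same product in plain doubles is already zero.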

  11. Modelling language evolution: Examples and predictions.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  13. Global Solar Dynamo Models: Simulations and Predictions

    Indian Academy of Sciences (India)

    Mausumi Dikpati; Peter A. Gilman

    2008-03-01

    Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun’s memory about its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.

  14. Model Predictive Control of Sewer Networks

    Science.gov (United States)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world’s population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for the efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is taken by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  15. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K. Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS to within the few-percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration
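The cross-talk mechanism described above can be illustrated with a toy Mueller-matrix calculation. This is a generic sketch only: the ideal-mirror matrix and the fold angles below are illustrative assumptions, not the DKIST/HiVIS optical prescription.

```python
import numpy as np

def rot(theta):
    """Mueller matrix for a rotation of the polarization frame by theta (radians)."""
    c, s = np.cos(2.0 * theta), np.sin(2.0 * theta)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,   s, 0.0],
                     [0.0,  -s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# Idealized fold mirror: no diattenuation, flips U and V (180-degree retardance).
MIRROR = np.diag([1.0, 1.0, -1.0, -1.0])

def coude_train(angles):
    """Compose fold mirrors separated by the given frame rotations."""
    M = np.eye(4)
    for a in angles:
        M = MIRROR @ rot(a) @ M
    return M

s_in = np.array([1.0, 1.0, 0.0, 0.0])  # 100% Q-polarized input beam
s_out = coude_train([np.deg2rad(30.0), np.deg2rad(45.0)]) @ s_in
# Even with lossless ideal mirrors, Q leaks into U purely from the fold geometry.
```

As the fold angles track altitude and azimuth, the composite matrix changes, which is why a many-fold coude feed mixes Q, U and V as the telescope articulates.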

  16. Modelling Chemical Reasoning to Predict Reactions

    OpenAIRE

    Segler, Marwin H. S.; Waller, Mark P.

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...

  17. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  18. Raman Model Predicting Hardness of Covalent Crystals

    OpenAIRE

    Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian

    2009-01-01

    Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...

  19. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    Science.gov (United States)

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal-data-means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
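The Nash-Sutcliffe efficiency recommended above is compact enough to state directly; the following is a minimal illustration of the criterion (not the SWAT evaluation code), with a perfect prediction scoring 1 and a mean-only prediction scoring 0:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, 0 = no better than the mean."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([2.1, 3.4, 5.0, 4.2, 3.3])          # observed streamflows (toy data)
ns_perfect = nash_sutcliffe(obs, obs)               # exact predictions -> 1.0
ns_mean = nash_sutcliffe(obs, np.full(5, obs.mean()))  # constant mean -> 0.0
```

Negative values are possible, indicating a model worse than simply predicting the observed mean, which is why comparison against the fixed ideal value of one is convenient.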

  20. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given at the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts

  1. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  2. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given at the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  3. Prediction modelling for population conviction data

    NARCIS (Netherlands)

    Tollenaar, N.

    2017-01-01

    In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk taxation scale based on these data is discussed. When false positives are weighted equally as severely as false negatives, 70% can be classified correctly.

  4. A Predictive Model for MSSW Student Success

    Science.gov (United States)

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  5. Predictability of extreme values in geophysical models

    NARCIS (Netherlands)

    Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.

    2012-01-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical model

  6. A revised prediction model for natural conception

    NARCIS (Netherlands)

    Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,

    2017-01-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis

  7. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...

  8. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given at the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  9. Leptogenesis in minimal predictive seesaw models

    CERN Document Server

    Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F

    2015-01-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  10. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark); Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better possibility to schedule fossil fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
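The adaptive MOS idea above, recursively re-estimating a linear correction between NWP forecasts and local observations, can be sketched with a plain exponentially weighted recursive least squares update. This is an illustrative toy (the slope, offset and noise values are invented), not the authors' modified algorithm:

```python
import numpy as np

def rls_fit(features, targets, lam=0.99):
    """Exponentially weighted recursive least squares with forgetting factor lam."""
    dim = features.shape[1]
    theta = np.zeros(dim)
    P = 1e3 * np.eye(dim)          # large initial covariance: weak prior
    for phi, y in zip(features, targets):
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + k * (y - phi @ theta)    # correct by prediction error
        P = (P - np.outer(k, phi @ P)) / lam     # discount old information
    return theta

# Toy MOS setting: the locally observed quantity is a linear correction of the
# NWP forecast (slope 0.8, offset 1.5) plus noise; RLS tracks it online.
rng = np.random.default_rng(1)
nwp = rng.uniform(3.0, 12.0, size=500)                  # forecast wind speed, m/s
obs = 0.8 * nwp + 1.5 + rng.normal(0.0, 0.1, size=500)
theta = rls_fit(np.column_stack([nwp, np.ones(500)]), obs)
```

The forgetting factor `lam < 1` is what makes the estimator adaptive: it lets the correction drift as the NWP model or local conditions change.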

  11. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system consisting of several cooling units that share a common compressor and that is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in around 5 iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
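The sequential convex step, replacing the nonconvex term by its linearization at the current iterate and solving the resulting convex problem repeatedly, can be shown on a scalar difference-of-convex toy problem. This is a generic convex-concave-procedure sketch under an invented objective, not the refrigeration controller's actual cost:

```python
import math

# Toy difference-of-convex objective: f(x) = 2*cosh(x) - 3*x**2
# (convex part 2*cosh(x), concave part -3*x**2).
def f(x):
    return 2.0 * math.cosh(x) - 3.0 * x * x

# Sequential convex step: linearize the concave part at x_k, giving the convex
# subproblem  minimize 2*cosh(x) - 6*x_k*x,  whose stationarity condition
# sinh(x) = 3*x_k has the closed-form solution x = asinh(3*x_k).
x = 1.0
for _ in range(60):
    x = math.asinh(3.0 * x)
# The iterates decrease f monotonically and converge to a stationary point
# satisfying 2*sinh(x) = 6*x, not necessarily the global minimum.
```

In the paper's setting the subproblem is a convex QP rather than a one-line formula, but the structure of the iteration is the same, which is why a handful of iterations suffice.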

  12. Comparing and Using Occupation-Focused Models.

    Science.gov (United States)

    Wong, Su Ren; Fisher, Gail

    2015-01-01

    As health care moves toward understanding the importance of function, participation and occupation, occupational therapists would be well served to use occupation-focused theories to guide intervention. Most therapists understand that applying occupation-focused models supports best practice, but many do not routinely use these models. Barriers to application of theory include lack of understanding of the models and limited strategies to select and apply them for maximum client benefit. The aim of this article is to compare occupation-focused models and provide recommendations on how to choose and combine these models in practice; and to provide a systematic approach for integrating occupation-focused models with frames of reference to guide assessment and intervention.

  13. Comparing coefficients of nested nonlinear probability models

    DEFF Research Database (Denmark)

    Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders

    2011-01-01

    In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements it. The KHB method is a general decomposition method that is unaffected by the rescaling or attenuation bias that arises in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability
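The rescaling bias that KHB corrects for can be demonstrated numerically: adding a control that is independent of X still changes the logit coefficient of X, purely because the latent scale changes. The following is a minimal simulation with a hand-rolled Newton (IRLS) logit, an illustrative sketch and not the khb program itself:

```python
import numpy as np

# Latent-variable logit: y* = x + z + logistic error, y = 1{y* > 0}.
# z is independent of x, so in a linear model adding z would leave the x
# coefficient unchanged; in a logit it still changes, through rescaling alone.
rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
z = 2.0 * rng.normal(size=n)                 # control, independent of x
y = (x + z + rng.logistic(size=n) > 0).astype(float)

def fit_logit(X, y, iters=25):
    """Minimal Newton/IRLS logistic regression (no intercept, for brevity)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        H = (X * (p * (1 - p))[:, None]).T @ X   # observed information
        b += np.linalg.solve(H, X.T @ (y - p))   # Newton step on the score
    return b

b_reduced = fit_logit(x[:, None], y)[0]            # logit(y ~ x): attenuated
b_full = fit_logit(np.column_stack([x, z]), y)[0]  # logit(y ~ x + z): near 1.0
```

Here the gap between `b_full` and `b_reduced` is pure rescaling, not mediation; KHB's contribution is a decomposition that separates the two.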

  14. Specialized Language Models using Dialogue Predictions

    CERN Document Server

    Popovici, C; Popovici, Cosmin; Baggia, Paolo

    1996-01-01

    This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...

  15. Comparative Study of Path Loss Models in Different Environments

    Directory of Open Access Journals (Sweden)

    Tilotma Yadav

    2011-04-01

    By using propagation path loss models to estimate the received signal level as a function of distance, it becomes possible to predict the SNR for a mobile communication system. Both theoretical and measurement-based propagation models indicate that average received signal power decreases logarithmically with distance. For comparative analysis we use Okumura’s model, the Hata model, the COST-231 extension to the Hata model, the ECC-33 model and the SUI model, along with practical data. Most of these models are based on a systematic interpretation of measurement data for service areas such as urban (a built-up city or large town crowded with large buildings), suburban (some obstacles near the mobile radio, but still not very congested) and rural (no obstacles such as tall trees or buildings; farmland, rice fields, open fields) in India at 900 MHz and 1800 MHz.
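Of the models compared, the Okumura-Hata urban formula is the most compact to state. The sketch below uses its standard published form for 150-1500 MHz; the antenna heights and distance in the example call are illustrative values, not the study's measurement setup:

```python
import math

def hata_urban(f_mhz, h_base_m, h_mobile_m, d_km):
    """Okumura-Hata median path loss (dB), urban, 150-1500 MHz."""
    # Mobile-antenna height correction for a small/medium city.
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

# Example: 900 MHz, 50 m base station, 1.5 m mobile, 2 km link.
loss_2km = hata_urban(900.0, 50.0, 1.5, 2.0)
loss_5km = hata_urban(900.0, 50.0, 1.5, 5.0)   # loss grows with log-distance
```

The `(44.9 - 6.55*log10(h_base))` term is the distance slope, which is the logarithmic decay the abstract refers to; the other compared models differ mainly in this slope and in their correction terms.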

  16. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low-, medium- or high-risk patients. The development of new caries lesions over a period of three years [Decayed Missing Filled Tooth (DMFT) increment = difference between the Decayed Missing Filled Tooth Surface (DMFTS) index at baseline and at follow-up] allowed examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi-square = 100.073, p = 0.000). The Cariogram was the model that identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavourable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p = 0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  17. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone some verification or validation method, or no verification or validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  18. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...

  19. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    2009-03-01

    While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be only appropriate when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
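The claim that a noiseless tonal signal is perfectly predicted by an all-pole model of order twice the number of sinusoids can be checked directly: for a single sinusoid, the order-2 recursion is exact. A small numerical check (not taken from the paper; frequency and phase are arbitrary):

```python
import numpy as np

# A single sinusoid x[n] = cos(w*n + phi) satisfies the exact order-2 recursion
#   x[n] = 2*cos(w) * x[n-1] - x[n-2]
# i.e. an all-pole LP model of order 2 predicts it with zero residual.
w, phi = 0.3, 0.7
n = np.arange(200)
x = np.cos(w * n + phi)

pred = 2.0 * np.cos(w) * x[1:-1] - x[:-2]   # predict x[2:] from two past samples
err = np.max(np.abs(x[2:] - pred))           # zero up to floating-point rounding
```

With k sinusoids the same argument gives an exact order-2k recursion; the paper's point is that this breaks down once additive noise enters the tonal model.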

  20. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    van Waterschoot Toon

    2008-01-01

    While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be only appropriate when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.

  1. Comparison of tropospheric scintillation prediction models of the Indonesian climate

    Science.gov (United States)

    Chen, Cheng Yee; Singh, Mandeep Jit

    2014-12-01

    Tropospheric scintillation is a phenomenon that will cause signal degradation in satellite communication with low fade margin. Few studies of scintillation have been conducted in tropical regions. To analyze tropospheric scintillation, we obtain data from a satellite link installed at Bandung, Indonesia, at an elevation angle of 64.7° and a frequency of 12.247 GHz from 1999 to 2000. The data are processed and compared with the predictions of several well-known scintillation prediction models. From the analysis, we found that the ITU-R model gives the lowest error rate when predicting the scintillation intensity for fade at 4.68%. However, the model should be further tested using data from higher-frequency bands, such as the K and Ka bands, to verify the accuracy of the model.

  2. Development of Interpretable Predictive Models for BPH and Prostate Cancer

    Science.gov (United States)

    Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, JA

    2015-01-01

    BACKGROUND Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine, in a non-linear manner, several predictors and are better able to predict prostate cancer (PC), but these fail to help the clinician to distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. METHODS An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. RESULTS Statistical dependence with PC and BPH was found for prostate volume (P-value < 0.001), PSA (P-value < 0.001), international prostate symptom score (IPSS; P-value < 0.001), digital rectal examination (DRE; P-value < 0.001), age (P-value < 0.002), antecedents (P-value < 0.006), and meat consumption (P-value < 0.08). The two predictive models that were constructed selected a subset of these, namely, volume, PSA, DRE, and IPSS, obtaining an area under the ROC curve (AUC) between 72% and 80% for both PC and BPH prediction. CONCLUSION PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without any of these pathologies. Our decision tree and logistic regression models outperform the AUC obtained in the compared studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced. PMID:25780348
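The AUC figures quoted above have a model-free definition worth keeping in mind: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A short generic implementation of that rank-based (Mann-Whitney) identity, unrelated to the authors' dataset:

```python
import numpy as np

def auc(scores, labels):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as one half (the Mann-Whitney / rank-sum identity)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).mean()   # positive outscores negative
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

perfect = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])   # fully separated: 1.0
partial = auc([0.9, 0.2, 0.8, 0.3], [1, 0, 0, 1])   # 3 of 4 pairs ordered: 0.75
```

Because AUC depends only on the ranking of scores, it applies equally to the probability outputs of a logistic regression and to the leaf proportions of a decision tree, which is what makes it a fair basis for comparing the two models.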

  3. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  4. Gas explosion prediction using CFD models

    Energy Technology Data Exchange (ETDEWEB)

    Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)

    2006-07-15

    A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour, for example when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology and the greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly capable of solving a wide range of explosion problems. A CFD-based explosion code - FLACS - can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS code, some extensions made to FLACS, model validation exercises, applications, and some results from blast load prediction within an industrial facility are presented. (orig.)

  5. Genetic models of homosexuality: generating testable predictions.

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.

  6. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. A Study On Distributed Model Predictive Consensus

    CERN Document Server

    Keviczky, Tamas

    2008-01-01

    We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.

  8. Prediction models of prevalent radiographic vertebral fractures among older men.

    Science.gov (United States)

    Schousboe, John T; Rosen, Harold R; Vokes, Tamara J; Cauley, Jane A; Cummings, Steven R; Nevitt, Michael C; Black, Dennis M; Orwoll, Eric S; Kado, Deborah M; Ensrud, Kristine E

    2014-01-01

    No studies have compared how well different prediction models discriminate older men who have a radiographic prevalent vertebral fracture (PVFx) from those who do not. We used area under receiver operating characteristic curves and a net reclassification index to compare how well regression-derived prediction models and nonregression prediction tools identify PVFx among men age ≥65 yr with femoral neck T-score of -1.0 or less enrolled in the Osteoporotic Fractures in Men Study. The area under the receiver operating characteristic curve for a model with age, bone mineral density, and historical height loss (HHL) was 0.682, compared with 0.692 for a complex model with age, bone mineral density, HHL, prior non-spine fracture, body mass index, back pain, grip strength, smoking, and glucocorticoid use (p values for difference in 5 bootstrapped samples 0.14-0.92). This complex model, using a cutpoint prevalence of 5%, correctly reclassified only a net 5.7% (p = 0.13) of men as having or not having a PVFx compared with a simple criteria list (age ≥ 80 yr, HHL >4 cm, or glucocorticoid use). In conclusion, simple criteria identify older men with PVFx as well as regression-based models do. Future research to identify additional risk factors that more accurately identify older men with PVFx is needed.
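
The two-category net reclassification index used in this abstract is straightforward to compute once each model assigns a risk on either side of the 5% cutpoint. A minimal sketch follows; the outcomes and risk scores are synthetic, and none of the numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic fracture outcomes and predicted risks from a "simple" and a
# "complex" model; all distributions are illustrative only.
n = 1000
event = rng.random(n) < 0.2
risk_simple = np.clip(0.15 + 0.10 * event + rng.normal(0, 0.1, n), 0, 1)
risk_complex = np.clip(0.15 + 0.15 * event + rng.normal(0, 0.1, n), 0, 1)

def nri(event, old, new, cut=0.05):
    """Two-category net reclassification index at a single risk cutpoint.

    NRI = (P(up|event) - P(down|event)) + (P(down|no event) - P(up|no event))
    where "up"/"down" mean the new model moved the subject across the cutpoint.
    """
    up = (new >= cut) & (old < cut)
    down = (new < cut) & (old >= cut)
    ev, ne = event, ~event
    return (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())

print(f"NRI = {nri(event, risk_simple, risk_complex):.3f}")
```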

  9. A predictive model of music preference using pairwise comparisons

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Gallego, Javier Saez; Larsen, Jan

    2012-01-01

    Music recommendation is an important aspect of many streaming services and multi-media systems; however, it is typically based on so-called collaborative filtering methods. In this paper we consider the recommendation task from a personal viewpoint and examine to which degree music preference can be elicited and predicted using simple and robust queries such as pairwise comparisons. We propose to model - and in turn predict - the pairwise music preference using a very flexible model based on Gaussian Process priors, for which we describe the required inference. We further propose a specific covariance function and evaluate the predictive performance on a novel dataset. In a recommendation style setting we obtain a leave-one-out accuracy of 74% compared to 50% with random predictions, showing potential for further refinement and evaluation.
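
The core idea of learning preferences from pairwise queries can be illustrated without Gaussian-process machinery. The sketch below fits a simple Bradley-Terry model (a deliberate simplification of the paper's GP approach) to noisy pairwise outcomes; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 20 "tracks" with latent utilities, and noisy
# pairwise comparison outcomes generated from a logistic model.
n_items, n_pairs = 20, 400
true_u = rng.normal(size=n_items)
i, j = rng.integers(0, n_items, (2, n_pairs))
keep = i != j
i, j = i[keep], j[keep]
# Outcome: 1 if item i was preferred over item j.
y = (rng.random(len(i)) < 1 / (1 + np.exp(-(true_u[i] - true_u[j])))).astype(float)

# Bradley-Terry fit by gradient ascent on the log-likelihood.
u = np.zeros(n_items)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(u[i] - u[j])))
    g = y - p
    grad = np.zeros(n_items)
    np.add.at(grad, i, g)
    np.add.at(grad, j, -g)
    u += 0.05 * grad / len(i)

acc = np.mean((u[i] > u[j]) == (y == 1))
print(f"pairwise accuracy on the elicited comparisons = {acc:.2f} (random = 0.50)")
```

The GP model in the paper generalises this by placing a prior over the utility function itself, so predictions extend to unseen items via the covariance function.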

  10. Support vector regression model for complex target RCS predicting

    Institute of Scientific and Technical Information of China (English)

    Wang Gu; Chen Weishi; Miao Jungang

    2009-01-01

    The electromagnetic scattering computation has developed rapidly for many years; however, some computing problems for complex and coated targets cannot be solved using the existing theory and computing models. A computing model based on data is established to compensate for the insufficiency of theoretical models. Based on the "support vector regression method", which is formulated on the principle of minimizing a structural risk, a data model to predict the unknown radar cross section of appointed targets is given. Comparison between the actual data and the results of this prediction model proved that the support vector regression method is workable and offers comparable precision.
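
The data-driven idea (fit a smooth nonlinear regressor to measured RCS samples where no closed-form scattering model applies) can be sketched compactly. To keep the example self-contained, kernel ridge regression with an RBF kernel is used here as a stand-in for support vector regression; the "RCS versus aspect angle" curve is entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "RCS vs. aspect angle" curve plus measurement noise (illustrative).
angle = np.linspace(0, np.pi, 80)
rcs = np.sin(3 * angle) + 0.3 * np.cos(7 * angle)
noisy = rcs + rng.normal(0, 0.05, angle.size)

def rbf(a, b, gamma=20.0):
    """RBF kernel matrix between two sets of 1-D inputs."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Kernel ridge regression: solve (K + lam*I) alpha = y.
lam = 1e-3
K = rbf(angle, angle)
alpha = np.linalg.solve(K + lam * np.eye(len(angle)), noisy)

pred = rbf(angle, angle) @ alpha
rmse = np.sqrt(np.mean((pred - rcs) ** 2))
print(f"fit RMSE against the noise-free curve: {rmse:.3f}")
```

A true SVR replaces the squared loss with an epsilon-insensitive loss, which yields a sparse set of support vectors, but the kernel-based function class is the same.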

  11. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

    Full Text Available Management by metrics is the expectation from IT service providers to stay as a differentiator. Given a project and its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are re-active and come too late in the life cycle: root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we pro-actively predict defect metrics and put a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  12. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied, yet from the paper of Altman in 1968 to the recent papers of the 1990s, progress in prediction accuracy has not been satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  13. A CHAID Based Performance Prediction Model in Educational Data Mining

    Directory of Open Access Journals (Sweden)

    R. Bhaskaran

    2010-01-01

    Full Text Available The performance in higher secondary school education in India is a turning point in the academic lives of all students. As this academic performance is influenced by many factors, it is essential to develop a predictive data mining model for students' performance so as to identify the slow learners and study the influence of the dominant factors on their academic performance. In the present investigation, a survey-cum-experimental methodology was adopted to generate a database, which was constructed from a primary and a secondary source. While the primary data was collected from the regular students, the secondary data was gathered from the school and the office of the Chief Educational Officer (CEO). A total of 1000 datasets of the year 2006 from five different schools in three different districts of Tamilnadu were collected. The raw data was preprocessed in terms of filling up missing values, transforming values from one form into another and relevant attribute/variable selection. As a result, we had 772 student records, which were used for CHAID prediction model construction. A set of prediction rules was extracted from the CHAID prediction model and the efficiency of the generated model was assessed. The accuracy of the present model was compared with other models and has been found to be satisfactory.
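
The split criterion at the heart of CHAID is a chi-square test: at each node, the predictor whose cross-tabulation with the outcome is most significant is chosen. A minimal sketch with synthetic student data (the predictors and effect sizes are invented, and the record count merely echoes the paper's 772):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: "attendance" is made informative for passing,
# "district" is irrelevant by construction.
n = 772
attendance = rng.integers(0, 2, n)   # 0 = low, 1 = high
district = rng.integers(0, 3, n)
passed = (rng.random(n) < np.where(attendance == 1, 0.8, 0.4)).astype(int)

def chi2_stat(x, y):
    """Pearson chi-square statistic of the x-by-y contingency table."""
    table = np.zeros((x.max() + 1, y.max() + 1))
    np.add.at(table, (x, y), 1)
    expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

stats = {"attendance": chi2_stat(attendance, passed),
         "district": chi2_stat(district, passed)}
best = max(stats, key=stats.get)
print(f"chi-square per predictor: {stats}; CHAID would split on '{best}'")
```

Full CHAID additionally merges predictor categories that are not significantly different and applies Bonferroni-adjusted p-values, but the splitting statistic is the one shown.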

  14. Predicting nucleosome positioning using a duration Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Widom Jonathan

    2010-06-01

    Full Text Available Abstract Background The nucleosome is the fundamental packing unit of DNAs in eukaryotic cells. Its detailed positioning on the genome is closely related to chromosome functions. Increasing evidence has shown that genomic DNA sequence itself is highly predictive of nucleosome positioning genome-wide. Therefore a fast software tool for predicting nucleosome positioning can help understanding how a genome's nucleosome organization may facilitate genome function. Results We present a duration Hidden Markov model for nucleosome positioning prediction by explicitly modeling the linker DNA length. The nucleosome and linker models trained from yeast data are re-scaled when making predictions for other species to adjust for differences in base composition. A software tool named NuPoP is developed in three formats for free download. Conclusions Simulation studies show that modeling the linker length distribution and utilizing a base composition re-scaling method both improve the prediction of nucleosome positioning regarding sensitivity and false discovery rate. NuPoP provides a user-friendly software tool for predicting the nucleosome occupancy and the most probable nucleosome positioning map for genomic sequences of any size. When compared with two existing methods, NuPoP shows improved performance in sensitivity.

  15. Seasonal Predictability in a Model Atmosphere.

    Science.gov (United States)

    Lin, Hai

    2001-07-01

    The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  16. Three-model ensemble wind prediction in southern Italy

    Science.gov (United States)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecast) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
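
The headline result (an ensemble mean with lower RMSE than any member) follows from error averaging whenever the members' errors are not perfectly correlated. A self-contained numerical sketch with three synthetic "model" forecasts (all values illustrative, not RAMS/BOLAM/MOLOCH output):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic observed wind speed and three model forecasts with
# independent errors; their plain average plays the role of the TME.
truth = 5 + 2 * np.sin(np.linspace(0, 20, 400))          # m/s
models = [truth + rng.normal(0, 1.0, truth.size) for _ in range(3)]
ensemble = np.mean(models, axis=0)

def rmse(forecast):
    return np.sqrt(np.mean((forecast - truth) ** 2))

single = [rmse(m) for m in models]
print(f"single-model RMSE: {[round(r, 2) for r in single]}")
print(f"ensemble RMSE:     {rmse(ensemble):.2f}")
```

With independent errors of equal variance, averaging three members cuts the RMSE by roughly a factor of sqrt(3); correlated real-model errors reduce the gain, which is consistent with the 22-30 % improvement reported above.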

  17. Formability prediction for AHSS materials using damage models

    Science.gov (United States)

    Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara

    2017-05-01

    Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, the use of these materials, characterized by a high strength-to-weight ratio, stiffness and high work hardening at early stages of plastic deformation, has imposed many challenges on the sheet metal industry, mainly their low formability and different behaviour compared to traditional steels. This represents a challenging task, both to obtain a successful component and when using numerical simulation to predict material behaviour and its fracture limits. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on the classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based and phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out, in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and Gurson-Tvergaard-Needleman (GTN), corresponding to the first two of the groups listed above. Finite element analysis is used considering these damage mechanics approaches and the obtained results are compared with experimental Nakajima tests, making it possible to evaluate and validate the ability to predict damage and formability limits for the previously defined approaches.

  18. A kinetic model for predicting biodegradation.

    Science.gov (United States)

    Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O

    2007-01-01

    Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
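
Under the first-order assumption described above, the remaining parent fraction is C(t) = exp(-k t), so the extent of ultimate degradation after t days is 1 - exp(-k t) and the half-life is ln(2)/k. A worked example (the rate constant is assumed for illustration, not a CATABOL output):

```python
import numpy as np

# First-order biodegradation kinetics with an assumed rate constant.
k = 0.12                                  # per day (illustrative)
half_life = np.log(2) / k                 # primary half-life in days
extent_10d = 1 - np.exp(-k * 10)          # degradation in the 10-day window

print(f"half-life: {half_life:.1f} days")
print(f"degradation within the 10-day window: {extent_10d:.0%}")
```

Checking the predicted extent against the pass level of the OECD 301F ready-biodegradability test within the 10-day window is then a single comparison.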

  19. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche-modeling. The publication dates of the returned search results are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL’s IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  20. Comparative measurement and calculation for predicting population development in Jiangsu coastal areas based on three models

    Institute of Scientific and Technical Information of China (English)

    王亮

    2013-01-01

    Applying three theoretical models - the Malthusian population model, the logistic growth model, and a linear function - together with statistical data from the sixth national population census for Jiangsu Province, this paper predicts the population of the Jiangsu coastal areas for 2012-2020, studies the characteristics of population change in the region, and forecasts future growth rates and population size. The five results produced by the three models differ considerably from one another. Based on the actual situation of the Jiangsu coastal areas and previous research results, the paper concludes that the total population of the region will reach 21.2547 million by 2020. The results are intended to provide a reference for the scientific development of the Jiangsu coastal areas.
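
The three model families compared in the paper can be written down directly: Malthus P(t) = P0 e^{rt}, logistic P(t) = K / (1 + (K/P0 - 1) e^{-rt}), and a linear trend. The parameters below (baseline, growth rate, carrying capacity, linear increment) are assumed for illustration and are not the paper's fitted values.

```python
import numpy as np

# Stylised 2012-2020 projection from a census-style baseline (millions).
p0, r, K = 20.0, 0.01, 25.0        # baseline, growth rate, carrying capacity
years = np.arange(0, 9)            # offsets for 2012 .. 2020

malthus = p0 * np.exp(r * years)                          # unbounded growth
logistic = K / (1 + (K / p0 - 1) * np.exp(-r * years))    # saturating growth
linear = p0 + 0.15 * years                                # assumed increment/yr

for name, series in [("Malthus", malthus), ("Logistic", logistic), ("Linear", linear)]:
    print(f"{name:>8}: 2020 population = {series[-1]:.2f} million")
```

The divergence between the curves over even eight years illustrates why the paper's five model runs disagree and why a judgement against local conditions is needed to settle on a single 2020 figure.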

  1. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  2. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  3. A Novel Trigger Model for Sales Prediction with Data Mining Techniques

    Directory of Open Access Journals (Sweden)

    Wenjie Huang

    2015-05-01

    Full Text Available Previous research on sales prediction has always used a single prediction model. However, no single model can perform the best for all kinds of merchandise. Accurate prediction results for just one commodity are meaningless to sellers. A general prediction for all commodities is needed. This paper illustrates a novel trigger system that can match certain kinds of commodities with a prediction model to give better prediction results for different kinds of commodities. We find some related factors for classification. Several classical prediction models are included as basic models for classification. We compared the results of the trigger model with other single models. The results show that the accuracy of the trigger model is better than that of a single model. This has implications for business in that sellers can utilize the proposed system to effectively predict the sales of several commodities.

  4. A new thermal comfort approach comparing adaptive and PMV models

    Energy Technology Data Exchange (ETDEWEB)

    Orosa, Jose A. [Universidade da Coruna, Departamento de Energia y P. M. Paseo de Ronda, n :51, 15011. A Coruna (Spain); Oliveira, Armando C. [Universidade do Porto, Faculdade de Engenharia, New Energy Tec. Unit. Rua Dr Roberto Frias, 4200-465 Porto (Portugal)

    2011-03-15

    In buildings with heating, ventilation, and air-conditioning (HVAC), the Predicted Mean Vote index (PMV) was successful at predicting comfort conditions, whereas in naturally ventilated buildings, only adaptive models provide accurate predictions. On the other hand, permeable coverings can be considered as a passive control method of indoor conditions and, consequently, have implications in the perception of indoor air quality, local thermal comfort, and energy savings. These energy savings were measured in terms of the set point temperature established in accordance with adaptive methods. Problems appear when the adaptive model suggests the same neutral temperature for ambiences with the same indoor temperature but different relative humidities. In this paper, a new design of the PMV model is described to compare the neutral temperature to real indoor conditions. Results showed that this new PMV model tends to overestimate thermal neutralities but with a lower value than Fanger's PMV index. On the other hand, this new PMV model considers indoor relative humidity, showing a clear differentiation of indoor ambiences in terms of it, unlike adaptive models. Finally, spaces with permeable coverings present indoor conditions closer to thermal neutrality, with corresponding energy savings. (author)

  5. Probabilistic prediction models for aggregate quarry siting

    Science.gov (United States)

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
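
Weights of evidence reduce, for each binary predictor pattern B (e.g. "within a given distance of a highway"), to log-ratios of conditional probabilities of B given presence or absence of the deposit/quarry D. A worked sketch with hypothetical cell counts (none of these numbers are from the study):

```python
import numpy as np

# Hypothetical study-area counts for one binary predictor pattern.
n_cells = 10_000          # grid cells in the study area
n_quarry = 60             # cells containing a quarry (D)
n_B = 2_500               # cells showing the predictor pattern (B)
n_B_and_quarry = 40       # quarries that fall inside the pattern

p_B_given_D = n_B_and_quarry / n_quarry
p_B_given_notD = (n_B - n_B_and_quarry) / (n_cells - n_quarry)

w_plus = np.log(p_B_given_D / p_B_given_notD)                  # pattern present
w_minus = np.log((1 - p_B_given_D) / (1 - p_B_given_notD))     # pattern absent
contrast = w_plus - w_minus                                    # overall association

print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, contrast = {contrast:.2f}")
```

Summing the weights of several (conditionally independent) patterns onto the prior log-odds of quarry presence gives the posterior prospectivity map.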

  6. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adopt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing so ...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations and focus is on estimating vertical structural response to single person loading.
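
A stochastic load model of this kind can be sketched as a Monte-Carlo loop: draw walking parameters from distributions, evaluate the structural response for each draw, and read off response statistics. The bridge properties and all distributions below are assumed for illustration and are not the paper's values; the response formula is the steady-state acceleration of a damped single-degree-of-freedom mode under a harmonic load.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed footbridge modal properties: frequency (Hz), damping ratio, modal mass (kg).
f_n, zeta, m_modal = 2.0, 0.005, 40_000.0

# Stochastic walking parameters (illustrative distributions).
n_sim = 20_000
f_step = rng.normal(1.9, 0.15, n_sim)                        # step frequency (Hz)
weight = rng.normal(750.0, 100.0, n_sim)                     # pedestrian weight (N)
dlf = 0.4 * np.clip(rng.normal(1.0, 0.1, n_sim), 0.5, 1.5)   # dynamic load factor

# Steady-state SDOF acceleration amplitude: (F0/m) * beta^2 / sqrt((1-beta^2)^2 + (2*zeta*beta)^2).
beta = f_step / f_n
amp = dlf * weight / m_modal
accel = amp * beta**2 / np.sqrt((1 - beta**2) ** 2 + (2 * zeta * beta) ** 2)

print(f"mean vertical acceleration: {accel.mean():.3f} m/s^2")
print(f"95th percentile:            {np.quantile(accel, 0.95):.3f} m/s^2")
```

The percentile output is exactly the kind of probabilistic statement a deterministic parameter set cannot provide, which is the point made in the abstract.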

  7. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...

  8. Comparing numerically exact and modelled static friction

    Directory of Open Access Journals (Sweden)

    Krengel Dominik

    2017-01-01

    Full Text Available Currently there exists no mechanically consistent “numerically exact” implementation of static and dynamic Coulomb friction for general soft-particle simulations with arbitrary contact situations in two or three dimensions, but only along one dimension. We outline a differential-algebraic equation approach for a “numerically exact” computation of friction in two dimensions and compare its application to the Cundall-Strack model in some test cases.

  9. Predicting nucleic acid binding interfaces from structural models of proteins.

    Science.gov (United States)

    Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael

    2012-02-01

    The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models better predict the real nucleic acid binding interfaces compared with patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.

  10. Predictive In Vivo Models for Oncology.

    Science.gov (United States)

    Behrens, Diana; Rolff, Jana; Hoffmann, Jens

    2016-01-01

    Experimental oncology research and preclinical drug development both require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models, and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome current limitations imposed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.

  11. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  12. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is widely discussed by the scientific community, with results that remain unsatisfactory because dynamical models have difficulty reproducing the behavior of the Inter-Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of oceanic, atmospheric, and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.
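The Maximum Covariance Analysis at the core of methods like S4CAST amounts to a singular value decomposition of the cross-covariance matrix between predictor and predictand anomaly fields. A minimal sketch with random stand-ins for the SST and rainfall fields (the field sizes and data below are assumptions, not the model's inputs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical anomaly fields: SST predictor (time x grid points) and
# rainfall predictand (time x stations), each with the time mean removed.
t, nx, ny = 40, 12, 8
sst = rng.standard_normal((t, nx))
rain = rng.standard_normal((t, ny))
sst -= sst.mean(axis=0)
rain -= rain.mean(axis=0)

# Maximum Covariance Analysis: SVD of the cross-covariance matrix.
C = sst.T @ rain / (t - 1)               # (nx, ny) cross-covariance
u, s, vt = np.linalg.svd(C, full_matrices=False)

# Leading pair of coupled spatial patterns and their expansion coefficients.
sst_pattern, rain_pattern = u[:, 0], vt[0, :]
sst_series = sst @ sst_pattern           # time series of the SST mode
rain_series = rain @ rain_pattern        # time series of the rainfall mode

# Fraction of squared covariance explained by the leading pair.
frac = s[0] ** 2 / np.sum(s ** 2)
```

By construction, the covariance between the two expansion-coefficient series equals the leading singular value, which is what makes the leading pair the "maximum covariance" mode.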

  13. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...... preprocessing [Dau et al., 1997. J. Acoust. Soc. Am. 102, 2892-2905] with a simple central stage that describes the similarity of the test signal with the corresponding reference signal at a level of the internal representation of the signals. The model was compared with previous approaches, whereby a speech...... in noise experiment was used for training and an ideal binary mask experiment was used for evaluation. All three models were able to capture the trends in the speech in noise training data well, but the proposed model provides a better prediction of the binary mask test data, particularly when the binary...

  14. A framework for evaluating forest landscape model predictions using empirical data and knowledge

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson; William D. Dijak; Qia. Wang

    2014-01-01

    Evaluation of forest landscape model (FLM) predictions is indispensable to establish the credibility of predictions. We present a framework that evaluates short- and long-term FLM predictions at site and landscape scales. Site-scale evaluation is conducted through comparing raster cell-level predictions with inventory plot data whereas landscape-scale evaluation is...

  15. Comparative Analysis of Measured and Predicted Shrinkage Strain in Concrete

    Directory of Open Access Journals (Sweden)

    Kossakowski P. G.

    2014-06-01

    Full Text Available The article discusses issues related to concrete shrinkage. Basic information on the phenomenon is presented, the factors that govern shrinkage are identified, and the stages of the process are described. Guidance for estimating the shrinkage strain is given according to Eurocode standard PN-EN 1992-1-1:2008. Measured shrinkage strains of C25/30 concrete samples are presented, together with a comparative analysis against the values estimated following the guidelines of PN-EN 1992-1-1:2008.
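For reference, the autogenous component of the Eurocode shrinkage strain (EN 1992-1-1, the standard cited above) can be sketched as follows. This is only a sketch of the autogenous part; the drying-shrinkage component, which requires tabulated humidity and notional-size coefficients, is omitted:

```python
import math

def autogenous_shrinkage(f_ck, t_days):
    """Autogenous shrinkage strain per EN 1992-1-1 (Eqs. 3.11-3.13).

    f_ck   : characteristic cylinder strength in MPa
    t_days : concrete age in days
    """
    eps_ca_inf = 2.5 * (f_ck - 10.0) * 1e-6             # final value
    beta_as = 1.0 - math.exp(-0.2 * math.sqrt(t_days))  # time development
    return beta_as * eps_ca_inf

# C25/30 concrete (f_ck = 25 MPa), as in the tests described above.
eps_28 = autogenous_shrinkage(25.0, 28.0)   # strain at 28 days
eps_inf = autogenous_shrinkage(25.0, 1e6)   # long-term value, 37.5e-6
```

For C25/30 the long-term autogenous strain works out to 2.5 × (25 − 10) × 10⁻⁶ = 37.5 × 10⁻⁶; the total shrinkage strain adds the (larger) drying component on top of this.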

  16. Comparative analysis of Goodwin's business cycle models

    Science.gov (United States)

    Antonova, A. O.; Reznik, S.; Todorov, M. D.

    2016-10-01

    We compare the behavior of solutions of Goodwin's business cycle equation in the form of a neutral delay differential equation with fixed delay (NDDE model) and in the form of differential equations of 3rd, 4th and 5th order (ODE models). Such ODE models (Taylor series expansions of the NDDE in powers of θ) were proposed by N. Dharmaraj and K. Vela Velupillai [6] for investigation of the short periodic sawtooth oscillations in the NDDE. We show that the ODEs of 3rd, 4th and 5th order may approximate the asymptotic behavior of only the main Goodwin mode, but not the sawtooth modes. If the order of the Taylor series expansion exceeds 5, the approximate ODE becomes unstable independently of the time lag θ.

  17. Community monitoring for youth violence surveillance: testing a prediction model.

    Science.gov (United States)

    Henry, David B; Dymnicki, Allison; Kane, Candice; Quintana, Elena; Cartland, Jenifer; Bromann, Kimberly; Bhatia, Shaun; Wisnieski, Elise

    2014-08-01

    Predictive epidemiology is an embryonic field that involves developing informative signatures for disorder and tracking them using surveillance methods. Such efforts can assist the planning and implementation of preventive interventions. Believing that certain minor crimes indicative of gang activity are informative signatures for the emergence of serious youth violence in communities, in this study we aim to predict outbreaks of violence in neighborhoods from pre-existing levels of, and changes in, reports of minor offenses. We develop a prediction equation that uses publicly available neighborhood-level data on disorderly conduct, vandalism, and weapons violations to predict neighborhoods likely to have increases in serious violent crime. Data for this study were taken from the Chicago Police Department ClearMap reporting system, which provided data on index and non-index crimes for each of the 844 Chicago census tracts. Data were available in three-month segments for a single year (fall 2009; winter, spring, and summer 2010). Predicted change in aggravated battery and overall violent crime correlated significantly with actual change. The model was evaluated by comparing alternative models using randomly selected training and test samples, producing favorable results with respect to overfitting, seasonal variation, and spatial autocorrelation. A prediction equation based on winter and spring levels of the predictors had an area under the curve ranging from .65 to .71 for aggravated battery, and .58 to .69 for overall violent crime. We discuss future development of such a model and its potential usefulness in violence prevention and community policing.

  18. The ARIC predictive model reliably predicted risk of type II diabetes in Asian populations

    Directory of Open Access Journals (Sweden)

    Chin Calvin

    2012-04-01

    Full Text Available Abstract Background Identification of high-risk individuals is crucial for effective implementation of type 2 diabetes mellitus prevention programs. Several studies have shown that multivariable predictive functions perform as well as the 2-hour post-challenge glucose in identifying these high-risk individuals. The performance of these functions in Asian populations, where the rise in prevalence of type 2 diabetes mellitus is expected to be the greatest in the next several decades, is relatively unknown. Methods Using data from three Asian populations in Singapore, we compared the performance of three multivariate predictive models in terms of their discriminatory power and calibration quality: the San Antonio Health Study model, Atherosclerosis Risk in Communities model and the Framingham model. Results The San Antonio Health Study and Atherosclerosis Risk in Communities models had better discriminative powers than using only fasting plasma glucose or the 2-hour post-challenge glucose. However, the Framingham model did not perform significantly better than fasting glucose or the 2-hour post-challenge glucose. All published models suffered from poor calibration. After recalibration, the Atherosclerosis Risk in Communities model achieved good calibration, the San Antonio Health Study model showed a significant lack of fit in females and the Framingham model showed a significant lack of fit in both females and males. Conclusions We conclude that adoption of the ARIC model for Asian populations is feasible and highly recommended when local prospective data is unavailable.

  19. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    Science.gov (United States)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying upon established lightweight mirror materials and upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230 K. Test results are compared to model predictions based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in the XRCF thermal vacuum tests, static load gravity deformations were measured and compared to model predictions, and the modal response (dynamic disturbance) was measured and compared to the model. We discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT and its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  20. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. Few studies have looked at patterns of recurrence. The studies currently available show a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. We used a machine learning algorithm called the random forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: The resulting model predicted CDR with a sensitivity of 83.3%, a specificity of 63.1%, and an area under the curve of 82.6%. These results are comparable to those of other studies that have used the RF model. Conclusions: We hope that in the future, machine learning algorithms such as the RF will see wider application.

  1. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  2. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement; therefore, estimating concrete strength at an early age is highly desirable. This study presents an effort to apply neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the outputs have absolute errors of less than 10%. A parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

  3. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters in attenuation relations are peak ground acceleration and spectral acceleration, because these parameters provide useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
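The classical regression step described above can be sketched as an ordinary least-squares fit of a simple attenuation functional form. The functional form, the "true" coefficients, and the synthetic records below are illustrative assumptions, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic catalog: magnitude M, hypocentral distance R (km), and a
# binary site-class indicator S. The "true" coefficients are assumptions.
n = 500
M = rng.uniform(4.0, 7.0, n)
R = rng.uniform(5.0, 200.0, n)
S = rng.integers(0, 2, n).astype(float)
a, b, c, d, sigma = -1.0, 0.9, -1.3, 0.25, 0.3
ln_pga = a + b * M + c * np.log(R) + d * S + rng.normal(0.0, sigma, n)

# Assumed attenuation form: ln(PGA) = a + b*M + c*ln(R) + d*S,
# fitted by ordinary least squares.
X = np.column_stack([np.ones(n), M, np.log(R), S])
coef, *_ = np.linalg.lstsq(X, ln_pga, rcond=None)
```

With enough records, the fit recovers the assumed coefficients; in particular, the negative coefficient on ln(R) captures geometric attenuation, and the site term shifts predictions for the softer site class.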

  4. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far-field noise is modeled using each of the four noise components' respective spectral functions, far-field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out using small-scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise with design parameters, while the latter reveals its importance in relation to other airframe noise components.

  5. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR), since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  6. Comparative analysis of used car price evaluation models

    Science.gov (United States)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to used car price prediction in several articles, but little has been published comparing different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for an empirical, thorough comparison of two algorithms: linear regression and random forest. The two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.

  7. An overview of comparative modelling and resources dedicated to large-scale modelling of genome sequences.

    Science.gov (United States)

    Lam, Su Datt; Das, Sayoni; Sillitoe, Ian; Orengo, Christine

    2017-08-01

    Computational modelling of proteins has been a major catalyst in structural biology. Bioinformatics groups have exploited the repositories of known structures to predict high-quality structural models with high efficiency at low cost. This article provides an overview of comparative modelling, reviews recent developments and describes resources dedicated to large-scale comparative modelling of genome sequences. The value of subclustering protein domain superfamilies to guide the template-selection process is investigated. Some recent cases in which structural modelling has aided experimental work to determine very large macromolecular complexes are also cited.

  8. Comparing spatial and temporal transferability of hydrological model parameters

    Science.gov (United States)

    Patil, Sopan D.; Stieglitz, Marc

    2015-06-01

    Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal aspects of catchment hydrological variability.
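The Kling-Gupta efficiency used above to measure the decline in prediction performance combines correlation, variability bias, and mean bias into a single score. A minimal sketch (the example series are hypothetical):

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """Kling-Gupta efficiency:
    KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2),
    where r is the correlation between simulated and observed flows,
    alpha the ratio of their standard deviations, and beta the ratio
    of their means. KGE = 1 indicates a perfect simulation."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Hypothetical daily streamflow series.
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
perfect = kling_gupta_efficiency(obs, obs)       # exactly 1.0
biased = kling_gupta_efficiency(obs * 1.2, obs)  # bias lowers the score
```

Because the three terms are measured separately, KGE distinguishes a simulation that tracks timing well but is biased (as in the second call, where a 20% multiplicative bias inflates both alpha and beta) from one that is unbiased but poorly correlated.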

  9. Consumer Choice Prediction: Artificial Neural Networks versus Logistic Models

    Directory of Open Access Journals (Sweden)

    Christopher Gan

    2005-01-01

    Full Text Available Conventional econometric models, such as discriminant analysis and logistic regression, have been used to predict consumer choice. However, in recent years there has been a growing interest in applying artificial neural networks (ANN) to analyse consumer behaviour and to model the consumer decision-making process. The purpose of this paper is to empirically compare the predictive power of the probabilistic neural network (PNN), a special class of neural networks, and a multilayer feed-forward network (MLFN) with a logistic model on consumers' choices between electronic banking and non-electronic banking. Data for this analysis were obtained through a mail survey sent to 1,960 New Zealand households. The questionnaire gathered information on the factors consumers use to decide between electronic banking and non-electronic banking. The factors include service quality dimensions, perceived risk factors, user input factors, price factors, service product characteristics and individual factors. In addition, demographic variables including age, gender, marital status, ethnic background, educational qualification, employment, income and area of residence are considered in the analysis. Empirical results showed that both ANN models (MLFN and PNN) exhibit a higher overall percentage correct on consumer choice predictions than the logistic model. Furthermore, the PNN proves to be the best predictive model, since it has the highest overall percentage correct and very low Type I and Type II error rates.

  10. Optimal feedback scheduling of model predictive controllers

    Institute of Scientific and Technical Information of China (English)

    Pingfang ZHOU; Jianying XIE; Xiaolong DENG

    2006-01-01

    Model predictive control (MPC) cannot be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus achieving predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against variation in the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.

  11. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), previously applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to applying the methodology to an NWP model is presented in this study. The challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of NWP model quality to the model parameter space has to be clarified, and the overall procedure optimized in terms of the computing resources required for calibrating an NWP model. Three free model parameters, mainly affecting turbulence parameterization schemes, were selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the calibration is both affordable in terms of computing resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or used to customize the same model implementation for different climatological areas.

  12. Nuclear shadowing in deep-inelastic scattering on nuclei: Comparing predictions of three unitarization schemes

    Science.gov (United States)

    Carvalho, F.; Gonçalves, V. P.; Navarra, F. S.; de Oliveira, E. G.

    2013-06-01

    The measurement of the nuclear structure functions F2A(x,Q2) and FLA(x,Q2) at the future electron-ion collider will be of great relevance to understanding the origin of nuclear shadowing and to probing gluon saturation effects. Currently there are several phenomenological models, based on very distinct approaches, which describe the scarce experimental data quite successfully. One of the main uncertainties comes from the schemes used to include the effects associated with multiple scatterings and to unitarize the cross section. In this paper we compare the predictions of three distinct unitarization schemes for the nuclear structure function F2A that use the same theoretical input to describe the projectile-nucleon interaction. In particular, we consider as input the predictions of the color glass condensate formalism, which reproduce the inclusive and diffractive ep HERA data. Our results demonstrate that experimental analysis of F2A is able to discriminate between the unitarization schemes.

  13. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.

  14. Model predictive control of MSMPR crystallizers

    Science.gov (United States)

    Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc

    2005-02-01

    A multi-input-multi-output (MIMO) control problem for isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size distribution, the solute concentration, and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found quite adequate for model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be nearly separately controlled by the residence time and the inlet solute concentration, respectively. With seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study have shown that linear MPC is an adaptable and feasible controller for continuous crystallizers.

  15. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for an anisotropic material yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  16. Predictive Model of Energy Consumption in Beer Production

    Directory of Open Access Journals (Sweden)

    Tiecheng Pu

    2013-07-01

    Full Text Available A predictive model of energy consumption in beer production is presented, based on subtractive clustering and an Adaptive-Network-Based Fuzzy Inference System (ANFIS). By applying subtractive clustering to historical energy-consumption data, the limitation of relying on expert experience when determining the number of fuzzy rules is overcome. The parameters of the fuzzy inference system are acquired through the adaptive network structure and a hybrid on-line learning algorithm. The method can predict and guide the energy consumption of the actual production process, and a consumption-reduction scheme is provided based on the enterprise's actual situation. Finally, concrete examples verify the feasibility of the method in comparison with a Radial Basis Function (RBF) neural network predictive model.

  17. Predictions of titanium alloy properties using thermodynamic modeling tools

    Science.gov (United States)

    Zhang, F.; Xie, F.-Y.; Chen, S.-L.; Chang, Y. A.; Furrer, D.; Venkatesh, V.

    2005-12-01

    Thermodynamic modeling tools have become essential in understanding the effect of alloy chemistry on the final microstructure of a material. Implementation of such tools to improve titanium processing via parameter optimization has resulted in significant cost savings through the elimination of shop/laboratory trials and tests. In this study, a thermodynamic modeling tool developed at CompuTherm, LLC, is being used to predict β transus, phase proportions, phase chemistries, partitioning coefficients, and phase boundaries of multicomponent titanium alloys. This modeling tool includes Pandat, software for multicomponent phase equilibrium calculations, and PanTitanium, a thermodynamic database for titanium alloys. Model predictions are compared with experimental results for one α-β alloy (Ti-64) and two near-β alloys (Ti-17 and Ti-10-2-3). The alloying elements, especially the interstitial elements O, N, H, and C, have been shown to have a significant effect on the β transus temperature, and are discussed in more detail herein.

  18. Comparing frailty measures in their ability to predict adverse outcome among older residents of assisted living

    Directory of Open Access Journals (Sweden)

    Hogan David B

    2012-09-01

    Full Text Available Abstract Background Few studies have directly compared the competing approaches to identifying frailty in more vulnerable older populations. We examined the ability of two versions of a frailty index (43 vs. 83 items), the Cardiovascular Health Study (CHS) frailty criteria, and the CHESS scale to accurately predict the occurrence of three outcomes among Assisted Living (AL) residents followed over one year. Methods The three frailty measures and the CHESS scale were derived from assessment items completed among 1,066 AL residents (aged 65+) participating in the Alberta Continuing Care Epidemiological Studies (ACCES). Adjusted risks of one-year mortality, hospitalization and long-term care placement were estimated for those categorized as frail or pre-frail compared with non-frail (or at high/intermediate vs. low risk on CHESS). The area under the ROC curve (AUC) was calculated for select models to assess the predictive accuracy of the different frailty measures and the CHESS scale in relation to the three outcomes examined. Results Frail subjects defined by the three approaches, and those at high risk for decline on CHESS, showed a statistically significant increased risk for death and long-term care placement compared with those categorized as either not frail or at low risk for decline. The risk estimates for hospitalization associated with the frailty measures and CHESS were generally weaker, with one of the frailty indices (43 items) showing no significant association. For death and long-term care placement, the addition of frailty (however derived) or CHESS significantly improved on the AUC obtained with a model including only age, sex and co-morbidity, though the magnitude of improvement was sometimes small. The different frailty/risk models did not differ significantly from each other in predicting mortality or hospitalization; however, one of the frailty indices (83 items) showed significantly better performance than the other measures in predicting long-term care placement.

  19. The Prediction Model of Dam Uplift Pressure Based on Random Forest

    Science.gov (United States)

    Li, Xing; Su, Huaizhi; Hu, Jiang

    2017-09-01

    The prediction of dam uplift pressure is of great significance in dam safety monitoring. Based on comprehensive consideration of various factors, 18 parameters are selected as the main factors affecting the prediction of uplift pressure, and actual uplift-pressure monitoring data are used as the evaluation factors for the prediction model. Dam uplift pressure prediction models are built based on the random forest algorithm and a support vector machine, and the predictive performance of the two models is compared and analyzed. At the same time, based on the established random forest prediction model, the significance of each factor is analyzed, and the importance of each factor in the prediction model is calculated by the importance function. Results showed that: (1) the RF prediction model can quickly and accurately predict the uplift pressure value according to the influence factors, with an average prediction accuracy above 96%; compared with the support vector machine (SVM) model, the random forest model has better robustness, better prediction precision and faster convergence speed, and is more robust to missing and unbalanced data. (2) The effect of water level on uplift pressure is the largest, and the influence of rainfall on uplift pressure is the smallest compared with the other factors.
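    The random-forest step with feature importances can be sketched with scikit-learn on synthetic data (the variables, coefficients, and relationship below are invented for illustration; the paper uses 18 monitored factors):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic monitoring records in which uplift pressure depends strongly on
# reservoir water level and only weakly on rainfall (an assumed relationship).
rng = np.random.default_rng(0)
n = 500
water_level = rng.uniform(100, 140, n)   # m
rainfall = rng.uniform(0, 50, n)         # mm
temperature = rng.uniform(-5, 30, n)     # deg C
uplift = 0.8 * water_level + 0.02 * rainfall + rng.normal(0, 1, n)

X = np.column_stack([water_level, rainfall, temperature])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, uplift)
importances = dict(zip(["water_level", "rainfall", "temperature"],
                       rf.feature_importances_))
# Water level should receive by far the largest importance score.
```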

  20. Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages

    Science.gov (United States)

    Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart

    2016-01-01

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure—regularities arising in an ordered series of syllable timings—testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint. PMID:27994544

  1. A Comparative Study of Molecular Models for Water in the Prediction of Saturation Properties

    Directory of Open Access Journals (Sweden)

    A Elías-Domínguez

    2004-01-01

    Full Text Available This work presents a comparative study of five different molecular models for water, evaluating the phase equilibrium at saturation by the Monte Carlo simulation method in the Gibbs ensemble. The molecular models contain either three or four sites with charges. The coexisting liquid and vapor densities as well as the saturation pressure at different temperatures are calculated. The critical point and the enthalpies of vaporization are both obtained from the results of the simulation. The comparison between the simulation results and experimental data taken from the literature shows good agreement.

  2. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    load shifting capabilities of the units that adapt to the given price predictions. We furthermore evaluated control performance in terms of economic savings for different control strategies and forecasts. Chapter 5 describes and compares the proposed large-scale Aggregator control strategies....... Aggregators are assumed to play an important role in the future Smart Grid and coordinate a large portfolio of units. The developed economic MPC controllers interface each unit directly to an Aggregator. We developed several MPC-based aggregation strategies that coordinate the global behavior of a portfolio...

  3. Predictive modelling of ferroelectric tunnel junctions

    Science.gov (United States)

    Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.

    2016-05-01

    Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.

  4. Simple predictions from multifield inflationary models.

    Science.gov (United States)

    Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C

    2014-04-25

    We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges ns∈(0.9455,0.9534), α∈(-9.741,-7.047)×10-4, r∈(0.1445,0.1449), and riso∈(0.02137,3.510)×10-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.
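    For context, N-quadratic inflation refers to the standard potential of N non-interacting quadratic fields (a textbook form stated here for orientation, not quoted from the paper's equations):

```latex
V(\phi_1,\dots,\phi_N) \;=\; \frac{1}{2}\sum_{i=1}^{N} m_i^2\,\phi_i^2
```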

  5. Accuracy of stereolithographically printed digital models compared to plaster models.

    Science.gov (United States)

    Camardella, Leonardo Tavares; Vilella, Oswaldo V; van Hezel, Marleen M; Breuning, Karel H

    2017-03-30

    This study compared the accuracy of plaster models from alginate impressions with that of printed models from intraoral scanning. A total of 28 volunteers were selected, and alginate impressions and intraoral scans were used to make plaster models and digital models of their dentition, respectively. The digital models were printed using a stereolithographic (SLA) 3D printer with a horseshoe-shaped design. Two calibrated examiners measured distances on the plaster and printed models with a digital caliper. The paired t test was used to determine intraobserver error and to compare the measurements. The Pearson correlation coefficient was used to evaluate the reliability of the measurements for each model type. The measurements on plaster models and printed models showed some significant differences in tooth dimensions and interarch parameters, but these differences were not clinically relevant, except for the transversal measurements. The upper and lower intermolar distances on the printed models were smaller to a statistically significant and clinically relevant degree. Printed digital models made from intraoral scans with the SLA 3D printer studied, with a horseshoe-shaped base, cannot replace conventional plaster models from alginate impressions in orthodontics for diagnosis and treatment planning because of their clinically relevant transversal contraction.

  6. The predictive performance and stability of six species distribution models.

    Science.gov (United States)

    Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao

    2014-01-01

    Predicting species' potential geographical ranges with species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution areas using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions, and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p < 0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials, and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of the SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
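    The stability summary described above can be sketched for one model's AUC values across repeated trials (the AUC values are invented for illustration, and the z-based 99% interval is one common choice, not necessarily the paper's exact procedure):

```python
import statistics as st

# AUC from repeated trials of a single SDM (toy values).
auc_trials = [0.91, 0.93, 0.90, 0.92, 0.94, 0.91, 0.92, 0.93, 0.90, 0.92]

mean = st.mean(auc_trials)
sd = st.stdev(auc_trials)                        # standard deviation across trials
cv = sd / mean                                   # coefficient of variation
half_width = 2.576 * sd / len(auc_trials) ** 0.5  # z quantile for 99% confidence
ci99 = (mean - half_width, mean + half_width)
# A stable model shows a small cv and a narrow ci99.
```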

  7. The predictive performance and stability of six species distribution models.

    Directory of Open Access Journals (Sweden)

    Ren-Yan Duan

    Full Text Available Predicting species' potential geographical range by species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.

  8. Predictions of models for environmental radiological assessment

    Energy Technology Data Exchange (ETDEWEB)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)

    2011-07-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation doses and the risk for human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for {sup 137}Cs and {sup 60}Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

  9. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained......The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule such as which parts of it form sheets, helices or coils. Spacial and other properties are described by the higher order structures. The classification task...

  10. A Modified Model Predictive Control Scheme

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Hu; Wen-Hua Chen

    2005-01-01

    In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.

  11. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.

  12. Explicit model predictive control accuracy analysis

    OpenAIRE

    Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano

    2015-01-01

    Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapped convex regions, with affine control laws associated to each region of the partition. An actual implementation of this explicit MPC in low cost micro-controllers requires the data to be "quantized", i.e. repre...

  13. Numerical Weather Prediction (NWP) and hybrid ARMA/ANN model to predict global radiation

    CERN Document Server

    Voyant, Cyril; Paoli, Christophe; Nivet, Marie Laure

    2012-01-01

    We propose in this paper an original technique to predict global radiation using a hybrid ARMA/ANN model and data issued from a numerical weather prediction model (ALADIN). We particularly look at the Multi-Layer Perceptron. After optimizing our architecture with ALADIN and endogenous data previously made stationary, and using an innovative pre-input layer selection method, we combined it with an ARMA model through a rule based on the analysis of hourly data series. This model has been used to forecast the hourly global radiation for five sites in the Mediterranean area. Our technique outperforms classical models for all the sites. The nRMSE for our hybrid ANN/ARMA model is 14.9%, compared to 26.2% for the naïve persistence predictor. Note that in the stand-alone ANN case the nRMSE is 18.4%. Finally, in order to discuss the reliability of the forecaster outputs, a complementary study concerning the confidence interval of each prediction is proposed.
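    A minimal sketch of the nRMSE comparison, assuming normalization by the mean of the observations (normalization conventions vary, and the numbers below are toy values, not the paper's radiation series):

```python
import math

def nrmse(observed, predicted):
    """RMSE of the prediction normalized by the mean of the observations."""
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))
    return rmse / (sum(observed) / len(observed))

obs = [420.0, 510.0, 600.0, 480.0]          # hourly global radiation, Wh/m^2 (toy)
persistence = [400.0, 420.0, 510.0, 600.0]  # naive predictor: previous hour's value
score = nrmse(obs, persistence)              # lower is better
```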

  14. Critical conceptualism in environmental modeling and prediction.

    Science.gov (United States)

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.

  15. The SIR Epidemiology Model in Predicting Herd Immunity

    Directory of Open Access Journals (Sweden)

    Joanna Nicho

    2010-01-01

The Simple Epidemic Model uses three states to describe the spread of an infection: the susceptible (S), the infected (I), and the recovered (R). This model follows the trend of an infection over time and can predict whether an infection will spread. Using this model, epidemiologists may calculate the percentage of the population that needs to be vaccinated in order to provide the population with immunity from a disease. This study compares the vaccination percentage required for herd immunity to measles, mumps, and rubella against the current percentage of vaccinated individuals.
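The abstract gives no equations, but the standard SIR formulation it refers to can be sketched as follows. The integration scheme and the R0 value quoted for measles are illustrative choices, not taken from the paper:

```python
# Illustrative sketch (not the paper's implementation): Euler integration of
# the classic SIR equations, plus the standard herd-immunity threshold 1 - 1/R0.

def simulate_sir(beta, gamma, i0=1e-4, steps=10000, dt=0.01):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak_i = i
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        peak_i = max(peak_i, i)
    return s, i, r, peak_i

def herd_immunity_threshold(r0):
    """Fraction that must be immune so the infection cannot spread: 1 - 1/R0."""
    return 1.0 - 1.0 / r0

# Measles is often quoted with R0 around 12-18, so the threshold is high:
hit_measles = herd_immunity_threshold(15.0)   # roughly 0.93
```

With R0 = beta/gamma > 1 the simulation produces an epidemic; with the immune fraction above the threshold, I declines from the start.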

  16. Modeling and predicting page-view dynamics on Wikipedia

    CERN Document Server

    Thij, Marijn ten; Laniado, David; Kaltenbrunner, Andreas

    2012-01-01

The simplicity of producing and consuming online content makes it difficult to estimate how much attention Internet users will devote to any given content. This work presents a general overview of temporal patterns in the access to content on a huge collaborative platform. We propose a model for predicting the popularity of promoted content, inspired by the analysis of the page-view dynamics on Wikipedia. Compared to previous studies, the observed popularity patterns are more complex; however, our model uses just a few parameters to fully describe them. The model is validated through empirical measurements.

  17. Nonlinear Economic Model Predictive Control Strategy for Active Smart Buildings

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.

    2016-01-01

Nowadays, the development of advanced and innovative intelligent control techniques for energy management in buildings is a key issue within the smart grid topic. A nonlinear economic model predictive control (EMPC) scheme, based on a branch-and-bound tree search used as the optimization algorithm for solving the nonconvex optimization problem, is proposed in this paper. A simulation using the nonlinear model-based controller to control the temperature levels of an intelligent office building (PowerFlexHouse) is addressed. Its performance is compared with a linear model-based controller. The nonlinear…

  18. Meteorological Drought Prediction Using a Multi-Model Ensemble Approach

    Science.gov (United States)

    Chen, L.; Mo, K. C.; Zhang, Q.; Huang, J.

    2013-12-01

For shorter lead months, the ensemble SPI forecast skill is comparable to that based on persistence. The spread of SPI forecasts among models is small, and the predictive skill comes from the observations appended to the P forecasts. For longer lead months, model forecasts contribute to meteorological drought predictability. The ensemble SPI forecasts have higher skill than those based on persistence or on individual models.

  19. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.

  20. Fuzzy regression modeling for tool performance prediction and degradation detection.

    Science.gov (United States)

    Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L

    2010-10-01

In this paper, the viability of using the Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance prediction and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded in a fuzzy logic inference engine that employs Self-Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem into a simplified linear format in order to further increase the accuracy of prediction and the rate of convergence. The efficacy of the proposed FRM is tested through a case study, namely predicting the remaining useful life of a ball nose milling cutter during dry machining of hardened tool steel with a hardness of 52-54 HRC. A comparative study is further made between four predictive models using the same set of experimental data. It is shown that the FRM is superior to conventional MRM, Back Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.

  1. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from…

  2. A prediction model for ocular damage - Experimental validation.

    Science.gov (United States)

    Heussner, Nico; Vagos, Márcia; Spitzer, Martin S; Stork, Wilhelm

    2015-08-01

With the increasing number of laser applications in medicine and technology, accidental as well as intentional exposure of the human eye to laser sources has become a major concern. Therefore, a prediction model for ocular damage (PMOD) is presented within this work and validated for long-term exposure. This model combines a ray-tracing model with a thermodynamic model of the human eye and an application which determines the thermal damage by implementing the Arrhenius integral. The model is based on our earlier work and is here validated against temperature measurements taken with porcine eye samples. For this validation, three different powers were used: 50 mW, 100 mW and 200 mW, with a spot size of 1.9 mm. The measurements were taken with two different sensing systems, an infrared camera and a fibre-optic probe placed within the tissue. Temperatures were measured for up to 60 s and then compared against simulations. The measured temperatures were found to be in good agreement with the values predicted by the PMOD. To the best of our knowledge, this is the first model validated for both short-term and long-term irradiations in terms of temperature, and it thus demonstrates that temperatures can be accurately predicted within the thermal damage regime. Copyright © 2015 Elsevier Ltd. All rights reserved.
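As a rough illustration of the Arrhenius-integral step the abstract mentions, the sketch below accumulates the damage parameter Omega over a sampled temperature history. The frequency factor and activation energy are representative literature-style values for retinal thermal damage, not the parameters of the PMOD:

```python
import math

# Illustrative Arrhenius damage integral:
#   Omega = integral of A * exp(-Ea / (R * T(t))) dt over the exposure,
# with Omega >= 1 conventionally taken as irreversible damage. The constants
# a_factor and ea below are placeholder literature-style values, not the
# paper's calibrated parameters.

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_omega(temps_kelvin, dt, a_factor=3.1e99, ea=6.28e5):
    """Rectangle-rule sum of the damage integral for a sampled temperature
    history temps_kelvin (K) with sample spacing dt (s)."""
    return sum(a_factor * math.exp(-ea / (R_GAS * t)) * dt for t in temps_kelvin)

# Body temperature for a minute accumulates essentially no damage, while a
# sustained elevation of ~25 K crosses the Omega = 1 threshold quickly:
omega_body = arrhenius_omega([310.0] * 60, dt=1.0)
omega_hot = arrhenius_omega([335.0] * 10, dt=1.0)
```

The strong temperature sensitivity of the exponential is why accurate temperature prediction, as validated in this paper, matters for damage estimates.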

  3. Predictive modeling of respiratory tumor motion for real-time prediction of baseline shifts

    Science.gov (United States)

    Balasubramanian, A.; Shamsuddin, R.; Prabhakaran, B.; Sawant, A.

    2017-03-01

Baseline shifts in respiratory patterns can result in significant spatiotemporal changes in patient anatomy (compared to that captured during simulation), in turn causing geometric and dosimetric errors in the administration of thoracic and abdominal radiotherapy. We propose predictive modeling of the tumor motion trajectories for predicting a baseline shift ahead of its occurrence. The key idea is to use the features of the tumor motion trajectory over a 1 min window, and predict the occurrence of a baseline shift in the 5 s that immediately follow (lookahead window). In this study, we explored a preliminary trend-based analysis with multi-class annotations as well as a more focused binary classification analysis. In both analyses, a number of different inter-fraction and intra-fraction training strategies were studied, both offline and online, along with data sufficiency and skew compensation for class imbalances. The performance of different training strategies was compared across multiple machine learning classification algorithms, including nearest neighbor, Naïve Bayes, linear discriminant and ensemble Adaboost. The prediction performance is evaluated using metrics such as accuracy, precision, recall and the area under the curve (AUC) for the receiver operating characteristic curve. The key results of the trend-based analysis indicate that (i) intra-fraction training strategies achieve the highest prediction accuracies (90.5-91.4%); (ii) the predictive modeling yields the lowest accuracies (50-60%) when the training data do not include any information from the test patient; (iii) the prediction latencies are as low as a few hundred milliseconds, and thus conducive to real-time prediction. The binary classification performance is promising, as indicated by high AUCs (0.96-0.98). It also confirms the utility of prior data from previous patients, as well as the necessity of training the classifier on some initial data from the new patient for reasonable…

  4. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time…), … recovery of the track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used over a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50…

  5. A COMPACT MODEL FOR PREDICTING ROAD TRAFFIC NOISE

    Directory of Open Access Journals (Sweden)

R. Golmohammadi, M. Abbaspour, P. Nassiri, H. Mahjub

    2009-07-01

Noise is one of the most important sources of pollution in metropolitan areas. The recognition of road traffic noise as one of the main sources of environmental pollution has led to the development of models that enable us to predict noise levels from fundamental variables. Traffic noise prediction models are required as aids in the design of roads and sometimes in the assessment of existing, or envisaged changes in, traffic noise conditions. The purpose of this study was to design a road traffic noise prediction model from traffic variables and transportation conditions in Iran. This paper is the result of research conducted in the city of Hamadan with the ultimate objective of setting up a traffic noise model based on the traffic conditions of Iranian cities. Noise levels and other variables were measured in 282 samples to develop a statistical regression model based on the A-weighted equivalent noise level for Iranian road conditions. The results revealed that the average LAeq across all stations was 69.04 ± 4.25 dB(A), the average vehicle speed was 44.57 ± 11.46 km/h and the average traffic load was 1231.9 ± 910.2 V/h. The developed model has seven explanatory input variables and achieves a high regression coefficient (R2 = 0.901). Comparing the means of predicted and measured equivalent sound pressure levels (LAeq) showed small differences of less than -0.42 dB(A) and -0.77 dB(A) for the cities of Tehran and Hamadan, respectively. The suggested road traffic noise model can be effectively used as a decision support tool for predicting the equivalent sound pressure level index in the cities of Iran.
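The paper's seven-variable regression is not reproduced in the abstract. As a hypothetical stand-in, the sketch below fits a two-variable linear model for LAeq by ordinary least squares on synthetic data; the predictors, coefficients and noise level are all invented for illustration:

```python
import numpy as np

# Hypothetical regression sketch (not the paper's model): fit
#   LAeq ~ b0 + b1*log10(traffic) + b2*speed
# on synthetic data mimicking the study's sample size of 282 observations.
rng = np.random.default_rng(0)
traffic = rng.uniform(200, 3000, 282)      # vehicles per hour (invented range)
speed = rng.uniform(20, 80, 282)           # km/h (invented range)
laeq = 45 + 8 * np.log10(traffic) + 0.1 * speed + rng.normal(0, 1, 282)

# Ordinary least squares via the design matrix [1, log10(traffic), speed]:
X = np.column_stack([np.ones_like(traffic), np.log10(traffic), speed])
coef, *_ = np.linalg.lstsq(X, laeq, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((laeq - pred) ** 2) / np.sum((laeq - laeq.mean()) ** 2)
```

The R2 reported here plays the same role as the paper's regression coefficient: the fraction of LAeq variance explained by the traffic variables.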

  6. A predictive fitness model for influenza

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
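The core propagation step of this class of fitness models can be sketched as reweighting strain frequencies by the exponential of fitness and renormalizing. The linear fitness score and the selection coefficients below are invented for illustration, not the components inferred in the paper:

```python
import math

# Minimal sketch of a frequency-propagation step in a strain fitness model:
# next-season frequencies are current frequencies reweighted by exp(fitness).
# Fitness here is a toy score: adaptive epitope changes help, mutations
# outside the epitopes hurt. Coefficients s_epi and s_del are invented.

def propagate_frequencies(freqs, epitope_changes, non_epitope_changes,
                          s_epi=0.5, s_del=0.3):
    fitness = [s_epi * e - s_del * d
               for e, d in zip(epitope_changes, non_epitope_changes)]
    weighted = [x * math.exp(f) for x, f in zip(freqs, fitness)]
    total = sum(weighted)
    return [w / total for w in weighted]

# A strain with two epitope changes and no deleterious load gains frequency:
nxt = propagate_frequencies([0.5, 0.3, 0.2], [2, 0, 1], [0, 1, 0])
```

The renormalization makes this a relative-fitness model: only fitness differences between co-circulating strains matter for the predicted frequencies.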

  7. Predictive Model of Radiative Neutrino Masses

    CERN Document Server

    Babu, K S

    2013-01-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with \\delta_{CP} = \\pi; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_{\\beta \\beta} = (17.6 - 18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan\\beta, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...

  8. Recent Advances in Explicit Multiparametric Nonlinear Model Predictive Control

    KAUST Repository

    Domínguez, Luis F.

    2011-01-19

    In this paper we present recent advances in multiparametric nonlinear programming (mp-NLP) algorithms for explicit nonlinear model predictive control (mp-NMPC). Three mp-NLP algorithms for NMPC are discussed, based on which novel mp-NMPC controllers are derived. The performance of the explicit controllers are then tested and compared in a simulation example involving the operation of a continuous stirred-tank reactor (CSTR). © 2010 American Chemical Society.

  9. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  10. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

In biomedical research, it is often of interest to characterize the biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates, but limited attention has been directed to the consequences of this choice. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in substantial differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  11. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model…
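A minimal scalar version of the Kalman one-step predictor underlying such a prediction-error criterion might look like the sketch below; it is illustrative only, not the paper's multivariable, time-delayed formulation:

```python
# One-step Kalman predictor for the scalar state-space model
#   x[k+1] = a*x[k] + w[k],   y[k] = c*x[k] + v[k],
# with process noise variance q and measurement noise variance r. The
# one-step-ahead output predictions c*x_hat[k|k-1] are exactly the quantities
# a prediction-error criterion would compare against the measured outputs.

def kalman_one_step_predictions(ys, a, c, q, r, x0=0.0, p0=1.0):
    x, p = x0, p0
    preds = []
    for y in ys:
        preds.append(c * x)               # predicted output before seeing y
        s = c * p * c + r                 # innovation variance
        k = a * p * c / s                 # predictor (Kalman) gain
        x = a * x + k * (y - c * x)       # combined time/measurement update
        p = a * p * a + q - k * s * k     # Riccati recursion for covariance
    return preds

# Tracking a constant signal: the predictions converge toward the true value.
preds = kalman_one_step_predictions([1.0] * 20, a=1.0, c=1.0, q=0.0, r=0.1)
```

The prediction errors y[k] − preds[k] (the innovations) are what the identification criterion minimizes.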

  12. An evaluation of prior influence on the predictive ability of Bayesian model averaging.

    Science.gov (United States)

    St-Louis, Véronique; Clayton, Murray K; Pidgeon, Anna M; Radeloff, Volker C

    2012-03-01

Model averaging is gaining popularity among ecologists for making inference and predictions. Methods for combining models include Bayesian model averaging (BMA) and Akaike's Information Criterion (AIC) model averaging. BMA can be implemented with different prior model weights, including the Kullback-Leibler prior associated with AIC model averaging, but it is unclear how the prior model weight affects model results in a predictive context. Here, we implemented BMA using the Bayesian Information Criterion (BIC) approximation to Bayes factors for building predictive models of bird abundance and occurrence in the Chihuahuan Desert of New Mexico. We examined how model predictive ability differed across four prior model weights, and how averaged coefficient estimates, standard errors and coefficients' posterior probabilities varied for 16 bird species. We also compared the predictive ability of BMA models to a best-single-model approach. Overall, Occam's prior of parsimony provided the best predictive models. The Kullback-Leibler prior, in contrast, favored complex models of lower predictive ability. BMA performed better than the best-single-model approach, independently of the prior model weight, for 6 of 16 species. For 6 other species, the choice of the prior model weight affected whether BMA was better than the best-single-model approach. Our results demonstrate that parsimonious priors may be preferable to priors that favor complexity when making predictions. The approach we present has direct applications in ecology for better predicting patterns of species' abundance and occurrence.
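The BIC approximation used here implies posterior model weights proportional to prior_i · exp(−ΔBIC_i/2). The sketch below computes such weights; the BIC values and the parsimony-style prior form are made up for illustration and are not the paper's four priors:

```python
import math

# BIC-based BMA weights: posterior probability of model i is proportional to
# prior_i * exp(-0.5 * (BIC_i - BIC_min)). Subtracting BIC_min only rescales
# and avoids underflow.

def bma_weights(bics, priors=None):
    if priors is None:                      # uniform prior by default
        priors = [1.0 / len(bics)] * len(bics)
    b_min = min(bics)
    raw = [p * math.exp(-0.5 * (b - b_min)) for p, b in zip(priors, bics)]
    total = sum(raw)
    return [r / total for r in raw]

# A hypothetical parsimony-style prior that down-weights models with more
# parameters k (illustrative form, not the paper's Occam prior):
def parsimony_prior(ks, c=0.5):
    raw = [math.exp(-c * k) for k in ks]
    s = sum(raw)
    return [r / s for r in raw]

w = bma_weights([100.0, 102.0, 110.0])      # invented BIC values
```

A BMA prediction is then the weight-averaged prediction across models, which is where the choice of prior model weight propagates into predictive ability.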

  13. A New Model for Prediction of Mean Liquid Circulating Velocity in Bubble Columns

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

A new model without any fitting parameters for estimating the mean liquid recirculating velocity has been derived directly from previous work. The prediction agrees reasonably well with experimental data. The accuracy of the prediction from the new model is comparable with that of the models reported in the literature. Moreover, the new model has the potential to predict the average liquid recirculation velocity in bubble columns at elevated pressure, since n and c are developed under pressure; however, this needs to be tested further experimentally.

  14. Prediction of blast-induced air overpressure: a hybrid AI-based predictive model.

    Science.gov (United States)

    Jahed Armaghani, Danial; Hajihassani, Mohsen; Marto, Aminaton; Shirani Faradonbeh, Roohollah; Mohamad, Edy Tonnizam

    2015-11-01

    Blast operations in the vicinity of residential areas usually produce significant environmental problems which may cause severe damage to the nearby areas. Blast-induced air overpressure (AOp) is one of the most important environmental impacts of blast operations which needs to be predicted to minimize the potential risk of damage. This paper presents an artificial neural network (ANN) optimized by the imperialist competitive algorithm (ICA) for the prediction of AOp induced by quarry blasting. For this purpose, 95 blasting operations were precisely monitored in a granite quarry site in Malaysia and AOp values were recorded in each operation. Furthermore, the most influential parameters on AOp, including the maximum charge per delay and the distance between the blast-face and monitoring point, were measured and used to train the ICA-ANN model. Based on the generalized predictor equation and considering the measured data from the granite quarry site, a new empirical equation was developed to predict AOp. For comparison purposes, conventional ANN models were developed and compared with the ICA-ANN results. The results demonstrated that the proposed ICA-ANN model is able to predict blast-induced AOp more accurately than other presented techniques.

  15. Prediction of survival with alternative modeling techniques using pseudo values.

    Directory of Open Access Journals (Sweden)

    Tjeerd van der Ploeg

BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM) and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) for predicting 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and the modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising…
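The pseudo-value construction referred to here is the standard jackknife form: pseudo_i = n·S(t) − (n−1)·S₋ᵢ(t), with S the Kaplan-Meier estimate and S₋ᵢ the estimate leaving subject i out. A self-contained sketch on toy data (not the HNSCC cohort):

```python
# Jackknife pseudo values for survival at a fixed horizon t. Without
# censoring, pseudo_i reduces exactly to the indicator 1{T_i > t}; with
# censoring, it supplies an uncensored-looking outcome that standard
# regression or machine-learning techniques can model directly.

def km_survival(times, events, t):
    """Kaplan-Meier S(t) for right-censored data (event=1 means death)."""
    s = 1.0
    for u in sorted({tm for tm, e in zip(times, events) if e == 1 and tm <= t}):
        at_risk = sum(1 for tm in times if tm >= u)
        deaths = sum(1 for tm, e in zip(times, events) if tm == u and e == 1)
        s *= 1.0 - deaths / at_risk
    return s

def pseudo_values(times, events, t):
    n = len(times)
    s_full = km_survival(times, events, t)
    out = []
    for i in range(n):  # leave-one-out Kaplan-Meier for each subject
        rest_t = times[:i] + times[i + 1:]
        rest_e = events[:i] + events[i + 1:]
        out.append(n * s_full - (n - 1) * km_survival(rest_t, rest_e, t))
    return out

pv = pseudo_values([1.0, 2.0, 3.0, 4.0], [1, 1, 1, 1], 2.5)
```

Once computed, the pseudo values at 60 months can be fed to any of the seven techniques as an ordinary continuous (or dichotomized) outcome.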

  16. A revised automated proximity and conformity analysis method to compare predicted and observed spatial boundaries of geologic phenomena

    Science.gov (United States)

    Li, Yingkui; Napieralski, Jacob; Harbor, Jon

    2008-12-01

    Quantitative assessment of the level of agreement between model-predicted and field-observed geologic data is crucial to calibrate and validate numerical landscape models. Application of Geographic Information Systems (GIS) provides an opportunity to integrate model and field data and quantify their levels of correspondence. Napieralski et al. [Comparing predicted and observed spatial boundaries of geologic phenomena: Automated Proximity and Conformity Analysis (APCA) applied to ice sheet reconstructions. Computers and Geosciences 32, 124-134] introduced an Automated Proximity and Conformity Analysis (APCA) method to compare model-predicted and field-observed spatial boundaries and used it to quantify the level of correspondence between predicted ice margins from ice sheet models and field observations from end moraines. However, as originally formulated, APCA involves a relatively large amount of user intervention during the analysis and results in an index to quantify the level of correspondence that lacks direct statistical meaning. Here, we propose a revised APCA approach and a more automated and statistically robust way to quantify the level of correspondence between model predictions and field observations. Specifically, the mean and standard deviation of distances between model and field boundaries are used to quantify proximity and conformity, respectively. An illustration of the revised method comparing modeled ice margins of the Fennoscandian Ice Sheet with observed end moraines of the Last Glacial Maximum shows that this approach provides a more automated and statistically robust means to quantify correspondence than the original APCA. The revised approach can be adopted for a wide range of geoscience issues where comparisons of model-predicted and field-observed spatial boundaries are useful, including mass movement and flood extents.
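A minimal sketch of the revised idea: sample both boundaries as point sets, compute each observed point's distance to the nearest predicted point, and summarize with the mean (proximity) and standard deviation (conformity). Real applications would operate on GIS geometries rather than raw coordinate lists, so this is illustrative only:

```python
import math

# Proximity/conformity sketch for two spatial boundaries given as sampled
# point sets. A small mean distance means the boundaries are close overall
# (proximity); a small standard deviation means the offset is consistent
# along the boundary (conformity).

def nearest_distances(observed, predicted):
    return [min(math.dist(p, q) for q in predicted) for p in observed]

def proximity_conformity(observed, predicted):
    d = nearest_distances(observed, predicted)
    mean = sum(d) / len(d)
    var = sum((x - mean) ** 2 for x in d) / len(d)
    return mean, math.sqrt(var)

# A predicted margin offset uniformly by 1 unit: mean distance 1, std 0.
m, s = proximity_conformity([(0, 1), (1, 1), (2, 1)], [(0, 0), (1, 0), (2, 0)])
```

Because the mean and standard deviation have direct statistical interpretations, they avoid the unitless index of the original APCA.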

  17. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method.

    Science.gov (United States)

    Fogel, Allison R; Rosenberg, Jason C; Lehman, Frank M; Kuperberg, Gina R; Patel, Aniruddh D

    2015-01-01

Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such 'authentic cadence' melody was matched to a 'non-cadential' (NC) melody in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in…

  18. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient…

  19. Predicting the Istanbul Stock Exchange Index Return using Technical Indicators: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Şenol Emir

    2013-07-01

The aim of this study is to examine the performance of Support Vector Regression (SVR), a novel regression method based on the Support Vector Machines (SVM) approach, in predicting the Istanbul Stock Exchange (ISE) National 100 Index daily returns. For benchmarking, the results given by SVR were compared to those given by classical Linear Regression (LR). The dataset contains 6 technical indicators which were selected as model inputs for the 2005-2011 period. Grid search and cross-validation are used for finding optimal model parameters and evaluating the models. Comparisons were made based on the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Theil Inequality Coefficient (TIC) and Mean Mixed Error (MME) metrics. The results indicate that SVR outperforms LR on all metrics.
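The comparison metrics named in the abstract have standard definitions, sketched here in plain Python; the MME variant used in the paper is not reproduced, and the formulas are the textbook ones rather than anything taken from the study:

```python
import math

# Standard forecast-error metrics for comparing models on the same test set.

def rmse(actual, pred):
    """Root Mean Square Error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mae(actual, pred):
    """Mean Absolute Error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean Absolute Percentage Error (undefined if any actual is zero)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def theil_u(actual, pred):
    """Theil inequality coefficient: 0 is a perfect fit, values near 1 are poor."""
    num = rmse(actual, pred)
    den = (math.sqrt(sum(a * a for a in actual) / len(actual))
           + math.sqrt(sum(p * p for p in pred) / len(pred)))
    return num / den
```

Comparing two models on the same held-out returns then reduces to comparing these four numbers, exactly as the study does for SVR versus LR.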

  20. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Directory of Open Access Journals (Sweden)

    Isabel M D Rosa

    Full Text Available Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the future magnitude and geographical distribution of tropical deforestation is uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that the local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated (pre- and post-PPCDAM, "Plano de Ação para Proteção e Controle do Desmatamento na Amazônia"), the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future, and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM conditions (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared to 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is

  1. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Science.gov (United States)

    Rosa, Isabel M D; Purves, Drew; Souza, Carlos; Ewers, Robert M

    2013-01-01

    Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the future magnitude and geographical distribution of tropical deforestation is uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that the local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated (pre- and post-PPCDAM, "Plano de Ação para Proteção e Controle do Desmatamento na Amazônia"), the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future, and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM conditions (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared to 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is currently

  2. Comparing Fine-Grained Source Code Changes And Code Churn For Bug Prediction

    NARCIS (Netherlands)

    Giger, E.; Pinzger, M.; Gall, H.C.

    2011-01-01

    A significant amount of research effort has been dedicated to learning prediction models that allow project managers to efficiently allocate resources to those parts of a software system that are most likely bug-prone and therefore critical. Prominent measures for building bug prediction models are

  3. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2015-10-01

    Full Text Available A rational reconstruction was performed, using seven models of cyber-attack as a springboard, and resulted in the development of what is described as a canonical model. Keywords: Offensive cyber operations; Process models; Rational reconstructions; Canonical models; Structured...

  4. Regression Model to Predict Global Solar Irradiance in Malaysia

    Directory of Open Access Journals (Sweden)

    Hairuniza Ahmed Kutty

    2015-01-01

    Full Text Available A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed from the available meteorological parameters, including temperature, cloud cover, rain precipitation, relative humidity, wind speed, pressure, and gust speed, by applying regression analysis. This paper reports the details of the analysis of the effect of each prediction parameter, to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and the coefficient of determination (R2) with other models available in the literature. Seven single-parameter models (PM1 to PM7) and five multiple-parameter models (PM8 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R2 ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia has insufficient influence when included in the multiple-parameter models, although it performs fairly well in single-parameter prediction models.
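A minimal sketch of how a multiple-parameter irradiance model of this kind can be fitted by ordinary least squares and scored with RMSE, MBE and R2. All coefficients and meteorological values below are synthetic illustrations, not the Malaysian data or the paper's PM1-PM12 models:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
temperature = rng.uniform(24, 33, n)   # deg C
cloud_cover = rng.uniform(0, 8, n)     # oktas
humidity = rng.uniform(60, 95, n)      # %

# Synthetic "true" irradiance (MJ/m^2/day) plus noise.
irradiance = (25.0 + 0.3 * temperature - 1.2 * cloud_cover
              - 0.05 * humidity + rng.normal(0, 0.3, n))

# Fit a linear model with intercept by least squares.
X = np.column_stack([np.ones(n), temperature, cloud_cover, humidity])
coef, *_ = np.linalg.lstsq(X, irradiance, rcond=None)
pred = X @ coef

rmse = float(np.sqrt(np.mean((pred - irradiance) ** 2)))
mbe = float(np.mean(pred - irradiance))   # mean bias error
r2 = 1.0 - np.sum((irradiance - pred) ** 2) / np.sum((irradiance - irradiance.mean()) ** 2)
print(f"RMSE={rmse:.3f}  MBE={mbe:+.4f}  R2={r2:.3f}")
```

Dropping columns from `X` gives the single-parameter variants, which can then be ranked on the same three metrics.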

  5. Impact injury prediction by FE human body model

    Directory of Open Access Journals (Sweden)

    Hynčík L.

    2008-12-01

    Full Text Available Biomechanical simulations are powerful instruments used in many areas such as traffic, medicine, sport, and the military. The simulations are often performed with models based on the Finite Element Method. A great strength of deformable FE models of human bodies is their ability to predict injuries during accidents. Thanks to its modular implementation of thorax and abdomen FE models, the human articulated rigid body model ROBBY, previously developed at the University of West Bohemia in cooperation with ESI Group (Engineering Simulation for Industry), can be used for this purpose. The ROBBY model, representing an average adult man, is still being improved to obtain a more precise model of the human body with the ability to predict injuries during accidents. Recently, a newly generated thoracic model was embedded into the ROBBY model and subsequently validated satisfactorily. In this study the updated ROBBY model was used, and injuries of the head and thorax were investigated during frontal crashes simulated by means of two types of sled tests with various restraint systems (shoulder belt, lap belt and airbag). The results of the simulations were compared with experimental ones.

  6. Models for predicting objective function weights in prostate cancer IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, Ontario M5S 3G8 (Canada); Craig, Tim [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University of Avenue, Toronto, Ontario M5T 2M9, Canada and Department of Radiation Oncology, University of Toronto, 148 - 150 College Street, Toronto, Ontario M5S 3S2 (Canada); Sharpe, Michael B. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University of Avenue, Toronto, Ontario M5T 2M9 (Canada); Department of Radiation Oncology, University of Toronto, 148 - 150 College Street, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, 124 - 100 College Street, Toronto, Ontario M5G 1P5 (Canada); Chan, Timothy C. Y. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, 124 - 100 College Street, Toronto, Ontario M5G 1P5 (Canada)

    2015-04-15

    Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. 
Conclusions: The authors demonstrated that the KNN and MLR
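As a rough illustration of the weighted KNN approach described above, the sketch below predicts a pair of objective-function weights for a new patient geometry by inverse-distance weighting of the most similar training geometries. All feature and weight values here are invented for illustration and do not come from the study:

```python
import math

def weighted_knn_predict(train_features, train_weights, query, k=3):
    """Inverse-distance-weighted KNN: predict objective-function weights
    for a new geometry from the k most similar training geometries."""
    dists = [(math.dist(f, query), w)
             for f, w in zip(train_features, train_weights)]
    dists.sort(key=lambda pair: pair[0])
    neighbors = dists[:k]
    influence = [1.0 / (d + 1e-9) for d, _ in neighbors]  # closer = heavier
    total = sum(influence)
    n_outputs = len(neighbors[0][1])
    return [sum(inf * w[j] for inf, (_, w) in zip(influence, neighbors)) / total
            for j in range(n_outputs)]

# Features: (overlap volume ratio, OVH slope); outputs: (bladder, rectum) weights.
train_features = [(0.10, 0.5), (0.12, 0.6), (0.40, 1.2), (0.42, 1.1)]
train_weights = [(0.2, 0.8), (0.25, 0.75), (0.7, 0.3), (0.65, 0.35)]
print(weighted_knn_predict(train_features, train_weights, (0.11, 0.55), k=2))
```

The query geometry lies between the first two training cases, so the prediction interpolates their weights.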

  7. Formalization of the model of the enterprise insolvency risk prediction

    Directory of Open Access Journals (Sweden)

    Elena V. Shirinkina

    2015-12-01

    Full Text Available Objective: to improve the conceptual apparatus and analytical procedures of insolvency risk identification. Methods: general scientific methods of systemic and comparative analysis; economic-statistical and dynamic analysis of economic processes and phenomena. Results: nowadays, managing insolvency risk is relevant for any company regardless of the economic sector. Instability manifests itself through the uncertainty of the directions of external environment changes and their high frequency. Analysis of the economic literature showed that currently there is no single approach to the systematization of methods for insolvency risk prediction, which means that there is no objective view on the tools that can be used to monitor insolvency risk. In this respect, the scientific and practical search for representative indicators for the formalization of insolvency prediction models is very important. The study has therefore solved the following tasks: defined the nature of insolvency risk and its identification in the process of financial relations in the management system; proved the representativeness of the indicators in insolvency risk prediction; and formed a model of insolvency risk prediction. Scientific novelty: grounding the model of insolvency risk prediction. Practical significance: development of a theoretical framework to address issues arising in the diagnosis of insolvent enterprises, and application of the results obtained in the practice of bankruptcy institution bodies. The presented model allows predicting the insolvency risk of an enterprise through the general development trend and the fluctuation boundaries of bankruptcy risk, and determining the significance of each indicator (factor) and its quantitative impact, and therefore avoiding the risk of enterprise insolvency.

  8. A joint calibration model for combining predictive distributions

    Directory of Open Access Journals (Sweden)

    Patrizia Agati

    2013-05-01

    Full Text Available In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists in treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint “calibration function” describing the predicting skill of the sources (Morris, 1977). In this paper, after rephrasing Morris’ algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation, and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
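For the special case of Gaussian predictive pdfs, the product-of-pdfs combination (before any calibration adjustment) has a simple closed form: precisions add, and the combined mean is precision-weighted. The sketch below is that textbook special case only, not the authors' calibrated model, and the forecast values are invented:

```python
def combine_gaussians(means, variances):
    """Product of independent Gaussian predictive pdfs, renormalized.
    The result is Gaussian: precisions add, mean is precision-weighted."""
    precisions = [1.0 / v for v in variances]
    total_precision = sum(precisions)
    mean = sum(m * p for m, p in zip(means, precisions)) / total_precision
    return mean, 1.0 / total_precision

# Two forecasters predict tomorrow's temperature (deg C).
mean, var = combine_gaussians([20.0, 24.0], [4.0, 1.0])
print(mean, var)  # the combined forecast leans toward the sharper source
```

The calibration function of the paper would rescale and correlate these inputs before the product is taken; here the sources are treated as unbiased and independent.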

  9. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    Science.gov (United States)

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because the spectral bands of an HS image behave like panchromatic imagery rather than the frames of a traditional video. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data where every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediate previous band, when we apply the HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102

  10. Estimating the magnitude of prediction uncertainties for the APLE model

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  11. Evaluation of Spatial Agreement of Distinct Landslide Prediction Models

    Science.gov (United States)

    Sterlacchini, Simone; Bordogna, Gloria; Frigerio, Ivan

    2013-04-01

    derived to test agreement among the maps. Nevertheless, no information was made available about the locations where the predictions of two or more maps agreed and where they did not. We therefore wanted to study whether the spatially agreeing models also predicted the same or similar values. To this end we adopted a soft image fusion approach proposed in the literature, defined as a group decision making model for ranking spatial alternatives based on a soft fusion of coherent evaluations. In order to apply this approach, the prediction maps were categorized into 10 distinct classes using an equal-area criterion to compare the predicted results. We then applied soft fusion to the prediction maps, regarded as the evaluations of distinct human experts. The fusion process needs a definition of the concept of "fuzzy majority", provided by a linguistic quantifier, in order to determine the coherence of a majority of maps in each pixel of the territory. On this basis, the overall spatial coherence among the majority of the prediction maps was evaluated. The spatial coherence among a fuzzy majority is defined based on the Minkowski OWA operators. The result made it possible to spatially identify sectors of the study area in which the predictions agreed on the same or on close classes of susceptibility, or were discordant, even on distant classes. We studied the spatial agreement among a "fuzzy majority" defined as "80% of the 13 coherent maps", thus requiring that at least 11 out of 13 agree, since from previous results we knew that two maps were in disagreement. The fuzzy majority AtLeast80% was therefore defined by a quantifier with a linearly increasing membership function (0.8, 1). The coherence metric used was the Euclidean distance. We then computed the soft fusion of AtLeast80% coherent maps for homogeneous groups of classes. We considered as homogeneous classes the highest two classes (9 and 10), the lowest two classes, and the central classes (4, 5 and 6). We then fused the maps
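The quantifier-guided OWA aggregation described above can be sketched briefly. The membership function follows the AtLeast80% definition with thresholds (0.8, 1); the per-pixel class values from 13 hypothetical prediction maps are invented for illustration:

```python
def quantifier_at_least(x, a=0.8, b=1.0):
    """Linearly increasing membership for the linguistic quantifier
    'at least 80%': 0 below a, 1 above b, linear in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def owa(values, quantifier):
    """Ordered Weighted Averaging: sort descending, weight each rank by the
    increment of the quantifier membership function."""
    n = len(values)
    ordered = sorted(values, reverse=True)
    weights = [quantifier((i + 1) / n) - quantifier(i / n) for i in range(n)]
    return sum(w * v for w, v in zip(weights, ordered))

# Susceptibility class (1-10) predicted for one pixel by 13 models;
# one model disagrees strongly with the rest.
pixel_scores = [9, 9, 10, 9, 8, 9, 9, 10, 9, 9, 8, 3, 9]
print(owa(pixel_scores, quantifier_at_least))
```

Because the quantifier only activates for the last ~20% of the ordered list, the aggregate reflects the value supported by at least 80% of the maps, so a single dissenting map lowers the consensus score.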

  12. Prediction of Catastrophes: an experimental model

    CERN Document Server

    Peters, Randall D; Pomeau, Yves

    2012-01-01

    Catastrophes of all kinds can be roughly defined as short-duration, large-amplitude events following, and followed by, long periods of "ripening". Major earthquakes surely belong to the class of 'catastrophic' events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it might be. Described in this article is a "laboratory" setup that yields data of a type amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in both of two dynamical models. The first is an "abstract" model in which a time-dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...

  13. Predictive modeling of low solubility semiconductor alloys

    Science.gov (United States)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.

  14. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  15. Leptogenesis in minimal predictive seesaw models

    Science.gov (United States)

    Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.

    2015-10-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n - 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.

  16. Comparing Productivity Simulated with Inventory Data Using Different Modelling Technologies

    Science.gov (United States)

    Klopf, M.; Pietsch, S. A.; Hasenauer, H.

    2009-04-01

    The Lime Stone National Park in Austria was established in 1997 to protect sensitive limestone soils from degradation due to heavy forest management. Since 1997 the management activities have been successively reduced; standing volume and coarse woody debris (CWD) have increased, and degraded soils have begun to recover. One option to study the rehabilitation process towards the natural virgin forest state is the use of modelling technology. In this study we test two different modelling approaches for their applicability to the Lime Stone National Park. We compare standing tree volume simulated by (i) the individual tree growth model MOSES, and (ii) the species- and management-sensitive adaptation of the biogeochemical-mechanistic model Biome-BGC. The results from the two models are compared with field observations from repeated permanent forest inventory plots of the Lime Stone National Park in Austria. The simulated CWD predictions of the BGC model were compared with dead wood measurements (standing and lying dead wood) recorded at the permanent inventory plots. The inventory was established between 1994 and 1996 and remeasured from 2004 to 2005. For this analysis, 40 plots of this inventory were selected which comprise the required dead wood components and are dominated by a single tree species. First we used the distance-dependent individual tree growth model MOSES to derive the standing timber and the amount of mortality per hectare. MOSES is initialized with the inventory data at plot establishment, and each sampling plot is treated as a forest stand. Biome-BGC is a process-based biogeochemical model with extensions for Austrian tree species, a self-initialization routine and a forest management tool. The initialization for the actual simulations with the BGC model was done as follows: we first used spin-up runs to derive a balanced forest vegetation, similar to an undisturbed forest. Next we considered the management history of the past centuries (heavy clear cuts

  17. Application of a predictive Bayesian model to environmental accounting.

    Science.gov (United States)

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.

  18. Prediction of Farmers’ Income and Selection of Model ARIMA

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Based on existing research on the prediction of farmers’ income and the data on per capita annual net income in rural households in the Henan Statistical Yearbook from 1979 to 2009, it is found that the time series of farmers’ income follows an I(2) non-stationary process. The order determination and identification of the model are achieved by adopting the correlogram-based analytical method of Box-Jenkins. On the basis of comparing the properties of a group of models with different parameters, the model ARIMA(4,2,2) is built. The testing result shows that the residual error of the selected model is white noise and follows the normal distribution, so the model can be used to predict farmers’ income. The model prediction indicates that income in rural households will continue to increase from 2009 to 2012, reaching values of 2,282.4, 2,502.9, 2,686.9 and 2,884.5 respectively. The growth rate will decline from fast to slow, with weak sustainability.
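The I(2) classification above means the series only becomes stationary after two rounds of differencing, which is the middle "2" in ARIMA(4,2,2). A minimal numpy illustration with a synthetic quadratic-growth income-like series (not the Henan yearbook data):

```python
import numpy as np

# A series with a quadratic trend is I(2): one difference leaves a linear
# trend, a second difference removes the trend entirely.
years = np.arange(1979, 2010)
t = years - years[0]
income = 134.0 + 9.0 * t + 1.8 * t**2   # deterministic quadratic growth

d1 = np.diff(income)      # first difference: still trending (linear in t)
d2 = np.diff(income, 2)   # second difference: constant, trend removed

print(d2[:5])  # every second difference equals 2 * 1.8 = 3.6
```

With a library such as statsmodels, the same idea is expressed by passing `order=(4, 2, 2)` to its `ARIMA` model class, which applies the double differencing internally before fitting the AR and MA terms.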

  19. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increasing time needed to analyze them are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Compute Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  20. Prediction Model for Offloading in Vehicular Wi-Fi Network

    Directory of Open Access Journals (Sweden)

    Mahmoud Abdulwahab Alawi

    2016-12-01

    Full Text Available The pervasive diffusion of smartphones, tablets and other vehicular network applications with diverse networking and multimedia capabilities, and the associated blooming of data-hungry multimedia services that passengers use while traveling, pose a big challenge to cellular infrastructure operators. Wireless fidelity (Wi-Fi) as well as fourth-generation long term evolution advanced (4G LTE-A) networks are widely available today, and Wi-Fi can be used by vehicle users to relieve 4G LTE-A networks. However, using IEEE 802.11 Wi-Fi APs to offload the 4G LTE-A network for a moving vehicle is a challenging task, since each AP covers only a short distance and APs are not deployed to cover all roads. Several studies have proposed offloading techniques that base the offload decision on predicted available APs. However, most of the proposed prediction mechanisms rely only on the historical connection pattern. This work proposes a prediction model which utilizes the historical connection pattern, vehicular movement and driver profile to predict the next available AP. The proposed model is compared with existing models to evaluate its practicability.
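A minimal sketch of the historical-connection-pattern component of such a predictor: count AP-to-AP transitions from past trips and predict the most frequent successor. The class name and AP identifiers are invented, and the paper's additional inputs (vehicular movement, driver profile) are omitted here:

```python
from collections import defaultdict

class NextApPredictor:
    """Predict the next Wi-Fi AP from historical connection transitions.
    Sketch of the 'historical connection pattern' component only."""

    def __init__(self):
        # transitions[current_ap][next_ap] -> observed count
        self.transitions = defaultdict(lambda: defaultdict(int))

    def record_trip(self, ap_sequence):
        for current_ap, next_ap in zip(ap_sequence, ap_sequence[1:]):
            self.transitions[current_ap][next_ap] += 1

    def predict(self, current_ap):
        candidates = self.transitions.get(current_ap)
        if not candidates:
            return None  # no history for this AP
        return max(candidates, key=candidates.get)

predictor = NextApPredictor()
predictor.record_trip(["AP1", "AP3", "AP7"])
predictor.record_trip(["AP1", "AP3", "AP5"])
predictor.record_trip(["AP2", "AP3", "AP7"])
print(predictor.predict("AP3"))  # "AP7": seen twice after AP3, "AP5" once
```

A fuller model would weight these counts by movement direction and per-driver habits before choosing the offload target.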

  1. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for constructing a controller for balancing power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...... reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the 1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement in the performance of the MPC compared to the current...
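As a toy illustration of receding-horizon MPC with a 1-norm performance term, the sketch below controls a scalar integrator by brute-force search over a small input set; the plant, horizon, and weights are all invented, and the paper's portfolio controller is of course far larger and solved with proper optimization:

```python
from itertools import product

def mpc_step(x, reference, horizon=3,
             u_choices=(-1.0, -0.5, 0.0, 0.5, 1.0), control_weight=0.1):
    """Return the first move of the input sequence that minimizes a 1-norm
    tracking cost over the horizon, for the plant x[k+1] = x[k] + u[k]."""
    best_cost, best_u0 = float("inf"), 0.0
    for u_seq in product(u_choices, repeat=horizon):
        cost, xk = 0.0, x
        for u in u_seq:
            xk = xk + u
            cost += abs(xk - reference) + control_weight * abs(u)
        if cost < best_cost:
            best_cost, best_u0 = cost, u_seq[0]
    return best_u0

# Closed loop: apply only the first move each step, then re-optimize
# (the defining feature of receding-horizon MPC).
x, reference = 0.0, 2.0
trajectory = [x]
for _ in range(6):
    x += mpc_step(x, reference)
    trajectory.append(x)
print(trajectory)
```

The 1-norm cost penalizes tracking error and control effort linearly, mirroring the linear economic weighting described in the abstract; here the optimization is exhaustive enumeration rather than the linear programming such costs usually admit.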

  2. Fuel Conditioning Facility Electrorefiner Model Predictions versus Measurements

    Energy Technology Data Exchange (ETDEWEB)

    D Vaden

    2007-10-01

    Electrometallurgical treatment of spent nuclear fuel is performed in the Fuel Conditioning Facility (FCF) at the Idaho National Laboratory (INL) by electrochemically separating uranium from the fission products and structural materials in a vessel called an electrorefiner (ER). To continue processing without waiting for sample analyses to assess process conditions, an ER process model predicts the composition of the ER inventory and effluent streams via multicomponent, multi-phase chemical equilibrium for chemical reactions and a numerical solution to differential equations for electro-chemical transport. The results of the process model were compared to the electrorefiner measured data.

  3. Comparing Observed with Predicted Weekly Influenza-Like Illness Rates during the Winter Holiday Break, United States, 2004-2013

    Science.gov (United States)

    Gao, Hongjiang; Wong, Karen K.; Zheteyeva, Yenlik; Shi, Jianrong; Uzicanin, Amra; Rainey, Jeanette J.

    2015-01-01

    In the United States, influenza season typically begins in October or November, peaks in February, and tapers off in April. During the winter holiday break, from the end of December to the beginning of January, changes in social mixing patterns, healthcare-seeking behaviors, and surveillance reporting could affect influenza-like illness (ILI) rates. We compared predicted with observed weekly ILI to examine trends around the winter break period. We examined weekly rates of ILI by region in the United States from influenza season 2003–2004 to 2012–2013. We compared observed and predicted ILI rates from week 44 to week 8 of each influenza season using the auto-regressive integrated moving average (ARIMA) method. Of 1,530 region, week, and year combinations, 64 observed ILI rates were significantly higher than predicted by the model. Of these, 21 occurred during the typical winter holiday break period (weeks 51–52); 12 occurred during influenza season 2012–2013. There were 46 observed ILI rates that were significantly lower than predicted. Of these, 16 occurred after the typical holiday break during week 1, eight of which occurred during season 2012–2013. Of 90 (10 HHS regions x 9 seasons) predictions during the peak week, 78 predicted ILI rates were lower than observed. Out of 73 predictions for the post-peak week, 62 ILI rates were higher than observed. There were 53 out of 73 models that had lower peak and higher post-peak predicted ILI rates than were actually observed. While most regions had ILI rates higher than predicted during winter holiday break and lower than predicted after the break during the 2012–2013 season, overall there was not a consistent relationship between observed and predicted ILI around the winter holiday break during the other influenza seasons. PMID:26649568
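
    The study fits full ARIMA models; as a stripped-down sketch of the same idea (an AR(1) coefficient estimated by least squares, plus a residual-based exceedance flag, on synthetic data rather than the ILI surveillance series):

```python
import random

def fit_ar1(series):
    """Least-squares estimate of phi in x[t+1] = phi*x[t] + noise (zero-mean series)."""
    num = sum(series[t] * series[t + 1] for t in range(len(series) - 1))
    den = sum(x * x for x in series[:-1])
    return num / den

def flag_exceedances(series, phi, threshold):
    """Indices of points whose one-step-ahead residual exceeds the threshold,
    analogous to weeks where observed ILI is significantly above prediction."""
    return [t + 1 for t in range(len(series) - 1)
            if series[t + 1] - phi * series[t] > threshold]

# Synthetic anomaly series from a known AR(1) process (fixed seed).
rng = random.Random(0)
x = [1.0]
for _ in range(400):
    x.append(0.8 * x[-1] + rng.gauss(0.0, 0.1))

phi_hat = fit_ar1(x)
print(phi_hat)  # close to the true coefficient 0.8
```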

  4. Forced versus coupled dynamics in Earth system modelling and prediction

    Directory of Open Access Journals (Sweden)

    B. Knopf

    2005-01-01

    Full Text Available We compare coupled nonlinear climate models and their simplified forced counterparts with respect to predictability and phase space topology. Various types of uncertainty plague climate change simulation, which is, in turn, a crucial element of Earth System modelling. Since the currently preferred strategy for simulating the climate system, or the Earth System at large, is the coupling of sub-system modules (representing, e.g., atmosphere, oceans, global vegetation), this paper explicitly addresses the errors and indeterminacies generated by the coupling procedure. The focus is on a comparison of forced dynamics as opposed to fully, i.e. intrinsically, coupled dynamics. The former represents a particular type of simulation where the time behaviour of one complex system component is prescribed by data or some other external information source. Such a simplifying technique is often employed in Earth System models in order to save computing resources, in particular when massive model inter-comparisons need to be carried out. Our contribution to the debate is based on the investigation of two representative model examples, namely (i) a low-dimensional coupled atmosphere-ocean simulator, and (ii) a replica-like simulator embracing corresponding components. Whereas in general the forced version (ii) is able to mimic its fully coupled counterpart (i), we show in this paper that for a considerable fraction of parameter and state space the two approaches qualitatively differ. Here we take up a phenomenon concerning the predictability of coupled versus forced models that was reported earlier in this journal: the observation that the time series of the forced version display artificial predictive skill. We present an explanation in terms of nonlinear dynamical theory. In particular, we observe an intermittent version of artificial predictive skill, which we call on-off synchronization, and trace it back to the appearance of unstable periodic orbits. We also

  5. Protein structure prediction provides comparable performance to crystallographic structures in docking-based virtual screening.

    Science.gov (United States)

    Du, Hongying; Brender, Jeffrey R; Zhang, Jian; Zhang, Yang

    2015-01-01

    Structure-based virtual screening has largely been limited to protein targets for which either an experimental structure is available or a strongly homologous template exists, so that a high-resolution model can be constructed. The performance of state-of-the-art protein structure predictions in virtual screening for systems where only weakly homologous templates are available is largely untested. Using the challenging DUD database of structural decoys, we show here that, even using templates with only weak sequence homology (identity), structural models can be constructed by I-TASSER that achieve enrichment rates comparable to using the experimental bound crystal structure in the majority of the cases studied. For 65% of the targets, the I-TASSER models, which are constructed essentially in the apo conformations, reached 70% of the virtual screening performance of the holo-crystal structures. A correlation was observed between the success of I-TASSER in modeling the global fold and the local structures in the binding pockets of the proteins and the relative success in virtual screening. The virtual screening performance can be further improved by the recognition of chemical features of the ligand compounds. These results suggest that the combination of structure-based docking and advanced protein structure modeling methods should be a valuable approach for large-scale drug screening and discovery studies, especially for proteins lacking crystallographic structures.

  6. A Comparative Study of Neural Networks and Fuzzy Systems in Modeling of a Nonlinear Dynamic System

    Directory of Open Access Journals (Sweden)

    Metin Demirtas

    2011-07-01

    Full Text Available The aim of this paper is to compare neural network and fuzzy modeling approaches on a nonlinear system. We have taken Permanent Magnet Brushless Direct Current (PMBDC) motor data and have generated models using both approaches. The predictive performance of both methods was compared on the data set for the model configurations. The paper describes the results of these tests and discusses the effects of changing model parameters on predictive and practical performance. Model sensitivity was also used to compare the two methods.

  7. A microbial model of economic trading and comparative advantage.

    Science.gov (United States)

    Enyeart, Peter J; Simpson, Zachary B; Ellington, Andrew D

    2015-01-07

    The economic theory of comparative advantage postulates that beneficial trading relationships can be arrived at by two self-interested entities producing the same goods as long as they have opposing relative efficiencies in producing those goods. The theory predicts that upon entering trade, in order to maximize consumption both entities will specialize in producing the good they can produce at higher efficiency, that the weaker entity will specialize more completely than the stronger entity, and that both will be able to consume more goods as a result of trade than either would be able to alone. We extend this theory to the realm of unicellular organisms by developing mathematical models of genetic circuits that allow trading of a common good (specifically, signaling molecules) required for growth in bacteria in order to demonstrate comparative advantage interactions. In Conception 1, the experimenter controls production rates via exogenous inducers, allowing exploration of the parameter space of specialization. In Conception 2, the circuits self-regulate via feedback mechanisms. Our models indicate that these genetic circuits can demonstrate comparative advantage, and that cooperation in such a manner is particularly favored under stringent external conditions and when the cost of production is not overly high. Further work could involve implementing the models in living bacteria and searching for naturally occurring cooperative relationships between bacteria that conform to the principles of comparative advantage.
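
    The specialization rule at the heart of comparative advantage can be sketched as follows (the production rates are hypothetical, and this is the classic opportunity-cost comparison rather than the paper's genetic-circuit model):

```python
def specialization(rates_a, rates_b):
    """Decide which of two goods each of two entities should specialize in,
    given production rates (good1_rate, good2_rate) for entities A and B.
    An entity specializes in the good for which its opportunity cost
    (units of the other good forgone per unit produced) is lower."""
    # Opportunity cost of good 1 in units of good 2 for each entity.
    oc_a = rates_a[1] / rates_a[0]
    oc_b = rates_b[1] / rates_b[0]
    if oc_a < oc_b:
        return {"A": "good1", "B": "good2"}
    return {"A": "good2", "B": "good1"}

# Entity A makes good 1 efficiently (10/h) but good 2 slowly (2/h);
# entity B makes both at 4/h.  A's opportunity cost of good 1 is
# 0.2 < 1.0, so A specializes in good 1 and B in good 2, even though
# B is not absolutely better at either good.
print(specialization((10, 2), (4, 4)))
```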

  8. Saccharomyces cerevisiae as a model organism: a comparative study.

    Directory of Open Access Journals (Sweden)

    Hiren Karathia

    Full Text Available BACKGROUND: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxy. However, in practice, a model organism is often selected ad hoc, and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. METHODOLOGY/PRINCIPAL FINDINGS: In this work we propose such a method and apply it in a pilot study of strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and other organisms for which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model to extrapolate from. We find that animals in general and Homo sapiens in particular are some of the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. CONCLUSIONS/SIGNIFICANCE: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in other species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.

  9. Comparing spatial and temporal transferability of hydrological model parameters

    Science.gov (United States)

    Patil, Sopan; Stieglitz, Marc

    2015-04-01

    Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. In our view, such comparison is especially pertinent in the context of increasing appeal and popularity of the "trading space for time" approaches that are proposed for assessing the hydrological implications of anthropogenic climate change. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal
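
    The Kling-Gupta efficiency used to score the transfer schemes follows directly from its standard definition; a minimal sketch (the series shown are illustrative, not catchment data):

```python
from math import sqrt

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    with r the linear correlation, alpha the ratio of standard deviations,
    and beta the ratio of means (simulated over observed)."""
    n = len(sim)
    mean_s = sum(sim) / n
    mean_o = sum(obs) / n
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(sim, obs)) / n
    std_s = sqrt(sum((s - mean_s) ** 2 for s in sim) / n)
    std_o = sqrt(sum((o - mean_o) ** 2 for o in obs) / n)
    r = cov / (std_s * std_o)
    alpha = std_s / std_o
    beta = mean_s / mean_o
    return 1.0 - sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(kge(obs, obs))                    # perfect match -> 1.0
print(kge([2 * x for x in obs], obs))   # biased simulation -> well below 1
```

A "decline in prediction performance" as in the abstract would then be the drop in this score between the calibration catchment/period and the transfer target.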

  10. Modeling and simulation for heavy-duty mecanum wheel platform using model predictive control

    Science.gov (United States)

    Fuad, A. F. M.; Mahmood, I. A.; Ahmad, S.; Norsahperi, N. M. H.; Toha, S. F.; Akmeliawati, R.; Darsivan, F. J.

    2017-03-01

    This paper presents a study on a control system for a heavy-duty four-Mecanum-wheel platform. A mathematical model of the system is synthesized for the purpose of examining system behavior, including the Mecanum wheel kinematics, AC servo motor, gearbox, and heavy-duty load. The system is tested for velocity control using model predictive control (MPC) and compared with a traditional PID setup. The parameters for the controllers are determined by manual tuning. Model predictive control was found to be more effective at tracking a linear velocity reference.
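
    The Mecanum wheel kinematics mentioned above follow a standard inverse-kinematics formula; a sketch under one common sign convention (the wheel radius and base dimensions are illustrative, not those of the paper's platform):

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.1, l=0.3, w=0.25):
    """Inverse kinematics of a four-Mecanum-wheel platform: wheel angular
    speeds (front-left, front-right, rear-left, rear-right) that produce
    body velocities vx (forward), vy (leftward) and yaw rate wz.
    r is the wheel radius; l and w are the half-length and half-width of
    the wheel base.  Sign convention follows one common textbook layout."""
    k = l + w
    return (
        (vx - vy - k * wz) / r,  # front-left
        (vx + vy + k * wz) / r,  # front-right
        (vx + vy - k * wz) / r,  # rear-left
        (vx - vy + k * wz) / r,  # rear-right
    )

print(mecanum_wheel_speeds(1.0, 0.0, 0.0))  # forward: all wheels spin equally
print(mecanum_wheel_speeds(0.0, 1.0, 0.0))  # strafe: diagonal wheels oppose
```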

  11. A comparison of corporate distress prediction models in Brazil: hybrid neural networks, logit models and discriminant analysis

    Directory of Open Access Journals (Sweden)

    Juliana Yim

    2009-06-01

    Full Text Available This paper looks at the ability of a relatively new technique, hybrid ANNs, to predict corporate distress in Brazil. These models are compared with traditional statistical techniques and conventional ANN models. The results suggest that hybrid neural networks outperform all other models in predicting firms in financial distress one year prior to the event. This suggests that for researchers, policymakers and others interested in early warning systems, hybrid networks may be a useful tool for predicting firm failure.

  13. Aerodynamic Noise Prediction Using stochastic Turbulence Modeling

    Directory of Open Access Journals (Sweden)

    Arash Ahmadzadegan

    2008-01-01

    Full Text Available Amongst the many approaches to determining the sound propagated from turbulent flows, hybrid methods, in which the turbulent noise source field is computed or modeled separately from the far-field calculation, are frequently used. For basic estimation of sound propagation, less computationally intensive methods can be developed using stochastic models of the turbulent fluctuations (the turbulent noise source field). A simple and easy-to-use stochastic model for generating turbulent velocity fluctuations, called the continuous filter white noise (CFWN) model, was used. This method is based on the use of the classical Langevin equation to model the details of the fluctuating field superimposed on averaged computed quantities. The resulting sound field due to the generated unsteady flow field was evaluated using Lighthill's acoustic analogy. A volume integral method was used for evaluating the acoustic analogy. This formulation presents an advantage, as it confers the possibility to determine separately the contributions of the different integral terms, and also of the different integration regions, to the radiated acoustic pressure. Our results were validated by comparing the directivity and the overall sound pressure level (OSPL) magnitudes with the available experimental results. Numerical results showed reasonable agreement with the experiments, both in maximum directivity and magnitude of the OSPL. This method presents a very suitable tool for noise calculation in different engineering problems at early stages of the design process, where rough estimates using cheaper methods are needed for different geometries.
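
    The CFWN idea of driving velocity fluctuations with a Langevin equation can be sketched as a discrete Ornstein-Uhlenbeck update (parameter values are illustrative; the paper's full method superimposes such fluctuations on averaged flow quantities and feeds them into Lighthill's analogy):

```python
import random
from math import sqrt

def ou_fluctuations(n, dt, tau, sigma, seed=0):
    """Generate a discrete Langevin (Ornstein-Uhlenbeck) velocity-fluctuation
    series u' with integral time scale tau and target rms sigma:
        u'[k+1] = u'[k]*(1 - dt/tau) + sigma*sqrt(2*dt/tau)*xi,  xi ~ N(0,1).
    The fixed seed makes the sketch reproducible."""
    rng = random.Random(seed)
    u = [0.0]
    for _ in range(n - 1):
        u.append(u[-1] * (1.0 - dt / tau)
                 + sigma * sqrt(2.0 * dt / tau) * rng.gauss(0.0, 1.0))
    return u

u = ou_fluctuations(n=20000, dt=0.01, tau=1.0, sigma=1.0)
mean = sum(u) / len(u)
var = sum((x - mean) ** 2 for x in u) / len(u)
print(mean, var)  # mean near 0, variance near sigma^2 = 1
```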

  14. Comparison of mixed layer models predictions with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Faggian, P.; Riva, G.M. [CISE Spa, Divisione Ambiente, Segrate (Italy); Brusasca, G. [ENEL Spa, CRAM, Milano (Italy)

    1997-10-01

    The temporal evolution of the PBL vertical structure for a North Italian rural site, situated within relatively large agricultural fields on almost flat terrain, has been investigated during the period 22-28 June 1993 from both an experimental and a modelling point of view. In particular, the results for a sunny day (June 22) and a cloudy day (June 25) are presented in this paper. Three schemes to estimate the mixing layer depth have been compared, i.e. the Holzworth (1967), Carson (1973) and Gryning-Batchvarova (1990) models, which use standard meteorological observations. To estimate their degree of accuracy, model outputs were analyzed considering radio-sounding meteorological profiles and atmospheric stability classification criteria. In addition, the mixed layer depth predictions were compared with the estimated values obtained by a simple box model, whose input requires hourly measures of air concentrations and the ground flux of {sup 222}Rn. (LN)

  15. Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model

    Directory of Open Access Journals (Sweden)

    Ephraim Suhir

    2011-01-01

    Full Text Available Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and interfering tools and means, to evaluate and to maintain a high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
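
    Under the simplest constant-failure-rate reading of such a model, the remaining useful lifetime after a detected increase in the failure rate can be sketched as follows (the exponential survival form and the numbers are illustrative assumptions, not the paper's bathtub-curve formulation):

```python
from math import log, exp

def remaining_useful_life(failure_rate, target_reliability):
    """Time over which the device still meets the target probability of
    non-failure, assuming a constant (updated) failure rate lambda:
        R(t) = exp(-lambda * t)  =>  t = -ln(R*) / lambda."""
    return -log(target_reliability) / failure_rate

def reliability(failure_rate, t):
    """Probability of non-failure over an additional time t."""
    return exp(-failure_rate * t)

# If monitoring shows the failure rate has risen to 0.01 per hour, the
# device still meets a 90% reliability target for roughly 10.5 more hours.
print(remaining_useful_life(0.01, 0.9))
```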

  16. A Predictive Model of Geosynchronous Magnetopause Crossings

    CERN Document Server

    Dmitriev, A; Chao, J -K

    2013-01-01

    We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for a given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF) and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes during which geosynchronous satellites of the GOES and LANL series were located in the magnetosheath (so-called MSh intervals) from 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz and SYM-H allows describing both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...

  17. Comparative Functional Responses Predict the Invasiveness and Ecological Impacts of Alien Herbivorous Snails.

    Directory of Open Access Journals (Sweden)

    Meng Xu

    Full Text Available Understanding determinants of the invasiveness and ecological impacts of alien species is amongst the most sought-after and urgent research questions in ecology. Several studies have shown the value of comparing the functional responses (FRs) of alien and native predators towards native prey; however, the technique is under-explored with herbivorous alien species and as a predictor of invasiveness as distinct from ecological impact. Here, in China, we conducted a mesocosm experiment to compare the FRs among three herbivorous snail species: the golden apple snail, Pomacea canaliculata, a highly invasive and high-impact alien listed in "100 of the World's Worst Invasive Alien Species"; Planorbarius corneus, a non-invasive, low-impact alien; and the Chinese native snail, Bellamya aeruginosa, when feeding on four locally occurring plant species. Further, by using a numerical response equation, we modelled the population dynamics of the snail consumers. For standard FR parameters, we found that the invasive and damaging alien snail had the highest "attack rates" a, the shortest "handling times" h and also the highest estimated maximum feeding rates, 1/hT, whereas the native species had the lowest attack rates, longest handling times and lowest maximum feeding rates. The non-invasive, low-impact alien species had consistently intermediate FR parameters. The invasive alien species had higher population growth potential than the native snail species, whilst that of the non-invasive alien species was intermediate. Thus, while the comparative FR approach has been proposed as a reliable method for predicting the ecological impacts of invasive predators, our results further suggest that comparative FRs could extend to predict the invasiveness and ecological impacts of alien herbivores and should be explored in other taxa and trophic groups to determine the general utility of the approach.
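
    The FR parameters reported above (attack rate a, handling time h) come from the Holling type II functional response; a sketch with illustrative, not measured, parameters:

```python
def holling_type_ii(prey_density, attack_rate, handling_time):
    """Holling type II functional response: resource items consumed per
    consumer per unit time, N_e = a*N / (1 + a*h*N).  The curve saturates
    at the maximum feeding rate 1/h as density N grows."""
    a, h, n = attack_rate, handling_time, prey_density
    return a * n / (1.0 + a * h * n)

# Hypothetical parameters: the "invasive" consumer has a higher attack
# rate and shorter handling time than the "native" one, so its response
# curve lies above the native curve at every resource density.
for n in (5, 20, 80):
    invasive = holling_type_ii(n, attack_rate=2.0, handling_time=0.1)
    native = holling_type_ii(n, attack_rate=0.5, handling_time=0.4)
    print(n, invasive, native)
```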

  18. Predictive Models of Li-ion Battery Lifetime (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Smith, K.; Wood, E.; Santhanagopalan, S.; Kim, G.; Shi, Y.; Pesaran, A.

    2014-09-01

    Predictive models of Li-ion battery reliability must consider a multiplicity of electrochemical, thermal and mechanical degradation modes experienced by batteries in application environments. Complicating matters, Li-ion batteries can experience several path dependent degradation trajectories dependent on storage and cycling history of the application environment. Rates of degradation are controlled by factors such as temperature history, electrochemical operating window, and charge/discharge rate. Lacking accurate models and tests, lifetime uncertainty must be absorbed by overdesign and warranty costs. Degradation models are needed that predict lifetime more accurately and with less test data. Models should also provide engineering feedback for next generation battery designs. This presentation reviews both multi-dimensional physical models and simpler, lumped surrogate models of battery electrochemical and mechanical degradation. Models are compared with cell- and pack-level aging data from commercial Li-ion chemistries. The analysis elucidates the relative importance of electrochemical and mechanical stress-induced degradation mechanisms in real-world operating environments. Opportunities for extending the lifetime of commercial battery systems are explored.

  19. Predictive value of clinical history compared with urodynamic study in 1,179 women

    Directory of Open Access Journals (Sweden)

    Jorge Milhem Haddad

    2016-02-01

    Full Text Available SUMMARY Objective: to determine the positive predictive value of clinical history in comparison with urodynamic study for the diagnosis of urinary incontinence. Methods: retrospective analysis comparing the clinical history and urodynamic evaluation of 1,179 women with urinary incontinence. The urodynamic study was considered the gold standard, whereas the clinical history was the new test to be assessed. This was established after analyzing each method as the gold standard through the difference between their positive predictive values. Results: the positive predictive values of clinical history compared with urodynamic study for the diagnosis of stress urinary incontinence, overactive bladder and mixed urinary incontinence were, respectively, 37% (95% CI 31-44), 40% (95% CI 33-47) and 16% (95% CI 14-19). Conclusion: we concluded that the positive predictive value of clinical history was low compared with urodynamic study for urinary incontinence diagnosis. The positive predictive value was low even among women with pure stress urinary incontinence.
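
    The positive predictive values and confidence intervals above can be reproduced in form by a simple calculation (the counts below are hypothetical, chosen only to yield the reported 37% figure):

```python
from math import sqrt

def ppv_with_ci(true_pos, false_pos, z=1.96):
    """Positive predictive value TP/(TP+FP) with a normal-approximation
    95% confidence interval (z = 1.96)."""
    n = true_pos + false_pos
    p = true_pos / n
    half = z * sqrt(p * (1.0 - p) / n)
    return p, (p - half, p + half)

# Hypothetical counts: of 100 women whose clinical history suggested
# stress urinary incontinence, 37 had the diagnosis confirmed urodynamically.
ppv, ci = ppv_with_ci(37, 63)
print(ppv, ci)  # 0.37 with an interval of roughly (0.28, 0.46)
```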

  20. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high resolution glacier surface data allows predicting the topography below current glaciers by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps we apply different ice thickness models using high resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed focusing on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers
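
    The core modelling chain (bed elevation = glacier surface minus ice thickness, then searching the bed for overdeepenings that could pond water) can be sketched in one dimension (the profile below is invented for illustration):

```python
def glacier_bed(surface, thickness):
    """Modelled bed elevation: glacier surface minus ice thickness."""
    return [s - t for s, t in zip(surface, thickness)]

def lake_depths(bed):
    """Depth of water that would pond in closed depressions of a 1-D bed
    profile: each cell fills to the lower of the highest barriers on its
    left and right (a simple 1-D 'fill' operation)."""
    n = len(bed)
    left_max = [bed[0]] * n
    for i in range(1, n):
        left_max[i] = max(left_max[i - 1], bed[i])
    right_max = [bed[-1]] * n
    for i in range(n - 2, -1, -1):
        right_max[i] = max(right_max[i + 1], bed[i])
    return [max(0.0, min(left_max[i], right_max[i]) - bed[i]) for i in range(n)]

# Hypothetical profile (m a.s.l.): an overdeepening under the glacier
# tongue shows up as positive pondable depth, i.e. a potential future lake.
surface = [2600, 2580, 2560, 2550, 2560, 2580, 2600]
thickness = [0, 40, 80, 100, 70, 30, 0]
print(lake_depths(glacier_bed(surface, thickness)))
```

Real applications do this on 2-D rasters with hydrological fill/sink-detection algorithms; the 1-D version only illustrates the subtraction-and-fill logic.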

  1. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
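
    A "standard" viral dynamics model extended with a transitioning (eclipse-phase) infected-cell population, as described above, can be sketched as a forward-Euler integration (all parameter values are illustrative assumptions, not fitted SIV values):

```python
def simulate_siv(t_end=15.0, dt=0.001):
    """Forward-Euler integration of a basic target-cell-limited model with
    an eclipse phase (cells transitioning into virus production):
        dT/dt  = -beta*T*V
        dI1/dt =  beta*T*V - k*I1      (infected, not yet producing)
        dI2/dt =  k*I1 - delta*I2      (productively infected)
        dV/dt  =  p*I2 - c*V
    Parameters are illustrative; real fits use stochastic variants."""
    beta, k, delta, p, c = 1e-7, 1.0, 0.5, 100.0, 3.0
    T, I1, I2, V = 1e6, 0.0, 0.0, 1.0
    t = 0.0
    while t < t_end:
        dT = -beta * T * V
        dI1 = beta * T * V - k * I1
        dI2 = k * I1 - delta * I2
        dV = p * I2 - c * V
        T += dT * dt
        I1 += dI1 * dt
        I2 += dI2 * dt
        V += dV * dt
        t += dt
    return T, V

T_end, V_end = simulate_siv()
print(T_end, V_end)  # virus expands while target cells are gradually depleted
```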

  2. Depression Begets Depression: Comparing the Predictive Utility of Depression and Anxiety Symptoms to Later Depression

    Science.gov (United States)

    Keenan, Kate; Feng, Xin; Hipwell, Alison; Klostermann, Susan

    2009-01-01

    Background: The high comorbidity between depressive and anxiety disorders, especially among females, has called into question the independence of these two symptom groups. It is possible that childhood anxiety typically precedes depression in girls. Comparing the predictive utility of symptoms of anxiety with the predictive utility of symptoms…

  3. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; the second part is the "neural fuzzy inference system", which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models, which occupy large computation resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than with traditional artificial neural networks that have low predictive accuracy.

  4. RFI modeling and prediction approach for SATOP applications: RFI prediction models

    Science.gov (United States)

    Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang

    2016-05-01

    This paper describes a technical approach for the development of RFI prediction models that use the carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interference, for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow future USB command systems to detect the presence of RFI, estimate the RFI characteristics, and predict the RFI behavior in real time for accurate assessment of the impact of RFI on command Bit Error Rate (BER) performance. The command BER degradation model presented here also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and unfriendly RFI sources.
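    As a rough illustration of SNR degradation due to interference, if the RFI is treated as additional Gaussian noise (a common simplification, not the paper's carrier-loop model), the effective SNR becomes S/(N+I), so the degradation in decibels is 10·log10(1 + I/N):

```python
import math

def snr_degradation_db(noise_w, interference_w):
    """SNR degradation (dB) when interference power I is treated as
    additional Gaussian noise on top of thermal noise N:
        SNR_eff = S / (N + I)  =>  degradation = 10*log10(1 + I/N).
    Simplified wideband approximation, not the carrier-loop model."""
    return 10.0 * math.log10(1.0 + interference_w / noise_w)
```

    An interference power equal to the noise power costs about 3 dB of link margin under this approximation.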

  5. A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although several prediction techniques exist, selection of the most appropriate one is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in the energy system, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1,k) model; the traditional grey model's weakness of being sensitive to initial values is overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used for demonstration with the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of the systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.
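    The grey-model family this work extends starts from the classic GM(1,1): accumulate the series, fit the development coefficient by least squares on the "background" values, and difference the fitted exponential back. The sketch below implements that textbook base model only, not the paper's NGM(1,1,k) self-memory coupling extension:

```python
import numpy as np

def gm11_fit_predict(x0, horizon):
    """Classic GM(1,1) grey model: fit to sequence x0 and return the
    fitted values plus `horizon` forecasts. This is the textbook base
    model the NGM(1,1,k) family generalizes, not the paper's method."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # 1-AGO accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background values
    # Grey differential equation x0(k) + a*z1(k) = b, solved by least squares
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # Inverse accumulation (first difference) recovers the original scale
    return np.concatenate([[x0[0]], np.diff(x1_hat)])
```

    On an approximately exponential consumption series such as [100, 110, 121, 133.1], the one-step forecast lands close to the geometric continuation 146.41.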

  6. A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although several prediction techniques exist, selection of the most appropriate one is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in the energy system, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1,k) model; the traditional grey model's weakness of being sensitive to initial values is overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used for demonstration with the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of the systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174

  7. Comparative study of ANN, ANFIS and AR model for daily runoff time series prediction

    Institute of Scientific and Technical Information of China (English)

    谭乔凤; 王旭; 王浩; 雷晓辉

    2016-01-01

    Hydrological prediction is an important aspect of hydrology's service to the economy and society. The prediction results not only provide decision support for reservoir generation operation, but are also of great significance to the economical operation of hydropower systems, navigation, flood control, and so on. The autoregressive model (AR model), artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) have been widely applied in daily runoff time series prediction. In this paper, these three models were applied to daily runoff prediction at Tongzilin station. The Nash-Sutcliffe efficiency coefficient (NS coefficient), root mean square error (RMSE) and mean absolute relative error (MARE) were used to evaluate the performances of the three models, and a threshold statistics index was used to analyze their prediction error distributions. At the same time, the prediction ability of the three models was studied by gradually increasing the prediction period. The results showed that, for the same prediction period, ANFIS had better simulation ability, generalization ability and overall model performance than ANN and the AR model. As a result, ANFIS can be recommended as a prediction model for daily runoff time series.
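    The three evaluation metrics named above have standard definitions; the abstract does not give formulas, so the usual ones are sketched here:

```python
import math

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def rmse(obs, sim):
    """Root mean square error."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mare(obs, sim):
    """Mean absolute relative error (observations must be nonzero)."""
    return sum(abs(o - s) / o for o, s in zip(obs, sim)) / len(obs)
```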

  8. On Comparing NWP and Radar Nowcast Models for Forecasting of Urban Runoff

    DEFF Research Database (Denmark)

    Thorndahl, Søren Liedtke; Bøvith, T.; Rasmussen, Michael R.;

    2012-01-01

    The paper compares quantitative precipitation forecasts using weather radars and numerical weather prediction models. In order to test forecasts under different conditions, point-comparisons with quantitative radar precipitation estimates and raingauges are presented. Furthermore, spatial...

  9. Quantitative predictivity of the transformation in vitro assay compared with the Ames test. [Hamsters

    Energy Technology Data Exchange (ETDEWEB)

    Parodi, S.; Taningher, M.; Russo, P.; Pala, M.; Vecchio, D.; Fassina, G.; Santi, L.

    For 59 chemical compounds, homogeneous data on transformation in vitro, mutagenicity in the Ames test, and carcinogenicity were reviewed. The potency in inducing transformation in vitro in hamster fibroblast cells was compared with the carcinogenic potency, and a modest correlation coefficient was found between the two parameters. For the same 59 compounds it was also possible to compare mutagenic potency in the Ames test with carcinogenic potency; the correlation level was very similar. The predictivity of transformation in vitro increased significantly when only compounds for which some kind of dose-response relationship was available were utilized. This result stresses the importance of the quantitative aspect of the response in predictivity studies. The present study is compared with previous studies on the quantitative predictivity of different short-term tests. The work is not definitive, but it gives an idea of a possible approach to the problem of comparing quantitative predictivities.

  10. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to understand.

  11. How realistic are flat-ramp-flat fault kinematic models? Comparing mechanical and kinematic models

    Science.gov (United States)

    Cruz, L.; Nevitt, J. M.; Hilley, G. E.; Seixas, G.

    2015-12-01

    Rock within the upper crust appears to deform according to elasto-plastic constitutive rules, but structural geologists often employ kinematic descriptions that prescribe particle motions irrespective of these physical properties. In this contribution, we examine the range of constitutive properties that are approximately implied by kinematic models by comparing predicted deformations between mechanical and kinematic models for identical fault geometric configurations. Specifically, we use the ABAQUS finite-element package to model a fault-bend-fold geometry using an elasto-plastic constitutive rule (the elastic component is linear and the plastic failure occurs according to a Mohr-Coulomb failure criterion). We varied physical properties in the mechanical model (i.e., Young's modulus, Poisson ratio, cohesion yield strength, internal friction angle, sliding friction angle) to determine the impact of each on the observed deformations, which were then compared to predictions of kinematic models parameterized with identical geometries. We found that a limited sub-set of physical properties were required to produce deformations that were similar to those predicted by the kinematic models. Specifically, mechanical models with low cohesion are required to allow the kink at the bottom of the flat-ramp geometry to remain stationary over time. Additionally, deformations produced by steep ramp geometries (30 degrees) are difficult to reconcile between the two types of models, while lower slope gradients better conform to the geometric assumptions. These physical properties may fall within the range of those observed in laboratory experiments, suggesting that particle motions predicted by kinematic models may provide an approximate representation of those produced by a physically consistent model under some circumstances.
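    The Mohr-Coulomb criterion used for plastic failure in the mechanical model reduces, on a given plane, to a simple strength check; the sketch below is an illustrative parameter check, not the ABAQUS elasto-plastic finite-element setup of the study:

```python
import math

def mohr_coulomb_fails(sigma_n, tau, cohesion, phi_deg):
    """True if shear stress tau exceeds the Mohr-Coulomb strength
        tau_max = c + sigma_n * tan(phi)
    on a plane with normal stress sigma_n (compression positive).
    c is cohesion, phi the internal friction angle in degrees."""
    return tau > cohesion + sigma_n * math.tan(math.radians(phi_deg))
```

    Lowering the cohesion c shrinks the elastic region, which is consistent with the paper's finding that low-cohesion models are needed to keep the fault-bend kink stationary.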

  12. Admission Laboratory Results to Enhance Prediction Models of Postdischarge Outcomes in Cardiac Care.

    Science.gov (United States)

    Pine, Michael; Fry, Donald E; Hannan, Edward L; Naessens, James M; Whitman, Kay; Reband, Agnes; Qian, Feng; Schindler, Joseph; Sonneborn, Mark; Roland, Jaclyn; Hyde, Linda; Dennison, Barbara A

    Predictive modeling for postdischarge outcomes of inpatient care has been suboptimal. This study evaluated whether admission numerical laboratory data, added to administrative models from New York and Minnesota hospitals, would enhance prediction accuracy for 90-day postdischarge deaths without readmission (PD-90) and 90-day readmissions (RA-90) following inpatient care for cardiac patients. Risk-adjustment models for the prediction of PD-90 and RA-90 were designed for acute myocardial infarction, percutaneous cardiac intervention, coronary artery bypass grafting, and congestive heart failure. Models were derived from hospital claims data and were then enhanced with admission laboratory predictive results. Case-level discrimination, goodness of fit, and calibration were used to compare administrative models (ADM) and laboratory predictive models (LAB). LAB models for the prediction of PD-90 were modestly enhanced over ADM, but negligible benefit was seen for RA-90. A consistent predictor of both PD-90 and RA-90 was a prolonged length-of-stay outlier during the index hospitalization.

  13. STUDY OF RED TIDE PREDICTION MODEL FOR THE CHANGJIANG ESTUARY

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Based on field data (red tide water quality monitoring at the Changjiang River mouth and the Hutoudu mariculture area in Zhejiang Province, from May to August 1995 and May to September 1996), this paper presents an effective model for short-term prediction of red tide in the Changjiang Estuary. The measured parameters include: depth, temperature, color diaphaneity, density, DO, COD and nutrients (PO4-P, NO2-N, NO3-N, NH4-N). The model was checked against field-test data and compared with other related models. The model Z = SAL - 3.95 DO - 2.974 pH - 5.421 PO4-P is suitable for application to the Shengsi aquiculture area near the Changjiang Estuary.
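    The quoted prediction index is a direct linear combination of the measured variables and can be evaluated as follows (the abstract does not state the decision threshold on Z, so none is assumed here):

```python
def red_tide_index(sal, do, ph, po4p):
    """Short-term red tide prediction index from the abstract:
        Z = SAL - 3.95*DO - 2.974*pH - 5.421*PO4-P
    with salinity, dissolved oxygen, pH, and phosphate as inputs.
    The abstract gives no threshold on Z, so none is applied here."""
    return sal - 3.95 * do - 2.974 * ph - 5.421 * po4p
```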

  14. Merging imagery and models for river current prediction

    Science.gov (United States)

    Blain, Cheryl Ann; Linzell, Robert S.; McKay, Paul

    2011-06-01

    To meet the challenge of operating in river environments with denied access and to improve the riverine intelligence available to the warfighter, advanced high resolution river circulation models are combined with remote sensing feature extraction algorithms to produce a predictive capability for currents and water levels in rivers where a priori knowledge of the river environment is limited. A River Simulation Tool (RST) is developed to facilitate the rapid configuration of a river model. River geometry is extracted from the automated processing of available imagery while minimal user input is collected to complete the parameter and forcing specifications necessary to configure a river model. Contingencies within the RST accommodate missing data such as a lack of water depth information and allow for ensemble computations. Successful application of the RST to river environments is demonstrated for the Snohomish River, WA. Modeled currents compare favorably to in-situ currents reinforcing the value of the developed approach.

  15. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences follow a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; it is thus quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.

  16. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    Science.gov (United States)

    Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno

    2017-03-01

    This paper presents statistical analyses of rock engineering properties and the measured penetration rate of a tunnel boring machine (TBM) based on data from an actual project. The aim of this study is to analyze the influence of rock engineering properties, including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness, on the TBM rate of penetration (ROP). Four statistical regression models (two linear and two nonlinear) are built to predict the ROP of the TBM. Finally, a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict TBM performance. The fuzzy logic model achieves the highest coefficient of determination (R² = 0.714), compared with 0.667 for the runner-up, the multiple-variable nonlinear regression model.

  17. Predictability of the Indian Ocean Dipole in the coupled models

    Science.gov (United States)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2017-03-01

    In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill (normally defined as an anomaly correlation coefficient greater than 0.5) only at leads of around 2-3 months, mainly because false alarms become more frequent as lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with improvements in model development and initialization, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
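    The skill measure referred to above, the anomaly correlation coefficient, is the correlation between forecast and observed anomalies about a climatological mean; a minimal sketch of the usual definition:

```python
import math

def anomaly_correlation(forecast, observed, climatology):
    """Anomaly correlation coefficient: Pearson correlation of forecast
    and observed anomalies relative to a climatological mean. Values
    near 1 indicate skill; 0.5 is a common usefulness cutoff."""
    fa = [f - c for f, c in zip(forecast, climatology)]
    oa = [o - c for o, c in zip(observed, climatology)]
    mf = sum(fa) / len(fa)
    mo = sum(oa) / len(oa)
    num = sum((f - mf) * (o - mo) for f, o in zip(fa, oa))
    den = math.sqrt(sum((f - mf) ** 2 for f in fa)
                    * sum((o - mo) ** 2 for o in oa))
    return num / den
```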

  18. Smooth-Threshold Multivariate Genetic Prediction with Unbiased Model Selection.

    Science.gov (United States)

    Ueki, Masao; Tamiya, Gen

    2016-04-01

    We develop a new genetic prediction method, smooth-threshold multivariate genetic prediction, using single nucleotide polymorphism (SNP) data in genome-wide association studies (GWASs). Our method consists of two stages. At the first stage, unlike the usual discontinuous SNP screening used in the gene score method, our method continuously screens SNPs based on the output from standard univariate analysis of the marginal association of each SNP. At the second stage, the predictive model is built by generalized ridge regression simultaneously using the screened SNPs, with SNP weights determined by the strength of marginal association. Continuous SNP screening by smooth thresholding not only makes prediction stable but also leads to a closed-form expression for the generalized degrees of freedom (GDF). The GDF leads to Stein's unbiased risk estimation (SURE), which enables data-dependent choice of the optimal SNP screening cutoff without using cross-validation. Our method is very rapid because the computationally expensive genome-wide scan is required only once, in contrast to penalized regression methods including the lasso and elastic net. Simulation studies that mimic real GWAS data with quantitative and binary traits demonstrate that the proposed method outperforms the gene score method and genomic best linear unbiased prediction (GBLUP), and shows comparable or sometimes improved performance relative to the lasso and elastic net, which are known to have good predictive ability but heavy computational cost. Application to whole-genome sequencing (WGS) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) shows that the proposed method has higher predictive power than the gene score and GBLUP methods.
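    The two-stage idea (continuous screening, then weighted ridge regression) can be sketched as below. This is a simplified illustration under assumed details: correlation-based screening weights, a fixed soft threshold tau, and a fixed ridge penalty lam, rather than the authors' exact estimator with GDF/SURE-tuned cutoff:

```python
import numpy as np

def smooth_threshold_ridge(X, y, tau=0.1, lam=1.0):
    """Two-stage sketch inspired by smooth-threshold genetic prediction
    (simplified; not the paper's exact estimator):
      1) continuous screening weights from marginal correlations,
         soft-thresholded at tau (weak predictors get weight 0);
      2) ridge regression on the weighted design matrix."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # Stage 1: |marginal correlation| per column, smooth-thresholded
    r = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0)
                             * np.linalg.norm(yc) + 1e-12)
    w = np.clip(r - tau, 0.0, None)
    Xw = Xc * w                             # down-weight weak columns
    # Stage 2: closed-form ridge solution on the weighted design
    p = X.shape[1]
    beta = np.linalg.solve(Xw.T @ Xw + lam * np.eye(p), Xw.T @ yc)
    return w, beta * w                      # coefficients on original scale
```

    Because the screening is continuous rather than a hard cut, small changes in the data move the weights smoothly, which is what makes a closed-form degrees-of-freedom expression possible in the paper.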

  19. Evaluation of the Predictive Accuracy of Five Whole Building Baseline Models

    Energy Technology Data Exchange (ETDEWEB)

    Granderson, Jessica [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Price, Phillip [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-08-31

    This report documents the relative and absolute performance of five baseline models used to characterize whole-building energy consumption. The Pulse Adaptive Model, multi-parameter change-point, mean-week, day-time-temperature, and LBNL models were evaluated according to a number of statistical ‘goodness of fit’ metrics, to determine their accuracy in characterizing the energy consumption of a set of 29 buildings. The baseline training period, prediction horizon, and predicted energy quantity (daily, weekly, and monthly energy consumption) were varied, and model predictions were compared to interval meter data to determine the accuracy of each model. Three combinations of baseline training periods and prediction horizons were considered: 6 months of training to generate a 12-month prediction; 9 months of training to generate a 7-month prediction; and 12 months of training to generate a 6-month prediction.
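    Of the five baselines, the mean-week model is the simplest to sketch: the prediction for any future time is the training-period mean for that (weekday, hour) slot. A minimal version, assuming hourly interval data:

```python
import datetime
from collections import defaultdict

def fit_mean_week(timestamps, loads):
    """Mean-week baseline: average the training loads within each
    (weekday, hour) slot. Sketch of the report's 'mean-week' model;
    the other four baselines are not shown."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for t, y in zip(timestamps, loads):
        key = (t.weekday(), t.hour)
        sums[key] += y
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

def predict_mean_week(model, t):
    """Predicted load at time t is the slot mean from training."""
    return model[(t.weekday(), t.hour)]
```

    Two Mondays at 08:00 with loads 10 and 20 in training yield a prediction of 15 for any future Monday at 08:00.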

  20. Comparing the ecological relevance of four wave exposure models

    Science.gov (United States)

    Sundblad, G.; Bekkby, T.; Isæus, M.; Nikolopoulos, A.; Norderhaug, K. M.; Rinde, E.

    2014-03-01

    Wave exposure is one of the main structuring forces in the marine environment. Methods that enable large scale quantification of environmental variables have become increasingly important for predicting marine communities in the context of spatial planning and coastal zone management. Existing methods range from cartographic solutions to numerical hydrodynamic simulations, and differ in the scale and spatial coverage of their outputs. Using a biological exposure index we compared the performance of four wave exposure models ranging from simple to more advanced techniques. All models were found to be related to the biological exposure index and their performance, measured as bootstrapped R2 distributions, overlapped. Qualitatively, there were differences in the spatial patterns indicating higher complexity with more advanced techniques. In order to create complex spatial patterns wave exposure models should include diffraction, especially in coastal areas rich in islands. The inclusion of wind strength and frequency, in addition to wind direction and bathymetry, further tended to increase the amount of explained variation. The large potential of high-resolution numerical models to explain the observed patterns of species distribution in complex coastal areas provide exciting opportunities for future research. Easy access to relevant wave exposure models will aid large scale habitat classification systems and the continuously growing field of marine species distribution modelling, ultimately serving marine spatial management and planning.

  1. COGNITIVE MODELS OF PREDICTION THE DEVELOPMENT OF A DIVERSIFIED CORPORATION

    Directory of Open Access Journals (Sweden)

    Baranovskaya T. P.

    2016-10-01

    Full Text Available The application of classical forecasting methods to a diversified corporation faces certain difficulties due to its economic nature. Unlike other businesses, diversified corporations are characterized by multidimensional arrays of data with a high degree of distortion and fragmentation of information, owing to the cumulative effect of incomplete and distorted accounting information from the enterprises within them. Under these conditions, the methods and tools applied must have high resolution, work effectively with large databases with incomplete information, and ensure correct, comparable quantitative processing of heterogeneous factors measured in different units. It is therefore necessary to select or develop methods that can handle complex, poorly formalized tasks, which substantiates the relevance of the problem of developing models, methods and tools for forecasting the development of diversified corporations. The work aims to: (1) analyze forecasting methods to justify the choice of system-cognitive analysis as an effective method for the prediction of semi-structured tasks; (2) adapt and develop the method of system-cognitive analysis for forecasting the dynamics of the corporation's development subject to the scenario approach; (3) develop predictive model scenarios of changes in the basic economic indicators of the corporation's development and assess their credibility; (4) determine the analytical form of the dependence between past and future scenarios of various economic indicators; (5) develop analytical models weighing predictable scenarios, taking into account all prediction results with positive levels of similarity, to increase the reliability of forecasts; (6) develop a calculation procedure to assess the strength of influence on the corporation (sensitivity of its

  2. Comparing Two Strategies to Model Uncertainties in Structural Dynamics

    Directory of Open Access Journals (Sweden)

    Rubens Sampaio

    2010-01-01

    Full Text Available In the modeling of dynamical systems, uncertainties are present and must be taken into account to improve the predictions of the models. Several strategies have been used to model uncertainties, and the aim of this work is to discuss two of them and to compare them. This is done using the simplest model possible: a two-d.o.f. (degrees of freedom) dynamical system. A simple system is used because it is very helpful for assuring a better understanding and, consequently, comparison of the strategies. The first strategy (called the parametric strategy) consists in taking each spring stiffness as uncertain, with a random variable associated to each of them. The second strategy (called the nonparametric strategy) is more general: it considers the whole stiffness matrix as uncertain and associates a random matrix to it. In both cases, the probability density functions, either of the random parameters or of the random matrix, are deduced from the Maximum Entropy Principle using only the available information. With this example, some important results can be discussed that cannot be assessed when complex structures are used, as has been done so far in the literature. One important element in the comparison of the two strategies is the analysis of the sample spaces and of how to compare them.
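    The parametric strategy can be sketched by Monte Carlo: draw each spring stiffness from a probability distribution and propagate the samples through the deterministic 2-d.o.f. eigenproblem. The lognormal choice below is an illustrative assumption made for positivity, not the Maximum Entropy distribution derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def natural_freqs(k1, k2, m=1.0):
    """Natural frequencies (rad/s) of a 2-d.o.f. spring-mass chain:
    ground--k1--mass1--k2--mass2, both masses m."""
    K = np.array([[k1 + k2, -k2],
                  [-k2, k2]])
    M = np.eye(2) * m
    eigvals = np.linalg.eigvalsh(np.linalg.inv(M) @ K)
    return np.sqrt(np.sort(eigvals))

# Parametric strategy sketch: each stiffness is an independent random
# variable; propagate samples through the deterministic eigenproblem.
samples = [natural_freqs(rng.lognormal(np.log(100.0), 0.1),
                         rng.lognormal(np.log(100.0), 0.1))
           for _ in range(2000)]
mean_f1 = float(np.mean([s[0] for s in samples]))
```

    The nonparametric strategy would instead draw the whole stiffness matrix K from a random-matrix ensemble, which also captures model uncertainty in the matrix structure itself.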

  3. Feature selection and validated predictive performance in the domain of Legionella pneumophila: A comparative study

    NARCIS (Netherlands)

    T. van der Ploeg (Tjeerd); E.W. Steyerberg (Ewout)

    2016-01-01

    textabstractBackground: Genetic comparisons of clinical and environmental Legionella strains form an essential part of outbreak investigations. DNA microarrays often comprise many DNA markers (features). Feature selection and the development of prediction models are particularly challenging in this

  4. Leptogenesis in minimal predictive seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)

    2015-10-15

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the “atmospheric” and “solar” neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n=3, while a ℤ_9 symmetry fixes the relative phase to be a ninth root of unity.
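    Concretely, for the quoted texture with n = 3 the abstract's two columns (1,3,1) and (0,1,1) imply a neutrino Yukawa matrix of the following form, where a and b are the two proportionality constants and η their relative phase; this is a sketch of the structure described in the abstract, and normalization conventions may differ from the paper's:

```latex
Y_\nu \;\propto\;
\begin{pmatrix}
0 & b\,e^{i\eta} \\
a & 3\,b\,e^{i\eta} \\
a & b\,e^{i\eta}
\end{pmatrix},
\qquad
\eta = \frac{2\pi k}{9},\quad k \in \{0,\dots,8\},
```

    with the ℤ_9 symmetry restricting η to a ninth root of unity as stated.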

  5. QSPR Models for Octane Number Prediction

    Directory of Open Access Journals (Sweden)

    Jabir H. Al-Fahemi

    2014-01-01

    Full Text Available Quantitative structure-property relationship (QSPR) modelling is performed as a means to predict the octane number of hydrocarbons by correlating the property with parameters calculated from molecular structure; such parameters are molecular mass (M), hydration energy (EH), boiling point (BP), octanol/water distribution coefficient (logP), molar refractivity (MR), critical pressure (CP), critical volume (CV), and critical temperature (CT). Principal component analysis (PCA) and the multiple linear regression technique (MLR) were performed to examine the relationship between these parameters and the octane number of hydrocarbons, with the PCA results explaining the interrelationships between the octane number and the different variables. Correlation coefficients were calculated using MS Excel. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination R² = 0.932, statistical significance F = 53.21, and standard error s = 7.7. The obtained QSPR model was applied to the validation set of octane numbers for hydrocarbons, giving R²CV = 0.942 and s = 6.328.

  6. Bayesian prediction of placebo analgesia in an instrumental learning model

    Science.gov (United States)

    Jung, Won-Mo; Lee, Ye-Seul; Wallraven, Christian; Chae, Younbyoung

    2017-01-01

    Placebo analgesia can be primarily explained by the Pavlovian conditioning paradigm in which a passively applied cue becomes associated with less pain. In contrast, instrumental conditioning employs an active paradigm that might be more similar to clinical settings. In the present study, an instrumental conditioning paradigm involving a modified trust game in a simulated clinical situation was used to induce placebo analgesia. Additionally, Bayesian modeling was applied to predict the placebo responses of individuals based on their choices. Twenty-four participants engaged in a medical trust game in which decisions to receive treatment from either a doctor (more effective with high cost) or a pharmacy (less effective with low cost) were made after receiving a reference pain stimulus. In the conditioning session, the participants received lower levels of pain following both choices, while high pain stimuli were administered in the test session even after making the decision. The choice-dependent pain in the conditioning session was modulated in terms of both intensity and uncertainty. Participants reported significantly less pain when they chose the doctor or the pharmacy for treatment compared to the control trials. The predicted pain ratings based on Bayesian modeling showed significant correlations with the actual reports from participants for both of the choice categories. The instrumental conditioning paradigm allowed for the active choice of optional cues and was able to induce the placebo analgesia effect. Additionally, Bayesian modeling successfully predicted pain ratings in a simulated clinical situation that fits well with placebo analgesia induced by instrumental conditioning. PMID:28225816

  7. Learning Predictive Movement Models From Fabric-Mounted Wearable Sensors.

    Science.gov (United States)

    Michael, Brendan; Howard, Matthew

    2016-12-01

    The measurement and analysis of human movement for applications in clinical diagnostics or rehabilitation are often performed in a laboratory setting using static motion capture devices. A growing interest in analyzing movement in everyday environments (such as the home) has prompted the development of "wearable sensors", with the most current wearable sensors being those embedded into clothing. A major issue with the use of these fabric-embedded sensors, however, is the undesired effect of fabric motion artefacts corrupting movement signals. In this paper, a nonparametric method is presented for learning body movements, viewing the undesired motion as stochastic perturbations to the sensed motion and using orthogonal regression techniques to form predictive models of the wearer's motion that eliminate these errors in the learning process. Experiments in this paper show that standard nonparametric learning techniques underperform in this fabric-motion context and that improved prediction accuracy can be achieved by using orthogonal regression techniques. Modelling this motion artefact problem as a stochastic learning problem yields an average 77% decrease in prediction error in a body pose task using fabric-embedded sensors, compared to a kinematic model.
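    Orthogonal regression differs from ordinary least squares in minimizing perpendicular rather than vertical distances, so noise in inputs and outputs is treated symmetrically. A minimal total-least-squares line fit via SVD, illustrating the principle rather than the paper's nonparametric estimator:

```python
import numpy as np

def orthogonal_line_fit(x, y):
    """Orthogonal (total least squares) line fit: minimizes the
    perpendicular distances to the line, treating noise in x and y
    symmetrically. Returns (slope, intercept); assumes the fitted
    line is not vertical."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    pts = np.column_stack([x - x.mean(), y - y.mean()])
    # The top right singular vector of the centred point cloud gives
    # the principal direction of the data.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    dx, dy = vt[0]
    slope = dy / dx
    return slope, y.mean() - slope * x.mean()
```

    For noiseless points on y = 2x + 1 the fit is exact; with noise in both coordinates it avoids the attenuation bias that ordinary least squares exhibits.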

  8. Comparative Study on Prediction Effects of Short Fatigue Crack Propagation Rate by Two Different Calculation Methods

    Science.gov (United States)

    Yang, Bing; Liao, Zhen; Qin, Yahang; Wu, Yayun; Liang, Sai; Xiao, Shoune; Yang, Guangwu; Zhu, Tao

    2017-05-01

To describe the complicated nonlinear process of fatigue short crack evolution behavior, especially the change of the crack propagation rate, two different calculation methods are applied. The dominant effective short fatigue crack propagation rates are calculated based on the replica fatigue short crack test with nine smooth funnel-shaped specimens and the observation of the replica films according to the effective short fatigue cracks principle. Owing to the fast decay and nonlinear approximation ability of wavelet analysis, the self-learning ability of neural networks, and the macroscopic searching and global optimization of genetic algorithms, a genetic wavelet neural network can capture the implicit complex nonlinear relationship when multiple influencing factors are considered synthetically. The effective short fatigue cracks and the dominant effective short fatigue crack are simulated and compared by the genetic wavelet neural network. The simulation results show that the genetic wavelet neural network is a rational and available method for studying the evolution behavior of the fatigue short crack propagation rate. Meanwhile, a traditional data-fitting method for a short crack growth model is also utilized to fit the test data, and it, too, proves reasonable and applicable for predicting the growth rate. Finally, the reason for the difference between the prediction effects of these two methods is interpreted.

  9. Development of a Mobile Application for Building Energy Prediction Using Performance Prediction Model

    Directory of Open Access Journals (Sweden)

    Yu-Ri Kim

    2016-03-01

Full Text Available Recently, the Korean government has enforced disclosure of building energy performance, so that such information can help owners and prospective buyers to make suitable investment plans. This building energy performance policy makes it mandatory for building owners to obtain engineering audits and thereby evaluate the energy performance levels of their buildings. However, to calculate energy performance levels (i.e., by the asset rating methodology), a qualified expert needs access to at least the full project documentation and/or must conduct an on-site inspection of the buildings. Energy performance certification costs considerable time and money. Moreover, the database of certified buildings is still quite small. The need is therefore growing for a simplified, user-friendly energy performance prediction tool for non-specialists, as well as a database that allows building owners and users to compare best practices. In this regard, the current study developed a simplified performance prediction model through experimental design, energy simulations and ANOVA (analysis of variance). Furthermore, using the new prediction model, a related mobile application was also developed.

  10. Protein comparative sequence analysis and computer modeling.

    Science.gov (United States)

    Hambly, Brett D; Oakley, Cecily E; Fajer, Piotr G

    2008-01-01

    A problem frequently encountered by the biological scientist is the identification of a previously unknown gene or protein sequence, where there are few or no clues as to the biochemical function, ligand specificity, gene regulation, protein-protein interactions, tissue specificity, cellular localization, developmental phase of activity, or biological role. Through the process of bioinformatics there are now many approaches for predicting answers to at least some of these questions, often then allowing the design of more insightful experiments to characterize more definitively the new protein.

  11. Predictive Models for Photovoltaic Electricity Production in Hot Weather Conditions

    Directory of Open Access Journals (Sweden)

    Jabar H. Yousif

    2017-07-01

    Full Text Available The process of finding a correct forecast equation for photovoltaic electricity production from renewable sources is an important matter, since knowing the factors affecting the increase in the proportion of renewable energy production and reducing the cost of the product has economic and scientific benefits. This paper proposes a mathematical model for forecasting energy production in photovoltaic (PV panels based on a self-organizing feature map (SOFM model. The proposed model is compared with other models, including the multi-layer perceptron (MLP and support vector machine (SVM models. Moreover, a mathematical model based on a polynomial function for fitting the desired output is proposed. Different practical measurement methods are used to validate the findings of the proposed neural and mathematical models such as mean square error (MSE, mean absolute error (MAE, correlation (R, and coefficient of determination (R2. The proposed SOFM model achieved a final MSE of 0.0007 in the training phase and 0.0005 in the cross-validation phase. In contrast, the SVM model resulted in a small MSE value equal to 0.0058, while the MLP model achieved a final MSE of 0.026 with a correlation coefficient of 0.9989, which indicates a strong relationship between input and output variables. The proposed SOFM model closely fits the desired results based on the R2 value, which is equal to 0.9555. Finally, the comparison results of MAE for the three models show that the SOFM model achieved a best result of 0.36156, whereas the SVM and MLP models yielded 4.53761 and 3.63927, respectively. A small MAE value indicates that the output of the SOFM model closely fits the actual results and predicts the desired output.
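The four validation measures named above (MSE, MAE, R, R2) are standard and easy to reproduce; a small sketch with made-up numbers, not the study's data:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    # Standard agreement measures used to compare predictive models.
    err = y_true - y_pred
    mse = np.mean(err ** 2)                                   # mean square error
    mae = np.mean(np.abs(err))                                # mean absolute error
    r = np.corrcoef(y_true, y_pred)[0, 1]                     # correlation
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return mse, mae, r, r2

y_true = np.array([1.0, 2.0, 3.0, 4.0])   # illustrative measurements
y_pred = np.array([1.1, 1.9, 3.2, 3.8])   # illustrative model output
mse, mae, r, r2 = regression_metrics(y_true, y_pred)
```

Note that R measures linear association while R2 here is the coefficient of determination about the mean; the two coincide only for an unbiased linear fit.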

  12. Predictability in models of the atmospheric circulation.

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error are. The

  13. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a ''model supplement term'' when model problems are indicated. This term provides a (bias) correction to the model so that it better matches the experimental results and more accurately accounts for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes of these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response. The validation analysis indicates that the model tends to ''exaggerate'' the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model

  14. Hybrid Wavelet-Postfix-GP Model for Rainfall Prediction of Anand Region of India

    Directory of Open Access Journals (Sweden)

    Vipul K. Dabhi

    2014-01-01

    Full Text Available An accurate prediction of rainfall is crucial for national economy and management of water resources. The variability of rainfall in both time and space makes the rainfall prediction a challenging task. The present work investigates the applicability of a hybrid wavelet-postfix-GP model for daily rainfall prediction of Anand region using meteorological variables. The wavelet analysis is used as a data preprocessing technique to remove the stochastic (noise component from the original time series of each meteorological variable. The Postfix-GP, a GP variant, and ANN are then employed to develop models for rainfall using newly generated subseries of meteorological variables. The developed models are then used for rainfall prediction. The out-of-sample prediction performance of Postfix-GP and ANN models is compared using statistical measures. The results are comparable and suggest that Postfix-GP could be explored as an alternative tool for rainfall prediction.
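The paper's exact wavelet preprocessing is not specified here; a self-contained one-level Haar decomposition illustrates the idea of splitting a series into a smooth approximation and a noisy detail subseries (an illustrative sketch, not the authors' transform):

```python
import numpy as np

def haar_decompose(x):
    # One-level Haar transform: approximation (low-pass) and detail
    # (high-pass) subseries; the detail band carries most of the noise.
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_reconstruct(approx, detail):
    # Inverse transform; exact when both bands are kept.
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_decompose(signal)
denoised = haar_reconstruct(a, np.zeros_like(d))  # drop the detail band
```

Zeroing the detail band averages adjacent samples, which is the simplest form of the "remove the stochastic component" step; the smoothed subseries would then feed the Postfix-GP or ANN model.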

  15. Uric acid and the prediction models of tumor lysis syndrome in AML.

    Directory of Open Access Journals (Sweden)

    A Ahsan Ejaz

Full Text Available We investigated the ability of serum uric acid (SUA) to predict laboratory tumor lysis syndrome (LTLS) and compared it to common laboratory variables, cytogenetic profiles, tumor markers and prediction models in acute myeloid leukemia patients. In this retrospective study patients were risk-stratified for LTLS based on SUA cut-off values and the discrimination ability was compared to current prediction models. The incidences of LTLS were 17.8%, 21% and 62.5% in the low, intermediate and high-risk groups, respectively. SUA was an independent predictor of LTLS (adjusted OR 1.12, CI95% 1.0-1.3, p = 0.048). The discriminatory ability of SUA, per ROC curves, to predict LTLS was superior to LDH, cytogenetic profile, tumor markers and the combined model, but not to WBC (AUCWBC 0.679). However, in comparisons between high-risk SUA and high-risk WBC, SUA had superior discriminatory capability (AUCSUA 0.664 vs. AUCWBC 0.520; p < 0.001). SUA also demonstrated better performance than the prediction models (high-risk SUA AUC 0.695, p < 0.001). In direct comparison of high-risk groups, SUA again demonstrated superior performance over the prediction models in predicting LTLS (high-risk SUA AUC 0.668, p = 0.001), approaching that of the combined model (AUC 0.685, p < 0.001). In conclusion, SUA alone is highly predictive of LTLS and compares favorably with other prediction models.
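ROC AUC values like those reported above can be computed without fitting a curve, via the Mann-Whitney statistic; a sketch with hypothetical SUA values, not the study's data:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    # Mann-Whitney formulation of ROC AUC: the probability that a randomly
    # chosen case scores higher than a randomly chosen non-case
    # (ties counted as one half).
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return np.mean(pos > neg) + 0.5 * np.mean(pos == neg)

# Hypothetical serum uric acid values (mg/dL), for illustration only.
sua_ltls = [9.1, 8.4, 7.9, 10.2]        # patients who developed LTLS
sua_none = [5.2, 6.1, 7.0, 8.0, 4.9]    # patients who did not
a = auc(sua_ltls, sua_none)
```

An AUC of 0.5 corresponds to a useless marker and 1.0 to perfect separation, which is the scale on which the SUA and WBC comparisons above are made.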

  16. Predicting aquifer response time for application in catchment modeling.

    Science.gov (United States)

    Walker, Glen R; Gilfedder, Mat; Dawes, Warrick R; Rassam, David W

    2015-01-01

It is well established that changes in catchment land use can lead to significant impacts on water resources. Where land-use changes increase evapotranspiration there is a resultant decrease in groundwater recharge, which in turn decreases groundwater discharge to streams. The response time of changes in groundwater discharge to a change in recharge is a key aspect of predicting impacts of land-use change on catchment water yield. Predicting these impacts across the large catchments relevant to water resource planning can require the estimation of groundwater response times from hundreds of aquifers. At this scale, detailed site-specific measured data are often absent, and available spatial data are limited. While numerical models can be applied, there is little advantage if there are no detailed data to parameterize them. Simple analytical methods are useful in this situation, as they allow the variability in groundwater response to be incorporated into catchment hydrological models, with minimal modeling overhead. This paper describes an analytical model which has been developed to capture some of the features of real, sloping aquifer systems. The derived groundwater response timescale can be used to parameterize a groundwater discharge function, allowing groundwater response to be predicted in relation to different broad catchment characteristics at a level of complexity which matches the available data. The results from the analytical model are compared to published field data and numerical model results, and provide an approach with broad application to inform water resource planning in other large, data-scarce catchments. © 2014, Commonwealth of Australia. Groundwater © 2014, National Ground Water Association.

  17. Prediction of interest rate using CKLS model with stochastic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Ying, Khor Chia [Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor (Malaysia); Hin, Pooi Ah [Sunway University Business School, No. 5, Jalan Universiti, Bandar Sunway, 47500 Subang Jaya, Selangor (Malaysia)

    2014-06-19

The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing the spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated by the j-th window, which is defined as the set consisting of the observed interest rates at the j′-th time points where j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_{j+n} at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_{j+n+1} of the interest rate at the next time point when the value r_{j+n} of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_{j+n+d} at the next d-th (d ≥ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters.
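The CKLS short-rate dynamics underlying the abstract can be simulated with a simple Euler-Maruyama scheme; a sketch with illustrative parameter values (the paper's stochastic-parameter machinery is not reproduced here):

```python
import numpy as np

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt, n, rng):
    # Euler-Maruyama discretisation of the CKLS SDE:
    #   dr = (alpha + beta * r) dt + sigma * r**gamma dW
    # With beta < 0 the rate mean-reverts toward -alpha/beta.
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        r[i + 1] = (r[i] + (alpha + beta * r[i]) * dt
                    + sigma * max(r[i], 0.0) ** gamma * dw)
    return r

rng = np.random.default_rng(42)
# Illustrative parameters: daily steps over one trading year.
path = simulate_ckls(r0=0.05, alpha=0.01, beta=-0.2, sigma=0.1,
                     gamma=0.5, dt=1 / 252, n=252, rng=rng)
```

With gamma = 0.5 this reduces to the CIR model; gamma = 0 gives Vasicek, which is why CKLS is often used as the umbrella specification.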

  18. BAYESIAN FORECASTS COMBINATION TO IMPROVE THE ROMANIAN INFLATION PREDICTIONS BASED ON ECONOMETRIC MODELS

    Directory of Open Access Journals (Sweden)

    Mihaela Simionescu

    2014-12-01

Full Text Available There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve the prediction accuracy by including information that is not captured by the econometric models. Therefore, experts’ forecasts are utilized as prior information; for Romania, these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models like VAR, Bayesian VAR, simultaneous equations model, dynamic model, log-linear model. The Bayesian combinations that used experts’ predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, outperforming also zero and equal weights predictions and naïve forecasts.

  19. Comparative Study of MHD Modeling of the Background Solar Wind

    CERN Document Server

    Gressl, C; Temmer, M; Odstrcil, D; Linker, J A; Mikic, Z; Riley, P

    2013-01-01

    Knowledge about the background solar wind plays a crucial role in the framework of space weather forecasting. In-situ measurements of the background solar wind are only available for a few points in the heliosphere where spacecraft are located, therefore we have to rely on heliospheric models to derive the distribution of solar wind parameters in interplanetary space. We test the performance of different solar wind models, namely Magnetohydrodynamic Algorithm outside a Sphere/ENLIL (MAS/ENLIL), Wang-Sheeley-Arge/ENLIL (WSA/ENLIL), and MAS/MAS, by comparing model results with in-situ measurements from spacecraft located at 1 AU distance to the Sun (ACE, Wind). To exclude the influence of interplanetary coronal mass ejections (ICMEs), we chose the year 2007 as a time period with low solar activity for our comparison. We found that the general structure of the background solar wind is well reproduced by all models. The best model results were obtained for the parameter solar wind speed. However, the predicted ar...

  20. Atmosphere of Mars - Mariner IV models compared.

    Science.gov (United States)

    Eshleman, V. R.; Fjeldbo, G.; Fjeldbo, W. C.

    1966-01-01

    Mariner IV models of three Mars atmospheric layers analogous to terrestrial E, F-1 and F-2 layers, considering relative mass densities, temperatures, carbon dioxide photodissociation and ionization profile

  1. Allostasis: a model of predictive regulation.

    Science.gov (United States)

    Sterling, Peter

    2012-04-12

The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. The stick corresponds broadly to the sense of anxiety, and the carrot broadly to

  2. Mixing height computation from a numerical weather prediction model

    Energy Technology Data Exchange (ETDEWEB)

    Jericevic, A. [Croatian Meteorological and Hydrological Service, Zagreb (Croatia); Grisogono, B. [Univ. of Zagreb, Zagreb (Croatia). Andrija Mohorovicic Geophysical Inst., Faculty of Science

    2004-07-01

Dispersion models require hourly values of the mixing height, H, which indicates the existence of turbulent mixing. The aim of this study was to investigate the model's ability and characteristics in the prediction of H. The ALADIN limited-area numerical weather prediction (NWP) model for short-range 48-hour forecasts was used. The bulk Richardson number (R_iB) method was applied to determine the height of the atmospheric boundary layer at the grid point nearest to Zagreb, Croatia. This specific location was selected because radio soundings were available there, so verification of the model could be done. A critical value of the bulk Richardson number, R_iBc = 0.3, was used. The modelled and measured values of H for 219 days at 12 UTC are compared, and a correlation coefficient of 0.62 is obtained. This indicates that ALADIN can be used for the calculation of H in the convective boundary layer. For the stable boundary layer (SBL), the model underestimated H systematically. Results showed that R_iBc evidently increases with increasing stability. Decoupling from the surface in the very stable boundary layer was detected, a consequence of the weak flow, which results in R_iB becoming very large. Verification of the practical usage of the R_iB method for H calculations from an NWP model was performed. The necessity of including other stability parameters (e.g., surface roughness length) was evidenced. Since the ALADIN model is in operational use in many European countries, this study should help others in pre-processing NWP data for input to dispersion models. (orig.)
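The bulk Richardson number method amounts to scanning a profile for the lowest level where R_iB first exceeds the critical value; a sketch over an invented sounding (all profile values illustrative):

```python
import numpy as np

def mixing_height(z, theta_v, u, v, ri_crit=0.3):
    # Bulk Richardson number of each level relative to the surface level:
    #   Ri_B(z) = g * (theta_v(z) - theta_v(0)) * (z - z0)
    #             / (theta_v(0) * (u^2 + v^2))
    # The mixing height is the lowest level where Ri_B >= ri_crit.
    g = 9.81
    wind2 = np.maximum(u ** 2 + v ** 2, 1e-6)   # guard against calm levels
    ri = g * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * wind2)
    above = np.nonzero(ri >= ri_crit)[0]
    return z[above[0]] if above.size else z[-1]

# Invented midday sounding: well mixed up to ~1 km, capped by an inversion.
z = np.array([10.0, 100.0, 300.0, 600.0, 1000.0, 1500.0])     # m
theta_v = np.array([300.0, 300.1, 300.2, 300.4, 302.0, 305.0])  # K
u = np.array([2.0, 4.0, 6.0, 7.0, 7.5, 8.0])                  # m/s
v = np.zeros_like(u)
h = mixing_height(z, theta_v, u, v)
```

A finer vertical grid would refine H between model levels; operationally, H is often interpolated to the height where Ri_B exactly equals the critical value.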

  3. Predicting lower mantle heterogeneity from 4-D Earth models

    Science.gov (United States)

    Flament, Nicolas; Williams, Simon; Müller, Dietmar; Gurnis, Michael; Bower, Dan J.

    2016-04-01

The Earth's lower mantle is characterized by two large-low-shear velocity provinces (LLSVPs), approximately 15,000 km in diameter and 500-1000 km high, located under Africa and the Pacific Ocean. The spatial stability and chemical nature of these LLSVPs are debated. Here, we compare the lower mantle structure predicted by forward global mantle flow models constrained by tectonic reconstructions (Bower et al., 2015) to an analysis of five global tomography models. In the dynamic models, spanning 230 million years, slabs subducting deep into the mantle deform an initially uniform basal layer containing 2% of the volume of the mantle. Basal density, convective vigour (Rayleigh number Ra), mantle viscosity, absolute plate motions, and relative plate motions are varied in a series of model cases. We use cluster analysis to classify a set of equally-spaced points (average separation ˜0.45°) on the Earth's surface into two groups of points with similar variations in present-day temperature between 1000-2800 km depth, for each model case. Below ˜2400 km depth, this procedure reveals a high-temperature cluster in which mantle temperature is significantly larger than ambient and a low-temperature cluster in which mantle temperature is lower than ambient. The spatial extent of the high-temperature cluster is in first-order agreement with the outlines of the African and Pacific LLSVPs revealed by a similar cluster analysis of five tomography models (Lekic et al., 2012). Model success is quantified by computing the accuracy and sensitivity of the predicted temperature clusters in predicting the low-velocity cluster obtained from tomography (Lekic et al., 2012). In these cases, the accuracy varies between 0.61-0.80, where a value of 0.5 represents the random case, and the sensitivity ranges between 0.18-0.83. The largest accuracies and sensitivities are obtained for models with Ra ≈ 5 × 10^7, no asthenosphere (or an asthenosphere restricted to the oceanic domain), and a

  4. A comparative study of the reported performance of ab initio protein structure prediction algorithms.

    Science.gov (United States)

    Helles, Glennie

    2008-04-01

    Protein structure prediction is one of the major challenges in bioinformatics today. Throughout the past five decades, many different algorithmic approaches have been attempted, and although progress has been made the problem remains unsolvable even for many small proteins. While the general objective is to predict the three-dimensional structure from primary sequence, our current knowledge and computational power are simply insufficient to solve a problem of such high complexity. Some prediction algorithms do, however, appear to perform better than others, although it is not always obvious which ones they are and it is perhaps even less obvious why that is. In this review, the reported performance results from 18 different recently published prediction algorithms are compared. Furthermore, the general algorithmic settings most likely responsible for the difference in the reported performance are identified, and the specific settings of each of the 18 prediction algorithms are also compared. The average normalized r.m.s.d. scores reported range from 11.17 to 3.48. With a performance measure including both r.m.s.d. scores and CPU time, the currently best-performing prediction algorithm is identified to be the I-TASSER algorithm. Two of the algorithmic settings--protein representation and fragment assembly--were found to have definite positive influence on the running time and the predicted structures, respectively. There thus appears to be a clear benefit from incorporating this knowledge in the design of new prediction algorithms.

  5. A New Method of Comparing Forcing Agents in Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Kravitz, Benjamin S.; MacMartin, Douglas; Rasch, Philip J.; Jarvis, Andrew

    2015-10-14

    We describe a new method of comparing different climate forcing agents (e.g., CO2, CH4, and solar irradiance) that avoids many of the ambiguities introduced by temperature-related climate feedbacks. This is achieved by introducing an explicit feedback loop external to the climate model that adjusts one forcing agent to balance another while keeping global mean surface temperature constant. Compared to current approaches, this method has two main advantages: (i) the need to define radiative forcing is bypassed and (ii) by maintaining roughly constant global mean temperature, the effects of state dependence on internal feedback strengths are minimized. We demonstrate this approach for several different forcing agents and derive the relationships between these forcing agents in two climate models; comparisons between forcing agents are highly linear in concordance with predicted functional forms. Transitivity of the relationships between the forcing agents appears to hold within a wide range of forcing. The relationships between the forcing agents obtained from this method are consistent across both models but differ from relationships that would be obtained from calculations of radiative forcing, highlighting the importance of controlling for surface temperature feedback effects when separating radiative forcing and climate response.
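The external feedback loop can be illustrated with a toy zero-dimensional energy balance model in which an integral controller trims solar forcing to cancel a CO2 forcing while holding temperature fixed (all parameter values illustrative, not the models used in the paper):

```python
# Toy zero-dimensional energy balance model with an external controller.
lam = 1.2          # climate feedback parameter, W m^-2 K^-1 (illustrative)
c = 8.0            # effective heat capacity, W yr m^-2 K^-1 (illustrative)
f_co2 = 3.7        # imposed forcing from doubled CO2, W m^-2
k_i = 0.5          # integral controller gain (illustrative)

t_anom, solar_offset, dt = 0.0, 0.0, 0.1   # K, W m^-2, yr
for _ in range(2000):
    forcing = f_co2 + solar_offset
    t_anom += dt * (forcing - lam * t_anom) / c   # energy balance step
    solar_offset -= dt * k_i * t_anom             # controller nudges solar down
# After spin-up, solar_offset ~ -f_co2 and t_anom ~ 0: the solar reduction
# needed to offset the CO2 forcing is read off the controller state,
# without ever defining a radiative forcing for either agent.
```

This mirrors the paper's idea: the ratio between the two forcing agents at constant temperature is the diagnostic, and temperature-dependent feedbacks are suppressed because the state never drifts.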

  6. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  7. A prediction model for assessing residential radon concentration in Switzerland

    NARCIS (Netherlands)

    Hauri, D.D.; Huss, A.; Zimmermann, F.; Kuehni, C.E.; Roosli, M.

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the

  8. A Predictive Model of Cell Traction Forces Based on Cell Geometry

    OpenAIRE

    Lemmon, Christopher A.; Romer, Lewis H

    2010-01-01

    Recent work has indicated that the shape and size of a cell can influence how a cell spreads, develops focal adhesions, and exerts forces on the substrate. However, it is unclear how cell shape regulates these events. Here we present a computational model that uses cell shape to predict the magnitude and direction of forces generated by cells. The predicted results are compared to experimentally measured traction forces, and show that the model can predict traction force direction, relative m...

  9. Predicting and understanding forest dynamics using a simple tractable model.

    Science.gov (United States)

    Purves, Drew W; Lichstein, Jeremy W; Strigul, Nikolay; Pacala, Stephen W

    2008-11-04

    The perfect-plasticity approximation (PPA) is an analytically tractable model of forest dynamics, defined in terms of parameters for individual trees, including allometry, growth, and mortality. We estimated these parameters for the eight most common species on each of four soil types in the US Lake states (Michigan, Wisconsin, and Minnesota) by using short-term (compared these predictions to chronosequences of stand development. Predictions for the timing and magnitude of basal area dynamics and ecological succession on each soil were accurate, and predictions for the diameter distribution of 100-year-old stands were correct in form and slope. For a given species, the PPA provides analytical metrics for early-successional performance (H(20), height of a 20-year-old open-grown tree) and late-successional performance (Z*, equilibrium canopy height in monoculture). These metrics predicted which species were early or late successional on each soil type. Decomposing Z* showed that (i) succession is driven both by superior understory performance and superior canopy performance of late-successional species, and (ii) performance differences primarily reflect differences in mortality rather than growth. The predicted late-successional dominants matched chronosequences on xeromesic (Quercus rubra) and mesic (codominance by Acer rubrum and Acer saccharum) soil. On hydromesic and hydric soils, the literature reports that the current dominant species in old stands (Thuja occidentalis) is now failing to regenerate. Consistent with this, the PPA predicted that, on these soils, stands are now succeeding to dominance by other late-successional species (e.g., Fraxinus nigra, A. rubrum).

  10. Models to predict intestinal absorption of therapeutic peptides and proteins.

    Science.gov (United States)

    Antunes, Filipa; Andrade, Fernanda; Ferreira, Domingos; Nielsen, Hanne Morck; Sarmento, Bruno

    2013-01-01

Prediction of human intestinal absorption is a major goal in the design, optimization, and selection of drugs intended for oral delivery, in particular proteins, which possess intrinsically poor transport across the intestinal epithelium. There are various techniques currently employed to evaluate the extension of protein absorption in the different phases of drug discovery and development. Screening protocols to evaluate protein absorption include a range of preclinical methodologies like in silico, in vitro, in situ, ex vivo and in vivo. It is the careful and critical use of these techniques that can help to identify drug candidates, which most probably will be well absorbed from the human intestinal tract. It is well recognized that the human intestinal permeability cannot be accurately predicted based on a single preclinical method. However, present social and scientific concerns about animal welfare, as well as the pharmaceutical industry's need for rapid, cheap and reliable models predicting bioavailability, give reasons for using methods providing an appropriate correlation between results of in vivo and in vitro drug absorption. The aim of this review is to describe and compare in silico, in vitro, in situ, ex vivo and in vivo methods used to predict human intestinal absorption, giving special attention to the intestinal absorption of therapeutic peptides and proteins.

  11. Distributional Analysis for Model Predictive Deferrable Load Control

    OpenAIRE

    Chen, Niangjun; Gan, Lingwen; Low, Steven H.; Wierman, Adam

    2014-01-01

    Deferrable load control is essential for handling the uncertainties associated with the increasing penetration of renewable generation. Model predictive control has emerged as an effective approach for deferrable load control, and has received considerable attention. In particular, previous work has analyzed the average-case performance of model predictive deferrable load control. However, to this point, distributional analysis of model predictive deferrable load control has been elusive. In ...

  12. Prediction Uncertainty Analyses for the Combined Physically-Based and Data-Driven Models

    Science.gov (United States)

    Demissie, Y. K.; Valocchi, A. J.; Minsker, B. S.; Bailey, B. A.

    2007-12-01

    The unavoidable simplification associated with physically-based mathematical models can result in biased parameter estimates and correlated model calibration errors, which in turn affect the accuracy of model predictions and the corresponding uncertainty analyses. In this work, a physically-based groundwater model (MODFLOW) together with error-correcting artificial neural networks (ANN) are used in a complementary fashion to obtain an improved prediction (i.e. prediction with reduced bias and error correlation). The associated prediction uncertainty of the coupled MODFLOW-ANN model is then assessed using three alternative methods. The first method estimates the combined model confidence and prediction intervals using first-order least-squares regression approximation theory. The second method uses Monte Carlo and bootstrap techniques for MODFLOW and ANN, respectively, to construct the combined model confidence and prediction intervals. The third method relies on a Bayesian approach that uses analytical or Monte Carlo methods to derive the intervals. The performance of these approaches is compared with Generalized Likelihood Uncertainty Estimation (GLUE) and Calibration-Constrained Monte Carlo (CCMC) intervals of the MODFLOW predictions alone. The results are demonstrated for a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study comprises structural, parameter, and measurement uncertainties. The preliminary results indicate that the proposed three approaches yield comparable confidence and prediction intervals, thus making the computationally efficient first-order least-squares regression approach attractive for estimating the coupled model uncertainty. These results will be compared with GLUE and CCMC results.
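The complementary scheme described above can be sketched generically: a data-driven model is trained on the residuals of a physical model and the two are summed at prediction time. In the sketch below the "physical model" is a made-up one-liner and a linear least-squares fit stands in for the ANN; all names and data are illustrative assumptions, not the MODFLOW-ANN setup itself.

```python
# Sketch: correct a biased "physical" model with a data-driven residual model.
# The physical model and observations are hypothetical stand-ins; an ordinary
# least-squares line replaces the error-correcting ANN for brevity.

def physical_model(x):
    # Deliberately biased simplification of the "true" process.
    return 2.0 * x

def fit_residual_corrector(xs, ys):
    # Fit the residual r = y - physical(x) with OLS: r ~ a*x + b.
    rs = [y - physical_model(x) for x, y in zip(xs, ys)]
    n = len(xs)
    mx = sum(xs) / n
    mr = sum(rs) / n
    a = sum((x - mx) * (r - mr) for x, r in zip(xs, rs)) / \
        sum((x - mx) ** 2 for x in xs)
    b = mr - a * mx
    return lambda x: a * x + b

# Synthetic observations from a "true" process y = 2.5x + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.5 * x + 1.0 for x in xs]

correct = fit_residual_corrector(xs, ys)
coupled = lambda x: physical_model(x) + correct(x)
print(coupled(5.0))  # physical model alone gives 10.0; coupled gives 13.5
```

The same pattern extends to the paper's setting by replacing the one-liner with a MODFLOW run and the linear fit with an ANN trained on calibration residuals.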

  13. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  14. Predictive modeling of terrestrial radiation exposure from geologic materials

    Science.gov (United States)

    Haber, Daniel A.

    Aerial gamma ray surveys are an important tool for national security, scientific, and industrial interests in determining locations of both anthropogenic and natural sources of radioactivity. There is a relationship between radioactivity and geology, and in the past this relationship has been used to predict geology from an aerial survey. The purpose of this project is to develop a method to predict the radiologic exposure rate of the geologic materials in an area by creating a model using geologic data, images from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), geochemical data, and pre-existing low spatial resolution aerial surveys from the National Uranium Resource Evaluation (NURE) Survey. Using these data, geospatial areas, referred to as background radiation units, homogeneous in terms of K, U, and Th are defined and the gamma ray exposure rate is predicted. The prediction is compared to data collected via detailed aerial survey by our partner National Security Technologies, LLC (NSTec), allowing for the refinement of the technique. High resolution radiation exposure rate models have been developed for two study areas in Southern Nevada that include the alluvium on the western shore of Lake Mohave, and Government Wash north of Lake Mead; both of these areas are arid with little soil moisture and vegetation. We determined that by using geologic units to define radiation background units of exposed bedrock and ASTER visualizations to subdivide radiation background units of alluvium, regions of homogeneous geochemistry can be defined, allowing for the exposure rate to be predicted. Soil and rock samples have been collected at Government Wash and Lake Mohave as well as a third site near Cameron, Arizona. K, U, and Th concentrations of these samples have been determined using inductively coupled plasma mass spectrometry (ICP-MS) and laboratory counting using radiation detection equipment. In addition, many sample locations also have

  15. On hydrological model complexity, its geometrical interpretations and prediction uncertainty

    NARCIS (Netherlands)

    Arkesteijn, E.C.M.M.; Pande, S.

    2013-01-01

    Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to

  16. Predictive modeling of dental pain using neural network.

    Science.gov (United States)

    Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill

    2009-01-01

    The mouth, as the gateway for ingesting food, is a basic and important part of the body. In this study, dental pain was predicted using a neural network model. The resulting predictive model of dental pain factors achieved a fitness of 80.0%. For people whom the neural network model predicts are likely to experience dental pain, preventive measures including proper eating habits, education on oral hygiene, and stress release should precede any dental treatment.

  17. Analysis and Prediction of Rural Residents’ Living Consumption Growth in Sichuan Province Based on Markov Prediction and ARMA Model

    Institute of Scientific and Technical Information of China (English)

    LU Xiao-li

    2012-01-01

    I select 32 samples concerning per capita living consumption of rural residents in Sichuan Province during the period 1978-2009. First, using Markov prediction method, the growth rate of living consumption level in the future is predicted to largely range from 10% to 20%. Then, in order to improve the prediction accuracy, time variable t is added into the traditional ARMA model for modeling and prediction. The prediction results show that the average relative error rate is 1.56%, and the absolute value of relative error during the period 2006-2009 is less than 0.5%. Finally, I compare the prediction results during the period 2010-2012 by Markov prediction method and ARMA model, respectively, indicating that the two are consistent in terms of growth rate of living consumption, and the prediction results are reliable. The results show that under the similar policies, rural residents’ consumer demand in Sichuan Province will continue to grow in the short term, so it is necessary to further expand the consumer market.
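The Markov step used above can be sketched as follows: classify yearly growth rates into states, estimate a transition matrix from the observed sequence, and predict the most likely next state. The growth rates and state thresholds below are invented illustration values, not the Sichuan consumption data.

```python
# Sketch of Markov prediction on growth-rate states (illustrative data).

def to_state(rate):
    # Bucket a growth rate into one of three hypothetical states.
    return "low" if rate < 0.10 else ("mid" if rate < 0.20 else "high")

rates = [0.08, 0.12, 0.15, 0.22, 0.18, 0.11, 0.09, 0.14, 0.17, 0.19]
states = [to_state(r) for r in rates]

labels = ["low", "mid", "high"]
counts = {s: {t: 0 for t in labels} for s in labels}
for s, t in zip(states, states[1:]):      # count observed transitions
    counts[s][t] += 1
probs = {s: {t: c / max(1, sum(row.values())) for t, c in row.items()}
         for s, row in counts.items()}    # row-normalize into probabilities

last = states[-1]
prediction = max(probs[last], key=probs[last].get)
print(last, "->", prediction)
```

A real application would use many more observations and could iterate the transition matrix forward to obtain multi-year state distributions.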

  18. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    Full Text Available In this paper we present a comparison among the distributions used in hazard analysis. A simulation technique has been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We also show the flexibility of the hazard modeling distribution, which can approximate different distributions.

  19. Mutual information model for link prediction in heterogeneous complex networks

    Science.gov (United States)

    Shakibian, Hadi; Moghadam Charkari, Nasrollah

    2017-01-01

    Recently, a number of meta-path based similarity indices like PathSim, HeteSim, and random walk have been proposed for link prediction in heterogeneous complex networks. However, these indices suffer from two major drawbacks. Firstly, they are primarily dependent on the connectivity degrees of node pairs without considering the further information provided by the given meta-path. Secondly, most of them require a single and usually symmetric meta-path to be chosen in advance. Hence, employing a set of different meta-paths is not straightforward. To tackle these problems, we propose a mutual information model for link prediction in heterogeneous complex networks. The proposed model, called the Meta-path based Mutual Information Index (MMI), introduces meta-path based link entropy to estimate the link likelihood and can be applied over a set of available meta-paths. This estimation measures the amount of information through the paths instead of measuring the amount of connectivity between the node pairs. The experimental results on a Bibliography network show that the MMI obtains high prediction accuracy compared with other popular similarity indices. PMID:28344326

  20. A nonlinear regression model-based predictive control algorithm.

    Science.gov (United States)

    Dubay, R; Abu-Ayyad, M; Hernandez, J M

    2009-04-01

    This paper presents a unique approach for designing a nonlinear regression model-based predictive controller (NRPC) for single-input-single-output (SISO) and multi-input-multi-output (MIMO) processes that are common in industrial applications. The innovation of this strategy is that the controller structure allows nonlinear open-loop modeling to be conducted while closed-loop control is executed every sampling instant. Consequently, the system matrix is regenerated every sampling instant using a continuous function, providing a more accurate prediction of the plant. Computer simulations are carried out on nonlinear plants, demonstrating that the new approach is easily implemented and provides tight control. Also, the proposed algorithm is implemented on two real-time SISO applications, a DC motor and a plastic injection molding machine, and on a nonlinear MIMO thermal system comprising three temperature zones to be controlled with interacting effects. The experimental closed-loop responses of the proposed algorithm were compared to a multi-model dynamic matrix controller (MPC) with improved results for various set point trajectories. Good disturbance rejection was attained, resulting in improved tracking of multi-set point profiles in comparison to multi-model MPC.
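The receding-horizon idea behind any model-based predictive controller, NRPC included, can be sketched in a few lines: at each sampling instant, use an internal model to predict the effect of candidate inputs over a horizon, apply the best first move, and repeat. The first-order plant, weights, and coarse grid search below are illustrative assumptions, not the authors' formulation.

```python
# Minimal receding-horizon (model predictive) control sketch for a SISO
# first-order plant x[k+1] = A*x[k] + B*u[k]. Plant parameters, horizon,
# and the grid-search "optimizer" are illustrative assumptions.

A, B = 0.9, 0.5          # internal prediction model (assumed known)
HORIZON = 5
SETPOINT = 1.0
CANDIDATES = [u / 10.0 for u in range(-20, 21)]  # inputs in [-2, 2]

def predict_cost(x, u):
    # Hold the input constant over the horizon, accumulate tracking error
    # plus a small input penalty.
    cost = 0.0
    for _ in range(HORIZON):
        x = A * x + B * u
        cost += (SETPOINT - x) ** 2 + 0.01 * u ** 2
    return cost

x = 0.0
for step in range(30):
    u = min(CANDIDATES, key=lambda c: predict_cost(x, c))  # optimize
    x = A * x + B * u                                      # apply first move
print(abs(x - SETPOINT) < 0.1)
```

A production controller would replace the grid search with a proper optimizer and, as in the paper, regenerate the model every sampling instant.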

  1. Modeling a multivariable reactor and on-line model predictive control.

    Science.gov (United States)

    Yu, D W; Yu, D L

    2005-10-01

    A nonlinear first-principles model is developed for a laboratory-scale multivariable chemical reactor rig in this paper and on-line model predictive control (MPC) is implemented on the rig. The reactor has three variables with nonlinear dynamics (temperature, pH, and dissolved oxygen) and is therefore used as a pilot system for the biochemical industry. A nonlinear discrete-time model is derived for each of the three output variables and their model parameters are estimated from real data using an adaptive optimization method. The developed model is used in a nonlinear MPC scheme. An accurate multistep-ahead prediction is obtained for MPC, where the extended Kalman filter is used to estimate unknown system states. The on-line control is implemented and a satisfactory tracking performance is achieved. The MPC is compared with three decentralized PID controllers and the advantage of the nonlinear MPC over the PID is clearly shown.

  2. SVM model for estimating the parameters of the probability-integral method of predicting mining subsidence

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hua; WANG Yun-jia; LI Yong-feng

    2009-01-01

    A new mathematical model to estimate the parameters of the probability-integral method for mining subsidence prediction is proposed. Based on least squares support vector machine (LS-SVM) theory, it is capable of improving the precision and reliability of mining subsidence prediction. Many of the geological and mining factors involved are related in a nonlinear way. The new model is based on statistical learning theory (SLT) and empirical risk minimization (ERM) principles. Typical data collected from observation stations were used for the learning and training samples. The calculated results from the LS-SVM model were compared with the prediction results of a back propagation neural network (BPNN) model. The results show that the parameters were more precisely predicted by the LS-SVM model than by the BPNN model. The LS-SVM model was faster in computation and had better generalized performance. It provides a highly effective method for calculating the prediction parameters of the probability-integral method.
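The core of an LS-SVM regressor is a single linear system rather than a quadratic program: solve [[0, 1ᵀ], [1, K + I/γ]][b; α] = [0; y] and predict with ŷ(x) = Σᵢ αᵢ K(x, xᵢ) + b. A minimal sketch of that dual form follows; the RBF kernel, γ value, and toy data are assumptions for illustration, not the paper's subsidence parameters.

```python
# Least squares SVM regression in its basic dual form (illustrative data).
import math

def rbf(a, b, sigma=1.0):
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def solve(M, v):
    # Plain Gaussian elimination with partial pivoting for small systems.
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lssvm_fit(xs, ys, gamma=1000.0):
    # Build and solve the KKT system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y].
    n = len(xs)
    top = [0.0] + [1.0] * n
    rows = [top] + [[1.0] + [rbf(xs[i], xs[j]) + (1.0 / gamma if i == j else 0.0)
                             for j in range(n)] for i in range(n)]
    sol = solve(rows, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.8, 0.9, 0.1]        # hypothetical observations
f = lssvm_fit(xs, ys)
print(all(abs(f(x) - y) < 0.1 for x, y in zip(xs, ys)))
```

In LS-SVM the training residual at each point equals αᵢ/γ, so large γ yields near-interpolation, which is what the check above exercises.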

  3. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2012-03-01

    Full Text Available Cyber operations denote the response of governments and organisations to cyber crime, terrorism, and warfare. To date, cyber operations have been.... This could include responding to an (impending) attack by counter-attacking or by proactively neutralizing the source of an impending attack. A good starting point to improving understanding would be to model the offensive cyber operations process...

  4. Assessing effects of variation in global climate data sets on spatial predictions from climate envelope models

    Science.gov (United States)

    Romanach, Stephanie; Watling, James I.; Fletcher, Robert J.; Speroterra, Carolina; Bucklin, David N.; Brandt, Laura A.; Pearlstine, Leonard G.; Escribano, Yesenia; Mazzotti, Frank J.

    2014-01-01

    Climate change poses new challenges for natural resource managers. Predictive modeling of species–environment relationships using climate envelope models can enhance our understanding of climate change effects on biodiversity, assist in assessment of invasion risk by exotic organisms, and inform life-history understanding of individual species. While increasing interest has focused on the role of uncertainty in future conditions on model predictions, models also may be sensitive to the initial conditions on which they are trained. Although climate envelope models are usually trained using data on contemporary climate, we lack systematic comparisons of model performance and predictions across alternative climate data sets available for model training. Here, we seek to fill that gap by comparing variability in predictions between two contemporary climate data sets to variability in spatial predictions among three alternative projections of future climate. Overall, correlations between monthly temperature and precipitation variables were very high for both contemporary and future data. Model performance varied across algorithms, but not between two alternative contemporary climate data sets. Spatial predictions varied more among alternative general-circulation models describing future climate conditions than between contemporary climate data sets. However, we did find that climate envelope models with low Cohen's kappa scores made more discrepant spatial predictions between climate data sets for the contemporary period than did models with high Cohen's kappa scores. We suggest conservation planners evaluate multiple performance metrics and be aware of the importance of differences in initial conditions for spatial predictions from climate envelope models.

  5. Prediction model for permeability index by integrating case-based reasoning with adaptive particle swarm optimization

    Institute of Scientific and Technical Information of China (English)

    Zhu Hongqiu; Yang Chunhua; Gui Weihua

    2009-01-01

    To effectively predict the permeability index of the smelting process in the imperial smelting furnace, an intelligent prediction model is proposed. It integrates case-based reasoning (CBR) with adaptive particle swarm optimization (PSO). The number of nearest neighbors and the weighted features vector are optimized online using the adaptive PSO to improve the prediction accuracy of CBR. The adaptive inertia weight and mutation operation are used to overcome the premature convergence of the PSO. The proposed method is validated and compared with the basic weighted CBR. The results show that the proposed model has higher prediction accuracy and better performance than the basic CBR model.

  6. Comparing predicted and observed spatial boundaries of geologic phenomena: Automated Proximity and Conformity Analysis applied to ice sheet reconstructions

    Science.gov (United States)

    Napieralski, Jacob; Li, Yingkui; Harbor, Jon

    2006-02-01

    Comparing predicted with observed geologic data is a central element of many aspects of research in the geosciences, e.g., comparing numerical ice sheet models with geomorphic data to test ice sheet model parameters and accuracy. However, the ability to verify predictions using empirical data has been limited by the lack of objective techniques that provide systematic comparison and statistical assessment of the goodness of correspondence between predictions of spatial and temporal patterns of geologic phenomena and the field evidence. Much of this problem arises from the inability to quantify the level of agreement between straight or curvilinear features, such as between the modeled extent of some geologic phenomenon and the field evidence for the extent of the phenomenon. Automated Proximity and Conformity Analysis (APCA) addresses this challenge using a system of Geographic Information System-based buffering that determines the general proximity and parallel conformity between linear features. APCA results indicate which modeled output fits empirical data, based on the distance and angle between features. As a result, various model outputs can be sorted according to overall level of agreement by comparison with one or multiple features from field evidence, based on proximity and conformity values. In an example application drawn from glacial geomorphology, APCA is integrated into an overall model verification process that includes matching modeled ice sheets to known marginal positions and ice flow directions, among other parameters. APCA is not limited to ice sheet or glacier models, but can be applied to many geoscience areas where the extent or geometry of modeled results need to be compared against field observations, such as debris flows, tsunami run-out, lava flows, or flood extents.
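A rough sketch of the buffering idea behind APCA: place a buffer of fixed width around the observed margin and measure what fraction of the predicted margin's vertices fall inside it. The coordinates and buffer width below are made-up illustration values, and point-to-vertex distance stands in for true GIS buffering.

```python
# Proximity part of a buffering comparison between two polylines
# (illustrative coordinates; real APCA uses GIS buffers and also conformity).

def min_dist(p, line):
    # Distance from point p to the nearest vertex of the other polyline.
    return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 for q in line)

observed  = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.0), (3.0, -0.1)]   # field margin
predicted = [(0.0, 0.3), (1.0, 0.4), (2.0, 0.2), (3.0, 0.3)]    # model margin

buffer_width = 0.5
inside = sum(min_dist(p, observed) <= buffer_width for p in predicted)
print(inside / len(predicted))  # proximity score in [0, 1]
```

Sweeping the buffer width and adding an angle-based conformity term would let alternative model outputs be ranked, as the record describes.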

  7. RegPredict: an integrated system for regulon inference in prokaryotes by comparative genomics approach

    Energy Technology Data Exchange (ETDEWEB)

    Novichkov, Pavel S.; Rodionov, Dmitry A.; Stavrovskaya, Elena D.; Novichkova, Elena S.; Kazakov, Alexey E.; Gelfand, Mikhail S.; Arkin, Adam P.; Mironov, Andrey A.; Dubchak, Inna

    2010-05-26

    RegPredict web server is designed to provide comparative genomics tools for the reconstruction and analysis of microbial regulons. The server allows the user to rapidly generate reference sets of regulons and regulatory motif profiles in a group of prokaryotic genomes. The new concept of a cluster of co-regulated orthologous operons allows the user to distribute the analysis of large regulons and to perform the comparative analysis of multiple clusters independently. Two major workflows currently implemented in RegPredict are: (i) regulon reconstruction for a known regulatory motif and (ii) ab initio inference of a novel regulon using several scenarios for the generation of starting gene sets. RegPredict provides a comprehensive collection of manually curated positional weight matrices of regulatory motifs. It is based on genomic sequences, and on ortholog and operon predictions from MicrobesOnline. An interactive web interface of RegPredict integrates and presents diverse genomic and functional information about the candidate regulon members from several web resources. RegPredict is freely accessible at http://regpredict.lbl.gov.

  8. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.;

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...... problem. Moreover, to reduce the computation time and improve the controller's performance, a fuzzy predictive filter is introduced. With the purpose of testing the developed EMPC, a simulation controlling the temperature levels of an intelligent office building (PowerFlexHouse), with and without fuzzy...

  9. Predictive modeling and reducing cyclic variability in autoignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  10. Lattice Boltzmann modeling of directional wetting: Comparing simulations to experiments

    NARCIS (Netherlands)

    Jansen, H.P.; Sotthewes, K.; Swigchem, van J.; Zandvliet, H.J.W.; Kooij, E.S.

    2013-01-01

    Lattice Boltzmann Modeling (LBM) simulations were performed on the dynamic behavior of liquid droplets on chemically striped patterned surfaces, ultimately with the aim to develop a predictive tool enabling reliable design of future experiments. The simulations accurately mimic experimental results,

  11. Comparative molecular modelling of biologically active sterols

    Science.gov (United States)

    Baran, Mariusz; Mazerski, Jan

    2015-04-01

    Membrane sterols are targets for a clinically important antifungal agent - amphotericin B. The relatively specific antifungal action of the drug is based on a stronger interaction of amphotericin B with fungal ergosterol than with mammalian cholesterol. Conformational space occupied by six sterols has been defined using the molecular dynamics method to establish if the conformational features correspond to the preferential interaction of amphotericin B with ergosterol as compared with cholesterol. The compounds studied were chosen on the basis of structural features characteristic for cholesterol and ergosterol and on available experimental data on the ability to form complexes with the antibiotic. Statistical analysis of the data obtained has been performed. The results show similarity of the conformational spaces occupied by all the sterols tested. This suggests that the conformational differences of sterol molecules are not the major feature responsible for the differential sterol - drug affinity.

  12. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Full Text Available Abstract Background Computational models of protein structures have proved useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of a search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results For our dataset of 615 search models, the real local accuracy of a model increases the MR success ratio by 101% compared to corresponding polyalanine templates. On the contrary, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of the 615 models, a workflow combining MR with predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict such accuracy MetaMQAPclust, a “clustering MQAP”, was used. Conclusions Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict if an MR search with a template-based model for a given template is likely to find the correct solution.

  13. Intelligent predictive model of ventilating capacity of imperial smelt furnace

    Institute of Scientific and Technical Information of China (English)

    唐朝晖; 胡燕瑜; 桂卫华; 吴敏

    2003-01-01

    In order to know the ventilating capacity of the imperial smelt furnace (ISF), and to increase the output of lead, an intelligent modeling method based on gray theory and artificial neural networks (ANN) is proposed, in which the weight values in the integrated model can be adjusted automatically. An intelligent predictive model of the ventilating capacity of the ISF is established and analyzed by this method. The simulation results and industrial applications demonstrate that the predictive model is close to the real plant, with a relative predictive error of 0.72%, which is 50% less than that of the single model, leading to a notable increase in the output of lead.

  14. A Prediction Model of the Capillary Pressure J-Function

    Science.gov (United States)

    Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.

    2016-01-01

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model is presented based on a capillary pressure model, in which the J-function takes the form of a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative. PMID:27603701
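A power-function form such as J(Sw) = a·Sw⁻ᵇ can be fitted by linear regression in log-log space, since log J = log a − b·log Sw. The sketch below uses synthetic samples generated from assumed coefficients rather than the paper's data, just to show that the fit recovers them.

```python
# Fit J(Sw) = a * Sw**(-b) by ordinary least squares in log-log space.
# The saturation/J samples are synthetic, generated from a = 0.5, b = 1.2.
import math

sw = [0.2, 0.4, 0.6, 0.8, 1.0]
j = [0.5 * s ** -1.2 for s in sw]          # synthetic "measurements"

# log J = log a - b * log Sw  ->  linear regression on (log Sw, log J)
xs = [math.log(s) for s in sw]
ys = [math.log(v) for v in j]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
a_hat = math.exp(my - slope * mx)
b_hat = -slope
print(round(a_hat, 3), round(b_hat, 3))   # recovers 0.5 and 1.2
```

With noisy laboratory data the same regression gives least-squares estimates of a and b instead of an exact recovery.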

  15. Adaptation of Predictive Models to PDA Hand-Held Devices

    Directory of Open Access Journals (Sweden)

    Lin, Edward J

    2008-01-01

    Full Text Available Prediction models using multiple logistic regression are appearing with increasing frequency in the medical literature. Problems associated with these models include the complexity of computations when applied in their pure form, and lack of availability at the bedside. Personal digital assistant (PDA) hand-held devices equipped with spreadsheet software offer the clinician a readily available and easily applied means of applying predictive models at the bedside. The purposes of this article are to briefly review regression as a means of creating predictive models and to describe a method of choosing and adapting logistic regression models to emergency department (ED) clinical practice.
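The spreadsheet adaptation amounts to evaluating the fitted logistic equation p = 1 / (1 + exp(−(b₀ + b₁x₁ + … + bₙxₙ))) cell by cell. A sketch of that evaluation follows; the intercept, coefficients, and patient values are invented for illustration and do not come from any published model.

```python
# Evaluate a fitted logistic regression model, spreadsheet-style.
import math

def predicted_probability(coeffs, intercept, values):
    # z = b0 + sum(bi * xi); p = logistic(z)
    z = intercept + sum(b * x for b, x in zip(coeffs, values))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical model: intercept -4.0, coefficients for age (years)
# and a 0/1 risk-factor flag.
p = predicted_probability([0.05, 1.2], -4.0, [60.0, 1.0])
print(round(p, 3))  # -> 0.55
```

In a spreadsheet the same computation is one formula per predictor column plus a final logistic cell, which is what makes bedside use practical.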

  16. A model to predict the power output from wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Riso National Lab., Roskilde (Denmark)

    1997-12-31

    This paper will describe a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Riso PARK model. Because of the preliminary nature of the results, they will not be given. However, similar results from Europe will be given.

  17. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  19. Development of residual stress prediction model in pipe weldment

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Yun Yong; Lim, Se Young; Choi, Kang Hyeuk; Cho, Young Sam; Lim, Jae Hyuk [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2002-03-15

    When Leak Before Break(LBB) concepts is applied to high energy piping of nuclear power plants, residual weld stresses is a important variable. The main purpose of his research is to develop the numerical model which can predict residual weld stresses. Firstly, basic theories were described which need to numerical analysis of welding parts. Before the analysis of pipe, welding of a flat plate was analyzed and compared. Appling the data of used pipes, thermal/mechanical analysis were accomplished and computed temperature gradient and residual stress distribution. For thermal analysis, proper heat flux was regarded as the heat source and convection/radiation heat transfer were considered at surfaces. The residual stresses were counted from the computed temperature gradient and they were compared and verified with a result of another research.

  20. comparative analysis of two mathematical models for prediction of ...

    African Journals Online (AJOL)

    Where F = compressive strength of concrete, c/w = cement/water ratio, and A, B = empirical constants. ... The factor space is a (q − 1)-dimensional simplex lattice. ...

  1. Model Predictive Control of Buoy Type Wave Energy Converter

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Sichani, Mahdi Teimouri; Mirzaei, Mahmood

    2014-01-01

    The paper introduces the Wavestar wave energy converter and presents the implementation of a model predictive controller that maximizes the power generation. The ocean wave power is extracted using a hydraulic electric generator which is connected to an oscillating buoy. The power generator is an additive device attached to the buoy which may include damping, stiffness or similar terms and hence will affect the dynamic motion of the buoy. Therefore such a device can be seen as a closed-loop controller. The objective of the wave energy converter is to harvest as much energy from the sea as possible. This approach is then taken into account and an MPC controller is designed for a model WEC and implemented on a numerical example. Further, the power outtake of this controller is compared to the optimal controller as an indicator of the performance of the designed controller.

  2. REALIGNED MODEL PREDICTIVE CONTROL OF A PROPYLENE DISTILLATION COLUMN

    Directory of Open Access Journals (Sweden)

    A. I. Hinojosa

    Full Text Available Abstract In the process industry, advanced controllers usually aim at an economic objective, which typically requires closed-loop stability and constraint satisfaction. In this paper, the application of MPC in the optimization structure of an industrial Propylene/Propane (PP) splitter is tested with a controller based on a state-space model, which is suitable for heavily disturbed environments. The simulation platform is based on the integration of the commercial dynamic simulator Dynsim® and the rigorous steady-state optimizer ROMeo® with the real-time facilities of Matlab. The predictive controller is the Infinite Horizon Model Predictive Control (IHMPC), based on a state-space model that does not require the use of a state observer because the non-minimal state is built from past inputs and outputs. The controller considers the existence of zone control of the outputs and optimizing targets for the inputs. We verify that the controller is efficient in controlling the propylene distillation system in a disturbed scenario when compared with a conventional controller based on a state observer. The simulation results show good performance in terms of stability of the controller and rejection of large disturbances in the composition of the feed of the propylene distillation column.

  4. Exchange Rate Prediction using Neural – Genetic Model

    Directory of Open Access Journals (Sweden)

    MECHGOUG Raihane

    2012-10-01

    Full Text Available Neural networks have been used successfully for exchange rate forecasting. However, because a large number of parameters must be estimated empirically, selecting an appropriate neural network architecture for an exchange rate forecasting problem is not a simple task. Researchers often overlook the effect of neural network parameters on the performance of neural network forecasting. The performance of a neural network is critically dependent on the learning algorithm, the network architecture and the choice of the control parameters. Even when a suitable setting of parameters (weights) can be found, the ability of the resulting network to generalize to data not seen during learning may be far from optimal. For these reasons it seems logical and attractive to apply genetic algorithms, which may provide a useful tool for automating the design of neural networks. The empirical results on foreign exchange rate prediction indicate that the proposed hybrid model exhibits effectively improved accuracy when compared with some other time series forecasting models.
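    As a rough illustration of using a genetic algorithm to automate one neural-network design choice, the sketch below evolves a single hyperparameter (hidden-layer size) against a stand-in fitness function. In the paper's setting the fitness would come from training and validating a network; here the population size, operators, and the fitness peak at 16 units are all hypothetical assumptions.

```python
# Minimal genetic-algorithm sketch (not the paper's hybrid model).
# The fitness function is a hypothetical proxy for validation accuracy.
import random

def fitness(hidden_units):
    # Hypothetical proxy: accuracy peaks at 16 hidden units.
    return 1.0 - abs(hidden_units - 16) / 64.0

def evolve(pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [rng.randint(1, 64) for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: best of 3 random individuals.
        parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        pop = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            child1 = (a + b) // 2                      # blend crossover
            child2 = max(1, a + rng.randint(-2, 2))    # small mutation
            pop += [child1, child2]
    return max(pop, key=fitness)

best = evolve()
```

    With a fixed random seed the search is reproducible, which is useful when comparing evolved architectures across runs.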

  5. Predicting Career Advancement with Structural Equation Modelling

    Science.gov (United States)

    Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia

    2012-01-01

    Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…

  7. Modeling and prediction of surgical procedure times

    NARCIS (Netherlands)

    P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)

    2009-01-01

    Accurate prediction of medical operation times is of crucial importance for cost-efficient operating room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these factors...

  8. Predictive functional control based on fuzzy T-S model for HVAC systems temperature control

    Institute of Scientific and Technical Information of China (English)

    Hongli L(U); Lei JIA; Shulan KONG; Zhaosheng ZHANG

    2007-01-01

    In heating, ventilating and air-conditioning (HVAC) systems, there exist severe nonlinearity, time-varying behavior, disturbances and uncertainties. A new predictive functional control based on a Takagi-Sugeno (T-S) fuzzy model was proposed to control HVAC systems. The T-S fuzzy model of the stabilized controlled process was obtained using the least-squares method; then, on the basis of the global linear predictive model from the T-S fuzzy model, the process was controlled by the predictive functional controller. In particular, the feedback regulation part was developed to compensate for uncertainties of the fuzzy predictive model. Finally, simulation test results in HVAC system control applications showed that the proposed fuzzy model predictive functional control improves tracking performance and robustness. Compared with the conventional PID controller, this control strategy has the advantages of less overshoot and shorter settling time.
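    The blending at the heart of a Takagi-Sugeno model can be shown in a few lines. This is a generic sketch with hypothetical rule centres and local models, not the paper's identified HVAC model: each rule contributes a local linear output, weighted by its membership degree.

```python
# Generic Takagi-Sugeno fuzzy model sketch (hypothetical rules, not the
# paper's HVAC model): the global output is a membership-weighted blend
# of local linear models.
import math

def ts_model(x, rules):
    """rules: list of (centre, width, a, b) with local model y = a*x + b."""
    w = [math.exp(-((x - c) / s) ** 2) for c, s, _, _ in rules]  # memberships
    y = [a * x + b for _, _, a, b in rules]                      # local outputs
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Two hypothetical rules: "low temperature" and "high temperature" regimes.
rules = [(18.0, 4.0, 0.5, 2.0), (28.0, 4.0, 0.2, 9.0)]
```

    Near a rule centre the global output approaches that rule's local model; between centres it interpolates smoothly, which is what makes the global model amenable to linear predictive control.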

  9. Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.

    Science.gov (United States)

    Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh

    2014-07-01

    This study develops a procedure that is related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty that is associated with different model structures with varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performed two-stage Monte-Carlo simulations to ensure predictive accuracy by obtaining behavior parameter sets, and then the estimation of CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model; WWQM) were compared based on data that were collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure because in this case, the more simplistic representation (first-order K-C model) of reality results in a higher uncertainty in the prediction made by the model. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management.
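    The GLUE logic in the abstract can be illustrated with a toy model. This is a conceptual sketch under stated assumptions (a hypothetical first-order removal model, observation, and behavioural threshold), not the paper's CV-GLUE procedure: sample parameters by Monte Carlo, retain "behavioural" sets whose prediction error beats a threshold, then summarize the spread of the retained predictions as a coefficient of variation.

```python
# Conceptual GLUE sketch (not the paper's CV-GLUE implementation).
# Model, observation, and threshold below are hypothetical.
import math
import random
import statistics

def model(k, c_in=10.0):
    """Toy first-order removal: outlet concentration for rate constant k."""
    return c_in * math.exp(-k)

observed = 4.5                                            # hypothetical observation
rng = random.Random(0)
samples = [rng.uniform(0.1, 2.0) for _ in range(2000)]    # Monte-Carlo draws
behavioural = [k for k in samples if abs(model(k) - observed) < 0.5]
preds = [model(k) for k in behavioural]
cv = statistics.stdev(preds) / statistics.mean(preds)     # characteristic CV
```

    A smaller CV over the behavioural predictions indicates lower predictive uncertainty, which is the quantity the paper uses to compare model structures of differing complexity.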

  10. Prediction Model of Sewing Technical Condition by Grey Neural Network

    Institute of Scientific and Technical Information of China (English)

    DONG Ying; FANG Fang; ZHANG Wei-yuan

    2007-01-01

    The grey system theory and the artificial neural network technology were applied to predict the sewing technical condition. The representative parameters, such as needle and stitch, were selected. A prediction model was established based on the different fabrics' mechanical properties measured by the KES instrument. Grey relational degree analysis was applied to choose the input parameters of the neural network. The results showed that the prediction model has good precision. The average relative error was 4.08% for needle and 4.25% for stitch.

  11. Active diagnosis of hybrid systems - A model predictive approach

    OpenAIRE

    2009-01-01

    A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both the normal and the faulty model of the system; then, at each time step, an optimization problem is solved with the objective of maximizing the difference between the predicted normal and faulty outputs, constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeated...

  12. Seasonal-to-decadal predictability in the Nordic Seas and Arctic with the Norwegian Climate Prediction Model

    Science.gov (United States)

    Counillon, Francois; Kimmritz, Madlen; Keenlyside, Noel; Wang, Yiguo; Bethke, Ingo

    2017-04-01

    The Norwegian Climate Prediction Model combines the Norwegian Earth System Model and the Ensemble Kalman Filter data assimilation method. The prediction skills of different versions of the system (with 30 members) are tested in the Nordic Seas and the Arctic region. Comparing the hindcasts branched from an SST-only assimilation run with a free ensemble run of 30 members, we are able to dissociate the predictability rooted in the external forcing from the predictability harvested from SST-derived initial conditions. The latter adds predictability in the North Atlantic subpolar gyre and the Nordic Seas regions, and overall there is very little degradation or forecast drift. Combined assimilation of SST and T-S profiles further improves the prediction skill in the Nordic Seas and into the Arctic. These lead to multi-year predictability in the high latitudes. Ongoing developments in strongly coupled assimilation (ocean and sea ice) of ice concentration in an idealized twin experiment will be shown, as a way to further enhance prediction skill in the Arctic.

  13. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward. Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  14. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of models.

  18. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  19. Prediction of Frost Occurrences Using Statistical Modeling Approaches

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2016-01-01

    Full Text Available We developed frost prediction models for spring in Korea using logistic regression and decision tree techniques. Hit Rate (HR), Probability of Detection (POD), and False Alarm Rate (FAR) from both models were calculated and compared. Threshold values for the logistic regression models were selected to maximize HR and POD and minimize FAR for each station, and the split for the decision tree models was stopped when the change in entropy was relatively small. Average values for the logistic regression and decision tree techniques, respectively, were 0.92 and 0.91 for HR, 0.78 and 0.80 for POD, and 0.22 and 0.28 for FAR. The average numbers of selected explanatory variables were 5.7 and 2.3, respectively. Fewer explanatory variables can be more appropriate for operational activities, providing a timely warning for the prevention of frost damage to agricultural crops. We concluded that the decision tree model can be more useful for a timely warning system. It is recommended that the models be improved to reflect local topographical features.
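    The three verification scores in the abstract follow directly from a 2x2 contingency table. The sketch below uses hypothetical counts, and assumes FAR here means the false alarm ratio (false alarms divided by all predicted events), one common forecasting convention.

```python
# Forecast verification scores from a 2x2 contingency table.
# Counts are hypothetical, not the paper's data; FAR is taken as the
# false alarm ratio, one common convention for this abbreviation.
def scores(hits, false_alarms, misses, correct_negatives):
    n = hits + false_alarms + misses + correct_negatives
    hr = (hits + correct_negatives) / n            # Hit Rate (fraction correct)
    pod = hits / (hits + misses)                   # Probability of Detection
    far = false_alarms / (hits + false_alarms)     # False Alarm Ratio
    return hr, pod, far
```

    For example, 80 hits, 20 false alarms, 20 misses, and 880 correct negatives give HR = 0.96, POD = 0.80, and FAR = 0.20, comparable in magnitude to the averages the study reports.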

  20. Comparing the predictive value of multiple cognitive, affective, and motor tasks after rodent traumatic brain injury.

    Science.gov (United States)

    Zhao, Zaorui; Loane, David J; Murray, Michael G; Stoica, Bogdan A; Faden, Alan I

    2012-10-10

    Controlled cortical impact injury (CCI) is a widely used, clinically relevant model of traumatic brain injury (TBI). Although functional outcomes have been used for years in this model, little work has been done to compare the predictive value of various cognitive and sensorimotor assessment tests, singly or in combination. Such information would be particularly useful for assessing mechanisms of injury or therapeutic interventions. Following isoflurane anesthesia, C57BL/6 mice were subjected to sham, mild (5.0 m/sec), moderate (6.0 m/sec), or severe (7.5 m/sec) CCI. A battery of behavioral tests was evaluated and compared, including the standard Morris water maze (sMWM), reversal Morris water maze (rMWM), novel object recognition (NOR), passive avoidance (PA), tail suspension (TS), beam walk (BW), and open-field locomotor activity. The BW task, performed at post-injury days (PID) 0, 1, 3, 7, 14, 21, and 28, showed good discrimination as a function of injury severity. The sMWM and rMWM tests (PID 14-23), as well as NOR (PID 24 and 25), effectively discriminated spatial and novel object learning and memory across injury severity levels. Notably, the rMWM showed the greatest separation between mild and moderate/severe injury. PA (PID 27 and 28) and TS (PID 24) also reflected differences across injury levels, but to a lesser degree. We also compared individual functional measures with histological outcomes such as lesion volume and neuronal cell loss across anatomical regions. In addition, we created a novel composite behavioral score index from individual complementary behavioral scores, and it provided superior discrimination across injury severities compared to individual tests. In summary, this study demonstrates the feasibility of using a larger number of complementary functional outcome behavioral tests than those traditionally employed to follow post-traumatic recovery after TBI, and suggests that the composite score may be a helpful tool for screening...
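    One simple way to form a composite index like the one described is to average within-test z-scores across tests. This is only an assumed construction for illustration; the abstract does not give the study's exact weighting, and the data below are hypothetical.

```python
# Assumed composite-index construction (the paper's exact weighting is
# not given in the abstract): average each animal's within-test z-scores.
# Scores are assumed oriented so that higher means a greater deficit.
import statistics

def composite_score(test_results):
    """test_results: dict test_name -> list of per-animal scores.
    Returns a per-animal composite: the mean of within-test z-scores."""
    n = len(next(iter(test_results.values())))
    zs = {}
    for name, vals in test_results.items():
        mu, sd = statistics.mean(vals), statistics.pstdev(vals)
        zs[name] = [(v - mu) / sd if sd else 0.0 for v in vals]
    return [statistics.mean(zs[name][i] for name in zs) for i in range(n)]
```

    Standardizing within each test before averaging keeps a test with a large raw range (e.g. water-maze latency in seconds) from dominating one with a small range (e.g. beam-walk foot faults).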