WorldWideScience

Sample records for model predictions generally

  1. Generalized Predictive Control and Neural Generalized Predictive Control

    Directory of Open Access Journals (Sweden)

    Sadhana CHIDRAWAR

    2008-12-01

Full Text Available Since Model Predictive Control (MPC) relies on a model of the plant, a Generalized Predictive Control scheme using a multilayer feed-forward neural network as the plant's linear model is presented. By using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced compared with other techniques. This paper presents a detailed derivation of Generalized Predictive Control and Neural Generalized Predictive Control with Newton-Raphson as the minimization algorithm. The performance has been tested on three separate systems. Simulation results show the effect of the neural network on Generalized Predictive Control. The performance comparison of these three system configurations is given in terms of ISE and IAE.
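The computational appeal of Newton-Raphson here can be illustrated on a toy one-step GPC cost. This is a minimal sketch: the plant gains, control weight, and scalar one-step horizon are hypothetical, and a full NGPC would differentiate through the neural network model rather than a known linear plant.

```python
# Minimal sketch of a GPC-style control move computed with Newton-Raphson.
# Plant model (assumed for illustration): y[k+1] = a*y[k] + b*u[k];
# one-step cost J(u) = (r - (a*y + b*u))**2 + lam*u**2.

a, b, lam = 0.9, 0.5, 0.1   # hypothetical plant gains and control weight
y, r = 1.0, 2.0             # current output and setpoint

def grad(u):                # dJ/du
    return -2.0 * b * (r - (a * y + b * u)) + 2.0 * lam * u

def hess(u):                # d2J/du2 (constant for a quadratic cost)
    return 2.0 * b * b + 2.0 * lam

u = 0.0
for _ in range(5):          # Newton-Raphson; a quadratic cost converges in one step
    u -= grad(u) / hess(u)

print(round(u, 4))          # → 1.5714
```

For a quadratic cost the Newton step lands on the analytic minimizer u = b(r − ay)/(b² + λ) immediately, which is the convergence advantage the abstract refers to; with a neural plant model the cost is non-quadratic and a few iterations are needed.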

  2. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

The Ethiopian economy and population are strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985 to 2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.

  3. Explained variation and predictive accuracy in general parametric statistical models: the role of model misspecification

    DEFF Research Database (Denmark)

    Rosthøj, Susanne; Keiding, Niels

    2004-01-01

When studying a regression model, measures of explained variation are used to assess the degree to which the covariates determine the outcome of interest. Measures of predictive accuracy are used to assess the accuracy of the predictions based on the covariates and the regression model. We give a detailed and general introduction to the two measures and the estimation procedures. The framework we set up allows for a study of the effect of misspecification on the quantities estimated. We also introduce a generalization to survival analysis.

  4. Artificial neural network models for prediction of cardiovascular autonomic dysfunction in general Chinese population

    Science.gov (United States)

    2013-01-01

Background The present study aimed to develop an artificial neural network (ANN) based prediction model for cardiovascular autonomic (CA) dysfunction in the general population. Methods We analyzed a previous dataset based on a population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN analysis. The performance of these prediction models was evaluated in the validation set. Results Univariate analysis indicated that 14 risk factors showed a statistically significant association with CA dysfunction (P < 0.05). The mean area under the receiver-operating curve was 0.762 (95% CI 0.732–0.793) for the prediction model developed using ANN analysis. The mean sensitivity, specificity, and positive and negative predictive values of the prediction model were 0.751, 0.665, 0.330 and 0.924, respectively. All HL statistics were less than 15.0. Conclusion ANN is an effective tool for developing prediction models with high value for predicting CA dysfunction among the general population. PMID:23902963
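The four evaluation metrics reported above all derive from a 2×2 confusion matrix. A short sketch with hypothetical counts (not the study's data) shows how they are computed:

```python
# Sketch of sensitivity, specificity, PPV and NPV from a 2x2 confusion
# matrix; the counts below are hypothetical, not the study's data.
tp, fp, fn, tn = 90, 60, 30, 420

sensitivity = tp / (tp + fn)   # recall among true CA-dysfunction cases
specificity = tn / (tn + fp)   # recall among true non-cases
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(sensitivity, specificity, ppv, round(npv, 3))  # → 0.75 0.875 0.6 0.933
```

Note the asymmetry visible in the study's own numbers (PPV 0.330 vs. NPV 0.924): when the condition is relatively uncommon, even a model with decent sensitivity and specificity yields a modest PPV and a high NPV.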

  5. Bayesian prediction of spatial count data using generalized linear mixed models

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

    2002-01-01

    Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...
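The Bayesian machinery described above can be sketched with a deliberately tiny stand-in model. Everything below is an illustrative assumption: the counts, the informative prior values, and the Poisson-only structure (the actual model adds spatially correlated random effects).

```python
import numpy as np

# Toy sketch of Bayesian prediction for count data via Metropolis MCMC:
# y_i ~ Poisson(exp(beta)), with an informative Normal prior on beta
# standing in for a prior elicited from an earlier, extensive data set.
rng = np.random.default_rng(0)
y = np.array([3, 5, 4, 6, 2])           # hypothetical weed counts
mu0, sd0 = 1.5, 0.5                     # informative prior on beta

def log_post(beta):                     # log posterior up to a constant
    return np.sum(y * beta - np.exp(beta)) - 0.5 * ((beta - mu0) / sd0) ** 2

beta, chain = mu0, []
for _ in range(5000):                   # Metropolis random walk
    prop = beta + 0.2 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(beta):
        beta = prop
    chain.append(beta)

rate = np.exp(np.array(chain[1000:]))   # posterior draws of the mean count
print(round(rate.mean(), 1))            # posterior predictive mean count
```

The informative prior pulls the posterior toward the previously observed intensity, which is exactly the benefit the abstract describes for sparsely sampled fields.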

  6. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    Science.gov (United States)

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  7. Entanglement model of homeopathy as an example of generalized entanglement predicted by weak quantum theory.

    Science.gov (United States)

    Walach, H

    2003-08-01

Homeopathy is scientifically disputed, both for its lack of consistent empirical findings and, even more, for its lack of a sound theoretical model to explain its purported effects. This paper attempts to introduce an explanatory idea based on a generalized version of quantum mechanics (QM), the weak quantum theory (WQT). WQT uses the algebraic formalism of QM proper but drops some restrictions and definitions typical of QM. This results in a general axiomatic framework similar to QM, but more generalized and applicable to all possible systems. Most notably, WQT predicts entanglement, which in QM is known as Einstein-Podolsky-Rosen (EPR) correlatedness within quantum systems. According to WQT, this entanglement is not tied only to quantum systems, but is to be expected whenever a global and a local variable describing a system are complementary. This idea is used here to reconstruct homeopathy as an exemplification of generalized entanglement as predicted by WQT. It transpires that homeopathy uses two instances of generalized entanglement: one between the remedy and the original substance (potentiation principle) and one between the individual symptoms of a patient and the general symptoms of a remedy picture (similarity principle). By bringing these two elements together, double entanglement ensues, which is reminiscent of cryptographic and teleportation applications of entanglement in QM proper. Homeopathy could be a macroscopic analogue to quantum teleportation. The model is exemplified and some predictions are derived, which make it possible to test the model. Copyright 2003 S. Karger GmbH, Freiburg

  8. Multi-year predictability in a coupled general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Power, Scott; Colman, Rob [Bureau of Meteorology Research Centre, Melbourne, VIC (Australia)

    2006-02-01

Multi-year to decadal variability in a 100-year integration of a BMRC coupled atmosphere-ocean general circulation model (CGCM) is examined. The fractional contribution made by the decadal component generally increases with depth and latitude away from surface waters in the equatorial Indo-Pacific Ocean. The relative importance of decadal variability is enhanced in off-equatorial "wings" in the subtropical eastern Pacific. The model and observations exhibit "ENSO-like" decadal patterns. Analytic results are derived, which show that the patterns can, in theory, occur in the absence of any predictability beyond ENSO time-scales. In practice, however, modification to this stochastic view is needed to account for robust differences between ENSO-like decadal patterns and their interannual counterparts. An analysis of variability in the CGCM, a wind-forced shallow water model, and a simple mixed layer model together with existing and new theoretical results are used to improve upon this stochastic paradigm and to provide a new theory for the origin of decadal ENSO-like patterns like the Interdecadal Pacific Oscillation and Pacific Decadal Oscillation. In this theory, ENSO-driven wind-stress variability forces internal equatorially-trapped Kelvin waves that propagate towards the eastern boundary. Kelvin waves can excite reflected internal westward propagating equatorially-trapped Rossby waves (RWs) and coastally-trapped waves (CTWs). CTWs have no impact on the off-equatorial sub-surface ocean outside the coastal wave guide, whereas the RWs do. If the frequency of the incident wave is too high, then only CTWs are excited. At lower frequencies, both CTWs and RWs can be excited. The lower the frequency, the greater the fraction of energy transmitted to RWs. This lowers the characteristic frequency of variability off the equator relative to its equatorial counterpart. Both the eastern boundary interactions and the accumulation of

  9. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Science.gov (United States)

    King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin

    2011-01-01

Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.
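The c-index reported for predictAL is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case (for binary outcomes, the AUC). A small sketch with hypothetical risks and outcomes shows the computation:

```python
import numpy as np

# Sketch of the c-index (equivalently the AUC for a binary outcome):
# the fraction of case/non-case pairs the model ranks correctly, with
# ties counted as half. Risks and outcomes below are hypothetical.

def c_index(risk, outcome):
    risk = np.asarray(risk, dtype=float)
    outcome = np.asarray(outcome, dtype=bool)
    diff = risk[outcome][:, None] - risk[~outcome][None, :]  # all case/non-case pairs
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

risk    = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]   # predicted risks
outcome = [1,   1,   0,   1,   0,   0,   0]     # 1 = became a hazardous drinker
print(round(c_index(risk, outcome), 3))         # → 0.917
```

A value of 0.5 means the model ranks no better than chance, so the study's 0.839 (Europe) and 0.781 (Chilean external validation) indicate substantial discriminative ability.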

  10. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Directory of Open Access Journals (Sweden)

    Michael King

Full Text Available Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.

  11. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    Science.gov (United States)

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
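The "power fit relationship" in model (2) can be sketched directly: assume the volume on a given treatment day relates to the initial volume as V_t = c · V0^p, which becomes ordinary least squares in log space. The volumes, coefficients, and noise level below are synthetic stand-ins, not the study's data.

```python
import numpy as np

# Sketch of the power-fit idea behind the general linear model:
# V_t = c * V0**p, i.e. log(V_t) = log(c) + p * log(V0).
# Synthetic data with a true exponent of 0.95.
rng = np.random.default_rng(1)
V0 = rng.uniform(5.0, 50.0, 20)                       # initial volumes (cm^3)
Vt = 0.8 * V0 ** 0.95 * np.exp(0.02 * rng.standard_normal(20))

X = np.column_stack([np.ones_like(V0), np.log(V0)])   # design: [1, log(V0)]
coef, *_ = np.linalg.lstsq(X, np.log(Vt), rcond=None)
c, p = np.exp(coef[0]), coef[1]
print(round(p, 2))                                    # recovers an exponent near 0.95
```

Fitting one such relationship per treatment day across the patient population is what lets the model predict a new patient's daily volume from the pretreatment volume alone.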

  12. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    International Nuclear Information System (INIS)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

    2014-01-01

Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography

  13. Predicting the multi-domain progression of Parkinson's disease: a Bayesian multivariate generalized linear mixed-effect model.

    Science.gov (United States)

    Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei

    2017-09-25

It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. A multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline and at 18 and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. Dynamic prediction was performed for both internal and external subjects using samples from the posterior distributions of the parameter estimates and random effects, and predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB) and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, with improved prediction gained particularly for non-motor scores (RMSE and AB: 2.89 and 2.20) compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate generalized mixed models hold promise for predicting clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722 , part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection to the Data Management Repository (DMR), which is publicly available ( https://pdbp.ninds.nih.gov/data-management ).

  14. Generalized Predictive Control for Non-Stationary Systems

    DEFF Research Database (Denmark)

    Palsson, Olafur Petur; Madsen, Henrik; Søgaard, Henning Tangen

    1994-01-01

This paper shows how the generalized predictive control (GPC) can be extended to non-stationary (time-varying) systems. If the time-variation is slow, then the classical GPC can be used in context with an adaptive estimation procedure of a time-invariant ARIMAX model. However, in this paper prior knowledge concerning the nature of the parameter variations is assumed available. The GPC is based on the assumption that the prediction of the system output can be expressed as a linear combination of present and future controls. Since the Diophantine equation cannot be used due to the time-variation of the parameters, the optimal prediction is found as the general conditional expectation of the system output. The underlying model is of an ARMAX-type instead of an ARIMAX-type as in the original version of the GPC (Clarke, D. W., C. Mohtadi and P. S. Tuffs (1987). Automatica, 23, 137-148) and almost all later...

  15. Predicting mastitis in dairy cows using neural networks and generalized additive models

    DEFF Research Database (Denmark)

    Anantharama Ankinakatte, Smitha; Norberg, Elise; Løvendahl, Peter

    2013-01-01

The aim of this paper is to develop and compare methods for early detection of oncoming mastitis with automatically recorded data. The data were collected at the Danish Cattle Research Center (Tjele, Denmark). As indicators of mastitis, electrical conductivity (EC), somatic cell scores (SCS), lactate ... that combines residual components into a score to improve the model. To develop and verify the model, the data are randomly divided into training and validation data sets. To predict the occurrence of mastitis, neural network models (NNs) and generalized additive models (GAMs) are developed using the training ... classification with all indicators, using individual residuals rather than factor scores. When SCS is excluded, GAMs show better classification results when milk yield is also excluded. In conclusion, the study shows that NNs and GAMs are similar in their ability to detect mastitis, with a sensitivity of almost 75...

  16. Predicting Cost/Reliability/Maintainability of Advanced General Aviation Avionics Equipment

    Science.gov (United States)

    Davis, M. R.; Kamins, M.; Mooz, W. E.

    1978-01-01

A methodology is provided for assisting NASA in estimating the cost, reliability, and maintenance (CRM) requirements for general aviation avionics equipment operating in the 1980s. Practical problems of predicting these factors are examined. The usefulness and shortcomings of different approaches for modeling cost and reliability estimates are discussed, together with special problems caused by the lack of historical data on the cost of maintaining general aviation avionics. Suggestions are offered on how NASA might proceed in assessing CRM implications in the absence of reliable generalized predictive models.

  17. Spatially explicit models, generalized reproduction numbers and the prediction of patterns of waterborne disease

    Science.gov (United States)

    Rinaldo, A.; Gatto, M.; Mari, L.; Casagrandi, R.; Righetto, L.; Bertuzzo, E.; Rodriguez-Iturbe, I.

    2012-12-01

Metacommunity and individual-based theoretical models are studied in the context of the spreading of infections of water-borne diseases along the ecological corridors defined by river basins and networks of human mobility. The overarching claim is that mathematical models can indeed provide predictive insight into the course of an ongoing epidemic, potentially aiding real-time emergency management in allocating health care resources and by anticipating the impact of alternative interventions. To support the claim, we examine the ex-post reliability of published predictions of the 2010-2011 Haiti cholera outbreak from four independent modeling studies that appeared almost simultaneously during the unfolding epidemic. For each modeled epidemic trajectory, it is assessed how well predictions reproduced the observed spatial and temporal features of the outbreak to date. The impact of different approaches to modeling the spatial spread of V. cholerae, the mechanics of cholera transmission, and the dynamics of susceptible and infected individuals within different local human communities is considered. A generalized model for Haitian epidemic cholera and the related uncertainty is thus constructed and applied to the year-long dataset of reported cases now available. Specific emphasis will be dedicated to models of human mobility, a fundamental infection mechanism. Lessons learned and open issues are discussed and placed in perspective, supporting the conclusion that, despite differences in methods that can be tested through model-guided field validation, mathematical modeling of large-scale outbreaks emerges as an essential component of future cholera epidemic control.
Although explicit spatial modeling is made routinely possible by widespread data mapping of hydrology, transportation infrastructure, population distribution, and sanitation, the precise condition under which a waterborne disease epidemic can start in a spatially explicit setting is

  18. Generalized additive models used to predict species abundance in the Gulf of Mexico: an ecosystem modeling tool.

    Directory of Open Access Journals (Sweden)

    Michael Drexler

Full Text Available Spatially explicit ecosystem models of all types require an initial allocation of biomass, often in areas where fisheries-independent abundance estimates do not exist. A generalized additive modelling (GAM) approach is used to describe the abundance of 40 species groups (i.e., functional groups) across the Gulf of Mexico (GoM) using a large fisheries-independent data set (SEAMAP) and climate-scale oceanographic conditions. Predictor variables included in the model are chlorophyll a, sediment type, dissolved oxygen, temperature, and depth. Despite the presence of a large number of zeros in the data, a single GAM using a negative binomial distribution was suitable to make predictions of abundance for multiple functional groups. We present an example case study using pink shrimp (Farfantepenaeus duorarum) and compare the results to known distributions. The model successfully predicts the known areas of high abundance in the GoM, including those areas where no data were input into the model fitting. Overall, the model reliably captures areas of high and low abundance for the large majority of functional groups observed in SEAMAP. The result of this method allows for the objective setting of spatial distributions for numerous functional groups across a modeling domain, even where abundance data may not exist.

  19. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    2009-10-01

Full Text Available Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement on which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address what effect the choice of learning technique, feature set or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3×5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy.
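The core of such a comparison framework is scoring competing model families on the same cross-validation folds so their per-fold errors can be compared pairwise (e.g. by ANOVA, as in the study). The sketch below uses deliberately simple stand-ins (a mean predictor vs. ordinary least squares) on synthetic data; none of it is the study's actual models or features.

```python
import numpy as np

# Sketch of paired k-fold comparison: every model family is evaluated on
# identical folds, so fold-wise errors are directly comparable.
rng = np.random.default_rng(2)
X = rng.standard_normal((60, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + 0.1 * rng.standard_normal(60)

def cv_mse(fit, predict, k=5):
    folds = np.arange(len(y)) % k       # deterministic fold assignment
    return np.array([np.mean((predict(fit(X[folds != f], y[folds != f]),
                                      X[folds == f]) - y[folds == f]) ** 2)
                     for f in range(k)])

ols  = cv_mse(lambda A, b: np.linalg.lstsq(A, b, rcond=None)[0],
              lambda w, A: A @ w)       # least-squares stand-in
base = cv_mse(lambda A, b: b.mean(),
              lambda m, A: np.full(len(A), m))  # mean-predictor baseline
print(ols.mean() < base.mean())         # → True: OLS beats the baseline here
```

Because both arrays of fold errors come from the same splits, their difference can be tested fold-by-fold, which is what lets a factorial ANOVA attribute variance to learning technique versus feature mapping.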
Features found to result in maximally predictive models are

  20. Neural Generalized Predictive Control of a non-linear Process

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1998-01-01

The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller based on a neural network model which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which in the present case has to be done numerically. Therefore, we develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.

  1. Risk assessment models to predict caries recurrence after oral rehabilitation under general anaesthesia: a pilot study.

    Science.gov (United States)

    Lin, Yai-Tin; Kalhan, Ashish Chetan; Lin, Yng-Tzer Joseph; Kalhan, Tosha Ashish; Chou, Chein-Chin; Gao, Xiao Li; Hsu, Chin-Ying Stephen

    2018-05-08

    Oral rehabilitation under general anaesthesia (GA), commonly employed to treat high caries-risk children, has been associated with high economic and individual/family burden, besides high post-GA caries recurrence rates. As there is no caries prediction model available for paediatric GA patients, this study was performed to build caries risk assessment/prediction models using pre-GA data and to explore mid-term prognostic factors for early identification of high-risk children prone to caries relapse post-GA oral rehabilitation. Ninety-two children were identified and recruited with parental consent before oral rehabilitation under GA. Biopsychosocial data collection at baseline and the 6-month follow-up were conducted using questionnaire (Q), microbiological assessment (M) and clinical examination (C). The prediction models constructed using data collected from Q, Q + M and Q + M + C demonstrated an accuracy of 72%, 78% and 82%, respectively. Furthermore, of the 83 (90.2%) patients recalled 6 months after GA intervention, recurrent caries was identified in 54.2%, together with reduced bacterial counts, lower plaque index and increased percentage of children toothbrushing for themselves (all P < 0.05). Additionally, meal-time and toothbrushing duration were shown, through bivariate analyses, to be significant prognostic determinants for caries recurrence (both P < 0.05). Risk assessment/prediction models built using pre-GA data may be promising in identifying high-risk children prone to post-GA caries recurrence, although future internal and external validation of predictive models is warranted. © 2018 FDI World Dental Federation.

  2. Prediction of cloud droplet number in a general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-04-01

    We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.
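A droplet number conservation equation of the kind referred to here typically balances transport against an aerosol-dependent nucleation source and microphysical sinks; the following schematic form is illustrative, not the paper's exact formulation:

```latex
\frac{\partial N_d}{\partial t} + \nabla \cdot \left(\mathbf{u}\, N_d\right)
  = S_{\mathrm{nuc}}(N_a, w) - L_{\mathrm{evap}} - L_{\mathrm{coal}} - L_{\mathrm{precip}}
```

where the nucleation source \(S_{\mathrm{nuc}}\) depends on the aerosol number concentration \(N_a\) and the updraft velocity \(w\), which is what ties the predicted droplet number to aerosols.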

  3. Gravitational redshift of galaxies in clusters as predicted by general relativity.

    Science.gov (United States)

    Wojtak, Radosław; Hansen, Steen H; Hjorth, Jens

    2011-09-28

    The theoretical framework of cosmology is mainly defined by gravity, of which general relativity is the current model. Recent tests of general relativity within the Lambda Cold Dark Matter (ΛCDM) model have found a concordance between predictions and the observations of the growth rate and clustering of the cosmic web. General relativity has not hitherto been tested on cosmological scales independently of the assumptions of the ΛCDM model. Here we report an observation of the gravitational redshift of light coming from galaxies in clusters at the 99 per cent confidence level, based on archival data. Our measurement agrees with the predictions of general relativity and its modification created to explain cosmic acceleration without the need for dark energy (the f(R) theory), but is inconsistent with alternative models designed to avoid the presence of dark matter. © 2011 Macmillan Publishers Limited. All rights reserved

  4. Predictability and interpretability of hybrid link-level crash frequency models for urban arterials compared to cluster-based and general negative binomial regression models.

    Science.gov (United States)

    Najaf, Pooya; Duddu, Venkata R; Pulugurtha, Srinivas S

    2018-03-01

    Machine learning (ML) techniques have higher prediction accuracy than conventional statistical methods for crash frequency modelling, but their black-box nature limits interpretability. The objective of this research is to combine ML and statistical methods to develop hybrid link-level crash frequency models with high predictability and interpretability. For this purpose, the M5' model trees method (M5') is introduced and applied to classify the crash data and then calibrate a model for each homogeneous class. Data for 1134 and 345 randomly selected links on urban arterials in the city of Charlotte, North Carolina were used to develop and validate the models, respectively. The outputs from the hybrid approach are compared with the outputs from cluster-based negative binomial regression (NBR) and general NBR models. Findings indicate that M5' has high predictability and is very reliable for interpreting the role of different attributes in crash frequency compared to the other models developed.

  5. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  6. Use of Ocean Remote Sensing Data to Enhance Predictions with a Coupled General Circulation Model

    Science.gov (United States)

    Rienecker, Michele M.

    1999-01-01

    Surface height, sea surface temperature and surface wind observations from satellites have given a detailed time sequence of the initiation and evolution of the 1997/98 El Nino. The data have been complementary to the subsurface TAO moored data in their spatial resolution and extent. The impact of satellite observations on seasonal prediction in the tropical Pacific using a coupled ocean-atmosphere general circulation model will be presented.

  7. Explicit prediction of ice clouds in general circulation models

    Science.gov (United States)

    Kohler, Martin

    1999-11-01

    Although clouds play extremely important roles in the radiation budget and hydrological cycle of the Earth, there are large quantitative uncertainties in our understanding of their generation, maintenance and decay mechanisms, representing major obstacles in the development of reliable prognostic cloud water schemes for General Circulation Models (GCMs). Recognizing their relative neglect in the past, both observationally and theoretically, this work places special focus on ice clouds. A recent version of the UCLA - University of Utah Cloud Resolving Model (CRM) that includes interactive radiation is used to perform idealized experiments to study ice cloud maintenance and decay mechanisms under various conditions in terms of: (1) background static stability, (2) background relative humidity, (3) rate of cloud ice addition over a fixed initial time-period and (4) radiation: daytime, nighttime and no-radiation. Radiation is found to have major effects on the lifetime of layer clouds. Optically thick ice clouds decay significantly more slowly than expected from pure microphysical crystal fall-out (τ_cld = 0.9-1.4 h as opposed to the no-motion τ_micro = 0.5-0.7 h). This is explained by the upward turbulent fluxes of water induced by IR destabilization, which partially balance the downward transport of water by snowfall. Solar radiation further slows the ice-water decay by destruction of the inversion above cloud-top and the resulting upward transport of water. Optically thin ice clouds, on the other hand, may exhibit even longer lifetimes (>1 day) in the presence of radiational cooling. The resulting saturation mixing ratio reduction provides a constant cloud ice source. These CRM results are used to develop a prognostic cloud water scheme for the UCLA-GCM. The framework is based on the bulk water phase model of Ose (1993). The model predicts cloud liquid water and cloud ice separately, and is extended to split the ice phase into suspended cloud ice (predicted

  8. Predicting glycated hemoglobin levels in the non-diabetic general population

    DEFF Research Database (Denmark)

    Rauh, Simone P; Heymans, Martijn W; Koopman, Anitra D M

    2017-01-01

    AIMS/HYPOTHESIS: To develop a prediction model that can predict HbA1c levels after six years in the non-diabetic general population, including previously used readily available predictors. METHODS: Data from 5,762 initially non-diabetic subjects from three population-based cohorts (Hoorn Study, I...

  9. The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun

    2017-01-01

    Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, the limited number of samples, and the small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors and expression data of 4919 genes, and the ovarian cancer data set from TCGA with 362 tumors and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
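The spike-and-slab double-exponential prior described above has the generic mixture form (notation ours, for illustration):

```latex
\beta_j \mid \gamma_j \sim (1-\gamma_j)\,\mathrm{DE}(0, s_0) + \gamma_j\,\mathrm{DE}(0, s_1),
\qquad
\mathrm{DE}(\beta \mid 0, s) = \frac{1}{2s}\, e^{-|\beta|/s},
```

with \(s_0 \ll s_1\): the narrow "spike" \(\mathrm{DE}(0, s_0)\) shrinks irrelevant coefficients strongly toward zero, while the wide "slab" \(\mathrm{DE}(0, s_1)\) leaves large coefficients nearly unshrunk, giving the selective-shrinkage behaviour the abstract describes.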

  10. Predicting stem borer density in maize using RapidEye data and generalized linear models

    Science.gov (United States)

    Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

    2017-05-01

    Average maize yield in eastern Africa is 2.03 t ha-1, compared to a global average of 6.06 t ha-1, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated at between 12% and 21% of the total production. The objective of the present study was to explore the possibility of using RapidEye spectral data to assess stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) site on the 9th of December 2014 and the 27th of January 2015, and for Machakos (eastern Kenya) on the 3rd of January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess the models' performance using a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season in December and early January at the two study sites, respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making regarding the control of maize stem borers within integrated pest management (IPM) interventions.
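The count-GLM-with-LOOCV workflow described above can be sketched as follows. For brevity the sketch fits a plain Poisson GLM by iteratively reweighted least squares on synthetic data; the indices, coefficients and sample size are invented, and the paper's NB and zero-inflated variants are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's data: two spectral vegetation
# indices per field and a Poisson-distributed larva count.
n = 100
V = rng.normal(size=(n, 2))
X = np.column_stack([np.ones(n), V])
y = rng.poisson(np.exp(1.0 + 0.6 * V[:, 0] - 0.4 * V[:, 1]))

def fit_poisson_glm(X, y, iters=25):
    """Fit a Poisson GLM (log link) by iteratively reweighted least squares."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ b
        mu = np.exp(eta)
        z = eta + (y - mu) / mu          # working response
        XtW = X.T * mu                   # X^T diag(mu)
        b = np.linalg.solve(XtW @ X, XtW @ z)
    return b

# Leave-one-out cross-validation, scored with RMSE and RPD as in the study.
preds = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b = fit_poisson_glm(X[keep], y[keep])
    preds[i] = np.exp(X[i] @ b)

rmse = np.sqrt(np.mean((y - preds) ** 2))
rpd = np.std(y, ddof=1) / rmse           # SD of observations over RMSE
print(rmse, rpd)
```

An RPD above 1 means the model beats the trivial "predict the mean" baseline, which is the sense in which the reported RPD values of 8-20 indicate strong models.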

  11. Application of General Regression Neural Network to the Prediction of LOD Change

    Science.gov (United States)

    Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao

    2012-01-01

    Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as the least-squares model and the autoregression model. However, the LOD change comprises complicated non-linear factors, and the prediction performance of linear models is often not ideal. Thus, a non-linear neural network, the general regression neural network (GRNN) model, is applied to the prediction of the LOD change, and the result is compared with the predicted results obtained from the BP (back propagation) neural network model and other models. The comparison shows that the application of the GRNN to the prediction of the LOD change is highly effective and feasible.
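A GRNN is essentially Nadaraya-Watson kernel regression: a prediction is the Gaussian-weighted average of the training targets, governed by a single smoothing parameter. The series, embedding size and smoothing parameter below are invented stand-ins for the LOD-change record.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """GRNN / Nadaraya-Watson: Gaussian-kernel weighted average of targets."""
    # pairwise squared distances, shape (n_query, n_train)
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 8.0 * np.pi, 400)
series = np.sin(t) + 0.05 * rng.normal(size=t.size)   # noisy periodic signal

# Predict x[k] from the three preceding samples.
emb = 3
X = np.array([series[k - emb:k] for k in range(emb, len(series))])
y = series[emb:]
split = 300
pred = grnn_predict(X[:split], y[:split], X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(rmse)
```

Unlike the BP network mentioned in the abstract, this scheme has no iterative training at all; the only tuning knob is sigma, which is part of why GRNNs are attractive for short geodetic series.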

  12. Musite, a tool for global prediction of general and kinase-specific phosphorylation sites.

    Science.gov (United States)

    Gao, Jianjiong; Thelen, Jay J; Dunker, A Keith; Xu, Dong

    2010-12-01

    Reversible protein phosphorylation is one of the most pervasive post-translational modifications, regulating diverse cellular processes in various organisms. High throughput experimental studies using mass spectrometry have identified many phosphorylation sites, primarily from eukaryotes. However, the vast majority of phosphorylation sites remain undiscovered, even in well studied systems. Because mass spectrometry-based experimental approaches for identifying phosphorylation events are costly, time-consuming, and biased toward abundant proteins and proteotypic peptides, in silico prediction of phosphorylation sites is potentially a useful alternative strategy for whole proteome annotation. Because of various limitations, current phosphorylation site prediction tools were not well designed for comprehensive assessment of proteomes. Here, we present a novel software tool, Musite, specifically designed for large scale predictions of both general and kinase-specific phosphorylation sites. We collected phosphoproteomics data in multiple organisms from several reliable sources and used them to train prediction models by a comprehensive machine-learning approach that integrates local sequence similarities to known phosphorylation sites, protein disorder scores, and amino acid frequencies. Application of Musite on several proteomes yielded tens of thousands of phosphorylation site predictions at a high stringency level. Cross-validation tests show that Musite achieves some improvement over existing tools in predicting general phosphorylation sites, and it is at least comparable with those for predicting kinase-specific phosphorylation sites. In Musite V1.0, we have trained general prediction models for six organisms and kinase-specific prediction models for 13 kinases or kinase families. Although the current pretrained models were not correlated with any particular cellular conditions, Musite provides a unique functionality for training customized prediction models

  13. A residual life prediction model based on the generalized σ -N curved surface

    Directory of Open Access Journals (Sweden)

    Zongwen AN

    2016-06-01

    Full Text Available In order to investigate the change rule of the residual life of a structure under random repeated loads, firstly, starting from the statistical meaning of random repeated loads, the joint probability density function of the maximum and minimum stresses is derived based on the characteristics of order statistics (the maximum and minimum order statistics); then, based on the equation of the generalized σ-N curved surface, and considering the influence of the number of load cycles on fatigue life, a relationship among minimum stress, maximum stress and residual life, that is, the σ_min(n)-σ_max(n)-N_r(n) curved surface model, is established; finally, the validity of the proposed model is demonstrated with a practical case. The result shows that the proposed model can reflect the influence of the maximum and minimum stresses on the residual life of a structure under random repeated loads, which can provide a theoretical basis for life prediction and reliability assessment of structures.

  14. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbours found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics under different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
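The adaptive-local-model idea can be sketched as follows: delay-embed the observable, find the k nearest dynamical neighbours of the current state in the reconstructed phase space, and average their one-step continuations. The series and the embedding parameters are illustrative, not the paper's surge data.

```python
import numpy as np

def embed(x, dim, tau):
    """Delay embedding: rows are states [x[j], x[j+tau], ..., x[j+(dim-1)tau]]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def local_predict(x, dim=3, tau=2, k=5):
    """One-step forecast: average the successors of the k nearest neighbours."""
    E = embed(x, dim, tau)
    state, cand = E[-1], E[:-1]          # candidate states have a known successor
    nn = np.argsort(np.linalg.norm(cand - state, axis=1))[:k]
    # successor of state E[j] is the observation one step past its last coordinate
    return x[nn + (dim - 1) * tau + 1].mean()

# Demo on a deterministic periodic record (stand-in for a surge series).
t = np.linspace(0.0, 20.0 * np.pi, 2000)
x = np.sin(t)
print(local_predict(x[:-1]))   # one-step forecast of the held-out last sample
```

Multi-step prediction, as in the paper, simply iterates this one-step map, appending each forecast to the series before predicting again.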

  15. Tapping generalized essentialism to predict outgroup prejudices.

    Science.gov (United States)

    Hodson, Gordon; Skorska, Malvina N

    2015-06-01

    Psychological essentialism, the perception that groups possess inherent properties binding them and differentiating them from others, is theoretically relevant to predicting prejudice. Recent developments isolate two key dimensions: essentialistic entitativity (EE; groups as unitary, whole, entity-like) and essentialistic naturalness (EN; groups as fixed and immutable). We introduce a novel question: does tapping the covariance between EE and EN, rather than pitting them against each other, boost prejudice prediction? In Study 1 (re-analysis of Roets & Van Hiel, 2011b, Samples 1-3, in Belgium) and Study 2 (new Canadian data) their common/shared variance, modelled as generalized essentialism, doubles the predictive power relative to regression-based approaches with regard to racism (but not anti-gay or -schizophrenic prejudices). Theoretical implications are discussed. © 2014 The British Psychological Society.

  16. A statistical intercomparison of temperature and precipitation predicted by four general circulation models with historical data

    International Nuclear Information System (INIS)

    Grotch, S.L.

    1991-01-01

    This study is a detailed intercomparison of the results produced by four general circulation models (GCMs) that have been used to estimate the climatic consequences of a doubling of the CO2 concentration. Two variables, surface air temperature and precipitation, annually and seasonally averaged, are compared for both the current climate and for the predicted equilibrium changes after a doubling of the atmospheric CO2 concentration. The major question considered here is: how well do the predictions from different GCMs agree with each other and with historical climatology over different areal extents, from the global scale down to the range of only several gridpoints? Although the models often agree well when estimating averages over large areas, substantial disagreements become apparent as the spatial scale is reduced. At scales below continental, the correlations observed between different model predictions are often very poor. The implications of this work for investigation of climatic impacts on a regional scale are profound. For these two important variables, at least, the poor agreement between model simulations of the current climate on the regional scale calls into question the ability of these models to quantitatively estimate future climatic change on anything approaching the scale of a few (< 10) gridpoints, which is essential if these results are to be used in meaningful resource-assessment studies. A stronger cooperative effort among the different modeling groups will be necessary to assure that we are getting better agreement for the right reasons, a prerequisite for improving confidence in model projections. 11 refs.; 10 figs

  17. A statistical intercomparison of temperature and precipitation predicted by four general circulation models with historical data

    International Nuclear Information System (INIS)

    Grotch, S.L.

    1990-01-01

    This study is a detailed intercomparison of the results produced by four general circulation models (GCMs) that have been used to estimate the climatic consequences of a doubling of the CO2 concentration. Two variables, surface air temperature and precipitation, annually and seasonally averaged, are compared for both the current climate and for the predicted equilibrium changes after a doubling of the atmospheric CO2 concentration. The major question considered here is: how well do the predictions from different GCMs agree with each other and with historical climatology over different areal extents, from the global scale down to the range of only several gridpoints? Although the models often agree well when estimating averages over large areas, substantial disagreements become apparent as the spatial scale is reduced. At scales below continental, the correlations observed between different model predictions are often very poor. The implications of this work for investigation of climatic impacts on a regional scale are profound. For these two important variables, at least, the poor agreement between model simulations of the current climate on the regional scale calls into question the ability of these models to quantitatively estimate future climatic change on anything approaching the scale of a few (< 10) gridpoints, which is essential if these results are to be used in meaningful resource-assessment studies. A stronger cooperative effort among the different modeling groups will be necessary to assure that we are getting better agreement for the right reasons, a prerequisite for improving confidence in model projections

  18. A mechanistic model for predicting flow-assisted and general corrosion of carbon steel in reactor primary coolants

    Energy Technology Data Exchange (ETDEWEB)

    Lister, D. [University of New Brunswick, Fredericton, NB (Canada). Dept. of Chemical Engineering; Lang, L.C. [Atomic Energy of Canada Ltd., Chalk River Lab., ON (Canada)

    2002-07-01

    Flow-assisted corrosion (FAC) of carbon steel in high-temperature lithiated water can be described with a model that invokes dissolution of the protective oxide film and erosion of oxide particles that are loosened as a result. General corrosion under coolant conditions where oxide is not dissolved is described as well. In the model, the electrochemistry of magnetite dissolution and precipitation and the effect of particle size on solubility move the dependence on film thickness of the diffusion processes (and therefore the corrosion rate) away from reciprocal. Particle erosion under dissolving conditions is treated stochastically and depends upon the fluid shear stress at the surface. The corrosion rate dependence on coolant flow under FAC conditions then becomes somewhat less than that arising purely from fluid shear (proportional to the velocity squared). Under non-dissolving conditions, particle erosion occurs infrequently and general corrosion is almost unaffected by flow. For application to a CANDU primary circuit and its feeders, the model was benchmarked against the outlet feeder S08 removed from the Point Lepreau reactor, which furnished one value of film thickness and one of corrosion rate for a computed average coolant velocity. Several constants and parameters in the model had to be assumed or were optimised, since values for them were not available. These uncertainties are no doubt responsible for the rather high values of potential that evolved as steps in the computation. The model predicts film thickness development and corrosion rate for the whole range of coolant velocities in outlet feeders very well. In particular, the detailed modelling of FAC in the complex geometry of one outlet feeder (F11) is in good agreement with measurements. When the particle erosion computations are inserted in the balance equations for the circuit, realistic values of crud level are obtained.
The model also predicts low corrosion rates and thick oxide films for inlet

  19. A mechanistic model for predicting flow-assisted and general corrosion of carbon steel in reactor primary coolants

    International Nuclear Information System (INIS)

    Lister, D.

    2002-01-01

    Flow-assisted corrosion (FAC) of carbon steel in high-temperature lithiated water can be described with a model that invokes dissolution of the protective oxide film and erosion of oxide particles that are loosened as a result. General corrosion under coolant conditions where oxide is not dissolved is described as well. In the model, the electrochemistry of magnetite dissolution and precipitation and the effect of particle size on solubility move the dependence on film thickness of the diffusion processes (and therefore the corrosion rate) away from reciprocal. Particle erosion under dissolving conditions is treated stochastically and depends upon the fluid shear stress at the surface. The corrosion rate dependence on coolant flow under FAC conditions then becomes somewhat less than that arising purely from fluid shear (proportional to the velocity squared). Under non-dissolving conditions, particle erosion occurs infrequently and general corrosion is almost unaffected by flow. For application to a CANDU primary circuit and its feeders, the model was benchmarked against the outlet feeder S08 removed from the Point Lepreau reactor, which furnished one value of film thickness and one of corrosion rate for a computed average coolant velocity. Several constants and parameters in the model had to be assumed or were optimised, since values for them were not available. These uncertainties are no doubt responsible for the rather high values of potential that evolved as steps in the computation. The model predicts film thickness development and corrosion rate for the whole range of coolant velocities in outlet feeders very well. In particular, the detailed modelling of FAC in the complex geometry of one outlet feeder (F11) is in good agreement with measurements. When the particle erosion computations are inserted in the balance equations for the circuit, realistic values of crud level are obtained.
The model also predicts low corrosion rates and thick oxide films for inlet

  20. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro.

    Directory of Open Access Journals (Sweden)

    Niels Hadrup

    Full Text Available Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment, the mathematical prediction of mixture effects using knowledge on single chemicals is therefore desirable. We investigated the pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured the effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency-adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose-response curve. Regarding effects on progesterone and estradiol, some chemicals had stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified.
In addition, the data indicate that in non-potency adjusted mixtures the effects cannot

  1. An Intelligent Model for Stock Market Prediction

    Directory of Open Access Journals (Sweden)

    Ibrahim M. Hamed

    2012-08-01

    Full Text Available This paper presents an intelligent model for stock market signal prediction using Multi-Layer Perceptron (MLP) Artificial Neural Networks (ANNs). Blind source separation, a technique from signal processing, is integrated with the learning phase of the constructed baseline MLP ANN to overcome the problems of prediction accuracy and lack of generalization. Kullback-Leibler divergence (KLD) is used as the learning algorithm because it converges fast and provides generalization in the learning mechanism. Both the accuracy and the efficiency of the proposed model were confirmed on Microsoft stock, from the Wall Street market, and on various data sets from different sectors of the Egyptian stock market. In addition, sensitivity analysis was conducted on the various parameters of the model to ensure coverage of the generalization issue. Finally, statistical significance was examined using an ANOVA test.

  2. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and traditional statistical prediction methods have the defects of low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industry that produces a large volume of cargo and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that can affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the area of spatial and regional economics.

  3. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.
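The model-based clustering idea behind mixture regression can be illustrated with EM for a plain two-component Poisson mixture. The paper's models additionally place regression structure on each component; the rates (2 and 8) and mixing weight (0.4) below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic counts from two latent risk groups with different event rates.
hi_risk = rng.random(500) < 0.4
y = np.where(hi_risk, rng.poisson(8.0, 500), rng.poisson(2.0, 500))

pi, lam = 0.5, np.array([1.0, 5.0])          # initial guesses
for _ in range(200):
    # E-step: responsibility of the high-rate component for each count
    # (the log y! term cancels between the two components)
    a0 = np.log(1 - pi) + y * np.log(lam[0]) - lam[0]
    a1 = np.log(pi) + y * np.log(lam[1]) - lam[1]
    r1 = 1.0 / (1.0 + np.exp(a0 - a1))
    # M-step: update mixing weight and component rates
    pi = r1.mean()
    lam = np.array([np.sum((1 - r1) * y) / np.sum(1 - r1),
                    np.sum(r1 * y) / np.sum(r1)])
print(pi, lam)
```

The responsibilities `r1` are exactly the soft high/low-risk cluster assignments the abstract describes; a mixture *regression* model would replace each constant rate with `exp(X @ beta_k)` and fit weighted GLMs in the M-step.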

  4. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611

  5. General predictive control using the delta operator

    DEFF Research Database (Denmark)

    Jensen, Morten Rostgaard; Poulsen, Niels Kjølstad; Ravn, Ole

    1993-01-01

    This paper deals with two discrete-time operators, the conventional forward shift operator and the δ-operator. Both operators are treated with a view to constructing suitable solutions to the Diophantine equation for the purpose of prediction. A general step-recursive scheme is presented. Finally, a general predictive control (GPC) is formulated and applied adaptively to a continuous-time plant...

  6. Statistical models for expert judgement and wear prediction

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1994-01-01

    This thesis studies the statistical analysis of expert judgements and the prediction of wear. The point of view adopted is that of information theory and Bayesian statistics. A general Bayesian framework for analyzing both expert judgements and wear prediction is presented. Information-theoretic interpretations are given for some averaging techniques used in the determination of consensus distributions. Further, information-theoretic models are compared with a Bayesian model. The general Bayesian framework is then applied in analyzing expert judgements based on ordinal comparisons. In this context, the value of information lost in the ordinal comparison process is analyzed by applying decision-theoretic concepts. As a generalization of the Bayesian framework, stochastic filtering models for wear prediction are formulated. These models utilize the information from condition monitoring measurements in updating the residual life distribution of mechanical components. Finally, the application of stochastic control models in optimizing operational strategies for inspected components is studied. Monte Carlo simulation methods, such as the Gibbs sampler and the stochastic quasi-gradient method, are applied in the determination of posterior distributions and in the solution of stochastic optimization problems. (orig.) (57 refs., 7 figs., 1 tab.)
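
    As a minimal illustration of the Bayesian updating idea behind such models (not the thesis's stochastic filtering itself), a conjugate Gamma-Poisson update of a component failure rate takes one line; the prior and observation values below are invented:

```python
def gamma_poisson_update(alpha, beta, failures, exposure):
    """Conjugate update: Gamma(alpha, beta) prior on a failure rate, Poisson
    likelihood with `failures` events observed over `exposure` time units."""
    return alpha + failures, beta + exposure

# Invented prior: rate ~ Gamma(2, 100), i.e. mean 0.02 failures per hour.
# Condition monitoring then reports 3 failures over 400 hours of operation.
a, b = gamma_poisson_update(2.0, 100.0, failures=3, exposure=400.0)
posterior_mean = a / b   # (2 + 3) / (100 + 400) = 0.01
```

    Filtering models repeat this kind of update sequentially as each new condition-monitoring measurement arrives.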

  7. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

    2018-03-15

    Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive amount of shrinkage on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method in several simulated scenarios, varying the overlap among groups, the group size, the number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.

  8. General predictive model of friction behavior regimes for metal contacts based on the formation stability and evolution of nanocrystalline surface films.

    Energy Technology Data Exchange (ETDEWEB)

    Argibay, Nicolas [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Cheng, Shengfeng [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Sawyer, W. G. [Univ. of Florida, Gainesville, FL (United States); Michael, Joseph R. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Chandross, Michael E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-09-01

    The prediction of macro-scale friction and wear behavior based on first principles and material properties has remained an elusive but highly desirable target for tribologists and material scientists alike. Stochastic processes (e.g. wear), statistically described parameters (e.g. surface topography) and their evolution tend to defeat attempts to establish practical general correlations between fundamental nanoscale processes and macro-scale behaviors. We present a model based on microstructural stability and evolution for the prediction of metal friction regimes, founded on recently established microstructural deformation mechanisms of nanocrystalline metals, that relies exclusively on material properties and contact stress models. We show through complementary experimental and simulation results that this model overcomes longstanding practical challenges and successfully makes accurate and consistent predictions of friction transitions for a wide range of contact conditions. This framework not only challenges the assumptions of conventional causal relationships between hardness and friction, and between friction and wear, but also suggests a pathway for the design of higher performance metal alloys.

  9. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient threshold of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both MJO amplitude and phase errors, the latter becoming more important with increasing forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with smaller biases in the horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
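
    The bivariate correlation score used in MJO verification combines the two RMM index components into a single number; a minimal sketch, with a synthetic rotating phase vector standing in for real RMM data:

```python
import numpy as np

def bivariate_correlation(fcst, obs):
    """Bivariate correlation between forecast and observed (RMM1, RMM2) pairs
    over n verification times at a fixed lead; both arrays have shape (n, 2)."""
    fcst, obs = np.asarray(fcst, float), np.asarray(obs, float)
    return (fcst * obs).sum() / np.sqrt((fcst ** 2).sum() * (obs ** 2).sum())

# Synthetic stand-in for RMM indices: a slowly rotating unit phase vector.
t = np.linspace(0.0, 6.0, 50)
obs = np.column_stack([np.cos(t), np.sin(t)])
skill_perfect = bivariate_correlation(obs, obs)   # 1.0 for a perfect forecast
```

    Skill above the conventional 0.5 threshold is what the 12-36 day ranges quoted above refer to; a constant 90° phase error would drive this score to zero.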

  10. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions obtained from a single model therefore do not present the full range of possible streamflow outcomes, producing ensembles with errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally generate a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, thus creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model combination or for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been

  11. Simulation modelling in agriculture: General considerations. | R.I. ...

    African Journals Online (AJOL)

    A computer simulation model is a detailed working hypothesis about a given system. The computer does all the necessary arithmetic when the hypothesis is invoked to predict the future behaviour of the simulated system under given conditions. A general pragmatic approach to model building is discussed; techniques are ...

  12. Evaluation and comparison of predictive individual-level general surrogates.

    Science.gov (United States)

    Gabriel, Erin E; Sachs, Michael C; Halloran, M Elizabeth

    2018-07-01

    An intermediate response measure that accurately predicts efficacy in a new setting at the individual level could be used both for prediction and personalized medical decisions. In this article, we define a predictive individual-level general surrogate (PIGS), which is an individual-level intermediate response that can be used to accurately predict individual efficacy in a new setting. While methods for evaluating trial-level general surrogates, which are predictors of trial-level efficacy, have been developed previously, few, if any, methods have been developed to evaluate individual-level general surrogates, and no methods have formalized the use of cross-validation to quantify the expected prediction error. Our proposed method uses existing methods of individual-level surrogate evaluation within a given clinical trial setting in combination with cross-validation over a set of clinical trials to evaluate surrogate quality and to estimate the absolute prediction error that is expected in a new trial setting when using a PIGS. Simulations show that our method performs well across a variety of scenarios. We use our method to evaluate and to compare candidate individual-level general surrogates over a set of multi-national trials of a pentavalent rotavirus vaccine.
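
    The cross-validation step can be sketched at the trial level (the article's method works at the individual level within each trial; the linear predictor and the trial summaries below are hypothetical):

```python
import numpy as np

def loto_prediction_error(trials):
    """Leave-one-trial-out CV: fit efficacy ~ surrogate on the held-in trials,
    predict the held-out trial, and return the mean absolute error."""
    data = np.asarray(trials, float)
    errors = []
    for i in range(len(data)):
        train = np.delete(data, i, axis=0)
        # Least-squares line efficacy = a * surrogate + b on held-in trials.
        a, b = np.polyfit(train[:, 0], train[:, 1], deg=1)
        errors.append(abs(a * data[i, 0] + b - data[i, 1]))
    return float(np.mean(errors))

# Hypothetical per-trial (surrogate, efficacy) summaries.
trials = [(0.2, 0.25), (0.4, 0.42), (0.5, 0.47), (0.7, 0.71), (0.9, 0.88)]
err = loto_prediction_error(trials)
```

    A small held-out error across trials is what would qualify the intermediate response as a useful surrogate in a new setting.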

  13. A Robust Practical Generalized Predictive Control for Boiler Superheater Temperature Control

    OpenAIRE

    Zaki Maki Mohialdeen

    2015-01-01

    A practical method of robust generalized predictive controller (GPC) application is developed using a combination of Ziegler-Nichols type functions relating the GPC controller parameters to a first order with time delay process parameters and a model matching controller. The GPC controller and the model matching controller are used in a master/slave configuration, with the GPC as the master controller and the model matching controller as the slave controlle...

  14. A model to predict the beginning of the pollen season

    DEFF Research Database (Denmark)

    Toldam-Andersen, Torben Bo

    1991-01-01

    for fruit trees are generally applicable, and give a reasonable description of the growth processes of other trees. This type of model can therefore be of value in predicting the start of the pollen season. The predicted dates were generally within 3-5 days of the observed. Finally the possibility of frost...

  15. An Improved Generalized Predictive Control in a Robust Dynamic Partial Least Square Framework

    Directory of Open Access Journals (Sweden)

    Jin Xin

    2015-01-01

    To tackle the sensitivity to outliers in system identification, a new robust dynamic partial least squares (PLS) model based on an outlier detection method is proposed in this paper. An improved radial basis function network (RBFN) is adopted to construct the predictive model from the input and output datasets, and a hidden Markov model (HMM) is applied to detect the outliers. After the outliers are removed, a more robust dynamic PLS model is obtained. In addition, an improved generalized predictive control (GPC) with tuning weights under the dynamic PLS framework is proposed to deal with the interaction caused by model mismatch. The results of two simulations demonstrate the effectiveness of the proposed method.

  16. An Application to the Prediction of LOD Change Based on General Regression Neural Network

    Science.gov (United States)

    Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.

    2011-07-01

    Traditional prediction of LOD (length of day) change was based on linear models, such as the least squares model and the autoregressive technique. Due to the complex non-linear features of LOD variation, the performance of linear model predictors is not fully satisfactory. This paper applies a non-linear neural network, the general regression neural network (GRNN) model, to forecast LOD change, and the results are analyzed and compared with those obtained with the back-propagation neural network and other models. The comparison shows that the performance of the GRNN model in the prediction of LOD change is efficient and feasible.
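
    A GRNN prediction is simply a Gaussian-kernel weighted average of the training targets (equivalent to Nadaraya-Watson kernel regression); a minimal sketch on a synthetic non-linear series rather than real LOD data:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.2):
    """GRNN prediction: Gaussian-kernel weighted average of training targets."""
    d2 = (np.asarray(x_query)[:, None] - np.asarray(x_train)[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weights per sample
    return (w * np.asarray(y_train)[None, :]).sum(axis=1) / w.sum(axis=1)

# Synthetic non-linear series as a stand-in for the LOD data.
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x)
pred = grnn_predict(x, y, np.array([2.0, 5.0]))
```

    The single smoothing parameter sigma plays the role of the GRNN's spread constant; it is the only quantity to tune.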

  17. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models under eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.

  18. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  19. Differential Prediction Generalization in College Admissions Testing

    Science.gov (United States)

    Aguinis, Herman; Culpepper, Steven A.; Pierce, Charles A.

    2016-01-01

    We introduce the concept of "differential prediction generalization" in the context of college admissions testing. Specifically, we assess the extent to which predicted first-year college grade point average (GPA) based on high-school grade point average (HSGPA) and SAT scores depends on a student's ethnicity and gender and whether this…

  20. The effects of model and data complexity on predictions from species distributions models

    DEFF Research Database (Denmark)

    García-Callejas, David; Bastos, Miguel

    2016-01-01

    How complex a model needs to be to provide useful predictions is a matter of continuous debate across the environmental sciences. In the species distribution modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown that predictive performance does not always increase with complexity. Testing of species distribution models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...

  1. Calibration and validation of a general infiltration model

    Science.gov (United States)

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.
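
    For reference, the SCS-CN relation that the parameter So is tied to can be written down directly; this sketches the standard curve-number runoff equation, not the Singh-Yu infiltration model itself:

```python
def scs_cn_runoff(p_mm, cn):
    """Direct runoff Q (mm) from storm rainfall P (mm) by the SCS-CN method.

    S = 25400/CN - 254 is the potential maximum retention (mm), and the
    initial abstraction is taken as the conventional Ia = 0.2 * S."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0                       # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# 100 mm storm on a catchment with CN = 80 (S = 63.5 mm, Ia = 12.7 mm):
q = scs_cn_runoff(100.0, 80)             # about 50.5 mm of direct runoff
```

    In the abstract's terms, So plays the role of S, which is why a time-varying curve number reproduces the infiltration-rate behaviour of the general model.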

  2. Use of a Machine-learning Method for Predicting Highly Cited Articles Within General Radiology Journals.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Doshi, Ankur M; Ginocchio, Luke A; Aphinyanaphongs, Yindalon

    2016-12-01

    This study aimed to assess the performance of a text classification machine-learning model in predicting highly cited articles within the recent radiological literature and to identify the model's most influential article features. We downloaded from PubMed the title, abstract, and medical subject heading terms for 10,065 articles published in 25 general radiology journals in 2012 and 2013. Three machine-learning models were applied to predict the top 10% of included articles in terms of the number of citations to the article in 2014 (reflecting the 2-year time window in conventional impact factor calculations). The model having the highest area under the curve was selected to derive a list of article features (words) predicting high citation volume, which was iteratively reduced to identify the smallest possible core feature list maintaining predictive power. Overall themes were qualitatively assigned to the core features. The regularized logistic regression (Bayesian binary regression) model had highest performance, achieving an area under the curve of 0.814 in predicting articles in the top 10% of citation volume. We reduced the initial 14,083 features to 210 features that maintain predictivity. These features corresponded with topics relating to various imaging techniques (eg, diffusion-weighted magnetic resonance imaging, hyperpolarized magnetic resonance imaging, dual-energy computed tomography, computed tomography reconstruction algorithms, tomosynthesis, elastography, and computer-aided diagnosis), particular pathologies (prostate cancer; thyroid nodules; hepatic adenoma, hepatocellular carcinoma, non-alcoholic fatty liver disease), and other topics (radiation dose, electroporation, education, general oncology, gadolinium, statistics). Machine learning can be successfully applied to create specific feature-based models for predicting articles likely to achieve high influence within the radiological literature. 
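
    The classification setup can be sketched with a toy bag-of-words logistic regression trained by gradient descent (the study used regularized Bayesian binary regression over 14,083 features; the titles, labels and hyperparameters below are invented):

```python
import numpy as np

# Invented toy corpus: titles labelled highly cited (1) or not (0).
docs = ["dual energy ct dose reduction", "tomosynthesis screening outcomes",
        "case report rare finding", "letter to the editor",
        "elastography liver fibrosis staging", "reply to comment"]
labels = np.array([1, 1, 0, 0, 1, 0])

vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    """Bag-of-words count vector over the toy vocabulary."""
    x = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            x[index[w]] += 1.0
    return x

X = np.array([featurize(d) for d in docs])
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):                        # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(highly cited)
    w -= 0.5 * (X.T @ (p - labels) / len(docs) + 0.01 * w)  # L2 regularized
    b -= 0.5 * (p - labels).mean()

score = 1.0 / (1.0 + np.exp(-(featurize("dual energy ct elastography") @ w + b)))
```

    Inspecting the largest entries of w is the toy analogue of the study's reduction from 14,083 features to a 210-word core feature list.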

  3. Predicting dyadic adjustment from general and relationship-specific beliefs.

    Science.gov (United States)

    DeBord, J; Romans, J S; Krieshok, T

    1996-05-01

    The cognitive mediation model of human psychological functioning has received increasing attention by researchers studying the role of cognitive variables in relationship distress. This study is an examination of the role of general irrational beliefs, as measured by the Irrational Beliefs Test (IBT; Jones, 1968), and relationship-specific irrational beliefs, as measured by the Relationship Belief Questionnaire (RBQ; Romans & DeBord, 1994), in predicting the perceived quality of relationships by married or cohabiting couples. Results indicated that respondents who reported higher levels of relationship-specific irrational beliefs also reported higher levels of dyadic adjustment; but contrary to expectation, higher levels of general irrational beliefs correlated with lower levels of dyadic adjustment. Implications of these findings are discussed in relation to the depressive realism hypothesis.

  4. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
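
    The concordance index at the core of this algorithm counts correctly ordered pairs; a minimal sketch with hypothetical stage assignments and ordinal outcomes:

```python
from itertools import combinations

def concordance_index(predictions, outcomes):
    """Concordance index: among usable pairs (different outcomes), the
    fraction the model ranks in the right order; prediction ties count half."""
    concordant, usable = 0.0, 0
    for (p_i, o_i), (p_j, o_j) in combinations(zip(predictions, outcomes), 2):
        if o_i == o_j:
            continue                      # pair carries no ranking information
        usable += 1
        if (p_i - p_j) * (o_i - o_j) > 0:
            concordant += 1.0             # ranked in the same order
        elif p_i == p_j:
            concordant += 0.5             # tied prediction
    return concordant / usable

# Hypothetical stage assignments (1-4) against an ordinal outcome score.
stages = [1, 2, 2, 3, 4, 4]
outcome = [0, 1, 2, 2, 3, 3]
cidx = concordance_index(stages, outcome)
```

    Comparing two rival staging systems then amounts to computing this index for each against the same outcomes and preferring the higher value.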

  5. A general unified non-equilibrium model for predicting saturated and subcooled critical two-phase flow rates through short and long tubes

    International Nuclear Information System (INIS)

    Fraser, D.W.H.; Abdelmessih, A.H.

    1995-01-01

    A general unified model is developed to predict one-component critical two-phase pipe flow. Modelling of the two-phase flow is accomplished by describing the evolution of the flow between the location of flashing inception and the exit (critical) plane. The model approximates the nonequilibrium phase change process via thermodynamic equilibrium paths. Included are the relative effects of varying the location of flashing inception, pipe geometry, fluid properties and length to diameter ratio. The model predicts that a range of critical mass fluxes exists and is bounded by a maximum and minimum value for a given thermodynamic state. This range is more pronounced at lower subcooled stagnation states and can be attributed to the variation in the location of flashing inception. The model is based on the results of an experimental study of the critical two-phase flow of saturated and subcooled water through long tubes. In that study, the location of flashing inception was accurately controlled and adjusted through the use of a new device. The data obtained revealed that for fixed stagnation conditions, the maximum critical mass flux occurred with flashing inception located near the pipe exit, while minimum critical mass fluxes occurred with the flashing front located further upstream. Available data since 1970 for both short and long tubes over a wide range of conditions are compared with the model predictions. This includes test section L/D ratios from 25 to 300 and covers a temperature and pressure range of 110 to 280 degrees C and 0.16 to 6.9 MPa, respectively. The predicted maximum and minimum critical mass fluxes show an excellent agreement with the range observed in the experimental data.

  6. A general unified non-equilibrium model for predicting saturated and subcooled critical two-phase flow rates through short and long tubes

    Energy Technology Data Exchange (ETDEWEB)

    Fraser, D.W.H. [Univ. of British Columbia (Canada); Abdelmessih, A.H. [Univ. of Toronto, Ontario (Canada)

    1995-09-01

    A general unified model is developed to predict one-component critical two-phase pipe flow. Modelling of the two-phase flow is accomplished by describing the evolution of the flow between the location of flashing inception and the exit (critical) plane. The model approximates the nonequilibrium phase change process via thermodynamic equilibrium paths. Included are the relative effects of varying the location of flashing inception, pipe geometry, fluid properties and length to diameter ratio. The model predicts that a range of critical mass fluxes exists and is bounded by a maximum and minimum value for a given thermodynamic state. This range is more pronounced at lower subcooled stagnation states and can be attributed to the variation in the location of flashing inception. The model is based on the results of an experimental study of the critical two-phase flow of saturated and subcooled water through long tubes. In that study, the location of flashing inception was accurately controlled and adjusted through the use of a new device. The data obtained revealed that for fixed stagnation conditions, the maximum critical mass flux occurred with flashing inception located near the pipe exit, while minimum critical mass fluxes occurred with the flashing front located further upstream. Available data since 1970 for both short and long tubes over a wide range of conditions are compared with the model predictions. This includes test section L/D ratios from 25 to 300 and covers a temperature and pressure range of 110 to 280°C and 0.16 to 6.9 MPa, respectively. The predicted maximum and minimum critical mass fluxes show an excellent agreement with the range observed in the experimental data.

  7. International Competition and Inequality: A Generalized Ricardian Model

    OpenAIRE

    Adolfo Figueroa

    2014-01-01

    Why does the gap in real wage rates between the First World and the Third World persist after so many years of increasing globalization? The standard neoclassical trade model predicts that real wage rates will be equalized by international trade, whereas the standard Ricardian trade model does not. The facts are thus consistent with the Ricardian model. However, this model leaves income distribution undetermined. The objective of this paper is to fill this gap by developing a generalized Ricard...

  8. Hierarchical models for informing general biomass equations with felled tree data

    Science.gov (United States)

    Brian J. Clough; Matthew B. Russell; Christopher W. Woodall; Grant M. Domke; Philip J. Radtke

    2015-01-01

    We present a hierarchical framework that uses a large multispecies felled tree database to inform a set of general models for predicting tree foliage biomass, with accompanying uncertainty, within the FIA database. Results suggest significant prediction uncertainty for individual trees and reveal higher errors when predicting foliage biomass for larger trees and for...

  9. Development of a general model to predict the rate of radionuclide release (source term) from a low-level waste shallow land burial facility

    International Nuclear Information System (INIS)

    Sullivan, T.M.; Kempf, C.R.; Suen, C.J.; Mughabghab, S.M.

    1988-01-01

    The Code of Federal Regulations (10 CFR 61) requires that any near-surface disposal site be capable of being characterized, analyzed, and modeled. The objective of this program is to assist the NRC in developing the ability to model a disposal site that conforms to these regulations. In particular, a general computer model capable of predicting the quantity and rate of radionuclide release from a shallow land burial trench, i.e., the source term, is being developed. The framework for this general model has been developed and consists of four basic compartments that represent the major processes that influence release: water flow, container degradation, release from the waste packages, and radionuclide transport. Models for water flow and radionuclide transport rely on the computer codes FEMWATER and FEMWASTE. These codes are generally regarded as state-of-the-art and required little modification for their application to this project. Models for container degradation and release from waste packages have been specifically developed for this project. This paper provides a brief description of the models being used in the source term project and examples of their use over a range of potential conditions. 13 refs

  10. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Science.gov (United States)

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms that were earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that their final formulas can all be converted to linear models in form; based on this finding, we propose a new algorithm, called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
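
    The Jaccard coefficient highlighted in these results is straightforward to compute; the drug ADR profiles below are invented for illustration:

```python
def jaccard(a, b):
    """Jaccard coefficient between two sets: |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical drugs described by their known ADR profiles; candidate
# associations for a new drug can be weighted by similarity to known drugs.
drug_a = {"nausea", "headache", "rash"}
drug_b = {"nausea", "rash", "dizziness"}
sim = jaccard(drug_a, drug_b)   # 2 shared / 4 total = 0.5
```

    Similarity-weighted profiles of this kind are one way the linear-model form noted above can be instantiated.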

  11. Using a Prediction Model to Manage Cyber Security Threats

    Directory of Open Access Journals (Sweden)

    Venkatesh Jaganathan

    2015-01-01

    Full Text Available Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  12. Using a Prediction Model to Manage Cyber Security Threats.

    Science.gov (United States)

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  14. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
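
Bootstrap prediction in the sense described (Breiman's bagging applied to a plug-in prediction) can be sketched in a few lines. Here the plug-in predictor is simply the sample mean, chosen only to keep the example minimal.

```python
import random

def bootstrap_prediction(ys, n_boot=500, seed=0):
    """Bagging sketch: refit a plug-in predictor (here, the sample mean)
    on bootstrap resamples of the data and average the predictions."""
    rng = random.Random(seed)
    n = len(ys)
    total = 0.0
    for _ in range(n_boot):
        resample = [ys[rng.randrange(n)] for _ in range(n)]
        total += sum(resample) / n  # plug-in prediction from this resample
    return total / n_boot

pred = bootstrap_prediction([1.0, 2.0, 3.0, 4.0])  # close to the plain mean 2.5
```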

  15. Comparison of pause predictions of two sequence-dependent transcription models

    International Nuclear Information System (INIS)

    Bai, Lu; Wang, Michelle D

    2010-01-01

    Two recent theoretical models, Bai et al (2004, 2007) and Tadigotla et al (2006), formulated thermodynamic explanations of sequence-dependent transcription pausing by RNA polymerase (RNAP). The two models differ in some basic assumptions and therefore make different yet overlapping predictions for pause locations, and different predictions on pause kinetics and mechanisms. Here we present a comprehensive comparison of the two models. We show that while they have comparable predictive power of pause locations at low NTP concentrations, the Bai et al model is more accurate than Tadigotla et al at higher NTP concentrations. The pausing kinetics predicted by Bai et al is also consistent with time-course transcription reactions, while Tadigotla et al is unsuited for this type of kinetic prediction. More importantly, the two models in general predict different pausing mechanisms even for the same pausing sites, and the Bai et al model provides an explanation more consistent with recent single molecule observations

  16. Generalized Predictive Control of Dynamic Systems with Rigid-Body Modes

    Science.gov (United States)

    Kvaternik, Raymond G.

    2013-01-01

    Numerical simulations to assess the effectiveness of Generalized Predictive Control (GPC) for active control of dynamic systems having rigid-body modes are presented. GPC is a linear, time-invariant, multi-input/multi-output predictive control method that uses an ARX model to characterize the system and to design the controller. Although the method can accommodate both embedded (implicit) and explicit feedforward paths for incorporation of disturbance effects, only the case of embedded feedforward in which the disturbances are assumed to be unknown is considered here. Results from numerical simulations using mathematical models of both a free-free three-degree-of-freedom mass-spring-dashpot system and the XV-15 tiltrotor research aircraft are presented. In regulation mode operation, which calls for zero system response in the presence of disturbances, the simulations showed response reductions of nearly 100%. In tracking mode operation, where the system is commanded to follow a specified path, the GPC controllers produced the desired responses, even in the presence of disturbances.
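
The ARX plant model at the heart of GPC can be identified by ordinary least squares. A minimal first-order sketch follows (one output lag, one input lag, no noise model, and nothing like the full multi-input/multi-output controller design):

```python
def fit_arx(y, u):
    """Least-squares fit of a first-order ARX model
    y[t] = a*y[t-1] + b*u[t-1], solved via the 2x2 normal equations."""
    s_yy = s_yu = s_uu = r_y = r_u = 0.0
    for t in range(1, len(y)):
        y1, u1 = y[t - 1], u[t - 1]
        s_yy += y1 * y1
        s_yu += y1 * u1
        s_uu += u1 * u1
        r_y += y1 * y[t]
        r_u += u1 * y[t]
    det = s_yy * s_uu - s_yu * s_yu
    a = (r_y * s_uu - r_u * s_yu) / det
    b = (r_u * s_yy - r_y * s_yu) / det
    return a, b

# simulate a known noise-free plant y[t] = 0.5*y[t-1] + 1.0*u[t-1] ...
u = [1.0, 0.0] * 4
y = [0.0]
for t in range(1, len(u)):
    y.append(0.5 * y[t - 1] + 1.0 * u[t - 1])
a, b = fit_arx(y, u)  # ... and recover its parameters
```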

  17. General Theory versus ENA Theory: Comparing Their Predictive Accuracy and Scope.

    Science.gov (United States)

    Ellis, Lee; Hoskin, Anthony; Hartley, Richard; Walsh, Anthony; Widmayer, Alan; Ratnasingam, Malini

    2015-12-01

    General theory attributes criminal behavior primarily to low self-control, whereas evolutionary neuroandrogenic (ENA) theory envisions criminality as being a crude form of status-striving promoted by high brain exposure to androgens. General theory predicts that self-control will be negatively correlated with risk-taking, while ENA theory implies that these two variables should actually be positively correlated. According to ENA theory, traits such as pain tolerance and muscularity will be positively associated with risk-taking and criminality while general theory makes no predictions concerning these relationships. Data from Malaysia and the United States are used to test 10 hypotheses derived from one or both of these theories. As predicted by both theories, risk-taking was positively correlated with criminality in both countries. However, contrary to general theory and consistent with ENA theory, the correlation between self-control and risk-taking was positive in both countries. General theory's prediction of an inverse correlation between low self-control and criminality was largely supported by the U.S. data but only weakly supported by the Malaysian data. ENA theory's predictions of positive correlations between pain tolerance, muscularity, and offending were largely confirmed. For the 10 hypotheses tested, ENA theory surpassed general theory in predictive scope and accuracy. © The Author(s) 2014.

  18. Proposal of computation chart for general use for diffusion prediction of discharged warm water

    International Nuclear Information System (INIS)

    Wada, Akira; Kadoyu, Masatake

    1976-01-01

    The authors have developed a simulation analysis method that uses numerical models to predict the diffusion of discharged warm water. At the present stage, the method is used for precise analysis of the diffusion of discharged warm water at each survey point, but simpler and easier prediction methods are strongly requested in its place. To meet this demand, this report presents a general-use computation chart for quickly estimating the diffusion range of discharged warm water, obtained by classifying the semi-infinite sea region into several flow patterns according to sea conditions and conducting a systematic simulation analysis with a numerical model for each pattern. (1) Establishment of the computation conditions: From the many sea regions surrounding Japan, a semi-infinite region facing the open sea along a rectilinear coastline was selected as the area to be investigated and, from the viewpoint of flow and diffusion characteristics, was classified into three patterns; 51 cases in total with various parameters were set up, and the simulation analysis was performed. (2) Drawing up the general-use chart: 28 sheets of the general-use computation chart were drawn, which can be used to compute the approximate temperature rise caused by discharged warm water diffusion. An example for the Anegasaki Thermal Power Station is given. (Kako, I.)

  19. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  20. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive
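
The deviation/error estimation step described above can be illustrated with two simple metrics: RMSE between weekly case-count curves and the shift in peak week. These are illustrative stand-ins, not the paper's exact validation statistics.

```python
def epidemic_forecast_error(predicted, observed):
    """Two illustrative validation metrics: RMSE between weekly case
    counts and the shift (in weeks) between predicted and observed peaks."""
    n = len(observed)
    rmse = (sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n) ** 0.5
    peak_shift = predicted.index(max(predicted)) - observed.index(max(observed))
    return rmse, peak_shift

# toy five-week epidemic curves (weekly confirmed cases)
rmse, shift = epidemic_forecast_error([1, 3, 9, 4, 1], [1, 2, 8, 5, 1])
```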

  1. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  2. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Directory of Open Access Journals (Sweden)

    Qifan Kuang

    Full Text Available Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, and to implement and evaluate additional algorithms previously used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent, and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structures of the algorithms, we found that their final formulas could all be converted to linear models in form; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.

  3. Prediction of Periodontitis Occurrence: Influence of Classification and Sociodemographic and General Health Information

    DEFF Research Database (Denmark)

    Manzolli Leite, Fabio Renato; Peres, Karen Glazer; Do, Loc Giang

    2017-01-01

    BACKGROUND: Prediction of periodontitis development is challenging. Use of oral health-related data alone, especially in a young population, might underestimate disease risk. This study investigates accuracy of oral, systemic, and socioeconomic data on estimating periodontitis development in a population-based prospective cohort. METHODS: General health history and sociodemographic information were collected throughout the life-course of individuals. Oral examinations were performed at ages 24 and 31 years in the Pelotas 1982 birth cohort. Periodontitis at age 31 years according to six classifications was used as the gold standard to compute area under the receiver operating characteristic curve (AUC). Multivariable binomial regression models were used to evaluate the effects of oral health, general health, and socioeconomic characteristics on accuracy of periodontitis development prediction...

  4. Evaluation of two models for predicting elemental accumulation by arthropods

    International Nuclear Information System (INIS)

    Webster, J.R.; Crossley, D.A. Jr.

    1978-01-01

    Two different models have been proposed for predicting elemental accumulation by arthropods. Parameters of both models can be quantified from radioisotope elimination experiments. Our analysis of the two models shows that both predict identical elemental accumulation for a whole organism, though they differ in the accumulation in body and gut. We quantified both models with experimental data from 134Cs and 85Sr elimination by crickets. Computer simulations of radioisotope accumulation were then compared with actual accumulation experiments. Neither model showed an exact fit to the experimental data, though both showed the general pattern of elemental accumulation.
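
Elimination experiments of the kind mentioned above are typically fitted with a sum of exponentials, one term per compartment (body and gut). A two-compartment retention curve, with made-up fractions and rate constants standing in for fitted values:

```python
import math

def whole_body_retention(t, f_body, k_body, k_gut):
    """Two-compartment retention sketch: a fraction f_body of the isotope
    is eliminated with rate k_body (body) and the rest with rate k_gut
    (gut); whole-body retention is the sum of the two exponentials."""
    return f_body * math.exp(-k_body * t) + (1 - f_body) * math.exp(-k_gut * t)

# hypothetical parameters: slow body turnover, fast gut clearance
r0 = whole_body_retention(0.0, 0.7, 0.05, 1.5)  # all activity present at t = 0
r5 = whole_body_retention(5.0, 0.7, 0.05, 1.5)  # gut fraction mostly gone
```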

  5. A NEW GENERAL 3DOF QUASI-STEADY AERODYNAMIC INSTABILITY MODEL

    DEFF Research Database (Denmark)

    Gjelstrup, Henrik; Larsen, Allan; Georgakis, Christos

    2008-01-01

    but can generally be applied for aerodynamic instability prediction for prismatic bluff bodies. The 3DOF, which make up the movement of the model, are the displacements in the XY-plane and the rotation around the bluff body’s rotational axis. The proposed model incorporates inertia coupling between...

  6. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output includes a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
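
The simple-regression special case of what a program like GLMR computes (coefficient estimates plus their standard errors, from which the ANOVA table and confidence intervals follow) fits in a short function; this is a generic OLS sketch, not the GLMR program itself.

```python
import math

def ols_line(xs, ys):
    """Fit y = b0 + b1*x by least squares; return coefficient estimates
    and their standard errors (confidence intervals follow by scaling
    the standard errors with the appropriate t quantile)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    sse = sum((y - b0 - b1 * x) ** 2 for x, y in zip(xs, ys))
    s2 = sse / (n - 2)  # residual mean square
    se_b1 = math.sqrt(s2 / sxx)
    se_b0 = math.sqrt(s2 * (1.0 / n + mx * mx / sxx))
    return b0, b1, se_b0, se_b1

# exact line y = 1 + 2x, so the standard errors come out zero
b0, b1, se_b0, se_b1 = ols_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```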

  7. A Statistical Evaluation of Atmosphere-Ocean General Circulation Models: Complexity vs. Simplicity

    OpenAIRE

    Robert K. Kaufmann; David I. Stern

    2004-01-01

    The principal tools used to model future climate change are General Circulation Models which are deterministic high resolution bottom-up models of the global atmosphere-ocean system that require large amounts of supercomputer time to generate results. But are these models a cost-effective way of predicting future climate change at the global level? In this paper we use modern econometric techniques to evaluate the statistical adequacy of three general circulation models (GCMs) by testing thre...

  8. Modeling of chemical exergy of agricultural biomass using improved general regression neural network

    International Nuclear Information System (INIS)

    Huang, Y.W.; Chen, M.Q.; Li, Y.; Guo, J.

    2016-01-01

    A comprehensive evaluation of the energy potential contained in agricultural biomass is a vital step toward its energy utilization. The chemical exergy of typical agricultural biomass was evaluated based on the second law of thermodynamics. The chemical exergy was significantly influenced by the C and O contents rather than the H content. The standard entropy of the samples was also examined based on their element compositions. Two predictive models of the chemical exergy were developed: a general regression neural network model based upon the element composition, and a linear model based upon the higher heating value. An auto-refinement algorithm was first developed to improve the performance of the regression neural network model. The developed general regression neural network model with K-fold cross-validation had a better ability for predicting the chemical exergy than the linear model, with lower prediction errors (±1.5%). - Highlights: • Chemical exergies of agricultural biomass were evaluated based upon fifty samples. • Values for the standard entropy of agricultural biomass samples were calculated. • A linear relationship between chemical exergy and HHV of samples was detected. • An improved GRNN prediction model for the chemical exergy of biomass was developed.
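
A general regression neural network is, at its core, a Gaussian-kernel weighted average of training targets (Nadaraya-Watson regression). A minimal sketch with toy one-dimensional features, without the paper's auto-refinement step:

```python
import math

def grnn_predict(train_x, train_y, x, sigma=0.5):
    """GRNN prediction: Gaussian-kernel weighted average of training
    targets; sigma is the single smoothing parameter of the network."""
    weights = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(xi, x)) / (2 * sigma ** 2))
        for xi in train_x
    ]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# toy 1-D feature (e.g. a single element fraction) and an exergy-like target
pred = grnn_predict([(0.0,), (1.0,)], [0.0, 1.0], (0.5,))  # halfway -> 0.5
```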

  9. A stratiform cloud parameterization for General Circulation Models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species.

  10. Spatial prediction of landslide susceptibility using an adaptive neuro-fuzzy inference system combined with frequency ratio, generalized additive model, and support vector machine techniques

    Science.gov (United States)

    Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo

    2017-11-01

    The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques, namely an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and correlation analysis between the conditioning factors and landslides were applied. In the third step, we used three advanced methods, namely, ANFIS-FR, GAM, and SVM, for landslide susceptibility modeling. Subsequently, the results of their accuracy were validated using a receiver operating characteristic curve. The results showed that all three models have good prediction capabilities, while the SVM model has the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced in the study area can be applied for management of hazards and risks in landslide-prone Hanyuan County.
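
The prediction rates quoted above (0.875, 0.851, 0.846) are areas under the ROC curve. AUC can be computed directly from its rank interpretation: the probability that a randomly chosen positive case (landslide cell) outscores a randomly chosen negative one. A toy-data sketch:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation: the
    probability that a positive case outscores a negative one (ties = 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# perfectly separated toy susceptibility scores give AUC = 1.0
perfect = auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])
```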

  11. Chemical structure-based predictive model for methanogenic anaerobic biodegradation potential.

    Science.gov (United States)

    Meylan, William; Boethling, Robert; Aronson, Dallas; Howard, Philip; Tunkel, Jay

    2007-09-01

    Many screening-level models exist for predicting aerobic biodegradation potential from chemical structure, but anaerobic biodegradation generally has been ignored by modelers. We used a fragment contribution approach to develop a model for predicting biodegradation potential under methanogenic anaerobic conditions. The new model has 37 fragments (substructures) and classifies a substance as either fast or slow, relative to the potential to be biodegraded in the "serum bottle" anaerobic biodegradation screening test (Organization for Economic Cooperation and Development Guideline 311). The model correctly classified 90, 77, and 91% of the chemicals in the training set (n = 169) and two independent validation sets (n = 35 and 23), respectively. Accuracy of predictions of fast and slow degradation was equal for training-set chemicals, but fast-degradation predictions were less accurate than slow-degradation predictions for the validation sets. Analysis of the signs of the fragment coefficients for this and the other (aerobic) Biowin models suggests that in the context of simple group contribution models, the majority of positive and negative structural influences on ultimate degradation are the same for aerobic and methanogenic anaerobic biodegradation.
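
A fragment contribution model of this type reduces to a linear score over substructure counts plus a classification threshold. The fragments, coefficients, and threshold below are invented for illustration only; the published model has 37 fitted fragments.

```python
def fragment_score(fragment_counts, coefs, intercept=0.0, threshold=0.5):
    """Fragment-contribution sketch: a linear score over substructure
    counts, thresholded into 'fast' vs 'slow' anaerobic biodegradation."""
    score = intercept + sum(coefs.get(f, 0.0) * n
                            for f, n in fragment_counts.items())
    return ("fast" if score > threshold else "slow"), score

# hypothetical coefficients: esters help, aromatic chlorines hinder
coefs = {"ester": 0.3, "aromatic_Cl": -0.4}
label, _ = fragment_score({"ester": 1}, coefs, intercept=0.7)  # 'fast'
```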

  12. Comparison of the models of financial distress prediction

    Directory of Open Access Journals (Sweden)

    Jiří Omelka

    2013-01-01

    Full Text Available Prediction of financial distress is generally understood as estimating whether a business entity is close to bankruptcy or at least to serious financial problems. Financial distress is defined as a situation in which a company is unable to meet its liabilities in any form, or in which its liabilities exceed its assets. Classifying the financial situation of business entities is a multidisciplinary scientific issue that draws not only on economic theory but also on statistical and econometric approaches. The first models of financial distress prediction originated in the 1960s. One of the best known is Altman's model, followed by a range of others constructed on broadly similar foundations. In many existing models it is possible to find common elements, which can be regarded as elementary indicators of a company's potential financial distress. The objective of this article is, based on a comparison of existing models of financial distress prediction, to define a set of basic indicators of company financial distress while identifying their critical aspects. The sample defined in this way will serve as a background for future research focused on determining a one-dimensional model of financial distress prediction, which would subsequently become a basis for constructing a multi-dimensional prediction model.
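
The Altman model mentioned above is a concrete example of a multi-dimensional distress predictor: in its commonly quoted original (1968) form for public manufacturing firms, the Z-score is a fixed linear combination of five financial ratios with conventional distress/grey/safe cut-offs. The toy inputs below are hypothetical.

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Commonly quoted form of Altman's original (1968) Z-score:
    working capital/TA, retained earnings/TA, EBIT/TA,
    market value of equity/total liabilities, sales/TA."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def zone(z):
    """Conventional cut-offs: distress below 1.81, safe above 2.99."""
    return "distress" if z < 1.81 else ("safe" if z > 2.99 else "grey")

z = altman_z(0.2, 0.3, 0.15, 1.5, 1.2)  # a healthy-looking toy firm
```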

  13. Predicting carcinogenicity of diverse chemicals using probabilistic neural network modeling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com [Academy of Scientific and Innovative Research, Council of Scientific and Industrial Research, New Delhi (India); Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001 (India); Gupta, Shikha; Rai, Premanjali [Academy of Scientific and Innovative Research, Council of Scientific and Industrial Research, New Delhi (India); Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001 (India)

    2013-10-15

    Robust global models capable of discriminating positive and non-positive carcinogens and predicting the carcinogenic potency of chemicals in rodents were developed. The dataset of 834 structurally diverse chemicals extracted from Carcinogenic Potency Database (CPDB) was used which contained 466 positive and 368 non-positive carcinogens. Twelve non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals and nonlinearity in the data were evaluated using Tanimoto similarity index and Brock–Dechert–Scheinkman statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) models were constructed for classification and function optimization problems using the carcinogenicity end point in rat. Validation of the models was performed using the internal and external procedures employing a wide series of statistical checks. PNN constructed using five descriptors rendered classification accuracy of 92.09% in complete rat data. The PNN model rendered classification accuracies of 91.77%, 80.70% and 92.08% in mouse, hamster and pesticide data, respectively. The GRNN constructed with nine descriptors yielded correlation coefficient of 0.896 between the measured and predicted carcinogenic potency with mean squared error (MSE) of 0.44 in complete rat data. The rat carcinogenicity model (GRNN) applied to the mouse and hamster data yielded correlation coefficient and MSE of 0.758, 0.71 and 0.760, 0.46, respectively. The results suggest wide applicability of the inter-species models in predicting carcinogenic potency of chemicals. Both the PNN and GRNN (inter-species) models constructed here can be useful tools in predicting the carcinogenicity of new chemicals for regulatory purposes. - Graphical abstract: Figure (a) shows classification accuracies (positive and non-positive carcinogens) in rat, mouse, hamster, and pesticide data yielded by optimal PNN model. Figure (b) shows generalization and predictive
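
A probabilistic neural network is essentially a Parzen-window (Gaussian kernel) density estimate per class with an argmax decision. A minimal two-class sketch with toy descriptors, far removed from the full 834-chemical model:

```python
import math

def pnn_classify(train, x, sigma=0.5):
    """PNN sketch: per-class Parzen-window (Gaussian kernel) density
    estimate at x; predict the class with the larger estimate.
    train maps a class label to its list of descriptor vectors."""
    best_label, best_score = None, -1.0
    for label, points in train.items():
        score = sum(
            math.exp(-sum((a - b) ** 2 for a, b in zip(p, x)) / (2 * sigma ** 2))
            for p in points
        ) / len(points)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# toy two-descriptor training data for two classes
train = {"positive": [(0.0, 0.0), (0.1, 0.0)],
         "negative": [(1.0, 1.0), (0.9, 1.0)]}
label = pnn_classify(train, (0.05, 0.1))  # near the 'positive' cluster
```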

  14. Topics in the generalized vector dominance model

    International Nuclear Information System (INIS)

    Chavin, S.

    1976-01-01

    Two topics are covered in the generalized vector dominance model. In the first topic a model is constructed for dilepton production in hadron-hadron interactions based on the idea of generalized vector-dominance. It is argued that in the high mass region the generalized vector-dominance model and the Drell-Yan parton model are alternative descriptions of the same underlying physics. In the low mass regions the models differ; the vector-dominance approach predicts a greater production of dileptons. It is found that the high mass vector mesons which are the hallmark of the generalized vector-dominance model make little contribution to the large yield of leptons observed in the transverse-momentum range 1 < p⊥ < 6 GeV. The recently measured hadronic parameters lead one to believe that detailed fits to the data are possible under the model. This possibility was expected, and a simple model illustrates the extreme sensitivity of the large-p⊥ lepton yield to the large-transverse-momentum tail of vector-meson production. The second topic is an attempt to explain the mysterious phenomenon of photon shadowing in nuclei utilizing the contribution of the longitudinally polarized photon. It is argued that if the scalar photon anti-shadows, it could compensate for the transverse photon, which is presumed to shadow. It is found in a very simple model that the scalar photon could indeed anti-shadow. The principal feature of the model is a cancellation of amplitudes. The scheme is consistent with scalar photon-nucleon data as well. The idea is tested with two simple GVDM models, and it is found that the anti-shadowing contribution of the scalar photon is not sufficient to compensate for the contribution of the transverse photon. It appears doubtful that the scalar photon makes a significant contribution to the total photon-nuclear cross section.

  15. A stratiform cloud parameterization for general circulation models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop in GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various cloud variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species.

  16. Crash data modeling with a generalized estimator.

    Science.gov (United States)

    Ye, Zhirui; Xu, Yueru; Lord, Dominique

    2018-05-11

    The investigation of relationships between traffic crashes and relevant factors is important in traffic safety management. Various methods have been developed for modeling crash data. In real-world scenarios, crash data often display the characteristics of over-dispersion. On occasion, however, some crash datasets have exhibited under-dispersion, especially in cases where the data are conditioned upon the mean. The commonly used models (such as the Poisson and the negative binomial regression models) have limitations in coping with varying degrees of dispersion. In light of this, a generalized event count (GEC) model, which can handle over-, equi-, and under-dispersed data, is proposed in this study. This model was first applied to case studies using data from Toronto, characterized by over-dispersion, and then to crash data from railway-highway crossings in Korea, characterized by under-dispersion. The results from the GEC model were compared with those from the negative binomial and the hyper-Poisson models. The case studies show that the proposed model performs well for crash data characterized by over- and under-dispersion. Moreover, the proposed model simplifies the modeling process and the prediction of crash data. Copyright © 2018 Elsevier Ltd. All rights reserved.
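The over-, equi-, and under-dispersion distinction that motivates the GEC model can be checked directly from a sample via the dispersion index (variance-to-mean ratio). A minimal Python sketch with invented crash counts (not the Toronto or Korean data):

```python
from statistics import mean, variance

def dispersion_index(counts):
    """Variance-to-mean ratio: >1 suggests over-dispersion, <1 under-dispersion."""
    return variance(counts) / mean(counts)

# Hypothetical crash counts per site, for illustration only.
over = [0, 0, 1, 5, 9, 0, 2, 14, 0, 1]    # heavy right tail
under = [3, 4, 3, 4, 3, 4, 4, 3, 3, 4]    # tighter spread than Poisson

print(dispersion_index(over))   # well above 1
print(dispersion_index(under))  # well below 1
```

A ratio well above 1 points toward a negative binomial-type model; a ratio below 1 is the under-dispersed case that standard Poisson and negative binomial models handle poorly.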

  17. Generalized versus non-generalized neural network model for multi-lead inflow forecasting at Aswan High Dam

    Directory of Open Access Journals (Sweden)

    A. El-Shafie

    2011-03-01

    Full Text Available Artificial neural networks (ANNs) have been found efficient, particularly in problems where the characteristics of the process are stochastic and difficult to describe using explicit mathematical models. However, time series prediction based on ANN algorithms remains fundamentally difficult. One major shortcoming is the search for the optimal input pattern to enhance the forecasting capability for the output. A second challenge is over-fitting during the training procedure, which occurs when the ANN loses its generalization ability. In this research, autocorrelation and cross-correlation analyses are suggested as a method for searching for the optimal input pattern. In addition, two generalized methods, namely Regularized Neural Network (RNN) and Ensemble Neural Network (ENN) models, are developed to overcome the drawbacks of classical ANN models. Using a Generalized Neural Network (GNN) helped avoid over-fitting of the training data, which was observed as a limitation of classical ANN models. Real inflow data collected over the last 130 years at Lake Nasser were used to train, test and validate the proposed model. Results show that the proposed GNN model outperforms non-generalized neural network and conventional auto-regressive models and can provide accurate inflow forecasting.
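The autocorrelation analysis suggested for selecting the optimal input pattern can be sketched in a few lines; the threshold and the synthetic series below are illustrative assumptions, not the Aswan inflow data:

```python
import math
from statistics import mean

def autocorr(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    m = mean(x)
    c0 = sum((v - m) ** 2 for v in x)
    ck = sum((x[t] - m) * (x[t + lag] - m) for t in range(len(x) - lag))
    return ck / c0

def select_lags(x, max_lag, threshold=0.2):
    """Candidate input lags: those whose autocorrelation exceeds the threshold."""
    return [k for k in range(1, max_lag + 1) if abs(autocorr(x, k)) > threshold]

# Synthetic monthly series with an annual (period-12) cycle.
series = [10 + 5 * math.sin(2 * math.pi * t / 12) for t in range(120)]
lags = select_lags(series, 12)
print(lags)  # the annual lag (12) is retained; near-zero lags such as 3 are not
```

Lags that survive the threshold become candidate inputs for the network, which is the input-pattern search step the abstract describes.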

  18. Link prediction via generalized coupled tensor factorisation

    DEFF Research Database (Denmark)

    Ermiş, Beyza; Evrim, Acar Ataman; Taylan Cemgil, A.

    2012-01-01

    …and higher-order tensors. We propose to use an approach based on a probabilistic interpretation of tensor factorisation models, i.e., Generalised Coupled Tensor Factorisation, which can simultaneously fit a large class of tensor models to higher-order tensors/matrices with common latent factors using different loss functions. Numerical experiments demonstrate that joint analysis of data from multiple sources via coupled factorisation improves link prediction performance, and that selection of the right loss function and tensor model is crucial for accurately predicting missing links.

  19. One Layer Nonlinear Economic Closed-Loop Generalized Predictive Control for a Wastewater Treatment Plant

    Directory of Open Access Journals (Sweden)

    Hicham El bahja

    2018-04-01

    Full Text Available The main scope of this paper is the proposal of a new single-layer Nonlinear Economic Closed-Loop Generalized Predictive Control (NECLGPC) as an efficient advanced control technique for improving the economics of operating nonlinear plants. Instead of the classic dual-mode MPC (model predictive control) schemes, where the terminal control law defined in the terminal region is obtained offline by solving a linear quadratic regulator problem, the terminal control law in the NECLGPC is determined online by an unconstrained Nonlinear Generalized Predictive Control (NGPC). To make the optimization problem more tractable, two considerations have been made in the present work. First, the prediction model, a nonlinear phenomenological model of the plant, is expressed with a linear structure and state-dependent matrices. Second, instead of including the nonlinear economic cost in the objective function, an approximation of the reduced gradient of the economic function is used. These assumptions allow an economic unconstrained nonlinear GPC to be designed analytically and the NECLGPC economic problem to be stated as a QP (Quadratic Programming) problem at each sampling time. Four controllers based on GPC that differ in design and structure are compared with the proposed control technique in terms of process performance and energy costs. In particular, the methodology is implemented in the N-removal process of a Wastewater Treatment Plant (WWTP), and the results prove the efficiency of the method and show that it can be used profitably in practical cases.
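The receding-horizon idea underlying GPC-type controllers can be illustrated with a scalar, horizon-one sketch; the plant, weights, and analytic minimization below are invented stand-ins, far simpler than the paper's NECLGPC:

```python
# Scalar, horizon-one sketch of receding-horizon predictive control for
# x+ = a*x + b*u; all numbers are invented, and real GPC uses a multi-step
# horizon. Cost minimized at each sample: (a*x + b*u - r)^2 + lam*u^2.
a, b, lam = 0.9, 0.5, 0.1
r = 1.0  # setpoint

def gpc_step(x):
    """Analytic minimizer of the one-step cost over u."""
    return b * (r - a * x) / (b * b + lam)

x, traj = 0.0, []
for _ in range(50):
    u = gpc_step(x)    # optimize over the horizon...
    x = a * x + b * u  # ...apply only the first move, then recede
    traj.append(x)

print(traj[-1])  # settles near, not exactly at, r because of the u-penalty
```

The small steady-state offset left by the control penalty is the kind of performance/cost trade-off that dual-mode and economic closed-loop formulations are designed to manage.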

  20. Risk Prediction Models in Psychiatry: Toward a New Frontier for the Prevention of Mental Illnesses.

    Science.gov (United States)

    Bernardini, Francesco; Attademo, Luigi; Cleary, Sean D; Luther, Charles; Shim, Ruth S; Quartesan, Roberto; Compton, Michael T

    2017-05-01

    We conducted a systematic, qualitative review of risk prediction models designed and tested for depression, bipolar disorder, generalized anxiety disorder, posttraumatic stress disorder, and psychotic disorders. Our aim was to understand the current state of research on risk prediction models for these 5 disorders and thus to identify future directions as our field moves toward embracing prediction and prevention. Systematic searches of the entire MEDLINE electronic database were conducted independently by 2 of the authors (covering 1960 through 2013) in July 2014 using defined search criteria. Search terms included risk prediction, predictive model, or prediction model combined with depression, bipolar, manic depressive, generalized anxiety, posttraumatic, PTSD, schizophrenia, or psychosis. We identified 268 articles based on the search terms and 3 criteria: published in English, provided empirical data (as opposed to review articles), and presented results pertaining to developing or validating a risk prediction model in which the outcome was the diagnosis of 1 of the 5 aforementioned mental illnesses. We selected 43 original research reports as a final set of articles to be qualitatively reviewed. The 2 independent reviewers abstracted 3 types of data (sample characteristics, variables included in the model, and reported model statistics) and reached consensus regarding any discrepant abstracted information. Twelve reports described models developed for prediction of major depressive disorder, 1 for bipolar disorder, 2 for generalized anxiety disorder, 4 for posttraumatic stress disorder, and 24 for psychotic disorders. Most studies reported on sensitivity, specificity, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve. Recent studies demonstrate the feasibility of developing risk prediction models for psychiatric disorders (especially psychotic disorders). 
The field must now advance by (1) conducting more large

  1. Modeling the frequency of opposing left-turn conflicts at signalized intersections using generalized linear regression models.

    Science.gov (United States)

    Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei

    2014-01-01

    The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. Opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of the different models was compared. The frequency of traffic conflicts follows a negative binomial distribution, and the linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions; accordingly, the effects of conflicting traffic volumes on conflict frequency vary across traffic conditions. The occurrence of traffic conflicts at signalized intersections can be modeled using generalized linear regression models, and the use of conflict predictive models has the potential to expand the uses of surrogate safety measures in safety estimation and evaluation.

  2. Comparison of the capacity of two biotic ligand models to predict chronic copper toxicity to two Daphnia magna clones and formulation of a generalized bioavailability model.

    Science.gov (United States)

    Van Regenmortel, Tina; Janssen, Colin R; De Schamphelaere, Karel A C

    2015-07-01

    Although it is increasingly recognized that biotic ligand models (BLMs) are valuable in the risk assessment of metals in aquatic systems, the use of 2 differently structured and parameterized BLMs (1 in the United States and another in the European Union) to obtain bioavailability-based chronic water quality criteria for copper is worthy of further investigation. In the present study, the authors evaluated the predictive capacity of these 2 BLMs for a large dataset of chronic copper toxicity data with 2 Daphnia magna clones, termed K6 and ARO. One BLM performed best with clone K6 data, whereas the other performed best with clone ARO data. In addition, there was an important difference between the 2 BLMs in how they predicted the bioavailability of copper as a function of pH. These modeling results suggested that the effect of pH on chronic copper toxicity is different between the 2 clones considered, which was confirmed with additional chronic toxicity experiments. Finally, because fundamental differences in model structure between the 2 BLMs made it impossible to create an average BLM, a generalized bioavailability model (gBAM) was developed. Of the 3 gBAMs developed, the authors recommend the use of model gBAM-C(uni), which combines a log-linear relation between the 21-d median effective concentration (expressed as free Cu(2+) ion activity) and pH, with more conventional BLM-type competition constants for sodium, calcium, and magnesium. This model can be considered a first step in further improving the accuracy of chronic toxicity predictions of copper as a function of water chemistry (for a variety of Daphnia magna clones), even beyond the robustness of the current BLMs used in regulatory applications. © 2015 SETAC.
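The structure of the recommended gBAM-C(uni), a log-linear relation between the EC50 (as free Cu2+ activity) and pH combined with BLM-style competition constants for sodium, calcium, and magnesium, can be sketched as follows; every constant below is invented for illustration, not a fitted value from the study:

```python
import math

# gBAM-like structure: log-linear pH term plus BLM-style cation competition.
# All constants are assumed placeholders, not the paper's fitted parameters.
S_PH = -1.3                                   # slope of log10 EC50 vs pH
Q0 = -4.0                                     # intercept (assumed)
K_NA, K_CA, K_MG = 10**2.9, 10**3.2, 10**3.1  # competition constants (assumed)

def log10_ec50(pH, na, ca, mg):
    """21-d EC50 as free Cu2+ activity; cation activities in mol/L."""
    competition = 1 + K_NA * na + K_CA * ca + K_MG * mg
    return Q0 + S_PH * pH + math.log10(competition)

base = log10_ec50(6.0, na=1e-3, ca=2.5e-4, mg=1e-4)
high_ph = log10_ec50(8.0, na=1e-3, ca=2.5e-4, mg=1e-4)  # free ion more toxic
hard = log10_ec50(6.0, na=1e-3, ca=2.5e-3, mg=1e-4)     # Ca competes, protects
print(base, high_ph, hard)
```

With these assumed signs, raising pH lowers the free-ion EC50 while added calcium raises it, which is the qualitative behavior a BLM-type competition model encodes.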

  3. Fractional-Order Generalized Predictive Control: Application for Low-Speed Control of Gasoline-Propelled Cars

    Directory of Open Access Journals (Sweden)

    M. Romero

    2013-01-01

    Full Text Available There is increasing interest in applying fractional calculus to control theory, both to generalize classical control strategies such as the PID controller and to develop new ones that exploit the characteristics this mathematical tool offers for controller definition. In this work, the fractional generalization of the successful and widespread control strategy known as model predictive control is applied to autonomously drive a gasoline-propelled vehicle at low speeds. The vehicle is a Citroën C3 Pluriel that was modified to act on the throttle and brake pedals. Its highly nonlinear dynamics make it an excellent test bed for applying the beneficial characteristics of the fractional predictive formulation to compensate for unmodeled dynamics and external disturbances.

  4. Predictive modeling of coupled multi-physics systems: I. Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2014-01-01

    Highlights: • We developed “predictive modeling of coupled multi-physics systems (PMCMPS)”. • PMCMPS reduces uncertainties in predicted model responses and parameters. • PMCMPS treats very large coupled systems efficiently. - Abstract: This work presents an innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS).” This methodology fully takes into account the coupling terms between the systems but requires only the computational resources that would be needed to perform predictive modeling on each system separately. The PMCMPS methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution based on a priori known mean values and uncertainties characterizing the parameters and responses of both multi-physics models. This maximum-entropy approximate a priori distribution is combined, using Bayes’ theorem, with the “likelihood” provided by the multi-physics simulation models. The posterior distribution thus obtained is then evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values of the multi-physics model parameters and responses, along with correspondingly reduced uncertainties. Notably, the predictive modeling methodology for the coupled systems is constructed such that the systems can be considered sequentially rather than simultaneously, while preserving exactly the same results as if the systems were treated simultaneously. Consequently, very large coupled systems, which could perhaps exceed available computational resources if treated simultaneously, can be treated with the PMCMPS methodology sequentially and without any loss of generality or information, requiring just the resources that would be needed if the systems were treated sequentially.

  5. Generalized predictive control in the delta-domain

    DEFF Research Database (Denmark)

    Lauritsen, Morten Bach; Jensen, Morten Rostgaard; Poulsen, Niels Kjølstad

    1995-01-01

    This paper describes new approaches to generalized predictive control formulated in the delta (δ) domain. A new δ-domain version of the continuous-time emulator-based predictor is presented. It produces the optimal estimate in the deterministic case whenever the predictor order is chosen greater than or equal to the number of future predicted samples; however, a “good” estimate is usually obtained over a much longer range of samples. This is particularly advantageous at fast sampling rates, where a “conventional” predictor is bound to become very computationally demanding. Two controllers...
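The δ operator itself is simple: δ = (q − 1)/T, where q is the forward-shift operator and T the sampling period, so it tends to the time derivative as T → 0. A small sketch of this limiting behavior:

```python
# delta = (q - 1)/T: forward difference scaled by the sampling period T.
# As T -> 0 it approaches d/dt, which is why delta-domain models remain
# well conditioned at fast sampling rates, where shift-operator (q) models
# cluster their poles near z = 1.
def delta_op(x, T):
    """Apply the delta operator to a sampled signal."""
    return [(x[k + 1] - x[k]) / T for k in range(len(x) - 1)]

T = 1e-3
x = [(k * T) ** 2 for k in range(1001)]  # samples of x(t) = t^2
dx = delta_op(x, T)
print(dx[500])  # approximately dx/dt = 2t = 1.0 at t = 0.5
```

At T = 1 ms the delta-transformed signal already tracks the continuous derivative to within 0.1%, illustrating the continuous-time emulation the abstract refers to.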

  6. Modeling for prediction of restrained shrinkage effect in concrete repair

    International Nuclear Information System (INIS)

    Yuan Yingshu; Li Guo; Cai Yue

    2003-01-01

    A general model of autogenous shrinkage caused by chemical reaction (chemical shrinkage) is developed by means of Arrhenius' law and a degree of chemical reaction. Models of tensile creep and relaxation modulus are built on a viscoelastic, three-element model. Tests of free shrinkage and tensile creep were carried out to determine some coefficients in the models. Two-dimensional FEM analysis based on the models and other constitutions can predict the development of tensile strength and cracking. Three groups of patch-repaired beams were designed for analysis and testing. The prediction from the analysis shows agreement with the test results. The cracking mechanism after repair is discussed.

  7. Spatial downscaling of soil prediction models based on weighted generalized additive models in smallholder farm settings.

    Science.gov (United States)

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D

    2017-09-11

    Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communication of digital soil mapping information between diverse audiences becomes problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate coarse-spatial-resolution base maps of soil exchangeable potassium (Kex) and soil total nitrogen (TN) into fine-spatial-resolution downscaled maps using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine-spatial-resolution spectral indices in the downscaling process, the downscaled soil maps not only conserve the spatial information of the coarse-resolution soil maps but also depict the spatial details of soil properties at fine resolution. The results of this study demonstrate that the difference between the fine-resolution downscaled maps and the fine-resolution base maps is smaller than the difference between the coarse-resolution base maps and the fine-resolution base maps. An appropriate and economical strategy for promoting the DSM technique in smallholder farms is to develop relatively coarse-resolution soil prediction maps, or utilize available coarse-resolution soil maps, at the regional scale and to disaggregate them into fine-resolution downscaled soil maps at the farm scale.

  8. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20 years of measured global solar radiation data. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models, owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with three models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case-study location (lat. 30°51′N, long. 29°34′E), and the general formulae of the newly suggested models are then examined for ten different locations around Egypt. Moreover, local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. Commonly used statistical error measures are utilized to evaluate the performance of these models and identify the most accurate one. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulae of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimation of global solar radiation using this approach can be employed in the design and evaluation of performance for
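A common temperature-based form is the Hargreaves-Samani type, which ties global radiation to extraterrestrial radiation and the diurnal temperature range; the sketch below uses a generic textbook coefficient, not the formulae fitted in this study:

```python
import math

def hargreaves_radiation(ra, tmax, tmin, k_rs=0.17):
    """Hargreaves-Samani-type estimate H = k_rs * sqrt(Tmax - Tmin) * Ra.

    ra: extraterrestrial radiation (MJ m-2 day-1); k_rs: empirical
    coefficient (0.17 is a generic interior-site value, not a coefficient
    fitted in this study).
    """
    return k_rs * math.sqrt(tmax - tmin) * ra

# A wide diurnal temperature range implies clearer skies and more radiation.
clear = hargreaves_radiation(ra=35.0, tmax=32.0, tmin=18.0)
cloudy = hargreaves_radiation(ra=35.0, tmax=27.0, tmin=21.0)
print(clear, cloudy)
```

Because only daily maximum and minimum temperatures are needed, a form like this pairs naturally with the weather-temperature forecast techniques highlighted in the abstract.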

  9. Prediction of Coal Face Gas Concentration by Multi-Scale Selective Ensemble Hybrid Modeling

    Directory of Open Access Journals (Sweden)

    WU Xiang

    2014-06-01

    Full Text Available A selective ensemble hybrid modeling prediction method based on the wavelet transform is proposed to improve the fitting and generalization capability of existing prediction models of coal-face gas concentration, which exhibits strong stochastic volatility. The Mallat algorithm was employed for multi-scale decomposition and single-scale reconstruction of the gas-concentration time series. Each subsequence was then predicted by sparsely weighted multiple unstable ELM (extreme learning machine) predictors within the SERELM (sparse ensemble regressors of ELM) method. Finally, the predicted values of these models were superimposed to obtain the predicted values of the original sequence. The proposed method takes advantage of the multi-scale analysis of the wavelet transform, the accuracy and speed of ELM prediction, and the generalization ability of the L1-regularized selective ensemble learning method. The results show that forecast accuracy increases substantially with the proposed method: the average relative error is 0.65%, the maximum relative error is 4.16%, and the probability of a relative error below 1% reaches 0.785.
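The decompose-predict-superimpose pipeline can be illustrated with a single-level Haar transform in place of the full Mallat filter bank (an intentional simplification; the data are invented):

```python
# Single-level Haar analysis/synthesis as a stand-in for the Mallat
# decomposition (Mallat uses longer filter banks and several levels).
def haar_decompose(x):
    approx = [(x[2*i] + x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]  # superimpose the sub-band contributions
    return out

# Invented gas-concentration-like readings (even length required).
series = [2.0, 4.0, 3.0, 5.0, 7.0, 1.0, 6.0, 4.0]
a, d = haar_decompose(series)
print(a)                                 # smooth trend sub-band
assert haar_reconstruct(a, d) == series  # perfect reconstruction
```

In the paper's scheme, each sub-band would be forecast by its own sparsely weighted ELM ensemble before the synthesis step superimposes the sub-band predictions.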

  10. A general phenomenological model for work function

    Science.gov (United States)

    Brodie, I.; Chou, S. H.; Yuan, H.

    2014-07-01

    A general phenomenological model is presented for obtaining the zero Kelvin work function of any crystal facet of metals and semiconductors, both clean and covered with a monolayer of electropositive atoms. It utilizes the known physical structure of the crystal and the Fermi energy of the two-dimensional electron gas assumed to form on the surface. A key parameter is the number of electrons donated to the surface electron gas per surface lattice site or adsorbed atom, which is taken to be an integer. Initially this is found by trial and later justified by examining the state of the valence electrons of the relevant atoms. In the case of adsorbed monolayers of electropositive atoms a satisfactory justification could not always be found, particularly for cesium, but a trial value always predicted work functions close to the experimental values. The model can also predict the variation of work function with temperature for clean crystal facets. The model is applied to various crystal faces of tungsten, aluminium, silver, and select metal oxides, and most demonstrate good fits compared to available experimental values.

  11. Predictive power of task orientation, general self-efficacy and self-determined motivation on fun and boredom

    Directory of Open Access Journals (Sweden)

    Lorena Ruiz-González

    2015-12-01

    Full Text Available Abstract The aim of this study was to test the predictive power of dispositional orientations, general self-efficacy and self-determined motivation on fun and boredom in physical education classes, with a sample of 459 adolescents between 13 and 18 years old with a mean age of 15 years (SD = 0.88). The adolescents responded to four Likert scales: the Perceptions of Success Questionnaire, the General Self-Efficacy Scale, the Sport Motivation Scale and the Intrinsic Satisfaction Questionnaire in Sport. The structural regression model showed that task orientation and general self-efficacy positively predicted self-determined motivation, which in turn predicted more fun and less boredom in physical education classes. Consequently, promoting a task-oriented educational environment in which learners perceive their progress and feel more competent will help them take on tasks with intrinsic motivation and therefore have more fun. Pedagogical implications for less boredom and more fun in physical education classes are discussed.

  12. Verification and improvement of a predictive model for radionuclide migration

    International Nuclear Information System (INIS)

    Miller, C.W.; Benson, L.V.; Carnahan, C.L.

    1982-01-01

    Prediction of the rates of migration of contaminant chemical species in groundwater flowing through toxic waste repositories is essential to the assessment of a repository's capability of meeting standards for release rates. A large number of chemical transport models, of varying degrees of complexity, have been devised for the purpose of providing this predictive capability. In general, the transport of dissolved chemical species through a water-saturated porous medium is influenced by convection, diffusion/dispersion, sorption, formation of complexes in the aqueous phase, and chemical precipitation. The reliability of predictions made with models that omit certain of these processes is difficult to assess. A numerical model, CHEMTRN, has been developed to determine which chemical processes govern radionuclide migration. CHEMTRN builds on a model called MCCTM developed previously by Lichtner and Benson.
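The convection and diffusion/dispersion terms listed above, with linear sorption folded into a retardation factor, can be sketched with an explicit finite-difference scheme; this toy omits the complexation and precipitation chemistry that CHEMTRN couples in:

```python
# Explicit upwind scheme for 1-D transport with linear sorption folded into
# a retardation factor R:  R*dc/dt = D*d2c/dx2 - v*dc/dx.  Toy sketch only;
# CHEMTRN additionally couples aqueous complexation and precipitation.
def step(c, v, D, R, dx, dt):
    new = c[:]
    for i in range(1, len(c) - 1):
        adv = -v * (c[i] - c[i - 1]) / dx                   # upwind advection
        dif = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2  # dispersion
        new[i] = c[i] + dt * (adv + dif) / R
    return new

c = [1.0] + [0.0] * 49   # constant-concentration source at the inlet
for _ in range(200):
    c = step(c, v=1.0, D=0.05, R=2.0, dx=0.1, dt=0.005)
# With R = 2 the front travels at v/R: sorption retards migration.
print(round(c[5], 3), round(c[40], 6))
```

The chosen dt keeps the explicit scheme stable (the update is a convex combination of neighboring values), so concentrations stay bounded between the source and background values.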

  13. Stability analysis of embedded nonlinear predictor neural generalized predictive controller

    Directory of Open Access Journals (Sweden)

    Hesham F. Abdel Ghaffar

    2014-03-01

    Full Text Available Nonlinear Predictor-Neural Generalized Predictive Control (NGPC) is one of the most advanced control techniques used with severely nonlinear processes. In this paper, a hybrid solution combining NGPC and the Internal Model Principle (IMP) is implemented to stabilize nonlinear, non-minimum-phase, variable-dead-time processes under large disturbances over a wide range of operation. The superiority of NGPC over linear predictive controllers, such as GPC, is also demonstrated for severely nonlinear processes over a wide operating range. The conditions necessary to stabilize NGPC are derived using Lyapunov stability analysis for nonlinear processes. The NGPC stability conditions and the improvement in disturbance suppression are verified both in simulation, using Duffing's nonlinear equation, and in real time, using a continuous stirred-tank reactor. To our knowledge, this paper offers the first hardware-embedded neural GPC, which has been utilized to verify the NGPC-IMP improvement in real time.

  14. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research has been conducted by academics and practitioners on models for bankruptcy prediction and credit risk management. Despite numerous studies on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend toward machine learning models (support vector machines, bagging, boosting, and random forests) for predicting bankruptcy one year prior to the event. Comparing the performance of these unconventional approaches with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy on the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of older, well-known bankruptcy prediction models is quite high. We therefore aim to analyse these older models on a dataset of Slovak companies to validate their predictive ability under specific conditions. Furthermore, these models are modified according to new trends by calculating the influence of eliminating selected variables on their overall predictive ability.

  15. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  16. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model predictions based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous-variable predictions (e.g., prediction of long-term salivary function) and dichotomous-variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model-predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
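The bootstrap procedure described can be sketched with a toy linear outcome model; the dose-response data below are invented, not the salivary-function model or patient data:

```python
import random
from statistics import mean

random.seed(0)  # reproducible resampling

# Invented dose levels and fractional-function outcomes (toy data only).
doses = [10, 20, 30, 40, 50, 60]
outcomes = [0.95, 0.84, 0.72, 0.66, 0.50, 0.42]

def fit_line(xs, ys):
    """Least-squares slope and intercept."""
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    return b, my - b * mx

def bootstrap_predictions(dose, n_boot=500):
    """Refit on resampled data; return the plan-specific prediction histogram."""
    preds = []
    for _ in range(n_boot):
        idx = [random.randrange(len(doses)) for _ in doses]
        if len(set(idx)) < 2:  # degenerate resample: slope undefined
            continue
        b, a = fit_line([doses[i] for i in idx], [outcomes[i] for i in idx])
        preds.append(a + b * dose)
    return preds

preds = bootstrap_predictions(dose=35)
print(min(preds), mean(preds), max(preds))
```

The spread of `preds` is the plan-specific uncertainty band the paper advocates reporting alongside the central prediction; the residual-noise component would add a further random term to each value.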

  17. A general circulation model (GCM) parameterization of Pinatubo aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Lacis, A.A.; Carlson, B.E.; Mishchenko, M.I. [NASA Goddard Institute for Space Studies, New York, NY (United States)

    1996-04-01

    The June 1991 volcanic eruption of Mt. Pinatubo is the largest and best documented global climate forcing experiment in recorded history. The time development and geographical dispersion of the aerosol has been closely monitored and sampled. Based on preliminary estimates of the Pinatubo aerosol loading, general circulation model predictions of the impact on global climate have been made.

  18. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.

  19. Implementation of a PETN failure model using ARIA's general chemistry framework

    Energy Technology Data Exchange (ETDEWEB)

    Hobbs, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-01

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.

  20. Initialization and Predictability of a Coupled ENSO Forecast Model

    Science.gov (United States)

    Chen, Dake; Zebiak, Stephen E.; Cane, Mark A.; Busalacchi, Antonio J.

    1997-01-01

    The skill of a coupled ocean-atmosphere model in predicting ENSO has recently been improved using a new initialization procedure in which initial conditions are obtained from the coupled model, nudged toward observations of wind stress. The previous procedure involved direct insertion of wind stress observations, ignoring model feedback from ocean to atmosphere. The success of the new scheme is attributed to its explicit consideration of ocean-atmosphere coupling and the associated reduction of "initialization shock" and random noise. The so-called spring predictability barrier is eliminated, suggesting that such a barrier is not intrinsic to the real climate system. Initial attempts to generalize the nudging procedure to include SST were not successful; possible explanations are offered. In all experiments forecast skill is found to be much higher for the 1980s than for the 1970s and 1990s, suggesting decadal variations in predictability.

  1. Stable isotopes of fossil teeth corroborate key general circulation model predictions for the Last Glacial Maximum in North America

    Science.gov (United States)

    Kohn, Matthew J.; McKay, Moriah

    2010-11-01

    Oxygen isotope data provide a key test of general circulation models (GCMs) for the Last Glacial Maximum (LGM) in North America, which have otherwise proved difficult to validate. High δ18O pedogenic carbonates in central Wyoming have been interpreted to indicate increased summer precipitation sourced from the Gulf of Mexico. Here we show that tooth enamel δ18O of large mammals, which is strongly correlated with local water and precipitation δ18O, is lower during the LGM in Wyoming, not higher. Similar data from Texas, California, Florida and Arizona indicate higher δ18O values than in the Holocene, which is also predicted by GCMs. Tooth enamel data closely validate some recent models of atmospheric circulation and precipitation δ18O, including an increase in the proportion of winter precipitation for central North America, and summer precipitation in the southern US, but suggest aridity can bias pedogenic carbonate δ18O values significantly.

  2. A stepwise model to predict monthly streamflow

    Science.gov (United States)

    Mahmood Al-Juboori, Anas; Guven, Aytac

    2016-12-01

In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the number of months in a year. The flow of a month t is considered a function of the antecedent month's flow (t - 1) and is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as alternatives to the traditional nonlinear regression technique. The coefficient of determination and root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison results, based on five different statistical measures, show that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
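The core of the model, one multiplier K per calendar month for Q[t] = K * Q[t-1], can be sketched as follows. This replaces the paper's GEP/NGRGO optimization with a closed-form least-squares K; all names are illustrative.

```python
def fit_monthly_k(flows):
    """Estimate one multiplier per calendar month for the model
    Q[t] = K[month(t)] * Q[t-1], by least squares within each month.
    `flows` is a list of monthly flows starting in January."""
    num = [0.0] * 12
    den = [0.0] * 12
    for t in range(1, len(flows)):
        m = t % 12                      # calendar month of the predicted value
        num[m] += flows[t] * flows[t - 1]
        den[m] += flows[t - 1] ** 2
    return [n / d if d > 0 else 1.0 for n, d in zip(num, den)]

def predict_next(flows, k):
    """One-step-ahead prediction for the month following the record."""
    return k[len(flows) % 12] * flows[-1]
```

On a series that exactly follows Q[t] = 0.8 * Q[t-1], every fitted K recovers 0.8.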

  3. Recent and past musical activity predicts cognitive aging variability: direct comparison with general lifestyle activities.

    Science.gov (United States)

    Hanna-Pladdy, Brenda; Gajewski, Byron

    2012-01-01

Studies evaluating the impact of modifiable lifestyle factors on cognition offer potential insights into sources of cognitive aging variability. Recently, we reported an association between the extent of musical instrumental practice throughout the life span (greater than 10 years) and preserved cognitive functioning in advanced age. These findings raise the question of whether there are training-induced brain changes in musicians that can transfer to non-musical cognitive abilities to allow for compensation of age-related cognitive declines. However, because of the relationship between engagement in general lifestyle activities and preserved cognition, it remains unclear whether these findings are specifically driven by musical training or by the types of individuals likely to engage in greater activities in general. The current study controlled for general activity level in evaluating cognition between musicians and non-musicians. Also, the timing of engagement (age of acquisition, past versus recent) was assessed in predictive models of successful cognitive aging. Seventy age- and education-matched older musicians (>10 years) and non-musicians (ages 59-80) were evaluated on neuropsychological tests and general lifestyle activities. Musicians scored higher on tests of phonemic fluency, verbal working memory, verbal immediate recall, visuospatial judgment, and motor dexterity, but did not differ in other general leisure activities. Partition analyses were conducted on significant cognitive measures to determine aspects of musical training predictive of enhanced cognition. The first partition analysis revealed that education best predicted visuospatial functions in musicians, followed by recent musical engagement, which offset low education. In the second partition analysis, early age of musical acquisition (memory in musicians, while analyses for other measures were not predictive. Recent and past musical activity, but not general lifestyle activities, predicted variability

  4. Verification and Validation of a Three-Dimensional Generalized Composite Material Model

    Science.gov (United States)

    Hoffarth, Canio; Harrington, Joseph; Rajan, Subramaniam D.; Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Blankenhorn, Gunther

    2015-01-01

    A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to improve predictions of the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used on a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and qualitative validation of the material model using the T800-F3900 fiber/resin composite material

  5. Global vegetation change predicted by the modified Budyko model

    Energy Technology Data Exchange (ETDEWEB)

    Monserud, R.A.; Tchebakova, N.M.; Leemans, R. (US Department of Agriculture, Moscow, ID (United States). Intermountain Research Station, Forest Service)

    1993-09-01

A modified Budyko global vegetation model is used to predict changes in global vegetation patterns resulting from climate change (CO2 doubling). Vegetation patterns are predicted using a model based on a dryness index and potential evaporation determined by solving radiation balance equations. Climate change scenarios are derived from predictions from four General Circulation Models (GCMs) of the atmosphere (GFDL, GISS, OSU, and UKMO). All four GCM scenarios show similar trends in vegetation shifts and in areas that remain stable, although the UKMO scenario predicts greater warming than the others. Climate change maps produced by all four GCM scenarios show good agreement with the current climate vegetation map for the globe as a whole, although over half of the vegetation classes show only poor to fair agreement. The most stable areas are Desert and Ice/Polar Desert. Because most of the predicted warming is concentrated in the Boreal and Temperate zones, vegetation there is predicted to undergo the greatest change. Most vegetation classes in the Subtropics and Tropics are predicted to expand. Any shift in the Tropics favouring either Forest over Savanna, or vice versa, will be determined by the magnitude of the increased precipitation accompanying global warming. Although the model predicts equilibrium conditions to which many plant species cannot adjust (through migration or microevolution) in the 50-100 y needed for CO2 doubling, it is not clear if projected global warming will result in drastic or benign vegetation change. 72 refs., 3 figs., 3 tabs.

  6. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  7. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

Full Text Available Accurate long-term prediction of time series data (TSD) is a challenging research problem in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Linear traditional models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models like ANN maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing step and a partitioning and interpolation (PI) technique are incorporated in the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step-ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preserving the data trend.
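The MA-filter pre-processing step mentioned in the abstract amounts to splitting the series into a smooth trend and a residual that are then modeled separately. A minimal sketch (the window length and names are illustrative, not the paper's settings):

```python
def moving_average_split(series, window=5):
    """Split a series into a smooth moving-average trend and a residual,
    the pre-processing step before fitting separate models to each part.
    The first window-1 points, which lack a full window, are dropped."""
    trend = []
    for i in range(window - 1, len(series)):
        trend.append(sum(series[i - window + 1 : i + 1]) / window)
    aligned = series[window - 1 :]
    residual = [x - t for x, t in zip(aligned, trend)]
    return trend, residual
```

Adding the trend and residual back together reconstructs the aligned part of the original series, so nothing is lost in the decomposition.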

  8. Optimal model-free prediction from multivariate time series

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
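The preselection idea can be illustrated with a deliberately crude stand-in: screening candidate predictors by absolute correlation with the target before fitting, rather than the information-theoretic causal preselection the paper derives. Names and the ranking criterion are assumptions for illustration only.

```python
import math

def screen_predictors(X, y, k=2):
    """Rank candidate predictor columns by |correlation| with the target
    and keep the top k -- a crude stand-in for the causal preselection
    step that shrinks the predictor set before model fitting."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        sa = math.sqrt(sum((u - ma) ** 2 for u in a))
        sb = math.sqrt(sum((v - mb) ** 2 for v in b))
        return cov / (sa * sb) if sa and sb else 0.0
    scores = [(abs(corr(col, y)), j) for j, col in enumerate(X)]
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]
```

With the predictor set reduced this way, an exhaustive search over subsets (or a nearest-neighbor fit) becomes tractable, which is the point the abstract makes.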

  9. A predictive model for the behavior of radionuclides in lake systems

    International Nuclear Information System (INIS)

    Monte, L.

    1993-01-01

    This paper describes a predictive model for the behavior of 137Cs in lacustrine systems. The model was tested by comparing its predictions to contamination data collected in various lakes in Europe and North America. The migration of 137Cs from catchment basin and from bottom sediments to lake water was discussed in detail; these two factors influence the time behavior of contamination in lake water. The contributions to the levels of radionuclide concentrations in water, due to the above factors, generally increase in the long run. The uncertainty of the model, used as a generic tool for prediction of the levels of contamination in lake water, was evaluated. Data sets of water contamination analyzed in the present work suggest that the model uncertainty, at a 68% confidence level, is a factor 1.9

  10. A guide to developing resource selection functions from telemetry data using generalized estimating equations and generalized linear mixed models

    Directory of Open Access Journals (Sweden)

    Nicola Koper

    2012-03-01

Full Text Available Resource selection functions (RSF) are often developed using satellite (ARGOS) or Global Positioning System (GPS) telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMM) and generalized estimating equations (GEE) for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross-validation to assess the fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
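The empirical ("sandwich") standard errors the abstract insists on can be demonstrated with the simplest possible clustered estimator, a grand mean. This is not the caribou RSF analysis; it only shows why the cluster-robust variance differs from the naive one when observations within an individual are correlated.

```python
import math

def mean_with_ses(clusters):
    """Estimate an overall mean from clustered data and return both the
    naive (independence) standard error and the empirical cluster-robust
    ("sandwich") standard error that GEE-style analyses rely on."""
    obs = [x for c in clusters for x in c]
    n = len(obs)
    mean = sum(obs) / n
    s2 = sum((x - mean) ** 2 for x in obs) / (n - 1)
    naive_se = math.sqrt(s2 / n)
    # sandwich form: square the summed residual within each cluster
    robust_var = sum(sum(x - mean for x in c) ** 2 for c in clusters) / n ** 2
    return mean, naive_se, math.sqrt(robust_var)
```

With strongly cluster-structured data the robust standard error exceeds the naive one, which is exactly the adjustment GEEs make for correlated telemetry fixes.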

  11. Incorporating shape constraints in generalized additive modelling of the height-diameter relationship for Norway spruce

    Directory of Open Access Journals (Sweden)

    Natalya Pya

    2016-02-01

Full Text Available Background: Measurements of tree heights and diameters are essential in forest assessment and modelling. Tree heights are used for estimating timber volume, site index and other important variables related to forest growth and yield, succession and carbon budget models. However, the diameter at breast height (dbh) can be obtained more accurately, and at lower cost, than total tree height. Hence, generalized height-diameter (h-d) models that predict tree height from dbh, age and other covariates are needed. For a more flexible but biologically plausible estimation of covariate effects we use shape constrained generalized additive models as an extension of existing h-d model approaches. We use causal site parameters such as index of aridity to enhance the generality and causality of the models and to enable predictions under projected changeable climatic conditions. Methods: We develop unconstrained generalized additive models (GAM) and shape constrained generalized additive models (SCAM) for investigating the possible effects of tree-specific parameters such as tree age, relative diameter at breast height, and site-specific parameters such as index of aridity and sum of daily mean temperature during vegetation period, on the h-d relationship of forests in Lower Saxony, Germany. Results: Some of the derived effects, e.g. effects of age, index of aridity and sum of daily mean temperature, have significantly non-linear patterns. The need for using SCAM results from the fact that some of the model effects show partially implausible patterns, especially at the boundaries of data ranges. The derived model predicts monotonically increasing levels of tree height with increasing age and temperature sum and decreasing aridity and social rank of a tree within a stand. The definition of constraints leads only to a marginal or minor decline in model statistics like AIC. An observed structured spatial trend in tree height is modelled via 2-dimensional surface

  12. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    Science.gov (United States)

    Kanevski, Mikhail

    2015-04-01

The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research, Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases, training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
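The GRNN estimator itself is compact: it is the Nadaraya-Watson kernel-weighted average of training targets. A minimal isotropic-kernel sketch follows; the adaptive, anisotropic variant in the abstract tunes one kernel width per feature, and all names here are illustrative.

```python
import math

def grnn_predict(X_train, y_train, x, sigma=1.0):
    """Nadaraya-Watson / GRNN estimate: a Gaussian-kernel weighted average
    of the training targets. `sigma` is a single (isotropic) kernel width."""
    weights = []
    for xi in X_train:
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        weights.append(math.exp(-d2 / (2 * sigma ** 2)))
    total = sum(weights)
    if total == 0:
        return sum(y_train) / len(y_train)   # fall back to the global mean
    return sum(w * y for w, y in zip(weights, y_train)) / total
```

In practice sigma (or one sigma per feature, for the adaptive version) is chosen by leave-one-out cross-validation, as the study describes.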

  13. A Note on the Use of Mixture Models for Individual Prediction.

    Science.gov (United States)

    Cole, Veronica T; Bauer, Daniel J

    Mixture models capture heterogeneity in data by decomposing the population into latent subgroups, each of which is governed by its own subgroup-specific set of parameters. Despite the flexibility and widespread use of these models, most applications have focused solely on making inferences for whole or sub-populations, rather than individual cases. The current article presents a general framework for computing marginal and conditional predicted values for individuals using mixture model results. These predicted values can be used to characterize covariate effects, examine the fit of the model for specific individuals, or forecast future observations from previous ones. Two empirical examples are provided to demonstrate the usefulness of individual predicted values in applications of mixture models. The first example examines the relative timing of initiation of substance use using a multiple event process survival mixture model whereas the second example evaluates changes in depressive symptoms over adolescence using a growth mixture model.
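A conditional (individual-level) predicted value from a fitted mixture can be computed in two steps: posterior class probabilities for the case at hand, then a posterior-weighted average of class-specific outcome means. A univariate Gaussian sketch with made-up parameters, not tied to the article's empirical examples:

```python
import math

def normal_pdf(x, mu, sd):
    """Univariate Gaussian density."""
    return math.exp(-((x - mu) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def conditional_prediction(x_obs, weights, means, sds, outcome_means):
    """Posterior class probabilities for one case given its observed value,
    and the posterior-weighted prediction of a class-specific outcome --
    the conditional predicted value for that individual."""
    joint = [w * normal_pdf(x_obs, m, s) for w, m, s in zip(weights, means, sds)]
    total = sum(joint)
    posterior = [j / total for j in joint]
    predicted = sum(p * o for p, o in zip(posterior, outcome_means))
    return posterior, predicted
```

A case near one component's mean inherits (almost) that class's outcome; a case equidistant between components gets a blend, which is what makes the prediction individual-specific.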

  14. PVT characterization and viscosity modeling and prediction of crude oils

    DEFF Research Database (Denmark)

    Cisneros, Eduardo Salvador P.; Dalberg, Anders; Stenby, Erling Halfdan

    2004-01-01

In previous works, the general one-parameter friction theory (f-theory) models have been applied to the accurate viscosity modeling of reservoir fluids. As a base, the f-theory approach requires a compositional characterization procedure for the application of an equation of state (EOS), in most...... pressure, is also presented. The combination of the mass characterization scheme presented in this work and the f-theory can also deliver accurate viscosity modeling results. Additionally, depending on how extensive the compositional characterization is, the approach presented in this work may also...... deliver accurate viscosity predictions. The modeling approach presented in this work can deliver accurate viscosity and density modeling and prediction results over wide ranges of reservoir conditions, including the compositional changes induced by recovery processes such as gas injection....

  15. Generalized network modeling of capillary-dominated two-phase flow.

    Science.gov (United States)

    Raeini, Ali Q; Bijeljic, Branko; Blunt, Martin J

    2018-02-01

We present a generalized network model for simulating capillary-dominated two-phase flow through porous media at the pore scale. Three-dimensional images of the pore space are discretized using a generalized network-described in a companion paper [A. Q. Raeini, B. Bijeljic, and M. J. Blunt, Phys. Rev. E 96, 013312 (2017), 10.1103/PhysRevE.96.013312]-which comprises pores that are divided into smaller elements called half-throats and subsequently into corners. Half-throats define the connectivity of the network at the coarsest level, connecting each pore to half-throats of its neighboring pores from their narrower ends, while corners define the connectivity of pore crevices. The corners are discretized at different levels for accurate calculation of entry pressures, fluid volumes, and flow conductivities that are obtained using direct simulation of flow on the underlying image. This paper discusses the two-phase flow model that is used to compute the averaged flow properties of the generalized network, including relative permeability and capillary pressure. We validate the model using direct finite-volume two-phase flow simulations on synthetic geometries, and then present a comparison of the model predictions with a conventional pore-network model and experimental measurements of relative permeability in the literature.

  17. Unscented Kalman Filter-Trained Neural Networks for Slip Model Prediction

    Science.gov (United States)

    Li, Zhencai; Wang, Yang; Liu, Zhen

    2016-01-01

    The purpose of this work is to investigate the accurate trajectory tracking control of a wheeled mobile robot (WMR) based on the slip model prediction. Generally, a nonholonomic WMR may increase the slippage risk, when traveling on outdoor unstructured terrain (such as longitudinal and lateral slippage of wheels). In order to control a WMR stably and accurately under the effect of slippage, an unscented Kalman filter and neural networks (NNs) are applied to estimate the slip model in real time. This method exploits the model approximating capabilities of nonlinear state–space NN, and the unscented Kalman filter is used to train NN’s weights online. The slip parameters can be estimated and used to predict the time series of deviation velocity, which can be used to compensate control inputs of a WMR. The results of numerical simulation show that the desired trajectory tracking control can be performed by predicting the nonlinear slip model. PMID:27467703

  18. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  19. Resource-estimation models and predicted discovery

    International Nuclear Information System (INIS)

    Hill, G.W.

    1982-01-01

Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty excludes 'estimates' based solely on expert opinion. This is illustrated by the development of error measures for several persuasive models of discovery and production of oil and gas in the USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)

  20. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...
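The effect the abstract describes, that ignoring per-tree model error understates the variance of a large-area total, can be sketched with Monte Carlo propagation. The allometric coefficients and error level below are illustrative, not from the study.

```python
import random

def total_volume_with_uncertainty(diameters, a=0.1, b=2.4, rse=0.2,
                                  n_sim=2000, seed=7):
    """Plug-in total stand volume from a per-tree model v = a * d**b, plus
    a Monte Carlo standard deviation that propagates a multiplicative
    per-tree model error with relative sd `rse` into the total."""
    rng = random.Random(seed)
    point = [a * d ** b for d in diameters]       # per-tree predictions
    totals = []
    for _ in range(n_sim):
        totals.append(sum(v * (1 + rng.gauss(0, rse)) for v in point))
    mean_t = sum(totals) / n_sim
    sd_t = (sum((t - mean_t) ** 2 for t in totals) / (n_sim - 1)) ** 0.5
    return sum(point), sd_t
```

Reporting only the plug-in total (the usual practice the abstract criticizes) discards the standard deviation entirely, so the precision of the large-area estimate is overstated.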

  1. Improved Predictions of the Geographic Distribution of Invasive Plants Using Climatic Niche Models

    Science.gov (United States)

    Ramírez-Albores, Jorge E.; Bustamante, Ramiro O.

    2016-01-01

Climatic niche models for invasive plants are usually constructed with occurrence records taken from literature and collections. Because these data discriminate neither among life-cycle stages of plants (adult or juvenile) nor among the origins of individuals (naturally established or man-planted), the resulting models may mispredict the distribution ranges of these species. We propose that more accurate predictions could be obtained by modelling climatic niches with data of naturally established individuals, particularly with occurrence records of juvenile plants, because this would restrict the predictions of models to those sites where climatic conditions allow the recruitment of the species. To test this proposal, we focused on the Peruvian peppertree (Schinus molle), a South American species that has largely invaded Mexico. Three climatic niche models were constructed for this species using a high-resolution dataset gathered in the field. The first model included all occurrence records, irrespective of the life-cycle stage or origin of peppertrees (generalized niche model). The second model only included occurrence records of naturally established mature individuals (adult niche model), while the third model was constructed with occurrence records of naturally established juvenile plants (regeneration niche model). When models were compared, the generalized climatic niche model predicted the presence of peppertrees in sites located farther beyond the climatic thresholds that naturally established individuals can tolerate, suggesting that human activities influence the distribution of this invasive species. The adult and regeneration climatic niche models concurred in their predictions about the distribution of peppertrees, suggesting that naturally established adult trees only occur in sites where climatic conditions allow the recruitment of juvenile stages. These results support the proposal that climatic niches of invasive plants should be modelled with data of

  2. A generalized conditional heteroscedastic model for temperature downscaling

    Science.gov (United States)

    Modarres, R.; Ouarda, T. B. M. J.

    2014-11-01

This study describes a method for deriving the time-varying second-order moment, or heteroscedasticity, of local daily temperature and its association with large-scale predictors from the Canadian Coupled General Circulation Model. This is carried out by applying a multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) approach to construct the conditional variance-covariance structure between General Circulation Model (GCM) predictors and maximum and minimum temperature time series during 1980-2000. Two MGARCH specifications, namely diagonal VECH and dynamic conditional correlation (DCC), were applied, and 25 GCM predictors were selected for a bivariate temperature heteroscedastic modeling. It is observed that the conditional covariance between predictors and temperature is not very strong and mostly depends on the interaction between the random processes governing the temporal variation of predictors and predictands. The DCC model reveals a time-varying conditional correlation between GCM predictors and temperature time series. No remarkable increasing or decreasing trend is observed in the correlation coefficients between GCM predictors and observed temperature during 1980-2000, while a weak winter-summer seasonality is clear for both the conditional covariance and correlation. Furthermore, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) stationarity test and the Brock-Dechert-Scheinkman (BDS) nonlinearity test showed that the GCM predictors, temperature, and their conditional correlation time series are nonlinear but stationary during 1980-2000. However, the degree of nonlinearity of the temperature time series is higher than that of most of the GCM predictors.
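The conditional-variance machinery behind MGARCH can be illustrated with its univariate building block. The sketch below (hypothetical parameter values, numpy only; the paper's diagonal VECH and DCC specifications are multivariate generalizations of this recursion) filters a series whose volatility changes mid-sample:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a univariate GARCH(1,1):
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Series with a volatile middle segment (hypothetical anomalies)
rng = np.random.default_rng(0)
r = np.concatenate([rng.normal(0, 0.5, 200),
                    rng.normal(0, 2.0, 200),
                    rng.normal(0, 0.5, 200)])
sigma2 = garch11_variance(r, omega=0.05, alpha=0.10, beta=0.85)
```

The fitted conditional variance rises in the volatile middle segment and relaxes afterwards; a DCC model adds a second recursion of the same form for the correlation of standardized residuals.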

  3. On distributed model predictive control for vehicle platooning with a recursive feasibility guarantee

    NARCIS (Netherlands)

    Shi, Shengling; Lazar, Mircea

    2017-01-01

    This paper proposes a distributed model predictive control algorithm for vehicle platooning and more generally networked systems in a chain structure. The distributed models of the vehicle platoon are coupled through the input of the preceding vehicles. Using the principles of robust model

  4. Generalized Linear Models in Vehicle Insurance

    Directory of Open Access Journals (Sweden)

    Silvie Kafková

    2014-01-01

Full Text Available Actuaries in insurance companies try to find the best model for the estimation of insurance premiums, which depend on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of a portfolio of vehicle insurance data using a generalized linear model (GLM) is performed. The main advantage of the approach presented in this article is that GLMs are not limited by inflexible preconditions. Our aim is to model the dependence of annual claim frequency on given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposes a classification analysis approach that addresses the selection of predictor variables. Models with different predictor variables are compared by analysis of deviance and the Akaike information criterion (AIC). Based on this comparison, the model giving the best estimate of annual claim frequency is chosen. All statistical calculations are computed in the R environment, which contains the stats package with functions for the estimation of the parameters of a GLM and for the analysis of deviance.
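A claim-frequency GLM of the kind described here is typically a Poisson regression with a log link. Below is a minimal sketch fitting such a model by iteratively reweighted least squares (IRLS), the standard GLM fitting algorithm, on simulated data; the single "age" covariate and all coefficient values are hypothetical, not from the paper's portfolio:

```python
import numpy as np

def fit_poisson_glm(x, y, n_iter=25):
    """Poisson GLM with log link, fitted by iteratively reweighted
    least squares (IRLS)."""
    X = np.column_stack([np.ones(len(x)), x])  # intercept + covariate
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)         # fitted means under the log link
        z = X @ beta + (y - mu) / mu  # working response
        W = mu                        # Poisson working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Toy portfolio: claim counts driven by one (hypothetical) risk factor
rng = np.random.default_rng(1)
age = rng.uniform(-1.0, 1.0, 5000)         # centered driver age
y = rng.poisson(np.exp(-1.0 + 0.6 * age))  # true rate exp(-1 + 0.6*age)
beta_hat = fit_poisson_glm(age, y)         # recovers roughly (-1.0, 0.6)
```

The analysis-of-deviance comparison described in the abstract then amounts to comparing the deviances of nested models of this form.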

  5. Comparing stream-specific to generalized temperature models to guide salmonid management in a changing climate

    Science.gov (United States)

    Andrew K. Carlson,; William W. Taylor,; Hartikainen, Kelsey M.; Dana M. Infante,; Beard, Douglas; Lynch, Abigail

    2017-01-01

Global climate change is predicted to increase air and stream temperatures and alter thermal habitat suitability for growth and survival of coldwater fishes, including brook charr (Salvelinus fontinalis), brown trout (Salmo trutta), and rainbow trout (Oncorhynchus mykiss). In a changing climate, accurate stream temperature modeling is increasingly important for sustainable salmonid management throughout the world. However, finite resource availability (e.g. funding, personnel) drives a tradeoff between thermal model accuracy and efficiency (i.e. cost-effective applicability at management-relevant spatial extents). Using different projected climate change scenarios, we compared the accuracy and efficiency of stream-specific and generalized (i.e. region-specific) temperature models for coldwater salmonids within and outside the State of Michigan, USA, a region with long-term stream temperature data and productive coldwater fisheries. Projected stream temperature warming between 2016 and 2056 ranged from 0.1–3.8 °C in groundwater-dominated streams and 0.2–6.8 °C in surface-runoff-dominated systems in the State of Michigan. Despite their generally lower accuracy in predicting exact stream temperatures, generalized models accurately projected salmonid thermal habitat suitability in 82% of groundwater-dominated streams, including those with brook charr (80% accuracy), brown trout (89% accuracy), and rainbow trout (75% accuracy). In contrast, generalized models predicted thermal habitat suitability in runoff-dominated streams with much lower accuracy (54%). These results suggest that, amidst climate change and constraints in resource availability, generalized models are appropriate to forecast thermal conditions in groundwater-dominated streams within and outside Michigan and inform regional-level salmonid management strategies that are practical for coldwater fisheries managers, policy makers, and the public. We recommend fisheries professionals reserve resource

  6. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave Desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
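A stochastic simulator that reproduces both a target speed distribution and hour-to-hour correlation can be sketched as a first-order autoregressive (AR(1)) process. This is an illustrative stand-in, not the report's model; Gaussian marginals are a simplification (measured wind speeds are usually closer to Weibull), and all parameter values are hypothetical:

```python
import numpy as np

def simulate_ar1_wind(n_hours, mean, std, rho, seed=0):
    """Hourly wind-speed samples from an AR(1) process:
        s[t] - mean = rho * (s[t-1] - mean) + e[t].
    The innovation variance std**2 * (1 - rho**2) preserves the
    stationary standard deviation."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, std * np.sqrt(1.0 - rho ** 2), n_hours)
    s = np.empty(n_hours)
    s[0] = mean + rng.normal(0.0, std)
    for t in range(1, n_hours):
        s[t] = mean + rho * (s[t - 1] - mean) + e[t]
    return np.clip(s, 0.0, None)  # physical speeds cannot be negative

# 50,000 hourly samples with mean 6 m/s, sd 2 m/s, lag-1 correlation 0.8
speeds = simulate_ar1_wind(n_hours=50000, mean=6.0, std=2.0, rho=0.8)
```

Setting rho = 0 recovers the interim model of uncorrelated hourly samples described in the abstract.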

  7. Outcome Prediction in Mathematical Models of Immune Response to Infection.

    Directory of Open Access Journals (Sweden)

    Manuel Mai

Full Text Available Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and to obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e., the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
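The "virtual patient" setup can be sketched with a minimal pathogen-load ODE (hypothetical, not one of the paper's models): dP/dt = rP(1 - P) - kP, whose steady-state outcome is clearance when k >= r and persistence at P* = 1 - k/r otherwise. Drawing parameters with a coefficient of variation v and integrating gives a cohort whose simulated outcomes can be checked against the analytic threshold:

```python
import numpy as np

# Virtual patients: growth rate r and clearance rate k drawn around
# physiological values with coefficient of variation v.
rng = np.random.default_rng(2)
v = 0.3
n_patients = 200
r = rng.normal(1.0, v * 1.0, n_patients)
k = rng.normal(1.0, v * 1.0, n_patients)

# Vectorized Euler integration of dP/dt = r*P*(1-P) - k*P to t = 50
dt, steps = 0.01, 5000
p = np.full(n_patients, 0.1)  # initial pathogen load
for _ in range(steps):
    p += dt * (r * p * (1.0 - p) - k * p)

predicted_persist = r > k      # analytic steady-state outcome
simulated_persist = p > 0.01   # outcome read off the simulation
agreement = np.mean(predicted_persist == simulated_persist)
```

Patients with r close to k converge slowly and sit near the basin boundary, so agreement is high but not perfect at finite time; that overlap is exactly what limits early outcome prediction as v grows.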

  8. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple (one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions) to the complex (multidimensional models that are constrained by several types of data and result in more accurate predictions). While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  9. Predicting first-grade mathematics achievement: The contributions of domain-general cognitive abilities, nonverbal number sense, and early number competence.

    Directory of Open Access Journals (Sweden)

    Caroline eHornung

    2014-04-01

Full Text Available Early number competence, grounded in number-specific and domain-general cognitive abilities, is theorized to lay the foundation for later math achievement. Few longitudinal studies have tested a comprehensive model for early math development. Using structural equation modeling and mediation analyses, the present work examined the influence of kindergarteners' nonverbal number sense, domain-general abilities (i.e., working memory, fluid intelligence, and receptive vocabulary), and early number competence (i.e., symbolic number skills) on first grade math achievement (arithmetic, shape and space skills, and number line estimation) assessed one year later. Latent regression models revealed that nonverbal number sense and working memory are central building blocks for developing early number competence in kindergarten and that early number competence is key for first grade math achievement. After controlling for early number competence, fluid intelligence significantly predicted arithmetic and number line estimation, while receptive vocabulary significantly predicted shape and space skills. In sum, we suggest that early math achievement draws on different constellations of number-specific and domain-general mechanisms.

  10. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  11. A state-based probabilistic model for tumor respiratory motion prediction

    International Nuclear Information System (INIS)

    Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu Huanmei

    2010-01-01

    general HMM-type predictive models. RMS errors for the time average model approach the theoretical limit of the HMM, and predicted state sequences are well correlated with sequences known to fit the data.

  12. MOTORCYCLE CRASH PREDICTION MODEL FOR NON-SIGNALIZED INTERSECTIONS

    Directory of Open Access Journals (Sweden)

    S. HARNEN

    2003-01-01

Full Text Available This paper attempts to develop a prediction model for motorcycle crashes at non-signalized intersections on urban roads in Malaysia. The Generalized Linear Modeling approach was used to develop the model. The final model revealed that an increase in motorcycle and non-motorcycle flows entering an intersection is associated with an increase in motorcycle crashes. Non-motorcycle flow on the major road had the greatest effect on the probability of motorcycle crashes. Approach speed, lane width, number of lanes, shoulder width, and land use were also found to be significant in explaining motorcycle crashes. The model should assist traffic engineers in deciding the need for appropriate intersection treatments specifically designed for non-exclusive motorcycle lane facilities.

  13. Relative sensitivity analysis of the predictive properties of sloppy models.

    Science.gov (United States)

    Myasnikova, Ekaterina; Spirov, Alexander

    2018-01-25

Among the model parameters characterizing complex biological systems there are commonly some that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. The sloppiness can be mathematically expressed through saturating response functions (Hill's, sigmoid), thereby embodying biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used for the prediction of the system behavior at an altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to the ambiguity in the parameter estimates. We introduce a method for evaluating predictive power under parameter estimation uncertainty, Relative Sensitivity Analysis. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select, among the existing solutions, those providing the correct system behavior at any reasonable input. In general, the method makes it possible to uncover the sources of incorrect predictions and suggests a way to overcome the estimation uncertainties.
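The link between saturating response functions and sloppiness can be made concrete: the relative sensitivity of a Hill function to its half-saturation constant K collapses once the input is far into saturation, so K becomes a sloppy parameter there. A small numerical sketch with illustrative values (the analytic value is d ln f / d ln K = -n K^n / (K^n + x^n)):

```python
import numpy as np

def hill(x, vmax=1.0, K=1.0, n=4):
    """Hill (sigmoid) response function, as used for gene regulation."""
    return vmax * x ** n / (K ** n + x ** n)

def rel_sens_K(x, K=1.0, n=4, eps=1e-6):
    """Relative sensitivity d(ln f)/d(ln K), estimated by a small
    finite perturbation of K."""
    f0 = hill(x, K=K, n=n)
    f1 = hill(x, K=K * (1.0 + eps), n=n)
    return (np.log(f1) - np.log(f0)) / np.log(1.0 + eps)

low = rel_sens_K(0.5)    # below half-saturation: K strongly shapes the output
high = rel_sens_K(50.0)  # deep in saturation: K is effectively sloppy
```

Here |low| is of order the Hill coefficient while |high| is vanishingly small: the same parameter is informative or sloppy depending on the operating regime, which is why prediction at an altered input can fail.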

  14. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    Science.gov (United States)

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each such quantity, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
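The latent-to-observed-scale transformation is easy to illustrate for a Poisson trait with a log link: if latent values are normal with mean mu and variance s2, the observed-scale population mean is the lognormal mean exp(mu + s2/2). A sketch checking this against simulation (illustrative values; a hand-rolled check, not the QGglmm implementation):

```python
import numpy as np

# Latent scale: normally distributed latent values (e.g. breeding
# values plus residuals) with mean mu and variance s2.
mu, s2 = 1.0, 0.25

# Observed scale for a Poisson trait with log link: integrating
# exp(latent) over the normal latent distribution gives the
# lognormal mean exp(mu + s2/2).
analytic_mean = np.exp(mu + s2 / 2.0)

# Monte Carlo check: simulate latent values, then Poisson observations
rng = np.random.default_rng(5)
latent = rng.normal(mu, np.sqrt(s2), 1_000_000)
observed = rng.poisson(np.exp(latent))
mc_mean = observed.mean()
```

This is the simplest instance of the abstract's point: latent-scale parameters fully determine observed-scale quantities, but recovering them requires integrating the inverse link over the latent distribution (here the integral happens to be closed-form).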

  15. Thermospheric tides simulated by the national center for atmospheric research thermosphere-ionosphere general circulation model at equinox

    International Nuclear Information System (INIS)

    Fesen, C.G.; Roble, R.G.; Ridley, E.C.

    1993-01-01

The authors use the National Center for Atmospheric Research (NCAR) thermosphere/ionosphere general circulation model (TIGCM) to model tides and dynamics in the thermosphere. This model incorporates the latest advances in the thermosphere general circulation model. Model results emphasize the 70 degree W longitude region, which overlaps a series of incoherent-scatter radar installations. Both the data and the model are available in databases. The results of this theoretical modeling are compared with available data and with the predictions of more empirical models. In general, the comparisons show broad agreement.

  16. A general model for metabolic scaling in self-similar asymmetric networks.

    Directory of Open Access Journals (Sweden)

    Alexander Byers Brummer

    2017-03-01

Full Text Available How a particular attribute of an organism changes or scales with its body size is known as an allometry. Biological allometries, such as metabolic scaling, have been hypothesized to result from selection to maximize how vascular networks fill space yet minimize internal transport distances and resistances. The West, Brown, Enquist (WBE) model argues that these two principles (space-filling and energy minimization) (i) are general principles underlying the evolution of the diversity of biological networks across plants and animals and (ii) can be used to predict how the resulting geometry of biological networks governs their allometric scaling. Perhaps the most central biological allometry is how metabolic rate scales with body size. A core assumption of the WBE model is that networks are symmetric with respect to their geometric properties. That is, any two given branches within the same generation in the network are assumed to have identical lengths and radii. However, biological networks are rarely if ever symmetric. An open question is: does incorporating asymmetric branching change or influence the predictions of the WBE model? We derive a general network model that relaxes the symmetric assumption and define two classes of asymmetrically bifurcating networks. We show that asymmetric branching can be incorporated into the WBE model. This asymmetric version of the WBE model results in several theoretical predictions for the structure, physiology, and metabolism of organisms, specifically in the case of the cardiovascular system. We show how network asymmetry can now be incorporated in the many allometric scaling relationships via total network volume. Most importantly, we show that the 3/4 metabolic scaling exponent from Kleiber's law can still be attained within many asymmetric networks.
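For the symmetric case the abstract builds on, the two principles fix the network scale factors and hence the exponent: area-preserving branching gives a radius ratio beta = n^(-1/2), space-filling gives a length ratio gamma = n^(-1/3), and the metabolic exponent is b = -ln(n) / ln(beta^2 * gamma) = 3/4 for any branching ratio n. A sketch of this standard calculation (one common statement of the symmetric WBE result, not the paper's asymmetric derivation):

```python
import numpy as np

def wbe_exponent(n, beta=None, gamma=None):
    """Metabolic scaling exponent of a symmetric, self-similar
    branching network with branching ratio n, radius scale factor
    beta, and length scale factor gamma:
        b = -ln(n) / ln(beta**2 * gamma).
    Defaults encode the two WBE principles: area-preserving
    branching (beta = n**-1/2) and space-filling (gamma = n**-1/3)."""
    beta = n ** -0.5 if beta is None else beta
    gamma = n ** (-1.0 / 3.0) if gamma is None else gamma
    return -np.log(n) / np.log(beta ** 2 * gamma)

# Kleiber's 3/4 emerges for any branching ratio under the two principles,
# since beta**2 * gamma = n**(-4/3) regardless of n.
print([round(wbe_exponent(n), 6) for n in (2, 3, 4)])  # → [0.75, 0.75, 0.75]
```

Passing non-default beta or gamma shows how relaxing either principle moves the exponent away from 3/4, which is the lever the paper's asymmetric networks generalize via total network volume.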

  17. Model Predictive Control for Connected Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Kaijiang Yu

    2015-01-01

Full Text Available This paper presents a new model predictive control system for connected hybrid electric vehicles to improve fuel economy. The new features of this study are as follows. First, the battery charge and discharge profile and the driving velocity profile are simultaneously optimized: one problem is the energy management of the HEV through the battery power Pbatt; the other is the energy-consumption minimization problem of adaptive cruise control between two vehicles. Second, a system for connected hybrid electric vehicles has been developed considering varying drag coefficients and road gradients. Third, the fuel model of a typical hybrid electric vehicle is developed using maps of the engine efficiency characteristics. Fourth, simulations and analysis (under different parameters, i.e., road conditions, vehicle state of charge, etc.) are conducted to verify the effectiveness of the method in achieving higher fuel efficiency. The model predictive control problem is solved using a numerical method, the continuation/generalized minimum residual (C/GMRES) method. Computer simulation results reveal improvements in fuel economy using the proposed control method.

  18. The Prediction of Drought-Related Tree Mortality in Vegetation Models

    Science.gov (United States)

    Schwinning, S.; Jensen, J.; Lomas, M. R.; Schwartz, B.; Woodward, F. I.

    2013-12-01

Drought-related tree die-off events at regional scales have been reported from all wooded continents and it has been suggested that their frequency may be increasing. The prediction of these drought-related die-off events from regional to global scales has been recognized as a critical need for the conservation of forest resources and improving the prediction of climate-vegetation interactions. However, there is no conceptual consensus on how to best approach the quantitative prediction of tree mortality. Current models use a variety of mechanisms to represent demographic events. Mortality is modeled to represent a number of different processes, including death by fire, wind throw, extreme temperatures, and self-thinning, and each vegetation model differs in the emphasis it places on specific mechanisms. Dynamic global vegetation models generally operate on the assumption of incremental vegetation shift due to changes in the carbon economy of plant functional types and proportional effects on recruitment, growth, competition, and mortality, but this may not capture sudden and sweeping tree death caused by extreme weather conditions. We tested several different approaches to predicting tree mortality within the framework of the Sheffield Dynamic Global Vegetation Model. We applied the model to the state of Texas, USA, which in 2011 experienced extreme drought conditions, causing the death of an estimated 300 million trees statewide. We then compared predicted to actual mortality to determine which algorithms most accurately predicted geographical variation in tree mortality. We discuss implications regarding the ongoing debate on the causes of tree death.

  19. Morphometry Predicts Early GFR Change in Primary Proteinuric Glomerulopathies: A Longitudinal Cohort Study Using Generalized Estimating Equations.

    Directory of Open Access Journals (Sweden)

    Kevin V Lemley

Full Text Available Most predictive models of kidney disease progression have not incorporated structural data; where structural variables have been used, they have generally been only semi-quantitative. We examined the predictive utility of quantitative structural parameters measured on the digital images of baseline kidney biopsies from the NEPTUNE study of primary proteinuric glomerulopathies. These variables were included in longitudinal statistical models predicting the change in estimated glomerular filtration rate (eGFR) over up to 55 months of follow-up. The participants were fifty-six pediatric and adult subjects from the NEPTUNE longitudinal cohort study who had measurements made on their digital biopsy images; 25% were African-American, 70% were male, and 39% were children; 25 had focal segmental glomerular sclerosis, 19 had minimal change disease, and 12 had membranous nephropathy. We considered four different sets of candidate predictors, each including four quantitative structural variables (for example, mean glomerular tuft area, cortical density of patent glomeruli, and two of the principal components from the correlation matrix of six fractional cortical areas: interstitium, atrophic tubule, intact tubule, blood vessel, sclerotic glomerulus, and patent glomerulus) along with 13 potentially confounding demographic and clinical variables (such as race, age, diagnosis, baseline eGFR, quantitative proteinuria, and BMI). We used longitudinal linear models based on these 17 variables to predict the change in eGFR over up to 55 months. All four models had a leave-one-out cross-validated R2 of about 62%. Several combinations of quantitative structural variables were significantly and strongly associated with changes in eGFR; the structural variables were generally stronger predictors than any of the confounding variables other than baseline eGFR. Our findings suggest that quantitative assessment of diagnostic renal biopsies may play a role in estimating the baseline

  20. Predicting adsorptive removal of chlorophenol from aqueous solution using artificial intelligence based modeling approaches.

    Science.gov (United States)

    Singh, Kunwar P; Gupta, Shikha; Ojha, Priyanka; Rai, Premanjali

    2013-04-01

The research aims to develop an artificial intelligence (AI)-based model to predict the adsorptive removal of 2-chlorophenol (CP) in aqueous solution by coconut shell carbon (CSC) using four operational variables (pH of solution, adsorbate concentration, temperature, and contact time), and to investigate their effects on the adsorption process. Accordingly, based on a factorial design, 640 batch experiments were conducted. Nonlinearities in the experimental data were checked using Brock-Dechert-Scheinkman (BDS) statistics. Five nonlinear models were constructed to predict the adsorptive removal of CP in aqueous solution by CSC using the four variables as input. Performances of the constructed models were evaluated and compared using statistical criteria. The BDS statistics revealed strong nonlinearity in the experimental data. The performance of all the models constructed here was satisfactory. Radial basis function network (RBFN) and multilayer perceptron network (MLPN) models performed better than the generalized regression neural network, support vector machine, and gene expression programming models. Sensitivity analysis revealed that contact time had the highest effect on adsorption, followed by solution pH, temperature, and CP concentration. The study concluded that all the models constructed here were capable of capturing the nonlinearity in the data. The better generalization and predictive performance of the RBFN and MLPN models suggests that these can be used to predict the adsorption of CP in aqueous solution using CSC.
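The best-performing RBFN here is, at its core, a layer of Gaussian units with linearly fitted output weights. A minimal sketch (a toy two-input nonlinear response standing in for the pH/contact-time inputs; centers, widths, and data are all hypothetical):

```python
import numpy as np

def rbf_fit_predict(X_train, y_train, X_test, centers, width=1.0, ridge=1e-8):
    """Radial basis function network for nonlinear regression:
    Gaussian hidden units at fixed centers, linear output weights
    fitted by ridge-regularized least squares."""
    def features(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))
    Phi = features(X_train)
    A = Phi.T @ Phi + ridge * np.eye(len(centers))
    w = np.linalg.solve(A, Phi.T @ y_train)
    return features(X_test) @ w

# Toy adsorption-style response, nonlinear in two scaled inputs
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (300, 2))
y = np.sin(3 * X[:, 0]) * np.exp(-X[:, 1]) + rng.normal(0, 0.01, 300)
centers = rng.uniform(0, 1, (40, 2))   # hidden-unit centers
y_hat = rbf_fit_predict(X, y, X, centers, width=0.3)
```

In practice center placement (e.g. by clustering) and width selection are the tuning knobs; the linear output fit itself is closed-form, which is one reason RBFNs train quickly compared with MLPNs.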

  1. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  2. Analytical prediction model for non-symmetric fatigue crack growth in Fibre Metal Laminates

    NARCIS (Netherlands)

    Wang, W.; Rans, C.D.; Benedictus, R.

    2017-01-01

    This paper proposes an analytical model for predicting the non-symmetric crack growth and accompanying delamination growth in FMLs. The general approach of this model applies Linear Elastic Fracture Mechanics, the principle of superposition, and displacement compatibility based on the

  3. A new risk prediction model for critical care: the Intensive Care National Audit & Research Centre (ICNARC) model.

    Science.gov (United States)

    Harrison, David A; Parry, Gareth J; Carpenter, James R; Short, Alasdair; Rowan, Kathy

    2007-04-01

To develop a new model to improve risk prediction for admissions to adult critical care units in the UK. Prospective cohort study. The setting was 163 adult, general critical care units in England, Wales, and Northern Ireland, December 1995 to August 2003. Patients were 216,626 critical care admissions. There were no interventions. The performance of different approaches to modeling physiologic measurements was evaluated, and the best methods were selected to produce a new physiology score. This physiology score was combined with other information relating to the critical care admission (age, diagnostic category, source of admission, and cardiopulmonary resuscitation before admission) to develop a risk prediction model. Modeling interactions between diagnostic category and physiology score enabled the inclusion of groups of admissions that are frequently excluded from risk prediction models. The new model showed good discrimination (mean c index 0.870) and fit (mean Shapiro's R 0.665, mean Brier's score 0.132) in 200 repeated validation samples and performed well when compared with recalibrated versions of existing published risk prediction models in the cohort of patients eligible for all models. The hypothesis of perfect fit was rejected for all models, including the Intensive Care National Audit & Research Centre (ICNARC) model, as is to be expected in such a large cohort. The ICNARC model demonstrated better discrimination and overall fit than existing risk prediction models, even following recalibration of those models. We recommend it be used to replace previously published models for risk adjustment in the UK.
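The c index used here to assess discrimination is the probability that a randomly chosen patient with the outcome received a higher predicted risk than a randomly chosen patient without it (ties count half). A sketch with toy risks and outcomes (values invented for illustration, not from the ICNARC cohort):

```python
import numpy as np

def c_index(risk, outcome):
    """Concordance (c) index: P(risk of a random case > risk of a
    random non-case), with ties counted as 1/2."""
    pos = risk[outcome == 1]   # predicted risks of patients with the outcome
    neg = risk[outcome == 0]   # predicted risks of patients without it
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy check: mostly well-ordered risks give a high c index
risk = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.6, 0.1])
outcome = np.array([1, 1, 0, 1, 0, 0, 1, 0])
print(round(c_index(risk, outcome), 3))  # → 0.875 (14 of 16 pairs concordant)
```

A value of 0.5 is chance-level ordering and 1.0 is perfect separation, which puts the reported mean c index of 0.870 in context.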

  4. Model predictive control of the solid oxide fuel cell stack temperature with models based on experimental data

    Science.gov (United States)

    Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari

    2015-03-01

    Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method and the models utilized in this work are ARX-type (autoregressive with extra input), multiple input-multiple output, polynomial models that were identified from experimental data obtained from experiments with a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and the maximum temperature need to be controlled and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
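The ARX structure described above can be sketched for a single-input, single-output case; the model orders, coefficients, and data below are hypothetical, and the paper's MIMO identification stacks more input/output channels, but the least-squares step is the same:

```python
import numpy as np

# Hypothetical SISO ARX(2,2) model:
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
a_true, b_true = [0.7, -0.1], [0.5, 0.2]
rng = np.random.default_rng(0)
u = rng.standard_normal(200)          # excitation input
y = np.zeros(200)
for k in range(2, 200):
    y[k] = a_true[0]*y[k-1] + a_true[1]*y[k-2] + b_true[0]*u[k-1] + b_true[1]*u[k-2]

# build the regressor matrix from lagged outputs/inputs and solve by least squares
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(np.round(theta, 3))  # recovers a_true and b_true (data are noise-free)
```

In GPC, the identified polynomial model is then iterated forward over the prediction horizon and the control moves are chosen to minimize a quadratic cost, optionally subject to constraints.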

  5. Risk prediction model for colorectal cancer: National Health Insurance Corporation study, Korea.

    Science.gov (United States)

    Shin, Aesun; Joo, Jungnam; Yang, Hye-Ryung; Bak, Jeongin; Park, Yunjin; Kim, Jeongseon; Oh, Jae Hwan; Nam, Byung-Ho

    2014-01-01

Incidence and mortality rates of colorectal cancer have been rapidly increasing in Korea during the last few decades. Development of risk prediction models for colorectal cancer in Korean men and women is urgently needed to enhance its prevention and early detection. Gender-specific five-year risk prediction models were developed for overall colorectal cancer, proximal colon cancer, distal colon cancer, colon cancer and rectal cancer. The model was developed using data from a population of 846,559 men and 479,449 women who participated in health examinations by the National Health Insurance Corporation. Examinees were 30-80 years old and free of cancer in the baseline years of 1996 and 1997. An independent population of 547,874 men and 415,875 women who participated in 1998 and 1999 examinations was used to validate the model. Model validation was done by evaluating its performance in terms of discrimination and calibration ability using the C-statistic and Hosmer-Lemeshow-type chi-square statistics. Age, body mass index, serum cholesterol, family history of cancer, and alcohol consumption were included in all models for men, whereas age, height, and meat intake frequency were included in all models for women. Models showed moderately good discrimination ability with C-statistics between 0.69 and 0.78. The C-statistics were generally higher in the models for men, whereas the calibration abilities were generally better in the models for women. Colorectal cancer risk prediction models were developed from large-scale, population-based data. These models can be used for identifying high-risk groups and developing preventive intervention strategies for colorectal cancer.

  6. A general model for the scaling of offspring size and adult size.

    Science.gov (United States)

    Falster, Daniel S; Moles, Angela T; Westoby, Mark

    2008-09-01

    Understanding evolutionary coordination among different life-history traits is a key challenge for ecology and evolution. Here we develop a general quantitative model predicting how offspring size should scale with adult size by combining a simple model for life-history evolution with a frequency-dependent survivorship model. The key innovation is that larger offspring are afforded three different advantages during ontogeny: higher survivorship per time, a shortened juvenile phase, and advantage during size-competitive growth. In this model, it turns out that size-asymmetric advantage during competition is the factor driving evolution toward larger offspring sizes. For simplified and limiting cases, the model is shown to produce the same predictions as the previously existing theory on which it is founded. The explicit treatment of different survival advantages has biologically important new effects, mainly through an interaction between total maternal investment in reproduction and the duration of competitive growth. This goes on to explain alternative allometries between log offspring size and log adult size, as observed in mammals (slope = 0.95) and plants (slope = 0.54). Further, it suggests how these differences relate quantitatively to specific biological processes during recruitment. In these ways, the model generalizes across previous theory and provides explanations for some differences between major taxa.
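The reported allometries (slope 0.95 for mammals, 0.54 for plants) are least-squares slopes of log offspring size against log adult size. A sketch recovering a known slope from synthetic, noise-free data; the values are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
log_adult = rng.uniform(0, 8, 100)          # log adult size (arbitrary units)
log_offspring = 0.95 * log_adult - 2.0      # exact allometry, mammal-like slope

# ordinary least squares on the log-transformed sizes
slope, intercept = np.polyfit(log_adult, log_offspring, 1)
print(round(slope, 2))  # → 0.95
```

With real comparative data the fit would include scatter and, typically, phylogenetic corrections; the point here is only how the quoted slopes are defined.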

  7. A prediction algorithm for first onset of major depression in the general population: development and validation.

    Science.gov (United States)

    Wang, JianLi; Sareen, Jitender; Patten, Scott; Bolton, James; Schmitz, Norbert; Birney, Arden

    2014-05-01

Prediction algorithms are useful for making clinical decisions and for population health planning. However, such prediction algorithms for first onset of major depression do not exist. The objective of this study was to develop and validate a prediction algorithm for first onset of major depression in the general population. Longitudinal study design with an approximately 3-year follow-up. The study was based on data from a nationally representative sample of the US general population. A total of 28 059 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions and who had not had major depression at Wave 1 were included. The prediction algorithm was developed using logistic regression modelling in 21 813 participants from three census regions. The algorithm was validated in participants from the 4th census region (n=6246). The outcome was major depression occurring since Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions, assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule, DSM-IV version. A prediction algorithm containing 17 unique risk factors was developed. The algorithm had good discriminative power (C statistic=0.7538, 95% CI 0.7378 to 0.7699) and excellent calibration (F-adjusted test=1.00, p=0.448) with the weighted data. In the validation sample, the algorithm had a C statistic of 0.7259 and excellent calibration (Hosmer-Lemeshow χ²=3.41, p=0.906). The developed prediction algorithm has good discrimination and calibration capacity. It can be used by clinicians, mental health policy-makers, service planners and the general public to predict future risk of having major depression. The application of the algorithm may lead to increased personalisation of treatment, better clinical decisions and more optimal mental health service planning.
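Calibration in studies like this one is judged by Hosmer-Lemeshow-type statistics, which compare observed with expected event counts within groups ordered by predicted risk. A sketch of the standard statistic on simulated, perfectly calibrated risks (our implementation, not the paper's exact F-adjusted variant):

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, g=10):
    # sort by predicted risk, split into g groups, and accumulate
    # (observed - expected)^2 / variance over the groups
    order = np.argsort(p)
    y, p = y[order], p[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), g):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return stat, chi2.sf(stat, g - 2)   # p-value on g-2 degrees of freedom

# simulated, perfectly calibrated risks: the test should usually not reject
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, 2000)
y = (rng.uniform(size=2000) < p).astype(float)
stat, pval = hosmer_lemeshow(y, p)
print(f"HLS = {stat:.2f}, p = {pval:.3f}")
```

A large statistic (small p-value) signals that predicted risks systematically over- or under-state the observed event rates in some risk strata.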

  8. Performance of third-trimester combined screening model for prediction of adverse perinatal outcome.

    Science.gov (United States)

    Miranda, J; Triunfo, S; Rodriguez-Lopez, M; Sairanen, M; Kouru, H; Parra-Saavedra, M; Crovetto, F; Figueras, F; Crispi, F; Gratacós, E

    2017-09-01

To explore the potential value of third-trimester combined screening for the prediction of adverse perinatal outcome (APO) in the general population and among small-for-gestational-age (SGA) fetuses. This was a nested case-control study within a prospective cohort of 1590 singleton gestations undergoing third-trimester evaluation (32 + 0 to 36 + 6 weeks' gestation). Maternal baseline characteristics, mean arterial blood pressure, fetoplacental ultrasound and circulating biochemical markers (placental growth factor (PlGF), lipocalin-2, unconjugated estriol and inhibin A) were assessed in all women who subsequently had an APO (n = 148) and in a control group without perinatal complications (n = 902). APO was defined as the occurrence of stillbirth, umbilical artery cord blood pH < 7.15, 5-min Apgar score < 7 or emergency operative delivery for fetal distress. Logistic regression models were developed for the prediction of APO in the general population and among SGA cases (defined as customized birth weight <10th centile). The prevalence of APO was 9.3% in the general population and 27.4% among SGA cases. In the general population, a combined screening model including a-priori risk (maternal characteristics), estimated fetal weight (EFW) centile, umbilical artery pulsatility index (UA-PI), estriol and PlGF achieved a detection rate for APO of 26% (area under receiver-operating characteristics curve (AUC), 0.59 (95% CI, 0.54-0.65)), at a 10% false-positive rate (FPR). Among SGA cases, a model including a-priori risk, EFW centile, UA-PI, cerebroplacental ratio, estriol and PlGF predicted 62% of APO (AUC, 0.86 (95% CI, 0.80-0.92)) at a FPR of 10%. The use of fetal ultrasound and maternal biochemical markers at 32-36 weeks provides a poor prediction of APO in the general population. Although it remains limited, the performance of the screening model is improved when applied to fetuses with suboptimal fetal growth. Copyright © 2016 ISUOG. Published by John Wiley & Sons.
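A detection rate "at a 10% false-positive rate" is read off by thresholding the model score at the 90th percentile of the unaffected (control) group and measuring the fraction of cases above that threshold. A sketch with hypothetical Gaussian scores:

```python
import numpy as np

rng = np.random.default_rng(0)
controls = rng.normal(0.0, 1.0, 5000)  # hypothetical scores, uncomplicated pregnancies
cases = rng.normal(1.0, 1.0, 5000)     # hypothetical scores, APO pregnancies

threshold = np.quantile(controls, 0.90)              # fixes the FPR at 10%
detection_rate = float(np.mean(cases > threshold))   # sensitivity at that FPR
print(round(detection_rate, 2))
```

The further apart the case and control score distributions (i.e., the higher the AUC), the higher the detection rate achievable at the fixed 10% FPR.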

  9. A general framework for predicting delayed responses of ecological communities to habitat loss.

    Science.gov (United States)

    Chen, Youhua; Shen, Tsung-Jen

    2017-04-20

Although the biodiversity crisis at different spatial scales has been well recognised, the phenomena of extinction debt and immigration credit in a cross-scale context are, at best, unclear. Based on two community patterns, regional species abundance distribution (SAD) and spatial abundance distribution (SAAD), Kitzes and Harte (2015) presented a macroecological framework for predicting post-disturbance delayed extinction patterns in the entire ecological community. In this study, we further expand this basic framework to predict diverse time-lagged effects of habitat destruction on local communities. Specifically, our generalisation of KH's model can address questions that could not be answered previously: (1) How many species are subjected to delayed extinction in a local community when habitat is destroyed in other areas? (2) How do rare or endemic species contribute to the extinction debt or immigration credit of the local community? (3) How will species differ between two local areas? From the demonstrations using two SAD models (single-parameter lognormal and logseries), the predicted patterns of the debt, credit, and change in the fraction of unique species can vary, but show consistent trends that depend on several factors. The general framework deepens the understanding of the theoretical effects of habitat loss on community dynamic patterns in local samples.
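As a much-simplified illustration of this kind of framework (a random-placement baseline, not the paper's SAD/SAAD machinery), the expected number of species persisting when a fraction A of habitat remains can be summed over a sampled abundance distribution:

```python
import numpy as np

# sample a lognormal regional species abundance distribution (illustrative
# parameters, 1000 species, abundances truncated to at least one individual)
rng = np.random.default_rng(0)
abundances = np.maximum(1, rng.lognormal(mean=2.0, sigma=1.5, size=1000).astype(int))

A = 0.5  # fraction of habitat remaining after the disturbance
# under random placement, a species with n individuals survives with
# probability 1 - (1 - A)^n, so rare species carry most of the expected debt
survive_prob = 1.0 - (1.0 - A) ** abundances
expected_survivors = float(survive_prob.sum())
print(round(expected_survivors, 1), "of", len(abundances), "species expected to persist")
```

The delayed ("debt") component arises because species doomed by this calculation may persist for some time before going locally extinct.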

  10. A generalization of the bond fluctuation model to viscoelastic environments

    International Nuclear Information System (INIS)

    Fritsch, Christian C

    2014-01-01

    A lattice-based simulation method for polymer diffusion in a viscoelastic medium is presented. This method combines the eight-site bond fluctuation model with an algorithm for the simulation of fractional Brownian motion on the lattice. The method applies to unentangled self-avoiding chains and is probed for anomalous diffusion exponents α between 0.7 and 1.0. The simulation results are in very good agreement with the predictions of the generalized Rouse model of a self-avoiding chain polymer in a viscoelastic medium. (paper)

  11. Development of a General Form CO2 and Brine Flux Input Model

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.
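The ROM workflow, fitting a cheap surrogate to a few expensive simulations and then Monte Carlo sampling the surrogate, can be sketched generically; the "model", functional form, and parameter range below are stand-ins, not NRAP's actual wellbore physics:

```python
import numpy as np

def expensive_model(perm):
    # stand-in for a computationally intensive wellbore leakage simulation
    return 1e-3 * perm**1.5 / (1.0 + perm)

# a handful of "expensive" runs train a cheap polynomial reduced-order model
train_x = np.linspace(0.1, 10.0, 15)
rom = np.poly1d(np.polyfit(train_x, expensive_model(train_x), deg=4))

# Monte Carlo over the uncertain input parameter is then cheap on the ROM
rng = np.random.default_rng(0)
samples = rng.uniform(0.1, 10.0, 100_000)
mean_flux = float(np.mean(rom(samples)))
print(f"mean predicted flux: {mean_flux:.2e}")
```

The hundred thousand surrogate evaluations cost a fraction of a single process-model run, which is what makes probabilistic risk sweeps feasible.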

  12. Analyzing the capacity of the Daphnia magna and Pseudokirchneriella subcapitata bioavailability models to predict chronic zinc toxicity at high pH and low calcium concentrations and formulation of a generalized bioavailability model for D. magna.

    Science.gov (United States)

    Van Regenmortel, Tina; Berteloot, Olivier; Janssen, Colin R; De Schamphelaere, Karel A C

    2017-10-01

Risk assessment in the European Union implements Zn bioavailability models to derive predicted-no-effect concentrations for Zn. These models are validated within certain boundaries (i.e., pH ≤ 8 and Ca concentrations ≥ 5 mg/L), but a substantial fraction of European surface waters falls outside these boundaries. Therefore, we evaluated whether the chronic Zn biotic ligand model (BLM) for Daphnia magna and the chronic bioavailability model for Pseudokirchneriella subcapitata could be extrapolated to pH > 8 and Ca concentrations < 5 mg/L. The P. subcapitata model can accurately predict Zn toxicity for Ca concentrations down to 0.8 mg/L and pH values up to 8.5. Because the chronic Zn BLM for D. magna could not be extrapolated beyond its validity boundaries for pH, a generalized bioavailability model (gBAM) was developed. Of the 4 gBAMs developed, we recommend the use of gBAM-D, which combines a log-linear relation between the 21-d median effective concentrations (expressed as free Zn2+ ion activity) and pH with more conventional BLM-type competition constants for Na, Ca, and Mg. This model is a first step in further improving the accuracy of chronic toxicity predictions of Zn as a function of water chemistry, which can decrease the uncertainty in implementing the bioavailability-based predicted-no-effect concentration in the risk assessment of high-pH and low-Ca regions in Europe. Environ Toxicol Chem 2017;36:2781-2798. © 2017 SETAC.

  13. Patterns and causes of species richness: a general simulation model for macroecology

    DEFF Research Database (Denmark)

    Gotelli, Nicholas J; Anderson, Marti J; Arita, Hector T

    2009-01-01

    to a mechanistic understanding of the patterns. During the past two decades, macroecologists have successfully addressed technical problems posed by spatial autocorrelation, intercorrelation of predictor variables and non-linearity. However, curve-fitting approaches are problematic because most theoretical models...... in macroecology do not make quantitative predictions, and they do not incorporate interactions among multiple forces. As an alternative, we propose a mechanistic modelling approach. We describe computer simulation models of the stochastic origin, spread, and extinction of species' geographical ranges...... in an environmentally heterogeneous, gridded domain and describe progress to date regarding their implementation. The output from such a general simulation model (GSM) would, at a minimum, consist of the simulated distribution of species ranges on a map, yielding the predicted number of species in each grid cell...

  14. The in-training examination: an analysis of its predictive value on performance on the general pediatrics certification examination.

    Science.gov (United States)

    Althouse, Linda A; McGuinness, Gail A

    2008-09-01

    This study investigates the predictive validity of the In-Training Examination (ITE). Although studies have confirmed the predictive validity of ITEs in other medical specialties, no study has been done for general pediatrics. Each year, residents in accredited pediatric training programs take the ITE as a self-assessment instrument. The ITE is similar to the American Board of Pediatrics General Pediatrics Certifying Examination. First-time takers of the certifying examination over a 5-year period who took at least 1 ITE examination were included in the sample. Regression models analyzed the predictive value of the ITE. The predictive power of the ITE in the first training year is minimal. However, the predictive power of the ITE increases each year, providing the greatest power in the third year of training. Even though ITE scores provide information regarding the likelihood of passing the certification examination, the data should be used with caution, particularly in the first training year. Other factors also must be considered when predicting performance on the certification examination. This study continues to support the ITE as an assessment tool for program directors, as well as a means of providing residents with feedback regarding their acquisition of pediatric knowledge.

  15. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

Information and communication technologies (ICTs) have changed the trend into new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care have likewise been influenced by new technologies for predicting different disease outcomes. However, existing predictive models still suffer from some limitations in predictive performance. To improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate the model, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed, and its features approved, by neurologists. The experimental results show that the proposed model achieves significant results in terms of accuracy, sensitivity, and specificity.

  16. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    Science.gov (United States)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
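The unscented transform itself is compact: propagate 2n+1 deterministically chosen sigma points through the nonlinearity and recombine them with fixed weights. A minimal NumPy sketch using standard scaled sigma-point weights (parameter values are illustrative), checked against a linear map, for which the transform is exact:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=0.1, beta=2.0, kappa=0.0):
    # propagate a Gaussian (mean, cov) through nonlinear f via 2n+1 sigma points
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])   # the 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))            # mean weights
    wc = wm.copy()                                      # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    ys = np.array([f(x) for x in sigma])
    y_mean = wm @ ys
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(wc, ys))
    return y_mean, y_cov

# sanity check: for a linear map the transform reproduces A @ mean and A @ cov @ A.T
A = np.array([[1.0, 2.0], [0.0, 1.0]])
m, C = unscented_transform(np.array([1.0, 1.0]), np.eye(2), lambda x: A @ x)
print(np.round(m, 6), np.round(C, 6))  # mean [3, 1], covariance A @ A.T
```

In the paper's setting, f would be the EOL simulation itself, so only 2n+1 simulations are needed per prediction instead of the thousands a Monte Carlo estimate would require.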

  17. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

A new method for model selection in prediction of time series is proposed. Apart from the conventional criterion of minimizing RMS error, the method also minimizes the error on the distribution of singularities, evaluated through the local Hölder estimates and its probability density spectrum. Predictions of two simulated and one real time series have been done using kernel principal component regression (KPCR), and the model parameters of KPCR have been selected employing the proposed as well as the conventional method. Results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)

  18. A Generalized Orthotropic Elasto-Plastic Material Model for Impact Analysis

    Science.gov (United States)

    Hoffarth, Canio

Composite materials are now being used in applications hitherto reserved for metals in structural systems such as airframes and engine containment systems, wraps for repair and rehabilitation, and ballistic/blast mitigation systems. These structural systems are often subjected to impact loads and there is a pressing need for accurate prediction of deformation, damage and failure. Numerous material models have been developed to analyze the dynamic impact response of polymer matrix composites. However, key features are missing from those models that prevent them from providing accurate predictive capabilities. In this dissertation, a general purpose orthotropic elasto-plastic computational constitutive material model has been developed to predict the response of composites subjected to high-velocity impacts. The constitutive model is divided into three components: a deformation model, a damage model and a failure model, with failure to be added at a later date. The deformation model generalizes the Tsai-Wu failure criterion and extends it using a strain-hardening-based orthotropic yield function with a non-associative flow rule. A strain equivalent formulation is utilized in the damage model that permits plastic and damage calculations to be uncoupled and captures the nonlinear unloading and local softening of the stress-strain response. A diagonal damage tensor is defined to account for the directionally dependent variation of damage. However, in composites it has been found that loading in one direction can lead to damage in multiple coordinate directions. To account for this phenomenon, the terms in the damage matrix are semi-coupled such that the damage in a particular coordinate direction is a function of the stresses and plastic strains in all of the coordinate directions. The overall framework is driven by experimentally tabulated temperature- and rate-dependent stress-strain data as well as data that characterizes the damage matrix and failure

  19. Generalized method for calculation and prediction of vapour-liquid equilibria at high pressures

    Energy Technology Data Exchange (ETDEWEB)

    Drahos, J; Wichterle, I; Hala, E

    1978-02-01

    Following the approaches of K.C. Chao and J.D. Seader (see Gas Abstr. 18,24 (1962) Jan.) and B.I. Lee, J.H. Erbar, and W.C. Edmister (see Gas Abst. 29, 73-0331), the Czechoslovak Academy of Sciences developed a generalized method for prediction of vapor-liquid equilibria in hydrocarbon mixtures containing some nonhydrocarbon gases at high pressures. The method proposed is based on three equations: (1) a generalized equation of state for vapor-phase calculations; (2) a generalized expression for the pure-liquid fugacity coefficient; and (3) an activity coefficient expression based on a surface modification of the regular solution model. The equations used contain only one partially generalized binary parameter, which was evaluated from experimental K-value data. Researchers tested the proposed method by computing K-values and pressures in binary and multicomponent systems consisting of 13 hydrocarbons and 3 nonhydrocarbon gases. The results show that the method is applicable over a wide range of conditions with a degree of accuracy comparable with that of more complicated methods.

  20. Risk prediction model for knee pain in the Nottingham community: a Bayesian modelling approach.

    Science.gov (United States)

    Fernandes, G S; Bhattacharya, A; McWilliams, D F; Ingham, S L; Doherty, M; Zhang, W

    2017-03-20

Twenty-five percent of the British population over the age of 50 years experiences knee pain. Knee pain can limit physical ability and cause distress and bears significant socioeconomic costs. The objectives of this study were to develop and validate the first risk prediction model for incident knee pain in the Nottingham community and validate this internally within the Nottingham cohort and externally within the Osteoarthritis Initiative (OAI) cohort. A total of 1822 participants from the Nottingham community who were at risk for knee pain were followed for 12 years. Of this cohort, two-thirds (n = 1203) were used to develop the risk prediction model, and one-third (n = 619) were used to validate the model. Incident knee pain was defined as pain on most days for at least 1 month in the past 12 months. Predictors were age, sex, body mass index, pain elsewhere, prior knee injury and knee alignment. A Bayesian logistic regression model was used to determine the probability of an OR >1. The Hosmer-Lemeshow χ² statistic (HLS) was used for calibration, and ROC curve analysis was used for discrimination. The OAI cohort from the United States was also used to examine the performance of the model. A risk prediction model for knee pain incidence was developed using a Bayesian approach. The model had good calibration, with an HLS of 7.17 (p = 0.52) and moderate discriminative ability (ROC 0.70) in the community. Individual scenarios are given using the model. However, the model had poor calibration (HLS 5866.28, p < 0.001) when applied to the OAI cohort. In conclusion, we developed and validated the first risk prediction model for knee pain, regardless of underlying structural changes of knee osteoarthritis, in the community using a Bayesian modelling approach. The model appears to work well in a community-based population but not in individuals with a higher risk for knee osteoarthritis, and it may provide a convenient tool for use in primary care to predict the risk of knee pain in the general population.

  1. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250°C, modeling the decomposition is critical to assessing a weapon's response. In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model

  2. Standard-model predictions for W-pair production in electron-positron collisions

    International Nuclear Information System (INIS)

    Beenakker, W.; Denner, A.

    1994-03-01

We review the status of the theoretical predictions for W-pair production in e+e- collisions within the electroweak standard model (SM). We first consider for on-shell W-bosons the lowest-order cross-section within the SM, the general effects of anomalous couplings, the radiative corrections within the SM, and approximations for them. Then we discuss the inclusion of finite-width effects in lowest order and the existing results for radiative corrections to off-shell W-pair production, and we outline the general strategy to calculate radiative corrections within the pole scheme. We summarize the theoretical predictions for the total and partial W-boson widths including radiative corrections and discuss the quality of an improved Born approximation. Finally we provide a general discussion of the structure-function method to calculate large logarithmic higher-order corrections associated with collinear photon radiation. (orig.)

  3. Generalized two-temperature model for coupled phonon-magnon diffusion.

    Science.gov (United States)

    Liao, Bolin; Zhou, Jiawei; Chen, Gang

    2014-07-11

    We generalize the two-temperature model [Sanders and Walton, Phys. Rev. B 15, 1489 (1977)] for coupled phonon-magnon diffusion to include the effect of the concurrent magnetization flow, with a particular emphasis on the thermal consequence of the magnon flow driven by a nonuniform magnetic field. Working within the framework of the Boltzmann transport equation, we derive the constitutive equations for coupled phonon-magnon transport driven by gradients of both temperature and external magnetic fields, and the corresponding conservation laws. Our equations reduce to the original Sanders-Walton two-temperature model under a uniform external field, but predict a new magnon cooling effect driven by a nonuniform magnetic field in a homogeneous single-domain ferromagnet. We estimate the magnitude of the cooling effect in an yttrium iron garnet, and show it is within current experimental reach. With properly optimized materials, the predicted cooling effect can potentially supplement the conventional magnetocaloric effect in cryogenic applications in the future.

  4. Comprehensive and critical review of the predictive properties of the various mass models

    International Nuclear Information System (INIS)

    Haustein, P.E.

    1984-01-01

Since the publication of the 1975 Mass Predictions, approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques, and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and several others (Moeller and Nix, Monahan, Serduke, Uno and Yamada), which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only rough correlation between the ability of a particular model to reproduce the measured mass surface which had been used to refine its adjustable parameters and that model's ability to predict correctly the new masses. For some models distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of types of analysis performed with individual mass models.

  5. Risk prediction model for colorectal cancer: National Health Insurance Corporation study, Korea.

    Directory of Open Access Journals (Sweden)

    Aesun Shin

    Full Text Available PURPOSE: Incidence and mortality rates of colorectal cancer have been rapidly increasing in Korea during the last few decades. Development of risk prediction models for colorectal cancer in Korean men and women is urgently needed to enhance its prevention and early detection. METHODS: Gender-specific five-year risk prediction models were developed for overall colorectal cancer, proximal colon cancer, distal colon cancer, colon cancer, and rectal cancer. The models were developed using data from a population of 846,559 men and 479,449 women who participated in health examinations by the National Health Insurance Corporation. Examinees were 30-80 years old and free of cancer in the baseline years of 1996 and 1997. An independent population of 547,874 men and 415,875 women who participated in the 1998 and 1999 examinations was used to validate the models. Model validation was done by evaluating performance in terms of discrimination and calibration ability using the C-statistic and Hosmer-Lemeshow-type chi-square statistics. RESULTS: Age, body mass index, serum cholesterol, family history of cancer, and alcohol consumption were included in all models for men, whereas age, height, and meat intake frequency were included in all models for women. The models showed moderately good discrimination ability, with C-statistics between 0.69 and 0.78. The C-statistics were generally higher in the models for men, whereas the calibration abilities were generally better in the models for women. CONCLUSIONS: Colorectal cancer risk prediction models were developed from large-scale, population-based data. These models can be used for identifying high-risk groups and developing preventive intervention strategies for colorectal cancer.
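
    As an illustration of the discrimination and calibration measures named in this record, here is a minimal Python sketch of the C-statistic and a Hosmer-Lemeshow-type chi-square. This is not the study's code, and the example data are invented; it only shows how the two quantities are defined.

```python
def c_statistic(risks, outcomes):
    """Concordance (C) statistic: the probability that a randomly chosen
    case receives a higher predicted risk than a randomly chosen non-case.
    Ties count as half-concordant."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for rc in cases:
        for rn in controls:
            if rc > rn:
                concordant += 1.0
            elif rc == rn:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))

def hosmer_lemeshow(risks, outcomes, groups=10):
    """Hosmer-Lemeshow-type chi-square: bin subjects by predicted risk
    and compare observed vs expected event counts in each bin."""
    order = sorted(range(len(risks)), key=lambda i: risks[i])
    size = len(order) // groups
    chi2 = 0.0
    for g in range(groups):
        idx = order[g * size:(g + 1) * size] if g < groups - 1 else order[g * size:]
        obs = sum(outcomes[i] for i in idx)      # observed events in bin
        exp = sum(risks[i] for i in idx)         # expected events in bin
        n = len(idx)
        p = exp / n
        if 0 < p < 1:
            chi2 += (obs - exp) ** 2 / (n * p * (1 - p))
    return chi2
```

    A C-statistic of 0.5 corresponds to no discrimination and 1.0 to perfect separation of cases from non-cases, which is why the 0.69-0.78 range above reads as "moderately good."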

  6. Comprehensive fluence model for absolute portal dose image prediction

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2009-01-01

    Amorphous silicon (a-Si) electronic portal imaging devices (EPIDs) continue to be investigated as treatment verification tools, with a particular focus on intensity modulated radiation therapy (IMRT). This verification could be accomplished through a comparison of measured portal images to predicted portal dose images. A general fluence determination tailored to portal dose image prediction would be a great asset in order to model the complex modulation of IMRT. A proposed physics-based parameter fluence model was commissioned by matching predicted EPID images to corresponding measured EPID images of multileaf collimator (MLC) defined fields. The two-source fluence model was composed of a focal Gaussian and an extrafocal Gaussian-like source. Specific aspects of the MLC and secondary collimators were also modeled (e.g., jaw and MLC transmission factors, MLC rounded leaf tips, tongue and groove effect, interleaf leakage, and leaf offsets). Several unique aspects of the model were developed based on the results of detailed Monte Carlo simulations of the linear accelerator including (1) use of a non-Gaussian extrafocal fluence source function, (2) separate energy spectra used for focal and extrafocal fluence, and (3) different off-axis energy spectra softening used for focal and extrafocal fluences. The predicted energy fluence was then convolved with Monte Carlo generated, EPID-specific dose kernels to convert incident fluence to dose delivered to the EPID. Measured EPID data were obtained with an a-Si EPID for various MLC-defined fields (from 1x1 to 20x20 cm 2 ) over a range of source-to-detector distances. These measured profiles were used to determine the fluence model parameters in a process analogous to the commissioning of a treatment planning system. The resulting model was tested on 20 clinical IMRT plans, including ten prostate and ten oropharyngeal cases. 
The model predicted the open-field profiles within 2%, 2 mm, while a mean of 96.6% of pixels over all

  7. Introduction to generalized linear models

    CERN Document Server

    Dobson, Annette J

    2008-01-01

    Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...
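
    As a small illustration of the deviance and maximum-likelihood ideas listed in these contents, the sketch below fits the simplest possible Poisson GLM (intercept only, where the MLE of the rate is the sample mean) and evaluates its deviance. The counts are invented toy data, not from the book.

```python
import math

def poisson_deviance(ys, mu):
    """Deviance of a fitted Poisson mean `mu` against observed counts.
    Terms with y = 0 contribute 2*mu (the y*log(y/mu) term vanishes)."""
    d = 0.0
    for y in ys:
        if y > 0:
            d += 2.0 * (y * math.log(y / mu) - (y - mu))
        else:
            d += 2.0 * mu
    return d

ys = [2, 3, 6, 7, 8, 9, 10, 12, 15]
mu_hat = sum(ys) / len(ys)   # intercept-only Poisson MLE is the sample mean
print(mu_hat, poisson_deviance(ys, mu_hat))
```

    Because the MLE maximizes the likelihood, the deviance is minimized at `mu_hat`; any other candidate mean gives a larger deviance.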

  8. Metal accumulation in the earthworm Lumbricus rubellus. Model predictions compared to field data

    Science.gov (United States)

    Veltman, K.; Huijbregts, M.A.J.; Vijver, M.G.; Peijnenburg, W.J.G.M.; Hobbelen, P.H.F.; Koolhaas, J.E.; van Gestel, C.A.M.; van Vliet, P.C.J.; Jan, Hendriks A.

    2007-01-01

    The mechanistic bioaccumulation model OMEGA (Optimal Modeling for Ecotoxicological Applications) is used to estimate accumulation of zinc (Zn), copper (Cu), cadmium (Cd) and lead (Pb) in the earthworm Lumbricus rubellus. Our validation against field accumulation data shows that the model accurately predicts internal cadmium concentrations. In addition, our results show that internal metal concentrations in the earthworm are less than linearly (slope < 1) related to the total concentration in soil, while risk assessment procedures often assume the biota-soil accumulation factor (BSAF) to be constant. Although predicted internal concentrations of all metals are generally within a factor of 5 of field data, incorporation of regulation in the model is necessary to improve the predictability of essential metals such as zinc and copper. © 2006 Elsevier Ltd. All rights reserved.
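
    The "less than linearly (slope < 1)" relation in this record is a statement about the slope of internal vs soil concentration on a log-log scale. A minimal sketch of that check with ordinary least squares; the concentration values are invented for illustration:

```python
import math

def loglog_slope(soil, worm):
    """Least-squares slope of log(internal) vs log(soil) concentration.
    A slope below 1 means the biota-soil accumulation factor (BSAF)
    falls as soil concentration rises, i.e. BSAF is not constant."""
    xs = [math.log(s) for s in soil]
    ys = [math.log(w) for w in worm]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx
```

    With hypothetical data where a tenfold rise in soil concentration only quadruples the internal concentration, the fitted slope is log(4)/log(10) ≈ 0.6, i.e. clearly below 1.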

  9. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    Science.gov (United States)

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen from the new re-formulation we present, the generalized score statistic does not always reduce to the commonly used score statistic in the independent-samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates the empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent-samples situation, and preserves type I error better than the other statistics, as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for the difference of predictive values. The introduced concepts have the potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.

  10. A Generalized Process Model of Human Action Selection and Error and its Application to Error Prediction

    Science.gov (United States)

    2014-07-01

    Macmillan & Creelman, 2005). This is a quite high degree of discriminability and it means that when the decision model predicts a probability of... ROC analysis. Pattern Recognition Letters, 27(8), 861-874. Retrieved from Google Scholar. Macmillan, N. A., & Creelman, C. D. (2005). Detection

  11. The predictive value of general movement tasks in assessing occupational task performance.

    Science.gov (United States)

    Frost, David M; Beach, Tyson A C; McGill, Stuart M; Callaghan, Jack P

    2015-01-01

    Within the context of evaluating individuals' movement behavior it is generally assumed that the tasks chosen will predict their competency to perform activities relevant to their occupation. This study sought to examine whether a battery of general tasks could be used to predict the movement patterns employed by firefighters to perform select job-specific skills. Fifty-two firefighters performed a battery of general and occupation-specific tasks that simulated the demands of firefighting. Participants' peak lumbar spine and frontal plane knee motion were compared across tasks. During 85% of all comparisons, the magnitude of spine and knee motion was greater during the general movement tasks than observed during the firefighting skills. Certain features of a worker's movement behavior may be exhibited across a range of tasks. Therefore, provided that a movement screen's tasks expose the motions of relevance for the population being tested, general evaluations could offer valuable insight into workers' movement competency or facilitate an opportunity to establish an evidence-informed intervention.

  12. Building interpretable predictive models for pediatric hospital readmission using Tree-Lasso logistic regression.

    Science.gov (United States)

    Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris

    2016-09-01

    Quantification and early identification of unplanned readmission risk have the potential to improve the quality of care during hospitalization and after discharge. However, the high dimensionality, sparsity, and class imbalance of electronic health data, together with the complexity of risk quantification, challenge the development of accurate predictive models. Predictive models require a certain level of interpretability in order to be applicable in real settings and to create actionable insights. This paper aims to develop accurate and interpretable predictive models for readmission in a general pediatric patient population by integrating a data-driven model (sparse logistic regression) and domain knowledge based on the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) hierarchy of diseases. Additionally, we propose a way to quantify the interpretability of a model and inspect the stability of alternative solutions. The analysis was conducted on >66,000 pediatric hospital discharge records from the California State Inpatient Databases, Healthcare Cost and Utilization Project, between 2009 and 2011. We incorporated domain knowledge based on the ICD-9-CM hierarchy in a data-driven, Tree-Lasso regularized logistic regression model, providing the framework for model interpretation. This approach was compared with traditional Lasso logistic regression, resulting in models that are easier to interpret through fewer high-level diagnoses, with comparable prediction accuracy. The results revealed that the Tree-Lasso model was as competitive in terms of accuracy (measured by area under the receiver operating characteristic curve, AUC) as traditional Lasso logistic regression, but integration with the ICD-9-CM hierarchy of diseases provided more interpretable models in terms of high-level diagnoses. Additionally, the interpretations of the models are in accordance with existing medical understanding of pediatric readmission. Best performing models have

  13. Regression Model to Predict Global Solar Irradiance in Malaysia

    Directory of Open Access Journals (Sweden)

    Hairuniza Ahmed Kutty

    2015-01-01

    Full Text Available A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed based on different available meteorological parameters, including temperature, cloud cover, rain precipitation, relative humidity, wind speed, pressure, and gust speed, by implementing regression analysis. This paper reports the details of an analysis of the effect of each prediction parameter to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and the coefficient of determination (R²) with other models available from the literature. Seven models based on single parameters (PM1 to PM7) and five multiple-parameter models (PM8 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R² ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia loses influence when included in multiple-parameter models, although it performs fairly well in single-parameter prediction models.
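
    The three error measures used to compare the models in this record have simple closed forms. A self-contained sketch (the observed/predicted values in the test are invented, not the study's):

```python
def rmse(obs, pred):
    """Root mean square error: typical magnitude of prediction errors."""
    n = len(obs)
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5

def mbe(obs, pred):
    """Mean bias error: positive means the model over-predicts on average."""
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

def r2(obs, pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

    RMSE and R² measure scatter and explained variance, while MBE isolates systematic over- or under-prediction, which is why all three are reported together.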

  14. Combining process-based and correlative models improves predictions of climate change effects on Schistosoma mansoni transmission in eastern Africa

    Directory of Open Access Journals (Sweden)

    Anna-Sofie Stensgaard

    2016-03-01

    Full Text Available Currently, two broad types of approach for predicting the impact of climate change on vector-borne diseases can be distinguished: (i) empirical-statistical (correlative) approaches that use statistical models of relationships between vector and/or pathogen presence and environmental factors; and (ii) process-based (mechanistic) approaches that seek to simulate detailed biological or epidemiological processes that explicitly describe system behavior. Both have advantages and disadvantages, but it is generally acknowledged that both approaches have value in assessing the responses of species to climate change. Here, we combine a previously developed dynamic, agent-based model of the temperature-sensitive stages of the Schistosoma mansoni and intermediate host snail lifecycles with a statistical model of snail habitat suitability for eastern Africa. Baseline model output compared to empirical prevalence data suggests that the combined model performs better than a temperature-driven model alone, and highlights the importance of including snail habitat suitability when modeling schistosomiasis risk. There was general agreement among models in predicting changes in risk, with 24-36% of the eastern Africa region predicted to experience an increase in risk of up to 20% as a result of increasing temperatures over the next 50 years. Conversely, the models predicted a general decrease in risk in 30-37% of the study area. The snail habitat suitability models also suggest that anthropogenically altered habitats play a vital role in the current distribution of the intermediate snail host, and hence we stress the importance of accounting for land use changes in models of future changes in schistosomiasis risk.

  15. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers' training events, and they are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure, achieved by eliminating some of the predictors.
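
    The model selection in this record relies on leave-one-out cross-validation. A self-contained sketch of the procedure, here comparing a simple linear fit against a constant (mean) predictor on invented data; the LASSO-with-quadratic-terms model of the study is not reproduced:

```python
def loocv_error(xs, ys, fit, predict):
    """Leave-one-out cross-validation: fit on all-but-one point,
    measure squared error on the held-out point, average over points."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        tx = xs[:i] + xs[i + 1:]
        ty = ys[:i] + ys[i + 1:]
        model = fit(tx, ty)
        total += (predict(model, xs[i]) - ys[i]) ** 2
    return total / n

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return (my - b * mx, b)

def predict_linear(model, x):
    a, b = model
    return a + b * x

def fit_mean(xs, ys):
    """Baseline: predict the mean of the training targets."""
    return sum(ys) / len(ys)

def predict_mean(model, x):
    return model
```

    The model with the smaller LOOCV error is preferred; in the paper the same idea is used to choose among linear models and their nonlinear modifications.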

  16. Testing the generalized partial credit model

    OpenAIRE

    Glas, Cornelis A.W.

    1996-01-01

    The partial credit model (PCM) (G.N. Masters, 1982) can be viewed as a generalization of the Rasch model for dichotomous items to the case of polytomous items. In many cases, the PCM is too restrictive to fit the data, and several generalizations of the PCM have been proposed. In this paper, a generalization of the PCM (GPCM), a further generalization of the one-parameter logistic model, is discussed. The model is defined and the conditional maximum likelihood procedure for the method is described...

  17. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation between the goals of predictive modeling and causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
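
    Of the three performance measures in this record, the Brier score is the simplest to state: the mean squared difference between predicted probabilities and binary outcomes. A one-function sketch (the test values are invented):

```python
def brier_score(probs, outcomes):
    """Mean squared difference between predicted probabilities and
    binary (0/1) outcomes. Lower is better; a constant 0.5 forecast
    scores 0.25, and a perfect forecast scores 0."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)
```

    Unlike the concordance index, the Brier score penalizes both poor discrimination and poor calibration, which is why the study reports both.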

  18. RNA secondary structure prediction with pseudoknots: Contribution of algorithm versus energy model.

    Science.gov (United States)

    Jabbari, Hosna; Wark, Ian; Montemagno, Carlo

    2018-01-01

    RNA is a biopolymer with various applications inside the cell and in biotechnology. The structure of an RNA molecule mainly determines its function and is essential to guide nanostructure design. Since experimental structure determination is time-consuming and expensive, accurate computational prediction of RNA structure is of great importance. Prediction of RNA secondary structure is relatively simpler than prediction of its tertiary structure and provides information about the tertiary structure; therefore, RNA secondary structure prediction has received attention in the past decades. Numerous methods with different folding approaches have been developed for RNA secondary structure prediction. While methods for prediction of RNA pseudoknot-free structure (structures with no crossing base pairs) have greatly improved in terms of their accuracy, methods for prediction of RNA pseudoknotted secondary structure (structures with crossing base pairs) still have room for improvement. A long-standing question for improving the prediction accuracy of RNA pseudoknotted secondary structure is whether to focus on the prediction algorithm or the underlying energy model, as there is a trade-off between the computational cost of the prediction algorithm and the generality of the method. The aim of this work is to argue that, when comparing different methods for RNA pseudoknotted structure prediction, the combination of algorithm and energy model should be considered, and a method should not be judged superior or inferior to others if they do not use the same scoring model. We demonstrate that while the folding approach is important in structure prediction, it is not the only important factor in the prediction accuracy of a given method, as the underlying energy model is of equally great value. We therefore encourage researchers to pay particular attention when comparing methods with different energy models.

  19. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
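
    The model-free reward prediction error described in this record is the classic temporal-difference (TD) error. A minimal sketch of one TD update; the learning rate and discount values are arbitrary illustrative choices:

```python
def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """Model-free temporal-difference learning: the reward prediction
    error is delta = r + gamma*V(s') - V(s), and V(s) moves toward the
    sampled target by a fraction alpha of delta."""
    delta = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * delta
    return delta
```

    A model-based learner would instead compute expected values from a learned state-transition model; the record's point is that the brain appears to compute prediction errors against both kinds of value estimate.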

  20. An excitable cortex and memory model successfully predicts new pseudopod dynamics.

    Directory of Open Access Journals (Sweden)

    Robert M Cooper

    Full Text Available Motile eukaryotic cells migrate with directional persistence by alternating left and right turns, even in the absence of external cues. For example, Dictyostelium discoideum cells crawl by extending distinct pseudopods in an alternating right-left pattern. The mechanisms underlying this zig-zag behavior, however, remain unknown. Here we propose a new Excitable Cortex and Memory (EC&M) model for understanding the alternating, zig-zag extension of pseudopods. Incorporating elements of previous models, we consider the cell cortex as an excitable system and include global inhibition of new pseudopods while a pseudopod is active. With the novel hypothesis that pseudopod activity makes the local cortex temporarily more excitable--thus creating a memory of previous pseudopod locations--the model reproduces experimentally observed zig-zag behavior. Furthermore, the EC&M model makes four new predictions concerning pseudopod dynamics. To test these predictions we develop an algorithm that detects pseudopods via hierarchical clustering of individual membrane extensions. Data from cell-tracking experiments agrees with all four predictions of the model, revealing that pseudopod placement is a non-Markovian process affected by the dynamics of previous pseudopods. The model is also compatible with known limits of chemotactic sensitivity. In addition to providing a predictive approach to studying eukaryotic cell motion, the EC&M model provides a general framework for future models, and suggests directions for new research regarding the molecular mechanisms underlying directional persistence.

  1. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their reliability estimates can be subjective under a particular set of circumstances, and therefore it is not easy to quantify reliability. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted, implementing the statistical analysis method of reliability prediction, which is the easiest to apply. The failure rate models applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with a general-purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the system we have developed is expected to contribute to enhancing the reliability of electronic components.
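
    Handbook-style failure-rate methods like those named in this record ultimately reduce to constant-failure-rate bookkeeping: part rates add for a series system, and the exponential lifetime model gives MTBF and reliability. A generic sketch of that arithmetic (the part rates below are invented, not handbook values):

```python
import math

def series_failure_rate(lambdas):
    """For constant-failure-rate parts in series, the system failure
    rate is the sum of the part failure rates (failures per hour)."""
    return sum(lambdas)

def mtbf(rate):
    """Mean time between failures for an exponential lifetime model."""
    return 1.0 / rate

def reliability(rate, hours):
    """Probability of surviving `hours` without failure: R(t) = exp(-rate*t)."""
    return math.exp(-rate * hours)

# Hypothetical part failure rates (per hour) -- illustrative only.
parts = [1e-6, 2e-6, 3e-6]
lam = series_failure_rate(parts)
print(lam, mtbf(lam), reliability(lam, 1e4))
```

    The handbooks differ mainly in how the per-part rates are derived (base rates times environment, quality, and stress factors), not in this combination step.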

  2. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model.

    Science.gov (United States)

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander

    2015-04-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.

  3. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
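
    Sloppiness can be illustrated with a toy two-exponential model, y(t) = exp(-θ₁t) + exp(-θ₂t): when the two decay rates are nearly degenerate, the Fisher information matrix has eigenvalues spanning a wide range, so one parameter combination is well constrained and the other barely at all. A hedged sketch (the model, normalization, and timepoints are invented for illustration, not taken from the paper):

```python
import math

def fim_eigenvalues(theta1, theta2, ts):
    """Eigenvalues of the 2x2 Fisher information J^T J for the toy model
    y(t) = exp(-theta1*t) + exp(-theta2*t), with each sensitivity column
    rescaled to unit norm so the eigenvalue ratio reflects parameter-
    direction sloppiness rather than raw scale differences."""
    u = [-t * math.exp(-theta1 * t) for t in ts]   # dy/dtheta1 at each t
    v = [-t * math.exp(-theta2 * t) for t in ts]   # dy/dtheta2 at each t
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    u = [x / nu for x in u]
    v = [x / nv for x in v]
    a = sum(x * x for x in u)                      # = 1 after normalization
    c = sum(x * x for x in v)                      # = 1
    b = sum(x * y for x, y in zip(u, v))
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return (tr + disc) / 2.0, (tr - disc) / 2.0

ts = [0.5 * k for k in range(1, 21)]
hi_e, lo_e = fim_eigenvalues(1.0, 1.1, ts)  # nearly degenerate rates
print(hi_e / lo_e)                          # large ratio -> "sloppy" direction
```

    This stiffness/sloppiness gap is exactly why linear uncertainty approximations become dangerous: uncertainty along the small-eigenvalue direction is enormous compared with the stiff direction.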

  4. A confirmation of the general relativistic prediction of the Lense-Thirring effect.

    Science.gov (United States)

    Ciufolini, I; Pavlis, E C

    2004-10-21

    An important early prediction of Einstein's general relativity was the advance of the perihelion of Mercury's orbit, whose measurement provided one of the classical tests of Einstein's theory. The advance of the orbital point-of-closest-approach also applies to a binary pulsar system and to an Earth-orbiting satellite. General relativity also predicts that the rotation of a body like Earth will drag the local inertial frames of reference around it, which will affect the orbit of a satellite. This Lense-Thirring effect has hitherto not been detected with high accuracy, but its detection with an error of about 1 per cent is the main goal of Gravity Probe B--an ongoing space mission using orbiting gyroscopes. Here we report a measurement of the Lense-Thirring effect on two Earth satellites: it is 99 +/- 5 per cent of the value predicted by general relativity; the uncertainty of this measurement includes all known random and systematic errors, but we allow for a total +/- 10 per cent uncertainty to include underestimated and unknown sources of error.

  5. Researches of fruit quality prediction model based on near infrared spectrum

    Science.gov (United States)

    Shen, Yulin; Li, Lian

    2018-04-01

    With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruit, so measuring it is increasingly imperative. In general, nondestructive analysis of soluble solid content (SSC) and total acid content (TAC) is vital and effective for quality measurement in global fresh-produce markets, so in this paper we aim to establish a novel fruit internal-quality prediction model based on SSC and TAC for near-infrared spectra. First, fruit-quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP AdaBoost strong classifier, PCA + ELM, and PCA + LS_SVM classifiers are designed and implemented. Then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Third, we obtain the optimal models by comparing 15 kinds of prediction model under a multi-classifier competition mechanism; specifically, nonparametric estimation is introduced to measure the effectiveness of each model, with the reliability and variance of the nonparametric evaluation used to assess the prediction results and the estimated value and confidence interval serving as a reference. The experimental results demonstrate that this approach achieves the best evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two optimal models obtained from nonparametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.
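
    The Kennard-Stone algorithm mentioned in this record for splitting spectra into training and test sets is short enough to sketch in full: start with the two most mutually distant samples, then repeatedly add the sample whose nearest already-selected neighbour is farthest away. The points in the test are invented 2-D stand-ins for spectra:

```python
def kennard_stone(points, k):
    """Kennard-Stone selection: returns the indices of k samples that
    span the data space, for use as a representative training set."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    n = len(points)
    # Seed with the two most mutually distant samples.
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda ij: d2(points[ij[0]], points[ij[1]]))
    chosen = [i0, j0]
    while len(chosen) < k:
        rest = [i for i in range(n) if i not in chosen]
        # Add the sample farthest from its nearest chosen neighbour.
        nxt = max(rest, key=lambda i: min(d2(points[i], points[j])
                                          for j in chosen))
        chosen.append(nxt)
    return chosen
```

    In the spectroscopy setting, each point would be a (possibly PCA-reduced) spectrum, and the unselected samples form the test set.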

  6. Towards a general theory of neural computation based on prediction by single neurons.

    Directory of Open Access Journals (Sweden)

    Christopher D Fiorillo

    Full Text Available Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of

  7. Left ventricular filling pressure by septal and lateral E/e' equally predict cardiovascular events in the general population

    DEFF Research Database (Denmark)

    Wang, Joanna Nan; Biering-Sørensen, Tor; Jørgensen, Peter Godsk

    2017-01-01

    E/e'septal and E/e'lateral were equally strong predictors of cardiac events; in age- and sex-adjusted models they did not differ in AUC (septal: 0.8385, lateral: 0.8389; p = 0.94) or in continuous NRI (p = 0.84). Models using E/e'average did not improve AUC or NRI, and the intra-individual difference between sites had...... no predictive value (p = 0.79). E/e'septal was generally higher than E/e'lateral, thus age- and sex-specific normal values were reported for both sites for a population free of cardiac events during 10 years of follow-up. CONCLUSIONS: Septal and lateral E/e' are equally useful in predicting cardiac events...

  8. Predicting bottlenose dolphin distribution along Liguria coast (northwestern Mediterranean Sea) through different modeling techniques and indirect predictors.

    Science.gov (United States)

    Marini, C; Fossa, F; Paoli, C; Bellingeri, M; Gnone, G; Vassallo, P

    2015-03-01

    Habitat modeling is an important tool to investigate the quality of the habitat for a species within a certain area, to predict species distribution and to understand the ecological processes behind it. Many species have been investigated by means of habitat modeling techniques, mainly to support effective management and protection policies, and cetaceans play an important role in this context. The bottlenose dolphin (Tursiops truncatus) has been investigated with habitat modeling techniques since 1997. The objectives of this work were to predict the distribution of the bottlenose dolphin in a coastal area through the use of static morphological features and to compare the prediction performances of three different modeling techniques: Generalized Linear Model (GLM), Generalized Additive Model (GAM) and Random Forest (RF). Four static variables were tested: depth, bottom slope, distance from the 100 m bathymetric contour and distance from the coast. RF proved to be both the most accurate and the most precise modeling technique, with very high distribution probabilities predicted in presence cells (90.4% mean predicted probability) and with 66.7% of presence cells having a predicted probability between 90% and 100%. The bottlenose dolphin distribution obtained with RF allowed the identification of specific areas with particularly high presence probability along the coastal zone; the recognition of these core areas may be the starting point for developing effective management practices to improve T. truncatus protection. Copyright © 2014 Elsevier Ltd. All rights reserved.
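As a rough sketch of the GLM baseline used in comparisons like this one, the following fits a binomial GLM (logistic regression, logit link) by gradient ascent on synthetic grid cells. The predictors, data and coefficients are invented for illustration only:

```python
import math, random

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Binomial GLM with logit link, fitted by batch gradient ascent
    on the log-likelihood (no regularization, plain Python)."""
    w = [0.0] * (len(X[0]) + 1)          # intercept + one weight per predictor
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# synthetic grid cells: predictors are (scaled depth, distance from coast);
# presence is more likely in shallow cells close to the coast
random.seed(1)
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
y = [1 if x[0] + x[1] + random.gauss(0, 0.3) < 0 else 0 for x in X]
w = fit_logistic(X, y)
```

A GAM generalizes this by replacing the linear terms with smooth functions of each predictor, and RF replaces the parametric form with an ensemble of decision trees; neither is shown here.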

  9. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the times and slips of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
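The model comparison can be sketched on a synthetic repeater sequence. All numbers below (mean interval, mean slip, scatter) are invented for illustration; the fixed-recurrence predictor uses the running mean interval, while the time-predictable predictor forecasts the time needed to re-accumulate the preceding event's slip:

```python
import random, statistics

random.seed(7)
# synthetic repeating-earthquake sequence: near-constant recurrence
# interval and slip, each with small independent scatter
n = 40
intervals = [10.0 + random.gauss(0, 1.0) for _ in range(n)]   # years
slips = [5.0 + random.gauss(0, 0.5) for _ in range(n)]        # cm
slip_rate = statistics.mean(slips) / statistics.mean(intervals)  # cm/yr

fixed_err, timepred_err = [], []
for k in range(1, n):
    # fixed-recurrence model: next interval = running mean of past intervals
    fixed = statistics.mean(intervals[:k])
    # time-predictable model: time to re-accumulate the previous slip
    timepred = slips[k - 1] / slip_rate
    fixed_err.append(abs(intervals[k] - fixed))
    timepred_err.append(abs(intervals[k] - timepred))

mae_fixed = statistics.mean(fixed_err)
mae_timepred = statistics.mean(timepred_err)
```

On a sequence this regular, the time-predictable forecast inherits the scatter of the preceding slip on top of the interval scatter, which is why the fixed-recurrence predictor tends to come out ahead, consistent with the abstract's conclusion.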

  10. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m

  11. A generalized multivariate regression model for modelling ocean wave heights

    Science.gov (United States)

    Wang, X. L.; Feng, Y.; Swail, V. R.

    2012-04-01

    In this study, a generalized multivariate linear regression model is developed to represent the relationship between 6-hourly ocean significant wave heights (Hs) and the corresponding 6-hourly mean sea level pressure (MSLP) fields. The model is calibrated using the ERA-Interim reanalysis of Hs and MSLP fields for 1981-2000, and is validated using the ERA-Interim reanalysis for 2001-2010 and the ERA40 reanalysis of Hs and MSLP for 1958-2001. The performance of the fitted model is evaluated in terms of the Pierce skill score, frequency bias index, and correlation skill score. Because wave heights are not normally distributed, they are subjected to a data-adaptive Box-Cox transformation before being used in the model fitting. Also, since 6-hourly data are being modelled, lag-1 autocorrelation must be, and is, accounted for. The models with and without the Box-Cox transformation, and with and without accounting for autocorrelation, are inter-compared in terms of their prediction skills. The fitted MSLP-Hs relationship is then used to reconstruct the historical wave height climate from the 6-hourly MSLP fields taken from the Twentieth Century Reanalysis (20CR, Compo et al. 2011), and to project possible future wave height climates using CMIP5 model simulations of MSLP fields. The reconstructed and projected wave heights, both seasonal means and maxima, are subject to a trend analysis that allows for non-linear (polynomial) trends.
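The two key ingredients, a Box-Cox transform of the skewed response and a correction for lag-1 autocorrelation, can be sketched with a single-predictor stand-in. The synthetic series and coefficients below are invented; the autocorrelation is handled Cochrane-Orcutt style (estimate the AR(1) coefficient from OLS residuals, then regress the quasi-differenced series), which is one common way to account for lag-1 autocorrelation, not necessarily the authors' exact method:

```python
import math, random, statistics

def box_cox(x, lam):
    """Box-Cox transform to bring skewed wave heights closer to normal."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def ols(xs, ys):
    """Simple one-predictor OLS, returning (intercept, slope)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    b = sum((a - mx) * (c - my) for a, c in zip(xs, ys)) / \
        sum((a - mx) ** 2 for a in xs)
    return my - b * mx, b

random.seed(3)
# synthetic 6-hourly series: skewed "wave heights" driven by a pressure
# index, with AR(1) noise mimicking lag-1 autocorrelation
n = 500
mslp = [random.gauss(0, 1) for _ in range(n)]
eps, rho = 0.0, 0.6
hs = []
for t in range(n):
    eps = rho * eps + random.gauss(0, 0.2)
    hs.append(math.exp(0.5 * mslp[t] + eps))     # log-normal => skewed

lam = 0.0                                        # log transform (lambda = 0)
y = [box_cox(h, lam) for h in hs]

a0, b0 = ols(mslp, y)                            # naive fit, ignores AR(1)
resid = [yi - (a0 + b0 * xi) for xi, yi in zip(mslp, y)]
rho_hat = ols(resid[:-1], resid[1:])[1]          # lag-1 coefficient
a1, b1 = ols([x1 - rho_hat * x0 for x0, x1 in zip(mslp, mslp[1:])],
             [y1 - rho_hat * y0 for y0, y1 in zip(y, y[1:])])
```

Both fits recover the true slope; the quasi-differenced fit additionally yields valid standard errors under the AR(1) error structure.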

  12. Risk assessment and remedial policy evaluation using predictive modeling

    International Nuclear Information System (INIS)

    Linkov, L.; Schell, W.R.

    1996-01-01

    As a result of nuclear industry operations and accidents, large areas of natural ecosystems have been contaminated by radionuclides and toxic metals. Extensive societal pressure has been exerted to decrease the radiation dose to the population and to the environment. Thus, in making abatement and remediation policy decisions, not only economic costs but also human and environmental risk assessments are desired. This paper introduces a general framework for risk assessment and remedial policy evaluation using predictive modeling. Ecological risk assessment requires evaluation of the radionuclide distribution in ecosystems. The FORESTPATH model is used for predicting the fate of radionuclides in forest compartments after deposition, as well as for evaluating the efficiency of remedial policies. The time of intervention and the radionuclide deposition profile were predicted to be crucial for remediation efficiency. A risk assessment conducted for a critical group of forest users in Belarus shows that consumption of forest products (berries and mushrooms) leads to about a 0.004% risk of a fatal cancer annually. Cost-benefit analysis for forest cleanup suggests that complete removal of the organic layer is too expensive for application in Belarus and a better methodology is required. In conclusion, the FORESTPATH modeling framework could have wide applications in environmental remediation of radionuclides and toxic metals, as well as in dose reconstruction and risk assessment.
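Compartment models of the FORESTPATH type move radionuclide inventory between forest pools with first-order transfer rates while every pool also decays radioactively. The sketch below is generic: the compartments, transfer rates and unit deposition are assumed for illustration and are not FORESTPATH's actual parameters:

```python
import math

# first-order transfers: tree canopy -> organic layer -> mineral soil
RATES = {("tree", "organic"): 0.3, ("organic", "mineral"): 0.1}  # 1/yr, assumed
HALF_LIFE = 30.0                                                 # yr (Cs-137)
DECAY = math.log(2) / HALF_LIFE

def step(inv, dt=0.01):
    """One explicit Euler step: apply inter-compartment fluxes, then
    radioactive decay in every compartment."""
    flux = {k: 0.0 for k in inv}
    for (src, dst), rate in RATES.items():
        f = rate * inv[src]
        flux[src] -= f
        flux[dst] += f
    return {k: v + dt * (flux[k] - DECAY * v) for k, v in inv.items()}

inv = {"tree": 1.0, "organic": 0.0, "mineral": 0.0}   # unit deposition on canopy
for _ in range(int(10 / 0.01)):                        # simulate 10 years
    inv = step(inv)
```

Because transfers only move activity between pools, the total inventory declines with the radioactive decay constant alone, which gives a handy correctness check. A remediation scenario (e.g. removing the organic layer at some intervention time) would simply zero that compartment mid-run.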

  13. The value of model averaging and dynamical climate model predictions for improving statistical seasonal streamflow forecasts over Australia

    Science.gov (United States)

    Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.

    2013-10-01

    Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improvement in seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors. This is to take advantage of the direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provides little additional benefit.
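The model-averaging idea can be sketched by weighting candidate models by their cross-validated predictive density rather than picking the single "best" one. The data, the two candidate indices and the leave-one-out weighting scheme below are invented for illustration and are a simplification of a full Bayesian model averaging treatment:

```python
import math, random, statistics

random.seed(5)
# two candidate models: streamflow regressed on different lagged climate
# indices; index 1 is truly informative, index 2 much less so
n = 120
idx1 = [random.gauss(0, 1) for _ in range(n)]
idx2 = [random.gauss(0, 1) for _ in range(n)]
flow = [0.8 * a + 0.2 * b + random.gauss(0, 0.5) for a, b in zip(idx1, idx2)]

def loo_loglik(x, flow):
    """Leave-one-out Gaussian log predictive density of a 1-predictor model."""
    ll = 0.0
    for i in range(len(flow)):
        xs = [v for j, v in enumerate(x) if j != i]
        ys = [v for j, v in enumerate(flow) if j != i]
        mx, my = statistics.mean(xs), statistics.mean(ys)
        b = sum((a - mx) * (c - my) for a, c in zip(xs, ys)) / \
            sum((a - mx) ** 2 for a in xs)
        a0 = my - b * mx
        res = [c - (a0 + b * a) for a, c in zip(xs, ys)]
        s = statistics.pstdev(res) or 1e-9
        mu = a0 + b * x[i]
        ll += -0.5 * math.log(2 * math.pi * s * s) \
              - (flow[i] - mu) ** 2 / (2 * s * s)
    return ll

ll = [loo_loglik(idx1, flow), loo_loglik(idx2, flow)]
m = max(ll)
raw = [math.exp(l - m) for l in ll]
weights = [v / sum(raw) for v in raw]   # averaging weights from predictive skill
```

A combined forecast is then the weight-averaged prediction of the candidates; models with genuinely useful indices dominate the weights, while poor candidates are down-weighted rather than discarded outright.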

  14. Cosmological models in the generalized Einstein action

    International Nuclear Information System (INIS)

    Arbab, A.I.

    2007-12-01

    We have studied the evolution of the Universe in the generalized Einstein action of the form R + βR², where R is the scalar curvature and β = const. We have found exact cosmological solutions that predict the present cosmic acceleration. These models predict an inflationary de-Sitter era occurring in the early Universe. The cosmological constant (Λ) is found to decay with the Hubble constant (H) as Λ ∝ H⁴. In this scenario the cosmological constant varies quadratically with the energy density (ρ), i.e., Λ ∝ ρ². Such a variation is found to describe a two-component cosmic fluid in the Universe. One of the components accelerated the Universe in the early era, and the other in the present era. The scale factor of the Universe varies as a ∼ tⁿ with n = 1/2 in the radiation era. The cosmological constant vanishes when n = 4/3 and n = 1/2. We have found that the inclusion of the term R² mimics a cosmic matter that could substitute for ordinary matter. (author)
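The two quoted decay laws are mutually consistent, assuming the standard flat-FRW Friedmann equation (which the abstract does not state explicitly):

```latex
H^2 = \frac{8\pi G}{3}\,\rho
\;\Rightarrow\; \rho \propto H^2,
\qquad\text{so}\qquad
\Lambda \propto H^4 = \left(H^2\right)^2
\;\Rightarrow\; \Lambda \propto \rho^2 .
```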

  15. Moving Towards Dynamic Ocean Management: How Well Do Modeled Ocean Products Predict Species Distributions?

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2016-02-01

    Full Text Available Species distribution models are now widely used in conservation and management to predict suitable habitat for protected marine species. The primary sources of dynamic habitat data have been in situ and remotely sensed oceanic variables (both are considered “measured data”), but now ocean models can provide historical estimates and forecast predictions of relevant habitat variables such as temperature, salinity, and mixed layer depth. To assess the performance of modeled ocean data in species distribution models, we present a case study for cetaceans that compares models based on output from a data-assimilative implementation of the Regional Ocean Modeling System (ROMS) to those based on measured data. Specifically, we used seven years of cetacean line-transect survey data collected between 1991 and 2009 to develop predictive habitat-based models of cetacean density for 11 species in the California Current Ecosystem. Two different generalized additive models were compared: one built with a full suite of ROMS output and another built with a full suite of measured data. Model performance was assessed using the percentage of explained deviance, root mean squared error (RMSE), observed to predicted density ratios, and visual inspection of predicted and observed distributions. Predicted distribution patterns were similar for models using ROMS output and measured data, and showed good concordance between observed sightings and model predictions. Quantitative measures of predictive ability were also similar between model types, and RMSE values were almost identical. The overall demonstrated success of the ROMS-based models opens new opportunities for dynamic species management and biodiversity monitoring because ROMS output is available in near real time and can be forecast.

  16. Models of alien species richness show moderate predictive accuracy and poor transferability

    Directory of Open Access Journals (Sweden)

    César Capinha

    2018-06-01

    Full Text Available Robust predictions of alien species richness are useful to assess global biodiversity change. Nevertheless, the capacity to predict spatial patterns of alien species richness remains largely unassessed. Using 22 data sets of alien species richness from diverse taxonomic groups and covering various parts of the world, we evaluated whether different statistical models were able to provide useful predictions of absolute and relative alien species richness, as a function of explanatory variables representing geographical, environmental and socio-economic factors. Five state-of-the-art count data modelling techniques were used and compared: Poisson and negative binomial generalised linear models (GLMs), multivariate adaptive regression splines (MARS), random forests (RF) and boosted regression trees (BRT). We found that predictions of absolute alien species richness had a low to moderate accuracy in the region where the models were developed and a consistently poor accuracy in new regions. Predictions of relative richness performed better in both geographical settings, but still were not good. Flexible tree-ensemble techniques (RF and BRT) were shown to be significantly better at modelling alien species richness than parametric linear models (such as GLMs), despite the latter being more commonly applied for this purpose. Importantly, the poor spatial transferability of the models also warrants caution in assuming the generality of the relationships they identify, e.g. by applying projections under future scenario conditions. Ultimately, our results strongly suggest that the predictability of spatial variation in alien species richness is limited. The somewhat more robust ability to rank regions according to the number of aliens they have (i.e. relative richness) suggests that models of alien species richness may be useful for prioritising and comparing regions, but not for predicting exact species numbers.
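The parametric baseline in this comparison, a Poisson GLM for richness counts, can be sketched in plain Python. The covariate, coefficients and sample size below are invented; the fit uses gradient ascent on the Poisson log-likelihood rather than the usual iteratively reweighted least squares:

```python
import math, random

def rpois(mu):
    """Knuth's Poisson sampler (adequate for the small means used here)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def fit_poisson(X, y, lr=0.05, epochs=3000):
    """Poisson GLM with log link, fitted by gradient ascent
    on the log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)          # intercept + one weight per covariate
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            mu = math.exp(w[0] + sum(a * b for a, b in zip(w[1:], xi)))
            err = yi - mu
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

random.seed(2)
# "region A": richness counts driven by one (e.g. socio-economic) covariate
XA = [(random.uniform(-1, 1),) for _ in range(150)]
yA = [rpois(math.exp(0.5 + 0.8 * x[0])) for x in XA]
w = fit_poisson(XA, yA)
```

The fit should roughly recover the generating intercept (0.5) and slope (0.8). Transferability can then be probed by applying `w` to a second synthetic region whose covariate-response relationship differs, mimicking the poor cross-region performance reported above.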

  17. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  18. Sci-Thur AM: YIS – 05: Prediction of lung tumor motion using a generalized neural network optimized from the average prediction outcome of a group of patients

    Energy Technology Data Exchange (ETDEWEB)

    Teo, Troy; Alayoubi, Nadia; Bruce, Neil; Pistorius, Stephen [University of Manitoba/ CancerCare Manitoba, University of Manitoba, University of Manitoba, University of Manitoba / CancerCare Manitoba (Canada)

    2016-08-15

    Purpose: In image-guided adaptive radiotherapy systems, prediction of tumor motion is required to compensate for system latencies. However, due to the non-stationary nature of respiration, it is a challenge to predict the associated tumor motions. In this work, a systematic design of the neural network (NN) using a mixture of online data acquired during the initial period of the tumor trajectory, coupled with a generalized model optimized using a group of patient data (obtained offline), is presented. Methods: The average error surface obtained from seven patients was used to determine the input data size and the number of hidden neurons for the generalized NN. To reduce training time, instead of using random weights to initialize learning (method 1), weights inherited from previous training batches (method 2) were used to predict tumor position for each sliding window. Results: The generalized network was established with 35 input samples (∼4.66 s) and 20 hidden nodes. For a prediction horizon of 650 ms, mean absolute errors of 0.73 mm and 0.59 mm were obtained for methods 1 and 2, respectively. An average initial learning period of 8.82 s is obtained. Conclusions: A network with a relatively short initial learning time was achieved. Its accuracy is comparable to previous studies. This network could be used as a plug-and-play predictor in which (a) tumor positions can be predicted as soon as treatment begins and (b) the need for pretreatment data and optimization for individual patients can be avoided.
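The warm-start sliding-window idea can be sketched with a linear adaptive filter standing in for the neural network: weights are carried forward from one window to the next (the "method 2" strategy), so prediction error drops after an initial learning period. The trace, sampling rate and normalized-LMS update below are all invented for illustration:

```python
import math, random

random.seed(4)
# synthetic "respiratory" tumor trace, 0.25 Hz breathing sampled at 7.5 Hz
dt = 1 / 7.5
trace = [math.sin(2 * math.pi * 0.25 * k * dt) + random.gauss(0, 0.02)
         for k in range(600)]

WINDOW = 35   # input samples per prediction (~4.66 s, as in the abstract)
AHEAD = 5     # prediction horizon in samples (~650 ms)

def online_lms(trace, lr=0.2):
    """Online NLMS predictor: predict first, record the error, then adapt,
    carrying the weights forward between sliding windows (warm start)."""
    w = [0.0] * WINDOW
    errs = []
    for t in range(WINDOW, len(trace) - AHEAD):
        x = trace[t - WINDOW:t]
        pred = sum(a * b for a, b in zip(w, x))
        target = trace[t + AHEAD]
        errs.append(abs(target - pred))
        norm = sum(v * v for v in x) + 1e-9
        e = target - pred
        w = [wj + lr * e * xj / norm for wj, xj in zip(w, x)]  # NLMS update
    return errs

errs = online_lms(trace)
early = sum(errs[:50]) / 50      # errors during the initial learning period
late = sum(errs[-200:]) / 200    # errors after adaptation
```

A cold-start variant (re-initializing `w` for every window, analogous to method 1) would repeat the expensive learning transient each time, which is the cost the inherited-weight scheme avoids.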

  20. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; the second part is the “neural fuzzy inference system”, which builds on the first part and can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models, which occupy large computational resources, are time-consuming and have low predictive accuracy. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  1. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  2. Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction

    Science.gov (United States)

    Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.

    2012-12-01

    ARMA(1,2) errors were fit to the observations. Preliminary model validation exercises at a 30-day forecast horizon show that the ARMA error models generally improve the predictive skill of the linear regression rating curves. Skill seems to vary based on the ambient hydrologic conditions at the onset of the forecast. For example, ARMA error model forecasts issued before a high flow/turbidity event do not show significant improvements over the rating curve approach. However, ARMA error model forecasts issued during the "falling limb" of the hydrograph are significantly more accurate than rating curves for both single day and accumulated event predictions. In order to assist in reservoir operations decisions associated with turbidity events and general water supply reliability, DEP has initiated design of an Operations Support Tool (OST). OST integrates a reservoir operations model with 2D hydrodynamic water quality models and a database compiling near-real-time data sources and hydrologic forecasts. Currently, OST uses conventional flow-turbidity rating curves and hydrologic forecasts for predictive turbidity inputs. Given the improvements in predictive skill over traditional rating curves, the ARMA error models are currently being evaluated as an addition to DEP's Operations Support Tool.
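The improvement of an error model over a bare rating curve can be sketched with synthetic data. For brevity the sketch uses an AR(1) error model rather than the ARMA(1,2) fitted in the study, and all series and coefficients are invented; the point is that persisting the autocorrelated residual sharpens the one-step forecast:

```python
import math, random, statistics

random.seed(9)
# synthetic daily series: log-turbidity follows a log-flow rating curve
# plus persistent (autocorrelated) errors
n = 400
logq = [math.sin(2 * math.pi * t / 60) + random.gauss(0, 0.3) for t in range(n)]
err, phi = 0.0, 0.7
logt = []
for t in range(n):
    err = phi * err + random.gauss(0, 0.2)
    logt.append(0.2 + 0.9 * logq[t] + err)

# rating curve: OLS of log-turbidity on log-flow
mx, my = statistics.mean(logq), statistics.mean(logt)
b = sum((x - mx) * (y - my) for x, y in zip(logq, logt)) / \
    sum((x - mx) ** 2 for x in logq)
a = my - b * mx
resid = [y - (a + b * x) for x, y in zip(logq, logt)]

# AR(1) error model (a simplification of the ARMA(1,2) in the text)
phi_hat = sum(r0 * r1 for r0, r1 in zip(resid, resid[1:])) / \
          sum(r * r for r in resid[:-1])

# one-step forecasts: rating curve alone vs. curve + persisted error
rc_err, ar_err = [], []
for t in range(1, n):
    rc = a + b * logq[t]
    ar = rc + phi_hat * resid[t - 1]
    rc_err.append(abs(logt[t] - rc))
    ar_err.append(abs(logt[t] - ar))
```

The gain is largest when residuals are strongly persistent, e.g. on the falling limb of a hydrograph, and smallest when a fresh event breaks the correlation structure, matching the skill pattern described above.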

  3. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    Science.gov (United States)

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.

    2016-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  4. External intermittency prediction using AMR solutions of RANS turbulence and transported PDF models

    Science.gov (United States)

    Olivieri, D. A.; Fairweather, M.; Falle, S. A. E. G.

    2011-12-01

    External intermittency in turbulent round jets is predicted using a Reynolds-averaged Navier-Stokes modelling approach coupled to solutions of the transported probability density function (pdf) equation for scalar variables. Solutions to the descriptive equations are obtained using a finite-volume method, combined with an adaptive mesh refinement algorithm, applied in both physical and compositional space. This method contrasts with conventional approaches to solving the transported pdf equation which generally employ Monte Carlo techniques. Intermittency-modified eddy viscosity and second-moment turbulence closures are used to accommodate the effects of intermittency on the flow field, with the influence of intermittency also included, through modifications to the mixing model, in the transported pdf equation. Predictions of the overall model are compared with experimental data on the velocity and scalar fields in a round jet, as well as against measurements of intermittency profiles and scalar pdfs in a number of flows, with good agreement obtained. For the cases considered, predictions based on the second-moment turbulence closure are clearly superior, although both turbulence models give realistic predictions of the bimodal scalar pdfs observed experimentally.

  5. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  6. Proteochemometric model for predicting the inhibition of penicillin-binding proteins

    Science.gov (United States)

    Nabu, Sunanta; Nantasenamat, Chanin; Owasirikul, Wiwat; Lawung, Ratana; Isarankura-Na-Ayudhya, Chartchalerm; Lapins, Maris; Wikberg, Jarl E. S.; Prachayasittikul, Virapong

    2015-02-01

    Neisseria gonorrhoeae infection threatens to become an untreatable sexually transmitted disease in the near future owing to the increasing emergence of N. gonorrhoeae strains with reduced susceptibility and resistance to the extended-spectrum cephalosporins (ESCs), i.e. ceftriaxone and cefixime, which are the last remaining options for first-line treatment of gonorrhea. Alteration of the penA gene, encoding penicillin-binding protein 2 (PBP2), is the main mechanism conferring penicillin resistance, including reduced susceptibility and resistance to ESCs. To predict and investigate putative amino acid mutations causing β-lactam resistance, particularly for ESCs, we applied proteochemometric modeling to generalize N. gonorrhoeae susceptibility data for predicting the interaction of PBP2 with therapeutic β-lactam antibiotics. This was afforded by correlating publicly available data on the antimicrobial susceptibility of wild-type and mutant N. gonorrhoeae strains for penicillin-G, cefixime and ceftriaxone with 50 PBP2 protein sequences using partial least-squares projections to latent structures. The generated model revealed excellent predictability (R² = 0.91, Q² = 0.77, Q²ext = 0.78). Moreover, our model identified the amino acid mutations in PBP2 with the highest impact on antimicrobial susceptibility and provided information on the physicochemical properties of amino acid mutations affecting antimicrobial susceptibility. Our model thus provides insight into the physicochemical basis for resistance development in PBP2, suggesting its use for predicting and monitoring novel PBP2 mutations that may emerge in the future.
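The Q² statistic reported alongside R² above is the leave-one-out cross-validated analogue of R², defined as 1 − PRESS/SS_tot. The sketch below computes both for a toy one-descriptor linear model (invented data, simple OLS rather than the actual partial least-squares proteochemometric model):

```python
import random, statistics

def ols1(X, y):
    """One-predictor OLS on rows [x]; returns (intercept, slope)."""
    xs = [r[0] for r in X]
    mx, my = statistics.mean(xs), statistics.mean(y)
    b = sum((a - mx) * (c - my) for a, c in zip(xs, y)) / \
        sum((a - mx) ** 2 for a in xs)
    return my - b * mx, b

def pred1(model, row):
    a, b = model
    return a + b * row[0]

def press_q2(X, y, fit, predict):
    """Leave-one-out cross-validated Q^2 = 1 - PRESS / SS_tot."""
    my = statistics.mean(y)
    ss_tot = sum((v - my) ** 2 for v in y)
    press = sum((y[i] - predict(fit([r for j, r in enumerate(X) if j != i],
                                    [v for j, v in enumerate(y) if j != i]),
                                X[i])) ** 2
                for i in range(len(y)))
    return 1.0 - press / ss_tot

random.seed(0)
# toy "descriptor -> susceptibility" data standing in for the PBP2 model
X = [[i / 10] for i in range(20)]
y = [2.0 * r[0] + random.gauss(0, 0.3) for r in X]

model = ols1(X, y)
my = statistics.mean(y)
ss_res = sum((yi - pred1(model, r)) ** 2 for r, yi in zip(X, y))
r2 = 1 - ss_res / sum((yi - my) ** 2 for yi in y)
q2 = press_q2(X, y, ols1, pred1)
```

For an OLS fit, PRESS always exceeds the in-sample residual sum of squares, so Q² is strictly below R²; a large gap between the two flags an overfitted model.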

  7. Predictive Control, Competitive Model Business Planning, and Innovation ERP

    DEFF Research Database (Denmark)

    Nourani, Cyrus F.; Lauth, Codrina

    2015-01-01

    New optimality principles are put forth based on competitive model business planning. A Generalized MinMax local optimum dynamic programming algorithm is presented and applied to business model computing, where predictive techniques can determine local optima. Based on a systems model, an enterprise is not viewed as the sum of its component elements, but as the product of their interactions. The paper starts by introducing a systems approach to business modeling. A competitive business modeling technique, based on the author's planning techniques, is applied. Systemic decisions are based on common organizational goals, and as such business planning and resource assignments should strive to satisfy higher organizational goals. It is critical to understand how different decisions affect and influence one another. Here, a business planning example is presented where a systems thinking technique, using Causal...

  8. Improvement of PM10 prediction in East Asia using inverse modeling

    Science.gov (United States)

    Koo, Youn-Seo; Choi, Dae-Ryun; Kwon, Hi-Yong; Jang, Young-Kee; Han, Jin-Seok

    2015-04-01

    Aerosols from anthropogenic emissions in the industrialized regions of China, as well as dust emissions from southern Mongolia and northern China that are transported along the prevailing northwesterly winds, have a large influence on the air quality in Korea. The emission inventory in the East Asia region is an important factor in chemical transport modeling (CTM) for PM10 (particulate matter less than 10 μm in aerodynamic diameter) forecasts and air quality management in Korea. Most previous studies showed that predictions of PM10 mass concentration by the CTM were underestimated when compared with observational data. In order to close the gap between observations and CTM predictions, the inverse Bayesian approach with the Comprehensive Air-quality Model with extensions (CAMx) forward model was applied to obtain optimized a posteriori PM10 emissions in East Asia. The predicted PM10 concentrations with a priori emissions were first compared with observations at monitoring sites in China and Korea for January and August 2008. The comparison showed that PM10 concentrations with a priori PM10 emissions for anthropogenic and dust sources were generally under-predicted. The result from the inverse modeling indicated that anthropogenic PM10 emissions in the industrialized and urbanized areas of China were underestimated, while dust emissions from desert and barren soil in southern Mongolia and northern China were overestimated. A priori PM10 emissions from northeastern China regions including Shenyang, Changchun, and Harbin were underestimated by about 300% (i.e., the ratio of a posteriori to a priori PM10 emission was a factor of about 3). The predictions of PM10 concentrations with a posteriori emissions showed better agreement with the observations, implying that the inverse modeling minimized the discrepancies in the model predictions by improving PM10 emissions in East Asia.

  9. General Potential-Current Model and Validation for Electrocoagulation

    International Nuclear Information System (INIS)

    Dubrawski, Kristian L.; Du, Codey; Mohseni, Madjid

    2014-01-01

    A model relating potential and current in continuous parallel plate iron electrocoagulation (EC) was developed for application in drinking water treatment. The general model can be applied to any EC parallel plate system relying only on geometric and tabulated input variables without the need of system-specific experimentally derived constants. For the theoretical model, the anode and cathode were vertically divided into n equipotential segments in a single pass, upflow, and adiabatic EC reactor. Potential and energy balances were simultaneously solved at each vertical segment, which included the contribution of ionic concentrations, solution temperature and conductivity, cathodic hydrogen flux, and gas/liquid ratio. We experimentally validated the numerical model with a vertical upflow EC reactor using a 24 cm height 99.99% pure iron anode divided into twelve 2 cm segments. Individual experimental currents from each segment were summed to determine total current, and compared with the theoretically derived value. Several key variables were studied to determine their impact on model accuracy: solute type, solute concentration, current density, flow rate, inter-electrode gap, and electrode surface condition. Model results were in good agreement with experimental values at cell potentials of 2-20 V (corresponding to a current density range of approximately 50-800 A/m²), with mean relative deviation of 9% for low flow rate, narrow electrode gap, polished electrodes, and 150 mg/L NaCl. Highest deviation occurred with a large electrode gap, unpolished electrodes, and Na₂SO₄ electrolyte, due to parasitic H₂O oxidation and less than unity current efficiency. This is the first general model which can be applied to any parallel plate EC system for accurate electrochemical voltage or current prediction.

  10. Winnerless competition principle and prediction of the transient dynamics in a Lotka-Volterra model

    Science.gov (United States)

    Afraimovich, Valentin; Tristan, Irma; Huerta, Ramon; Rabinovich, Mikhail I.

    2008-12-01

    Predicting the evolution of multispecies ecological systems is an intriguing problem. A sufficiently complex model with the necessary predictive power requires solutions that are structurally stable. Small variations of the system parameters should not qualitatively perturb its solutions. When one is interested in just the asymptotic results of evolution (as time goes to infinity), the problem has a straightforward mathematical image involving simple attractors (fixed points or limit cycles) of a dynamical system. However, for an accurate prediction of evolution, the analysis of transient solutions is critical. In this paper, in the framework of the traditional Lotka-Volterra model (generalized in some sense), we show that the transient solution representing multispecies sequential competition can be reproducible and predictable with high probability.
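
    The generalized Lotka-Volterra dynamics behind winnerless competition can be sketched with a short integration; the growth rates and the asymmetric competition matrix below are illustrative choices (a May-Leonard-type cyclic structure), not the paper's parameters.

```python
import numpy as np

def glv_step(x, sigma, rho, dt):
    """One Euler step of dx_i/dt = x_i * (sigma_i - sum_j rho_ij x_j)."""
    return x + dt * x * (sigma - rho @ x)

# Three competitors with asymmetric (non-reciprocal) competition, the
# structure that produces sequential, winnerless switching.
sigma = np.array([1.0, 1.0, 1.0])
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])

x = np.array([0.6, 0.3, 0.1])
traj = [x]
for _ in range(5000):           # integrate to t = 50 with dt = 0.01
    x = glv_step(x, sigma, rho, dt=0.01)
    traj.append(x)
traj = np.array(traj)
```

    With this asymmetric matrix each species transiently dominates and is then displaced by the next, tracing the heteroclinic sequence the abstract describes; populations stay positive and bounded throughout.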

  11. Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.

    Science.gov (United States)

    Martínez, C A; Khare, K; Rahman, S; Elzo, M A

    2017-10-01

    Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, and it is an area that has recently experienced a great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian, and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements were found in the correlation between phenotypes and predicted breeding values and in the accuracy of predicted breeding values. Our models account for correlation of marker effects and permit accommodating general covariance structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
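
    The zero pattern that a Gaussian covariance graph model encodes can be illustrated by building a covariance matrix from an undirected graph and sampling correlated marker effects from it. This is a toy construction, not the Bayes GCov samplers of the study.

```python
import numpy as np

# Undirected graph on 5 markers: edges mark the covariances allowed to
# be nonzero; every missing edge forces a zero in the covariance matrix.
edges = [(0, 1), (1, 2), (3, 4)]
n_markers = 5

cov = np.eye(n_markers)
for i, j in edges:
    # illustrative covariance value; keeping the matrix diagonally
    # dominant guarantees it stays positive definite
    cov[i, j] = cov[j, i] = 0.4

rng = np.random.default_rng(4)
effects = rng.multivariate_normal(np.zeros(n_markers), cov, size=10000)
emp = np.cov(effects, rowvar=False)   # empirical covariance of the draws
```

    The empirical covariance of the sampled effects reproduces the graph: entries linked by an edge come out near 0.4, while entries without an edge stay near zero.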

  12. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: V. Predictive absorbability models.

    Science.gov (United States)

    Langenbucher, Frieder

    2007-08-01

    This paper discusses Excel applications related to the prediction of drug absorbability from physicochemical constants. PHDISSOC provides a generalized model for pH profiles of electrolytic dissociation, water solubility, and partition coefficient. SKMODEL predicts drug absorbability based on a log-log plot of water solubility and O/W partitioning, augmented by additional features such as electrolytic dissociation, melting point, and the dose administered. GIABS presents a mechanistic model of g.i. drug absorption. BIODATCO presents a database compiling relevant drug data to be used for quantitative predictions.
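
    The pH-dependent dissociation and partitioning profiles computed by a tool like PHDISSOC follow standard Henderson-Hasselbalch relationships, which a short sketch can illustrate. The pKa and logP values below are hypothetical, made up for illustration.

```python
import math

def fraction_ionized_acid(ph, pka):
    """Henderson-Hasselbalch: ionized fraction of a monoprotic acid."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

def log_d(ph, pka, log_p):
    """Apparent (distribution) coefficient of a monoprotic acid, assuming
    only the neutral species partitions into the organic phase."""
    f_neutral = 1.0 - fraction_ionized_acid(ph, pka)
    return log_p + math.log10(f_neutral)

# Hypothetical acidic drug: pKa 4.5, logP 2.0
print(fraction_ionized_acid(7.4, 4.5))  # mostly ionized at plasma pH
print(log_d(7.4, 4.5, 2.0))             # apparent lipophilicity drops
```

    At pH equal to pKa exactly half the drug is ionized, and the apparent partition coefficient logD falls below logP as ionization suppresses partitioning into the organic phase.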

  14. Dynamic optimization and robust explicit model predictive control of hydrogen storage tank

    KAUST Repository

    Panos, C.; Kouramas, K.I.; Georgiadis, M.C.; Pistikopoulos, E.N.

    2010-01-01

    We present a general framework for the optimal design and control of a metal-hydride bed under hydrogen desorption operation. The framework features: (i) a detailed two-dimensional dynamic process model, (ii) a design and operational dynamic optimization step, and (iii) an explicit/multi-parametric model predictive controller design step. For the controller design, a reduced order approximate model is obtained, based on which nominal and robust multi-parametric controllers are designed. © 2010 Elsevier Ltd.
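
    The full explicit/multi-parametric controller is beyond a short example, but the receding-horizon idea underlying model predictive control can be sketched for an unconstrained linear system. The matrices below describe a generic double integrator, not the metal-hydride bed model.

```python
import numpy as np

# Discrete-time double integrator: x = [position, velocity], scalar input
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R, N = np.eye(2), 0.01 * np.eye(1), 20   # stage costs and horizon

def mpc_input(x0):
    """Solve the unconstrained finite-horizon problem by stacking the
    predictions X = F x0 + G U and minimizing a quadratic in U."""
    n, m = A.shape[0], B.shape[1]
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for k in range(N):
        for j in range(k + 1):
            G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    H = G.T @ Qbar @ G + Rbar
    f = G.T @ Qbar @ F @ x0
    U = np.linalg.solve(H, -f)
    return U[:m]          # receding horizon: apply only the first input

x = np.array([1.0, 0.0])  # start 1 unit from the setpoint
for _ in range(100):
    u = mpc_input(x)
    x = A @ x + B @ u
```

    An explicit/multi-parametric design would pre-solve this same quadratic program offline as a piecewise-affine function of the state, so that the online step reduces to a lookup instead of a solve.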

  15. EWAM: a model for predicting food and water ingestion, and inhalation rates of man

    International Nuclear Information System (INIS)

    Zach, Reto; Barnard, John W.

    1985-09-01

    A computer model, EWAM (Energy, Water and Air Model), has been designed and implemented for predicting food and water ingestion, and inhalation rates of man for use in environmental assessment models. EWAM uses physiological, energetic, nutritional and physical relationships in combination with activity time budgets, and mass and energy balances. The calculated ingestion and inhalation rates are closely related. Various age and sex classes of man are taken into account. EWAM is best described as a deterministic equilibrium or steady-state model, operating on a daily time-scale, with both detailed research and more general assessment model features. The parameters of EWAM are reviewed and suitable values recommended to allow biologically meaningful predictions.

  16. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    Science.gov (United States)

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

    Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, the autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm as the prediction step increases; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%.
This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data using sensing
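
    The Kalman-filter pre-processing step can be illustrated with a scalar random-walk filter; the deformation series below is synthetic, not GNSS monitoring data, and the noise variances are assumed rather than estimated.

```python
import numpy as np

def kalman_denoise(z, q=1e-3, r=1.0):
    """Scalar random-walk Kalman filter: the state is the deformation
    level, q the process variance, r the measurement variance."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                 # predict: state drifts as a random walk
        g = p / (p + r)           # Kalman gain
        x = x + g * (zk - x)      # update with measurement zk
        p = (1.0 - g) * p
        out[k] = x
    return out

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
truth = 3.0 * np.sin(0.3 * t)                       # slow "deformation"
z = truth + rng.normal(scale=1.0, size=t.size)      # noisy observations
filtered = kalman_denoise(z, q=1e-3, r=1.0)
```

    The filtered series tracks the slow deformation while suppressing most of the measurement noise; in the paper's pipeline this denoised series would then feed the ARIMA and GARCH stages.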

  17. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    Directory of Open Access Journals (Sweden)

    Jingzhou Xin

    2018-01-01

    Full Text Available Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, the autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm as the prediction step increases; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%.
This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data

  18. Aespoe Pillar Stability Experiment. Summary of preparatory work and predictive modelling

    International Nuclear Information System (INIS)

    Andersson, J. Christer

    2004-11-01

    The Aespoe Pillar Stability Experiment, APSE, is a large scale rock mechanics experiment for research on the spalling process and the possibility of modelling it numerically. The experiment can be summarized in three objectives: Demonstrate the current capability to predict spalling in a fractured rock mass; Demonstrate the effect of backfill (confining pressure) on the rock mass response; and Compare 2D and 3D mechanical and thermal predictive capabilities. This report summarizes the work performed in the experiment prior to the heating of the rock mass. The major activities that have been performed and are discussed herein are: 1) The geology of the experiment drift in general and the experiment volume in particular. 2) The design process of the experiment and the thoughts behind some of the important decisions. 3) The monitoring programme and the supporting constructions for the instruments. 4) The numerical modelling: approaches taken and a summary of the predictions. At the end of the report there is a comparison of the results from the different models, including a comparison of the time needed for building, realizing and making changes in the different models.

  19. A new general dynamic model predicting radionuclide concentrations and fluxes in coastal areas from readily accessible driving variables

    International Nuclear Information System (INIS)

    Haakanson, Lars

    2004-01-01

    This paper presents a general, process-based dynamic model for coastal areas for radionuclides (metals, organics and nutrients) from both single pulse fallout and continuous deposition. The model gives radionuclide concentrations in water (total, dissolved and particulate phases and concentrations in sediments and fish) for entire defined coastal areas. The model gives monthly variations. It accounts for inflow from tributaries, direct fallout to the coastal area, internal fluxes (sedimentation, resuspension, diffusion, burial, mixing and biouptake and retention in fish) and fluxes to and from the sea outside the defined coastal area and/or adjacent coastal areas. The fluxes of water and substances between the sea and the coastal area are differentiated into three categories of coast types: (i) areas where the water exchange is regulated by tidal effects; (ii) open coastal areas where the water exchange is regulated by coastal currents; and (iii) semi-enclosed archipelago coasts. The coastal model gives the fluxes to and from the following four abiotic compartments: surface water, deep water, ET areas (i.e., areas where fine sediment erosion and transport processes dominate the bottom dynamic conditions and resuspension appears) and A-areas (i.e., areas of continuous fine sediment accumulation). Criteria to define the boundaries for the given coastal area towards the sea, and to define whether a coastal area is open or closed are given in operational terms. The model is simple to apply since all driving variables may be readily accessed from maps and standard monitoring programs. The driving variables are: latitude, catchment area, mean annual precipitation, fallout and month of fallout and parameters expressing coastal size and form as determined from, e.g., digitized bathymetric maps using a GIS program. 
Selected results: the predictions of radionuclide concentrations in water and fish largely depend on two factors, the concentration in the sea outside the given

  20. Hyperspectral-based predictive modelling of grapevine water status in the Portuguese Douro wine region

    Science.gov (United States)

    Pôças, Isabel; Gonçalves, João; Costa, Patrícia Malva; Gonçalves, Igor; Pereira, Luís S.; Cunha, Mario

    2017-06-01

    In this study, hyperspectral reflectance (HySR) data derived from a handheld spectroradiometer were used to assess the water status of three grapevine cultivars in two sub-regions of the Douro wine region during two consecutive years. A large set of potential predictors derived from the HySR data were considered for modelling/predicting the predawn leaf water potential (Ψpd) through different statistical and machine learning techniques. Three HySR vegetation indices were selected as final predictors for the computation of the models, and the in-season time trend was removed from the data by using a time predictor. The vegetation indices selected were the Normalized Reflectance Index for the wavelengths 554 nm and 561 nm (NRI554;561), the water index (WI) for the wavelengths 900 nm and 970 nm, and the D1 index, which is associated with the rate of reflectance increase in the wavelengths of 706 nm and 730 nm. These vegetation indices covered the green, red edge and near infrared domains of the electromagnetic spectrum. A large set of state-of-the-art statistical and machine-learning modelling techniques was tested. Predictive modelling techniques based on the generalized boosted model (GBM), bagged multivariate adaptive regression splines (B-MARS), the generalized additive model (GAM), and Bayesian regularized neural networks (BRNN) showed the best performance for predicting Ψpd, with an average coefficient of determination (R²) ranging between 0.78 and 0.80 and RMSE varying between 0.11 and 0.12 MPa. When cultivar Touriga Nacional was used for training the models and the cultivars Touriga Franca and Tinta Barroca for testing (independent validation), the models' performance was good, particularly for GBM (R² = 0.85; RMSE = 0.09 MPa). Additionally, the comparison of observed and predicted Ψpd showed an equitable dispersion of data from the various cultivars. The results achieved show good potential of these predictive models based on vegetation indices to support
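
    The generalized boosted model (GBM) idea, sequentially fitting small trees to the residuals of the current prediction, can be sketched with depth-one regression stumps. The "vegetation index" data here are simulated, not the Douro measurements.

```python
import numpy as np

def fit_stump(x, r):
    """Best single-feature threshold split minimizing squared error."""
    best = None
    for j in range(x.shape[1]):
        order = np.argsort(x[:, j])
        xs, rs = x[order, j], r[order]
        for i in range(1, len(rs)):
            if xs[i] == xs[i - 1]:
                continue
            left, right = rs[:i].mean(), rs[i:].mean()
            sse = ((rs[:i] - left) ** 2).sum() + ((rs[i:] - right) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, (xs[i - 1] + xs[i]) / 2, left, right)
    return best[1:]

def boost(x, y, n_rounds=100, lr=0.3):
    """Gradient boosting for squared error: each stump fits residuals."""
    pred = np.full(y.shape, y.mean())
    stumps = []
    for _ in range(n_rounds):
        j, thr, left, right = fit_stump(x, y - pred)
        pred += lr * np.where(x[:, j] <= thr, left, right)
        stumps.append((j, thr, left, right))
    return pred, stumps

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 3))                      # three "indices"
psi = -1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2     # water-potential proxy
pred, _ = boost(X, psi)
r2 = 1 - ((psi - pred) ** 2).sum() / ((psi - psi.mean()) ** 2).sum()
```

    Each round adds a shrunken stump fitted to what the current ensemble still gets wrong; production GBM implementations add deeper trees, subsampling, and cross-validated stopping on top of this core loop.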

  1. Generalized complex geometry, generalized branes and the Hitchin sigma model

    International Nuclear Information System (INIS)

    Zucchini, Roberto

    2005-01-01

    Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of a generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well-known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non-trivially to a novel cohomology associated with the branes as generalized complex submanifolds. (author)

  2. Validation of a risk prediction model for Barrett's esophagus in an Australian population.

    Science.gov (United States)

    Ireland, Colin J; Gordon, Andrea L; Thompson, Sarah K; Watson, David I; Whiteman, David C; Reed, Richard L; Esterman, Adrian

    2018-01-01

    Esophageal adenocarcinoma is a disease that has a high mortality rate, the only known precursor being Barrett's esophagus (BE). While screening for BE is not cost-effective at the population level, targeted screening might be beneficial. We have developed a risk prediction model to identify people with BE, and here we present the external validation of this model. A cohort study was undertaken to validate a risk prediction model for BE. Individuals with endoscopy- and histopathology-proven BE completed a questionnaire containing variables previously identified as risk factors for this condition. Their responses were combined with data from a population sample for analysis. Risk scores were derived for each participant. Overall performance of the risk prediction model in terms of calibration and discrimination was assessed. Scores from 95 individuals with BE and 636 individuals from the general population were analyzed. The Brier score was 0.118, suggesting reasonable overall performance. The area under the receiver operating characteristic curve was 0.83 (95% CI 0.78-0.87). The Hosmer-Lemeshow test gave p = 0.14. Minimizing false positives and false negatives, the model achieved a sensitivity of 74% and a specificity of 73%. This study has validated a risk prediction model for BE that has a higher sensitivity than previous models.
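
    The performance measures reported above (Brier score, area under the ROC curve, sensitivity, specificity) can be computed directly from risk scores; the scores below are toy values, not the BE cohort data.

```python
import numpy as np

def brier(y, p):
    """Mean squared difference between predicted risk and outcome."""
    return float(np.mean((p - y) ** 2))

def auc(y, p):
    """Probability that a random case outscores a random non-case
    (ties count one half) - equivalent to the area under the ROC curve."""
    pos, neg = p[y == 1], p[y == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return float(wins / (len(pos) * len(neg)))

def sens_spec(y, p, threshold):
    """Sensitivity and specificity at a chosen risk threshold."""
    pred = (p >= threshold).astype(int)
    sens = (pred[y == 1] == 1).mean()
    spec = (pred[y == 0] == 0).mean()
    return float(sens), float(spec)

y = np.array([1, 1, 1, 0, 0, 0, 0])                 # toy outcomes
p = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1])   # toy risk scores
print(auc(y, p), brier(y, p), sens_spec(y, p, 0.45))
```

    Sweeping the threshold in `sens_spec` traces the ROC curve; the reported 74%/73% operating point corresponds to one such threshold chosen to balance false positives and false negatives.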

  3. Gas Emission Prediction Model of Coal Mine Based on CSBP Algorithm

    Directory of Open Access Journals (Sweden)

    Xiong Yan

    2016-01-01

    Full Text Available In view of the nonlinear characteristics of gas emission in a coal working face, a prediction method is proposed based on a cuckoo search algorithm optimized BP neural network (CSBP). In the CSBP algorithm, the cuckoo search is adopted to optimize the weight and threshold parameters of the BP network and to obtain globally optimal solutions. Furthermore, the twelve main factors affecting gas emission in the coal working face are taken as the input vector of the CSBP algorithm and the gas emission as the output vector, and then the prediction model of the BP neural network with optimal parameters is established. The results show that the CSBP algorithm has better generalization ability and higher prediction accuracy, and can be utilized effectively in the prediction of coal mine gas emission.
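
    A minimal cuckoo-search loop (Levy-flight moves around the current best nest plus abandonment of poor nests) conveys the optimizer used to tune the BP network parameters. Here it minimizes a simple sphere function rather than a network training loss; all parameter values are illustrative.

```python
import numpy as np
from math import gamma, sin, pi

def levy(rng, size, beta=1.5):
    """Mantegna's algorithm for Levy-flight step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, n_iter=300, pa=0.25, alpha=0.05, seed=3):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5.0, 5.0, (n_nests, dim))
    fit = np.array([f(n) for n in nests])
    best_x, best_f = nests[fit.argmin()].copy(), float(fit.min())
    for _ in range(n_iter):
        # Levy flights biased toward the current best nest
        new = nests + alpha * levy(rng, (n_nests, dim)) * (nests - best_x)
        new_fit = np.array([f(n) for n in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        # a fraction pa of nests is abandoned and rebuilt at random
        drop = rng.random(n_nests) < pa
        if drop.any():
            nests[drop] = rng.uniform(-5.0, 5.0, (int(drop.sum()), dim))
            fit[drop] = np.array([f(n) for n in nests[drop]])
        i = int(fit.argmin())
        if fit[i] < best_f:                 # track the global best
            best_x, best_f = nests[i].copy(), float(fit[i])
    return best_x, best_f

sphere = lambda w: float((w ** 2).sum())    # stand-in for a network loss
best_w, best_f = cuckoo_search(sphere, dim=4)
```

    In a CSBP setup, `f` would instead unpack the candidate vector into BP network weights and thresholds and return the training error, with the best nest seeding the final gradient-trained network.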

  4. Using Patient Demographics and Statistical Modeling to Predict Knee Tibia Component Sizing in Total Knee Arthroplasty.

    Science.gov (United States)

    Ren, Anna N; Neher, Robert E; Bell, Tyler; Grimm, James

    2018-06-01

    Preoperative planning is important to achieve successful implantation in primary total knee arthroplasty (TKA). However, traditional TKA templating techniques are not accurate enough to predict the component size to a very close range. With the goal of developing a general predictive statistical model using patient demographic information, ordinal logistic regression was applied to build a proportional odds model to predict the tibia component size. The study retrospectively collected the data of 1992 primary Persona Knee System TKA procedures. Of them, 199 procedures were randomly selected as testing data and the rest of the data were randomly partitioned between model training data and model evaluation data with a ratio of 7:3. Different models were trained and evaluated on the training and validation data sets after data exploration. The final model had patient gender, age, weight, and height as independent variables and predicted the tibia size within 1 size difference 96% of the time on the validation data, 94% of the time on the testing data, and 92% on a prospective cadaver data set. The study results indicated the statistical model built by ordinal logistic regression can increase the accuracy of tibia sizing information for Persona Knee preoperative templating. This research shows statistical modeling may be used with radiographs to dramatically enhance the templating accuracy, efficiency, and quality. In general, this methodology can be applied to other TKA products when the data are applicable. Copyright © 2018 Elsevier Inc. All rights reserved.
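
    A proportional-odds (ordinal logistic) model predicts cumulative size probabilities from a set of thresholds and a single linear predictor. The thresholds and linear-predictor value below are made up for illustration; they are not the fitted Persona model.

```python
import numpy as np

def proportional_odds_probs(x_beta, thresholds):
    """Proportional-odds model: P(Y <= k) = sigmoid(theta_k - x.beta).
    Class probabilities are differences of adjacent cumulative ones."""
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(thresholds) - x_beta)))
    cum = np.concatenate(([0.0], cum, [1.0]))
    return np.diff(cum)

# Hypothetical linear predictor combining gender/age/weight/height, and
# four thresholds separating five tibia sizes (illustrative values only)
thresholds = [-2.0, -0.5, 0.8, 2.2]
x_beta = 0.3
probs = proportional_odds_probs(x_beta, thresholds)
predicted_size = int(np.argmax(probs)) + 1   # sizes numbered 1..5
```

    Because a single linear predictor shifts all cumulative logits together, a change in patient height or weight moves the whole probability mass toward smaller or larger sizes, which is what makes "within 1 size" accuracy a natural metric for this model class.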

  5. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode. Second, this mathematical model was not able to accurately describe the change in the experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
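
    The modified standard model with an eclipse (transitioning) infected-cell class can be written as a small ODE system. The parameter values below are illustrative, not fitted SIV values, and simple Euler integration stands in for the authors' methods.

```python
import numpy as np

def simulate(T0=1e6, V0=1e-3, beta=1e-6, k=1.0, delta=0.5, p=100.0, c=5.0,
             dt=0.01, days=20.0):
    """Target cells T, eclipse-phase cells E, productive cells I, virus V:
        dT/dt = -beta*T*V
        dE/dt =  beta*T*V - k*E     (cells transitioning to production)
        dI/dt =  k*E - delta*I
        dV/dt =  p*I - c*V
    A sketch of the model class, not the authors' fitted code."""
    n = int(days / dt)
    T, E, I, V = T0, 0.0, 0.0, V0
    out = np.empty((n, 4))
    for i in range(n):
        dT = -beta * T * V
        dE = beta * T * V - k * E
        dI = k * E - delta * I
        dV = p * I - c * V
        T += dt * dT; E += dt * dE; I += dt * dI; V += dt * dV
        out[i] = (T, E, I, V)
    return out

traj = simulate()
peak_v = traj[:, 3].max()   # viral load rises, peaks, then declines
```

    Setting k very large collapses the eclipse class and recovers the standard model, which is how the two-population modification nests the original formulation.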

  6. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify the data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: Uncertainty in HETT is relatively small for early times and increases with transit times. Uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution. Introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias. Hydraulic head observations alone cannot constrain the uncertainty of HETT; however, an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model

  7. Comparison of Prediction Model for Cardiovascular Autonomic Dysfunction Using Artificial Neural Network and Logistic Regression Analysis

    Science.gov (United States)

    Zeng, Fangfang; Li, Zhongtao; Yu, Xiaoling; Zhou, Linuo

    2013-01-01

    Background This study aimed to develop artificial neural network (ANN) and multivariable logistic regression (LR) analyses for prediction modeling of cardiovascular autonomic (CA) dysfunction in the general population, and to compare the prediction models built with the two approaches. Methods and Materials We analyzed a previous dataset based on a Chinese population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN and LR analysis, and were tested in the validation set. Performances of these prediction models were then compared. Results Univariate analysis indicated that 14 risk factors showed statistically significant association with the prevalence of CA dysfunction (P<0.05). The mean area under the receiver operating characteristic curve was 0.758 (95% CI 0.724–0.793) for LR and 0.762 (95% CI 0.732–0.793) for ANN analysis, and a noninferiority result was found (P<0.001). Similar results were found in comparisons of sensitivity, specificity, and predictive values between the LR and ANN prediction models. Conclusion The prediction models for CA dysfunction were developed using ANN and LR. ANN and LR are two effective tools for developing prediction models based on our dataset. PMID:23940593

  8. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  9. Economic model predictive control theory, formulations and chemical process applications

    CERN Document Server

    Ellis, Matthew; Christofides, Panagiotis D

    2017-01-01

    This book presents general methods for the design of economic model predictive control (EMPC) systems for broad classes of nonlinear systems that address key theoretical and practical considerations including recursive feasibility, closed-loop stability, closed-loop performance, and computational efficiency. Specifically, the book proposes: Lyapunov-based EMPC methods for nonlinear systems; two-tier EMPC architectures that are highly computationally efficient; and EMPC schemes handling explicitly uncertainty, time-varying cost functions, time-delays and multiple-time-scale dynamics. The proposed methods employ a variety of tools ranging from nonlinear systems analysis, through Lyapunov-based control techniques to nonlinear dynamic optimization. The applicability and performance of the proposed methods are demonstrated through a number of chemical process examples. The book presents state-of-the-art methods for the design of economic model predictive control systems for chemical processes. In addition to being...

  10. A Predictive Model for Readmissions Among Medicare Patients in a California Hospital.

    Science.gov (United States)

    Duncan, Ian; Huynh, Nhan

    2017-11-17

    Predictive models for hospital readmission rates are in high demand because of the Centers for Medicare & Medicaid Services (CMS) Hospital Readmission Reduction Program (HRRP). The LACE index is one of the most popular predictive tools among hospitals in the United States. The LACE index is a simple tool with 4 parameters: Length of stay, Acuity of admission, Comorbidity, and Emergency visits in the previous 6 months. The authors applied logistic regression to develop a predictive model for a medium-sized not-for-profit community hospital in California using patient-level data with more specific patient information (including 13 explanatory variables). Specifically, the logistic regression is applied to 2 populations: a general population including all patients and the specific group of patients targeted by the CMS penalty (characterized as ages 65 or older with select conditions). The 2 resulting logistic regression models have a higher sensitivity rate compared to the sensitivity of the LACE index. The C statistic values of the model applied to both populations demonstrate moderate levels of predictive power. The authors also build an economic model to demonstrate the potential financial impact of the use of the model for targeting high-risk patients in a sample hospital and demonstrate that, on balance, whether the hospital gains or loses from reducing readmissions depends on its margin and the extent of its readmission penalties.
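    For concreteness, the LACE index mentioned above can be sketched as a simple point score. The point mappings below follow the commonly cited scoring (van Walraven et al.); treat them as an assumption for illustration, not a restatement of this study's model:

```python
# Illustrative sketch of the LACE readmission-risk index. Point mappings are
# taken from the commonly cited scoring and are an assumption here.

def lace_score(length_of_stay, emergent_admission, charlson, ed_visits_6mo):
    # L: length of stay in days
    if length_of_stay < 1:      l = 0
    elif length_of_stay == 1:   l = 1
    elif length_of_stay == 2:   l = 2
    elif length_of_stay == 3:   l = 3
    elif length_of_stay <= 6:   l = 4
    elif length_of_stay <= 13:  l = 5
    else:                       l = 7
    a = 3 if emergent_admission else 0   # A: acuity of admission
    c = charlson if charlson <= 3 else 5 # C: Charlson comorbidity index
    e = min(ed_visits_6mo, 4)            # E: ED visits in prior 6 months
    return l + a + c + e

# A 5-day emergent stay, Charlson score 2, one prior ED visit:
print(lace_score(5, True, 2, 1))  # -> 10
```

The study's logistic regression replaces this fixed scoring with 13 fitted patient-level predictors, which is what yields the higher sensitivity reported.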

  11. Prediction of Thermal Properties of Sweet Sorghum Bagasse as a Function of Moisture Content Using Artificial Neural Networks and Regression Models

    Directory of Open Access Journals (Sweden)

    Gosukonda Ramana

    2017-06-01

    Artificial neural networks (ANN) and traditional regression models were developed for prediction of the thermal properties of sweet sorghum bagasse as a function of moisture content and room temperature. Predictions were made for three thermal properties: 1) thermal conductivity, 2) volumetric specific heat, and 3) thermal diffusivity. Each thermal property had five levels of moisture content (8.52%, 12.93%, 18.94%, 24.63%, and 28.62%, w.b.) and room temperature as inputs. Data were sub-partitioned for training, testing, and validation of models. Backpropagation (BP) and Kalman Filter (KF) learning algorithms were employed to develop nonparametric models between input and output data sets. Statistical indices, including the correlation coefficient (R) between actual and predicted outputs, were produced for selecting suitable models. Prediction plots for thermal properties indicated that the ANN models had better accuracy on unseen patterns than the regression models. In general, the ANN models were able to generalize strongly and interpolate unseen patterns within the domain of training.

  12. Brief report: Bifactor modeling of general vs. specific factors of religiousness differentially predicting substance use risk in adolescence.

    Science.gov (United States)

    Kim-Spoon, Jungmeen; Longo, Gregory S; Holmes, Christopher J

    2015-08-01

    Religiousness is important to adolescents in the U.S., and the significant link between high religiousness and low substance use is well known. There is a debate between multidimensional and unidimensional perspectives of religiousness (Gorsuch, 1984); yet, no empirical study has tested this hierarchical model of religiousness related to adolescent health outcomes. The current study presents the first attempt to test a bifactor model of religiousness related to substance use among adolescents (N = 220, 45% female). Our bifactor model using structural equation modeling suggested the multidimensional nature of religiousness as well as the presence of a superordinate general religiousness factor directly explaining the covariation among the specific factors including organizational and personal religiousness and religious social support. The general religiousness factor was inversely related to substance use. After accounting for the contribution of the general religiousness factor, high organizational religiousness related to low substance use, whereas personal religiousness and religious support were positively related to substance use. The findings present the first evidence that supports hierarchical structures of adolescent religiousness that contribute differentially to adolescent substance use. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  13. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  14. Development and application of a statistical methodology to evaluate the predictive accuracy of building energy baseline models

    Energy Technology Data Exchange (ETDEWEB)

    Granderson, Jessica [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). Energy Technologies Area Div.; Price, Phillip N. [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). Energy Technologies Area Div.

    2014-03-01

    This paper documents the development and application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings. The methodology complements the principles addressed in resources such as ASHRAE Guideline 14 and the International Performance Measurement and Verification Protocol. It requires fitting a baseline model to data from a "training period" and using the model to predict total electricity consumption during a subsequent "prediction period." We illustrate the methodology by evaluating five baseline models using data from 29 buildings. The training period and prediction period were varied, and model predictions of daily, weekly, and monthly energy consumption were compared to meter data to determine model accuracy. Several metrics were used to characterize the accuracy of the predictions, and in some cases the best-performing model as judged by one metric was not the best performer when judged by another metric.
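    The train/predict split described above can be sketched in a few lines: fit a simple temperature-driven baseline on a training period, predict a prediction period, and score the result. The metrics used here, CV(RMSE) and NMBE, are common in ASHRAE Guideline 14-style M&V; the paper's exact metric set is not restated, and all data below are invented:

```python
# Illustrative sketch of baseline-model M&V: fit on a "training period",
# predict a "prediction period", score with CV(RMSE) and NMBE.

def fit_line(x, y):
    """Ordinary least squares for one regressor; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Made-up daily data: outdoor temperature (degC) vs. consumption (kWh)
t_train = [5, 10, 15, 20, 25];  e_train = [120, 100, 85, 70, 55]
t_pred  = [8, 12, 22];          e_pred  = [110, 95, 63]

a, b = fit_line(t_train, e_train)
pred = [a + b * t for t in t_pred]
resid = [yo - yh for yo, yh in zip(e_pred, pred)]
mean_e = sum(e_pred) / len(e_pred)
cv_rmse = (sum(r * r for r in resid) / len(resid)) ** 0.5 / mean_e
nmbe = sum(resid) / (len(resid) * mean_e)
print(f"CV(RMSE)={cv_rmse:.3f}  NMBE={nmbe:+.3f}")
```

Varying which days fall in the training versus prediction period, as the paper does, amounts to re-running this fit/score loop over different splits.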

  15. Exploring the squeezed three-point galaxy correlation function with generalized halo occupation distribution models

    Science.gov (United States)

    Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.

    2018-04-01

    We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5 parameter halo occupation distribution model (HOD) with various halo-scale physics and assembly bias. We describe the methodology of 4 different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.

  16. Contribution of Sequence Motif, Chromatin State, and DNA Structure Features to Predictive Models of Transcription Factor Binding in Yeast.

    Science.gov (United States)

    Tsai, Zing Tsung-Yeh; Shiu, Shin-Han; Tsai, Huai-Kuang

    2015-08-01

    Transcription factor (TF) binding is determined by the presence of specific sequence motifs (SM) and chromatin accessibility, where the latter is influenced by both chromatin state (CS) and DNA structure (DS) properties. Although SM, CS, and DS have been used to predict TF binding sites, a predictive model that jointly considers CS and DS has not been developed to predict either TF-specific binding or general binding properties of TFs. Using budding yeast as model, we found that machine learning classifiers trained with either CS or DS features alone perform better in predicting TF-specific binding compared to SM-based classifiers. In addition, simultaneously considering CS and DS further improves the accuracy of the TF binding predictions, indicating the highly complementary nature of these two properties. The contributions of SM, CS, and DS features to binding site predictions differ greatly between TFs, allowing TF-specific predictions and potentially reflecting different TF binding mechanisms. In addition, a "TF-agnostic" predictive model based on three DNA "intrinsic properties" (in silico predicted nucleosome occupancy, major groove geometry, and dinucleotide free energy) that can be calculated from genomic sequences alone has performance that rivals the model incorporating experiment-derived data. This intrinsic property model allows prediction of binding regions not only across TFs, but also across DNA-binding domain families with distinct structural folds. Furthermore, these predicted binding regions can help identify TF binding sites that have a significant impact on target gene expression. Because the intrinsic property model allows prediction of binding regions across DNA-binding domain families, it is TF agnostic and likely describes general binding potential of TFs. Thus, our findings suggest that it is feasible to establish a TF agnostic model for identifying functional regulatory regions in potentially any sequenced genome.

  17. Contribution of Sequence Motif, Chromatin State, and DNA Structure Features to Predictive Models of Transcription Factor Binding in Yeast.

    Directory of Open Access Journals (Sweden)

    Zing Tsung-Yeh Tsai

    2015-08-01

    Transcription factor (TF) binding is determined by the presence of specific sequence motifs (SM) and chromatin accessibility, where the latter is influenced by both chromatin state (CS) and DNA structure (DS) properties. Although SM, CS, and DS have been used to predict TF binding sites, a predictive model that jointly considers CS and DS has not been developed to predict either TF-specific binding or general binding properties of TFs. Using budding yeast as model, we found that machine learning classifiers trained with either CS or DS features alone perform better in predicting TF-specific binding compared to SM-based classifiers. In addition, simultaneously considering CS and DS further improves the accuracy of the TF binding predictions, indicating the highly complementary nature of these two properties. The contributions of SM, CS, and DS features to binding site predictions differ greatly between TFs, allowing TF-specific predictions and potentially reflecting different TF binding mechanisms. In addition, a "TF-agnostic" predictive model based on three DNA "intrinsic properties" (in silico predicted nucleosome occupancy, major groove geometry, and dinucleotide free energy) that can be calculated from genomic sequences alone has performance that rivals the model incorporating experiment-derived data. This intrinsic property model allows prediction of binding regions not only across TFs, but also across DNA-binding domain families with distinct structural folds. Furthermore, these predicted binding regions can help identify TF binding sites that have a significant impact on target gene expression. Because the intrinsic property model allows prediction of binding regions across DNA-binding domain families, it is TF agnostic and likely describes general binding potential of TFs. Thus, our findings suggest that it is feasible to establish a TF-agnostic model for identifying functional regulatory regions in potentially any sequenced genome.

  18. A general consumer-resource population model

    Science.gov (United States)

    Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.

    2015-01-01

    Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.

  19. Comparison of modeling methods to predict the spatial distribution of deep-sea coral and sponge in the Gulf of Alaska

    Science.gov (United States)

    Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.

    2017-08-01

    Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance as measured by the area under the receiver-operating curve (AUC). The models also performed well on the test data for presence and absence, with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models) and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (~50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where data conform well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with

  20. Efficient Prediction of Progesterone Receptor Interactome Using a Support Vector Machine Model

    Directory of Open Access Journals (Sweden)

    Ji-Long Liu

    2015-03-01

    Protein-protein interaction (PPI) is essential for almost all cellular processes, and identification of PPI is a crucial task for biomedical researchers. So far, most computational studies of PPI are intended for pair-wise prediction. Theoretically, predicting protein partners for a single protein is likely a simpler problem. Given enough data for a particular protein, the results can be more accurate than general PPI predictors. In the present study, we assessed the potential of using a support vector machine (SVM) model with selected features centered on a particular protein for PPI prediction. As a proof-of-concept study, we applied this method to identify the interactome of progesterone receptor (PR), a protein which is essential for coordinating female reproduction in mammals by mediating the actions of ovarian progesterone. We achieved an accuracy of 91.9%, sensitivity of 92.8% and specificity of 91.2%. Our method is generally applicable to any other protein and therefore may be of help in guiding biomedical experiments.
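    The three figures reported above (accuracy, sensitivity, specificity) come straight from a confusion matrix. A stand-alone sketch with invented labels, not the PR interactome data:

```python
# Illustrative sketch: accuracy, sensitivity and specificity from predicted
# vs. true interaction labels. Labels below are made up for demonstration.

def classification_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),   # recall on true interactions
            "specificity": tn / (tn + fp)}   # recall on non-interactions

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
print(classification_metrics(y_true, y_pred))
```

In the study's setting, `y_true` would mark experimentally confirmed PR partners and `y_pred` the SVM's calls on held-out proteins.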

  1. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

    Science.gov (United States)

    Madarang, Krish J; Kang, Joo-Hyon

    2014-06-01

    Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables, such as pollutant loads and concentrations. However, whether ADD is an important variable in predicting stormwater discharge characteristics has been a controversial issue among studies. In this study, we examined the accuracy of general linear regression models in predicting the discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was simulated for 55 storm events, and the results of total suspended solid (TSS) discharge loads and event mean concentrations (EMC) were extracted. From these data, linear regression models were developed. R(2) and p-values of the regression on ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
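    The R(2) values examined above measure how much of the variation in a response (e.g. TSS load) a single predictor such as ADD explains. A minimal closed-form sketch with invented numbers, not the study's SWMM-derived data:

```python
# Illustrative sketch: R^2 for a simple linear regression of a runoff response
# (e.g. TSS load) on antecedent dry days (ADD). Values are hypothetical.

def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)  # square of the Pearson correlation

add  = [1, 3, 5, 8, 12, 20]   # antecedent dry days
load = [4, 6, 9, 13, 18, 30]  # TSS load (kg), hypothetical
print(round(r_squared(add, load), 3))
```

A high R(2) here would support treating ADD as a useful predictor of loads; the study's point is that this holds more for loads than for EMCs.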

  2. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends, to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation, and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies of prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays, and that smaller customers have larger variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of

  3. An improved liquid film model to predict the CHF based on the influence of churn flow

    International Nuclear Information System (INIS)

    Wang, Ke; Bai, Bofeng; Ma, Weimin

    2014-01-01

    The critical heat flux (CHF) at boiling crisis is one of the most important parameters in the thermal management and safe operation of many engineering systems. Traditionally, the liquid film flow model for the “dryout” mechanism gives good predictions in heated annular two-phase flow. However, the general assumption of an initial entrained fraction at the onset of annular flow lacks a reasonable physical interpretation. Since the droplets carry considerable momentum and the churn flow region is short, droplets in churn flow have an inevitable effect on the downstream annular flow. To address this, we considered the effect of churn flow and extended the original liquid film flow model in vertical upward flow by starting the calculation from the onset of churn flow rather than from the onset of annular flow. The results showed satisfactory agreement with the experimental data, and the developed model provides a better understanding of the effect of flow pattern on CHF prediction. - Highlights: •The general assumption of an initial entrained fraction is unreasonable. •Droplets in churn flow have an inevitable effect on the downstream annular flow. •The original liquid film flow model for prediction of CHF was extended. •The integration process was modified to start from the onset of churn flow

  4. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
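    The dense-versus-sparse contrast drawn above comes down to how a penalty shrinks regression coefficients. The following stand-alone sketch shows ridge shrinkage in the simplest possible setting (one centered predictor, no intercept); the data are invented and not from the cohorts above:

```python
# Illustrative sketch: ridge shrinkage of a regression coefficient relative
# to ordinary least squares, for a single centered predictor (no intercept).

def ols_beta(x, y):
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def ridge_beta(x, y, lam):
    # Closed form for argmin_b ||y - b*x||^2 + lam * b^2
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + lam)

x = [-2.0, -1.0, 0.0, 1.0, 2.0]  # centered genotype dosage (hypothetical)
y = [-1.9, -1.1, 0.1, 0.9, 2.0]  # centered phenotype (hypothetical)

for lam in (0.0, 1.0, 10.0):
    print(f"lambda={lam:>4}: beta={ridge_beta(x, y, lam):.3f}")
# Larger lambda pulls beta toward zero. LASSO's L1 penalty can set small
# coefficients exactly to zero, which is what makes those models sparse.
```

Dense models (ridge-like) spread many small shrunken effects across markers; sparse models (LASSO-like) keep only a few, matching the paper's finding about which regime wins for which trait architecture.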

  5. A regional neural network model for predicting mean daily river water temperature

    Science.gov (United States)

    Wagner, Tyler; DeWeber, Jefferson Tyrell

    2014-01-01

    Water temperature is a fundamental property of river habitat and often a key aspect of river resource management, but measurements to characterize thermal regimes are not available for most streams and rivers. As such, we developed an artificial neural network (ANN) ensemble model to predict mean daily water temperature in 197,402 individual stream reaches during the warm season (May–October) throughout the native range of brook trout Salvelinus fontinalis in the eastern U.S. We compared four models with different groups of predictors to determine how well water temperature could be predicted by climatic, landform, and land cover attributes, and used the median prediction from an ensemble of 100 ANNs as our final prediction for each model. The final model included air temperature, landform attributes and forested land cover and predicted mean daily water temperatures with moderate accuracy as determined by root mean squared error (RMSE) at 886 training sites with data from 1980 to 2009 (RMSE = 1.91 °C). Based on validation at 96 sites (RMSE = 1.82) and separately for data from 2010 (RMSE = 1.93), a year with relatively warmer conditions, the model was able to generalize to new stream reaches and years. The most important predictors were mean daily air temperature, prior 7 day mean air temperature, and network catchment area according to sensitivity analyses. Forest land cover at both riparian and catchment extents had relatively weak but clear negative effects. Predicted daily water temperature averaged for the month of July matched expected spatial trends with cooler temperatures in headwaters and at higher elevations and latitudes. Our ANN ensemble is unique in predicting daily temperatures throughout a large region, while other regional efforts have predicted at relatively coarse time steps. The model may prove a useful tool for predicting water temperatures in sampled and unsampled rivers under current conditions and future projections of climate
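    The ensemble step described above, taking the median across 100 ANN predictions and scoring with RMSE, can be sketched in a few lines. The per-network predictions below are invented; no real temperature data are used:

```python
# Illustrative sketch: median-of-ensemble prediction and RMSE scoring,
# as in the ANN ensemble described above. Three hypothetical "networks"
# stand in for the 100 used in the study.

from statistics import median

def ensemble_median(predictions):
    """predictions: list of per-model lists, one value per day."""
    return [median(day) for day in zip(*predictions)]

def rmse(obs, pred):
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

preds = [[14.0, 15.2, 16.1, 17.0],   # daily water temperature (degC)
         [13.6, 15.0, 16.5, 17.4],
         [14.4, 15.6, 15.9, 16.8]]
observed = [13.9, 15.1, 16.2, 17.1]

final = ensemble_median(preds)
print(final, round(rmse(observed, final), 3))
```

Taking the median rather than the mean makes the ensemble prediction robust to the occasional badly trained network.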

  6. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  7. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
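    The fixed-ratio baseline described above is simply TPT ≈ eSCT × 1.33 (ACT taken as 33% of SCT). The regression alternative is shown only schematically below; its coefficients are invented for illustration, not the study's fitted values:

```python
# Illustrative sketch: the fixed-ratio TPT baseline vs. a schematic linear
# regression alternative. Regression coefficients are hypothetical.

def tpt_fixed_ratio(esct_minutes):
    # TPT = eSCT + ACT, with ACT approximated as 33% of SCT.
    return esct_minutes * 1.33

def tpt_regression(esct_minutes, asa_class, general_anesthesia):
    # Hypothetical model: intercept + eSCT effect + case-mix terms.
    return 10.0 + 1.20 * esct_minutes + 4.0 * asa_class \
           + (8.0 if general_anesthesia else 0.0)

esct = 90.0  # estimated surgeon-controlled time in minutes
print(round(tpt_fixed_ratio(esct), 1))  # -> 119.7
print(tpt_regression(esct, asa_class=2, general_anesthesia=True))
```

The study's finding is that a fitted model of this second form, with real coefficients estimated from the 79,983 records, predicts TPT significantly better than the fixed 1.33 ratio.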

  8. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

    In the field of environmental impact assessment, models are used for estimating source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose and the risk for human beings. Although it is recognized that specific local data are important for improving the quality of dose assessment results, in fact obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available use different mathematical approaches of different complexity, which can result in different predictions; thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for 137Cs and 60Co, highlighting the need for developing reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

  9. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and the expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models that can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine landslide-inducing parameters: slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four (4) different models, each considering a different parameter combination, are developed by the authors. Results are compared to landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques for predicting landslide hazard zones.
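As a rough illustration of the pairwise-comparison (AHP) weighting step mentioned above, the sketch below derives parameter weights from a hypothetical 3×3 comparison matrix using the geometric-mean approximation of the AHP priority vector; the matrix values and parameter choice (slope vs. lithology vs. land use) are invented, not those of the paper:

```python
# Geometric-mean approximation of the AHP priority vector for a
# hypothetical pairwise comparison of three landslide-inducing parameters.
from math import prod

def ahp_weights(matrix):
    """Normalized geometric means of the rows of a pairwise comparison matrix."""
    n = len(matrix)
    geo = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

m = [[1,   3,   5],    # slope judged 3x as important as lithology, 5x as land use
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(m)
print([round(x, 3) for x in w])   # weights sum to 1, largest for slope
```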

  10. A systematic review of breast cancer incidence risk prediction models with meta-analysis of their performance.

    Science.gov (United States)

    Meads, Catherine; Ahmed, Ikhlaaq; Riley, Richard D

    2012-04-01

    A risk prediction model is a statistical tool for estimating the probability that a currently healthy individual with specific risk factors will develop a condition in the future such as breast cancer. Reliably accurate prediction models can inform future disease burdens, health policies and individual decisions. Breast cancer prediction models containing modifiable risk factors, such as alcohol consumption, BMI or weight, condom use, exogenous hormone use and physical activity, are of particular interest to women who might be considering how to reduce their risk of breast cancer and clinicians developing health policies to reduce population incidence rates. We performed a systematic review to identify and evaluate the performance of prediction models for breast cancer that contain modifiable factors. A protocol was developed and a sensitive search in databases including MEDLINE and EMBASE was conducted in June 2010. Extensive use was made of reference lists. Included were any articles proposing or validating a breast cancer prediction model in a general female population, with no language restrictions. Duplicate data extraction and quality assessment were conducted. Results were summarised qualitatively, and where possible meta-analysis of model performance statistics was undertaken. The systematic review found 17 breast cancer models, each containing a different but often overlapping set of modifiable and other risk factors, combined with an estimated baseline risk that was also often different. Quality of reporting was generally poor, with characteristics of included participants and fitted model results often missing. Only four models received independent validation in external data, most notably the 'Gail 2' model with 12 validations. None of the models demonstrated consistently outstanding ability to accurately discriminate between those who did and those who did not develop breast cancer. For example, random-effects meta-analyses of the performance of the

  11. Predicting the future completing models of observed complex systems

    CERN Document Server

    Abarbanel, Henry

    2013-01-01

    Predicting the Future: Completing Models of Observed Complex Systems provides a general framework for the discussion of model building and validation across a broad spectrum of disciplines. This is accomplished through the development of an exact path integral for use in transferring information from observations to a model of the observed system. Through many illustrative examples drawn from models in neuroscience, fluid dynamics, geosciences, and nonlinear electrical circuits, the concepts are exemplified in detail. Practical numerical methods for approximate evaluations of the path integral are explored, and their use in designing experiments and determining a model's consistency with observations is investigated. Using highly instructive examples, the problems of data assimilation and the means to treat them are clearly illustrated. This book will be useful for students and practitioners of physics, neuroscience, regulatory networks, meteorology and climate science, network dynamics, fluid dynamics, and o...

  12. Multiphase, multicomponent phase behavior prediction

    Science.gov (United States)

    Dadmohammadi, Younas

    Accurate prediction of the phase behavior of fluid mixtures in the chemical industry is essential for designing and operating a multitude of processes. Reliable generalized predictions of phase equilibrium properties, such as pressure, temperature, and phase compositions, offer an attractive alternative to costly and time-consuming experimental measurements. The main purpose of this work was to assess the efficacy of recently generalized activity coefficient models based on binary experimental data to (a) predict binary and ternary vapor-liquid equilibrium systems, and (b) characterize liquid-liquid equilibrium systems. These studies were completed using a diverse binary VLE database consisting of 916 binary and 86 ternary systems involving 140 compounds belonging to 31 chemical classes. Specifically, the following tasks were undertaken: First, a comprehensive assessment of the two common approaches (gamma-phi (gamma-ϕ) and phi-phi (ϕ-ϕ)) used for determining the phase behavior of vapor-liquid equilibrium systems is presented. Both the representation and predictive capabilities of these two approaches were examined, as delineated from internal and external consistency tests of 916 binary systems. For this purpose, the universal quasi-chemical (UNIQUAC) model and the Peng-Robinson (PR) equation of state (EOS) were used. Second, the efficacy of the recently developed generalized UNIQUAC and nonrandom two-liquid (NRTL) models for predicting multicomponent VLE systems was investigated. Third, the abilities of recently modified NRTL models (mNRTL2 and mNRTL1) to characterize liquid-liquid equilibria (LLE) phase conditions and attributes, including phase stability, miscibility, and consolute point coordinates, were assessed. The results of this work indicate that the ϕ-ϕ approach represents the binary VLE systems considered within three times the error of the gamma-ϕ approach. A similar trend was observed for the generalized model predictions using
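As a minimal illustration of the gamma-ϕ approach discussed above, the sketch below computes a binary bubble-point pressure from modified Raoult's law, P = x₁γ₁P₁ˢᵃᵗ + x₂γ₂P₂ˢᵃᵗ (ideal vapor phase assumed). A two-suffix Margules activity model stands in for the UNIQUAC/NRTL models of the study, and the A12 and saturation-pressure values are hypothetical:

```python
# Modified Raoult's law bubble pressure for a binary mixture, with
# activity coefficients from a two-suffix Margules model (stand-in for
# UNIQUAC/NRTL). All parameter values are illustrative, not fitted data.
import math

def margules_gammas(x1, a12):
    """Two-suffix Margules: ln(gamma1) = A12*x2^2, ln(gamma2) = A12*x1^2."""
    x2 = 1.0 - x1
    return math.exp(a12 * x2 ** 2), math.exp(a12 * x1 ** 2)

def bubble_pressure(x1, psat1, psat2, a12):
    g1, g2 = margules_gammas(x1, a12)
    return x1 * g1 * psat1 + (1.0 - x1) * g2 * psat2

# Hypothetical system: Psat1 = 80 kPa, Psat2 = 40 kPa, A12 = 0.8
print(round(bubble_pressure(0.4, psat1=80.0, psat2=40.0, a12=0.8), 2))
```

A positive A12 raises the pressure above the ideal-solution (Raoult's law) value, which is the qualitative signature of positive deviations from ideality.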

  13. Validation of Occupants’ Behaviour Models for Indoor Quality Parameter and Energy Consumption Prediction

    DEFF Research Database (Denmark)

    Fabi, Valentina; Sugliano, Martina; Andersen, Rune Korsholm

    2015-01-01

    Occupants’ behaviour related to building control systems plays a significant role in achieving thermal comfort and air quality in naturally-ventilated buildings. Generally, the published models of occupants' behaviour are not validated, meaning that their predictive power has not yet been tested. For t...

  14. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    Directory of Open Access Journals (Sweden)

    Wu Chi-Yeh

    2010-01-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as the support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieve. On the other hand, logic based classifiers such as decision trees, whose constructed models are interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to those delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G

  15. A Bayesian Spatial Model to Predict Disease Status Using Imaging Data From Various Modalities

    Directory of Open Access Journals (Sweden)

    Wenqiong Xue

    2018-03-01

    Full Text Available Relating disease status to imaging data stands to increase the clinical significance of neuroimaging studies. Many neurological and psychiatric disorders involve complex, systems-level alterations that manifest in functional and structural properties of the brain and possibly other clinical and biologic measures. We propose a Bayesian hierarchical model to predict disease status, which is able to incorporate information from both functional and structural brain imaging scans. We consider a two-stage whole brain parcellation, partitioning the brain into 282 subregions, and our model accounts for correlations between voxels from different brain regions defined by the parcellations. Our approach models the imaging data and uses posterior predictive probabilities to perform prediction. The estimates of our model parameters are based on samples drawn from the joint posterior distribution using Markov Chain Monte Carlo (MCMC) methods. We evaluate our method by examining the prediction accuracy rates based on leave-one-out cross validation, and we employ an importance sampling strategy to reduce the computation time. We conduct both whole-brain and voxel-level prediction and identify the brain regions that are highly associated with the disease based on the voxel-level prediction results. We apply our model to multimodal brain imaging data from a study of Parkinson's disease. We achieve extremely high accuracy, in general, and our model identifies key regions contributing to accurate prediction including caudate, putamen, and fusiform gyrus as well as several sensory system regions.

  16. Meta-analysis of choice set generation effects on route choice model estimates and predictions

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo

    2012-01-01

    Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing the route choice behavior modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation... are applied for model estimation and results are compared to the ‘true model estimates’. Last, predictions from the simulation of models estimated with objective choice sets are compared to the ‘postulated predicted routes’. A meta-analytical approach allows synthesizing the effect of judgments...

  17. Data driven propulsion system weight prediction model

    Science.gov (United States)

    Gerth, Richard J.

    1994-10-01

    The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance-driven project, the performance models for alternative designs were well understood. The next tradeoff is weight. Since it is known that engine weight varies with thrust level, a model is required that allows discrimination between engines that produce the same thrust. Above all, the model had to be rooted in data, with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of engine weight as a function of various component performance parameters. This was considered a reasonable level at which to begin the project because the data would be readily available, and it would be at the level of most paper engines, prior to detailed component design.

  18. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
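The central quantity in the studies above, the empirical rate of transitions between emotional states, can be estimated directly from an experience-sampling sequence. The sketch below uses an invented toy sequence of reports; the actual studies used large experience-sampling datasets:

```python
# Empirical emotion transition probabilities from a sequence of
# experience-sampling reports. The sample sequence is invented.
from collections import Counter, defaultdict

def transition_probs(sequence):
    """P(next state | current state) estimated from consecutive report pairs."""
    counts = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

reports = ["calm", "calm", "happy", "happy", "calm", "anxious", "calm"]
p = transition_probs(reports)
print(p["calm"])   # distribution over states that follow "calm"
```

Perceivers' rated transition likelihoods can then be correlated with these empirical probabilities, which is essentially how the studies quantified the accuracy of mental models.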

  19. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  20. Aquatic Exposure Predictions of Insecticide Field Concentrations Using a Multimedia Mass-Balance Model.

    Science.gov (United States)

    Knäbel, Anja; Scheringer, Martin; Stehle, Sebastian; Schulz, Ralf

    2016-04-05

    Highly complex process-driven mechanistic fate and transport models and multimedia mass balance models can be used for the exposure prediction of pesticides in different environmental compartments. Generally, both types of models differ in spatial and temporal resolution. Process-driven mechanistic fate models are very complex, and calculations are time-intensive. This type of model is currently used within the European regulatory pesticide registration (FOCUS). Multimedia mass-balance models require fewer input parameters to calculate concentration ranges and the partitioning between different environmental media. In this study, we used the fugacity-based small-region model (SRM) to calculate predicted environmental concentrations (PEC) for 466 cases of insecticide field concentrations measured in European surface waters. We were able to show that the PECs of the multimedia model are more protective in comparison to FOCUS. In addition, our results show that the multimedia model results have a higher predictive power to simulate varying field concentrations at a higher level of field relevance. The adaptation of the model scenario to actual field conditions suggests that the performance of the SRM increases when worst-case conditions are replaced by real field data. Therefore, this study shows that a less complex modeling approach than that used in the regulatory risk assessment exhibits a higher level of protectiveness and predictiveness and that there is a need to develop and evaluate new ecologically relevant scenarios in the context of pesticide exposure modeling.

  1. Receiver Operating Characteristic Curve-Based Prediction Model for Periodontal Disease Updated With the Calibrated Community Periodontal Index.

    Science.gov (United States)

    Su, Chiu-Wen; Yen, Amy Ming-Fang; Lai, Hongmin; Chen, Hsiu-Hsi; Chen, Sam Li-Sheng

    2017-12-01

    The accuracy of a prediction model for periodontal disease using the community periodontal index (CPI) has been assessed using the area under a receiver operating characteristic (AUROC) curve. How the uncalibrated CPI, as measured by general dentists trained by periodontists in a large epidemiologic study, affects the performance of a prediction model has not yet been researched. A two-stage design was conducted: first, a validation study to calibrate the CPI between a senior periodontal specialist and the trained general dentists who measured CPIs in the main study of a nationwide survey. A Bayesian hierarchical logistic regression model was applied to estimate the non-updated and updated clinical weights used for building up risk scores. How the calibrated CPI affected the performance of the updated prediction model was quantified by comparing AUROC curves between the original and updated models. Estimates regarding calibration of the CPI obtained from the validation study were 66% and 85% for sensitivity and specificity, respectively. After updating, the clinical weights of each predictor were inflated, and the risk score for the highest risk category was elevated from 434 to 630. This update improved the AUROC performance of the two corresponding prediction models from 62.6% (95% confidence interval [CI]: 61.7% to 63.6%) for the non-updated model to 68.9% (95% CI: 68.0% to 69.6%) for the updated one, reaching a statistically significant difference (P prediction model was demonstrated for periodontal disease as measured by the calibrated CPI derived from a large epidemiologic survey.
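The AUROC statistic used above to compare the models has a simple rank-based (Mann-Whitney) interpretation: the probability that a randomly chosen diseased subject receives a higher risk score than a randomly chosen healthy one, with ties counted as half. A sketch with invented scores and labels:

```python
# Rank-based (Mann-Whitney) AUROC: fraction of diseased/healthy pairs in
# which the diseased subject has the higher risk score. Data are invented.

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

risk    = [630, 400, 434, 300, 620, 210]   # hypothetical risk scores
disease = [1,   1,   0,   0,   1,   0]     # 1 = periodontal disease
print(round(auroc(risk, disease), 3))
```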

  2. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov Chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
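The Markov Chain (MC) approach mentioned above can be sketched by propagating a pavement-condition state distribution through a transition probability matrix, one survey period at a time. The states and matrix values below are hypothetical, chosen only to show the mechanism:

```python
# Markov Chain pavement deterioration sketch: condition is a discrete state
# and a transition probability matrix propagates the state distribution
# forward one survey period at a time. Matrix values are hypothetical.

def step(dist, tpm):
    """One period of the chain: new_dist[j] = sum_i dist[i] * tpm[i][j]."""
    n = len(dist)
    return [sum(dist[i] * tpm[i][j] for i in range(n)) for j in range(n)]

# States: 0 = good, 1 = fair, 2 = poor; pavement can only stay or degrade.
tpm = [[0.8, 0.2, 0.0],
       [0.0, 0.7, 0.3],
       [0.0, 0.0, 1.0]]

dist = [1.0, 0.0, 0.0]        # all sections start in "good"
for _ in range(3):            # predict three survey periods ahead
    dist = step(dist, tpm)
print([round(p, 3) for p in dist])
```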

  3. An online spatiotemporal prediction model for dengue fever epidemic in Kaohsiung (Taiwan).

    Science.gov (United States)

    Yu, Hwa-Lung; Angulo, José M; Cheng, Ming-Hung; Wu, Jiaping; Christakos, George

    2014-05-01

    The emergence and re-emergence of disease epidemics is a complex question that may be influenced by diverse factors, including the space-time dynamics of human populations, environmental conditions, and associated uncertainties. This study proposes a stochastic framework to integrate space-time dynamics in the form of a Susceptible-Infected-Recovered (SIR) model, together with uncertain disease observations, into a Bayesian maximum entropy (BME) framework. The resulting model (BME-SIR) can be used to predict space-time disease spread. Specifically, it was applied to obtain a space-time prediction of the dengue fever (DF) epidemic that took place in Kaohsiung City (Taiwan) during 2002. In implementing the model, the SIR parameters were continually updated and information on new cases of infection was incorporated. The results obtained show that the proposed model is robust to user-specified initial values of unknown model parameters, that is, transmission and recovery rates. In general, this model provides a good characterization of the spatial diffusion of the DF epidemic, especially in the city districts proximal to the location of the outbreak. Prediction performance may be affected by various factors, such as virus serotypes and human intervention, which can change the space-time dynamics of disease diffusion. The proposed BME-SIR disease prediction model can provide government agencies with a valuable reference for the timely identification, control, and prevention of DF spread in space and time. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
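The SIR dynamics underlying the BME-SIR framework can be sketched with a discrete-time update (without the Bayesian maximum entropy layer). The transmission and recovery rates below are hypothetical, whereas in the paper they are continually updated from incoming case data:

```python
# Discrete-time SIR update: new infections are beta*S*I/N per day and
# recoveries are gamma*I per day. Population size and rates are invented.

def sir_step(s, i, r, beta, gamma, n):
    new_inf = beta * s * i / n
    new_rec = gamma * i
    return s - new_inf, i + new_inf - new_rec, r + new_rec

s, i, r = 9990.0, 10.0, 0.0
n = s + i + r
for _ in range(30):               # simulate 30 days of spread
    s, i, r = sir_step(s, i, r, beta=0.4, gamma=0.2, n=n)
print(round(s), round(i), round(r))
```

With beta/gamma = 2 (basic reproduction number above 1), the infected count grows before the susceptible pool is depleted, which is the qualitative epidemic behavior the space-time model extends across city districts.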

  4. River-flow predictions for the South African mid-summer using a coupled general circulation model

    CSIR Research Space (South Africa)

    Olivier, C

    2013-09-01

    Full Text Available African Society for Atmospheric Sciences (SASAS) 2013 http://sasas.ukzn.ac.za/homepage.aspx RIVER-FLOW PREDICTIONS FOR THE SOUTH AFRICAN MID-SUMMER USING A COUPLED... drops to 127 nationally and 65 stations for the area of interest. A recent coupled modeling system developed at the South African Weather Service (SAWS), that utilizes...

  5. Predictive Models and Tools for Assessing Chemicals under the Toxic Substances Control Act (TSCA)

    Science.gov (United States)

    EPA has developed databases and predictive models to help evaluate the hazard, exposure, and risk of chemicals released to the environment and how workers, the general public, and the environment may be exposed to and affected by them.

  6. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  7. Development of Shear Capacity Prediction Model for FRP-RC Beam without Web Reinforcement

    Directory of Open Access Journals (Sweden)

    Md. Arman Chowdhury

    2016-01-01

    Full Text Available Available codes and models generally use a partially modified shear design equation, developed earlier for steel-reinforced concrete, for predicting the shear capacity of FRP-RC members. Consequently, the calculated shear capacity shows under- or overestimation. Furthermore, in most models some parameters affecting shear strength are overlooked. In this study, a new and simplified shear capacity prediction model is proposed that considers all the parameters. A large database containing 157 experimental results of FRP-RC beams without shear reinforcement is assembled from the published literature. A parametric study is then performed to verify the accuracy of the proposed model. In addition, a comprehensive review of 9 codes and 12 available models published from 1997 to date is carried out for comparison with the proposed model. It is observed that the proposed equation shows overall optimized performance compared to all the codes and models within the range of the experimental dataset used.

  8. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith

    2008-09-01

    The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.

  9. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling.

    Science.gov (United States)

    Kawashima, Issaku; Kumano, Hiroaki

    2017-01-01

    Mind-wandering (MW), task-unrelated thought, has been examined by researchers in an increasing number of articles using models that predict, from numerous physiological variables, whether subjects are in MW. However, these models are not applicable in general situations. Moreover, they output only a binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task, including a thought sampling probe that inquired about the focus of attention. We calculated power and coherence values, prepared 35 patterns of variable combinations, and applied support vector regression (SVR) to them. Finally, we chose four SVR models: two of them non-linear models and the others linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single-electrode EEG variables. Furthermore, in the limited-electrode condition, the non-linear SVR model showed significantly better precision than the linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with a high-temporal-resolution EEG, unclear aspects of MW, such as time series variation, are expected to be revealed. Furthermore, our suggestion that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.
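As a hedged stand-in for the non-linear regression idea above (the paper itself uses support vector regression on EEG power and coherence features), the sketch below fits a one-dimensional Nadaraya-Watson kernel regression that maps an invented EEG feature to a continuous MW-intensity score rather than a binary label:

```python
# Nadaraya-Watson kernel regression: a non-linear regressor that outputs a
# continuous intensity score. The feature/intensity pairs are invented toy
# data, not the paper's EEG measurements.
import math

def kernel_regress(x, xs, ys, bandwidth=1.0):
    """Gaussian-kernel weighted average of training targets around x."""
    ws = [math.exp(-((x - xi) ** 2) / (2 * bandwidth ** 2)) for xi in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

# Toy 1-D feature (e.g. a single band-power value) vs. reported MW intensity:
feat = [0.0, 1.0, 2.0, 3.0]
mw   = [0.1, 0.3, 0.7, 0.9]
print(round(kernel_regress(1.5, feat, mw, bandwidth=0.5), 3))
```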

  10. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Issaku Kawashima

    2017-07-01

    Full Text Available Mind-wandering (MW), or task-unrelated thought, has been examined in an increasing number of articles using models that predict whether subjects are in MW from numerous physiological variables. However, these models are not applicable in general situations, and they output only a binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during a Sustained Attention to Response Task that included a thought-sampling probe inquiring about the focus of attention. We calculated power and coherence values, prepared 35 patterns of variable combinations, and applied support vector regression (SVR) to them. Finally, we chose four SVR models: two non-linear and two linear; two of the four use a limited number of electrodes to keep the models practical. Examination using held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single-electrode EEG variables. Furthermore, in the limited-electrode condition, the non-linear SVR model showed significantly better precision than the linear one. The method proposed in this study helps investigations of MW in various little-examined situations. Further, measuring MW with high-temporal-resolution EEG is expected to reveal unclear aspects of MW, such as its time-series variation. Our finding that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.

  11. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    Science.gov (United States)

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.

  12. Predicting homophobic behavior among heterosexual youth: domain general and sexual orientation-specific factors at the individual and contextual level.

    Science.gov (United States)

    Poteat, V Paul; DiGiovanni, Craig D; Scheer, Jillian R

    2013-03-01

    As a form of bias-based harassment, homophobic behavior remains prominent in schools. Yet, little attention has been given to factors that underlie it, aside from bullying and sexual prejudice. Thus, we examined multiple domain general (empathy, perspective-taking, classroom respect norms) and sexual orientation-specific factors (sexual orientation identity importance, number of sexual minority friends, parents' sexual minority attitudes, media messages). We documented support for a model in which these sets of factors converged to predict homophobic behavior, mediated through bullying and prejudice, among 581 students in grades 9-12 (55 % female). The structural equation model indicated that, with the exception of media messages, these additional factors predicted levels of prejudice and bullying, which in turn predicted the likelihood of students to engage in homophobic behavior. These findings highlight the importance of addressing multiple interrelated factors in efforts to reduce bullying, prejudice, and discrimination among youth.

  13. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  14. Prediction of inflows into Lake Kariba using a combination of physical and empirical models

    CSIR Research Space (South Africa)

    Muchuru, S

    2015-10-01

    Full Text Available the upper Zambezi catchment as predictor in a statistical model for estimating seasonal inflows into Lake Kariba. The second and more sophisticated method uses predicted low-level atmospheric circulation of a coupled ocean–atmosphere general circulation...

  15. A model to predict multivessel coronary artery disease from the exercise thallium-201 stress test

    International Nuclear Information System (INIS)

    Pollock, S.G.; Abbott, R.D.; Boucher, C.A.; Watson, D.D.; Kaul, S.

    1991-01-01

    The aim of this study was to (1) determine whether nonimaging variables add to the diagnostic information available from exercise thallium-201 images for the detection of multivessel coronary artery disease and (2) develop a model based on the exercise thallium-201 stress test to predict the presence of multivessel disease. The study populations included 383 patients referred to the University of Virginia and 325 patients referred to the Massachusetts General Hospital for evaluation of chest pain. All patients underwent both cardiac catheterization and exercise thallium-201 stress testing between 1978 and 1981. In the University of Virginia cohort, at each level of thallium-201 abnormality (no defects, one defect, more than one defect), ST depression and patient age added significantly to the detection of multivessel disease. Logistic regression analysis using data from these patients identified three independent predictors of multivessel disease: initial thallium-201 defects, ST depression, and age. A model was developed to predict multivessel disease based on these variables. As might be expected, the risk of multivessel disease predicted by the model was similar to that actually observed in the University of Virginia population. More importantly, however, the model was accurate in predicting the occurrence of multivessel disease in the unrelated population studied at the Massachusetts General Hospital. It is, therefore, concluded that (1) nonimaging variables (age and exercise-induced ST depression) add independent information to thallium-201 imaging data in the detection of multivessel disease; and (2) a model based on the exercise thallium-201 stress test has been developed that can accurately predict the probability of multivessel disease in other populations
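    As a hedged illustration of how such a logistic model combines its three predictors, the sketch below uses made-up coefficients (the abstract does not report the fitted values); only the functional form and the choice of predictors follow the study.

```python
import math

# Hypothetical coefficients, for illustration only; the abstract does not
# report the fitted values. Predictors follow the study: number of initial
# thallium-201 defects, exercise-induced ST depression, and age.
B0, B_DEFECTS, B_ST, B_AGE = -6.0, 0.9, 1.2, 0.05

def multivessel_probability(n_defects, st_depression_mm, age_years):
    """Logistic model: P = 1 / (1 + exp(-(b0 + b1*defects + b2*ST + b3*age)))."""
    z = B0 + B_DEFECTS * n_defects + B_ST * st_depression_mm + B_AGE * age_years
    return 1.0 / (1.0 + math.exp(-z))

low = multivessel_probability(0, 0.0, 45)    # few risk markers
high = multivessel_probability(2, 2.0, 65)   # multiple defects, ST depression, older
print(round(low, 3), round(high, 3))
```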

  16. A general relativistic hydrostatic model for a galaxy

    International Nuclear Information System (INIS)

    Hojman, R.; Pena, L.; Zamorano, N.

    1991-08-01

    The existence of huge amounts of mass lying at the center of some galaxies has been inferred from data gathered at different wavelengths. It seems reasonable, then, to incorporate general relativity in the study of these objects. A general relativistic hydrostatic model for a galaxy is studied. We assume that the galaxy is dominated by the dark mass except at the nucleus, where the luminous matter prevails. The model considers four different concentric spherically symmetric regions, properly matched and with a specific equation of state for each of them. It yields a slowly rising orbital velocity for a test particle moving in the background gravitational field of the dark matter region. In this sense we think of this model as representing a spiral galaxy. The dependence of the mass on the radius in cluster and field spiral galaxies published recently can be used to fix the size of the inner luminous core. A vanishing pressure at the edge of the galaxy and the assumption of hydrostatic equilibrium everywhere generate a jump in the density and the orbital velocity at the shell enclosing the galaxy; this is a prediction of the model. The ratios between the sizes of the core and the shells introduced here are proportional to their densities; in this sense the model is scale invariant. It can be used to reproduce a galaxy or the central region of a galaxy. We have also compared our results with those obtained with the Newtonian isothermal sphere. The luminosity is not included in our model as an extra variable in the determination of the orbital velocity. (author). 29 refs, 10 figs

  17. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under the current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters, and inputs; and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs, and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
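    The two criteria can be made concrete with a small numeric sketch (toy data, not from the paper): MSEP_fixed compares one fixed model against observations, while MSEP_uncertain(X) decomposes into a squared bias term plus a model variance term estimated over an ensemble of model variants.

```python
# Toy numbers, for illustration only.
observations = [3.0, 5.0, 4.0, 6.0]

# MSEP_fixed: a single model with fixed structure, parameters and inputs.
predictions_fixed = [2.5, 5.5, 4.5, 5.0]
msep_fixed = sum((o - p) ** 2 for o, p in zip(observations, predictions_fixed)) / len(observations)

# MSEP_uncertain(X): averaged over an ensemble representing uncertainty in
# model structure, inputs and parameters (three hypothetical model variants).
ensemble = [
    [2.5, 5.5, 4.5, 5.0],
    [3.5, 4.5, 3.5, 6.5],
    [2.0, 5.0, 4.0, 5.5],
]
ensemble_mean = [sum(col) / len(ensemble) for col in zip(*ensemble)]

squared_bias = sum((o - m) ** 2 for o, m in zip(observations, ensemble_mean)) / len(observations)
model_variance = sum(
    sum((run[i] - ensemble_mean[i]) ** 2 for run in ensemble) / len(ensemble)
    for i in range(len(observations))
) / len(observations)

msep_uncertain = squared_bias + model_variance
print(msep_fixed, squared_bias, model_variance, msep_uncertain)
```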

  18. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Evaluation of Turbulence Models Through Predictions of a Simple 3D Boundary Layer.

    Science.gov (United States)

    Jammalamadaka, A.

    2005-11-01

    Although a number of popular turbulence models are now commonly used to predict complex 3D flows, in particular for industrial applications, very little full evaluation of their performance has been carried out against thoroughly documented experiments. One such experiment is that of Bruns, Fernholz and Monkewitz (JFM, vol. 393; 1999) in a boundary layer on the wall of an S-shaped duct, where the wall shear stress was measured accurately and independently in the original work and more recently with oil-film interferometry by Rüedi et al. (Exp. Fluids, vol. 35; 2003). Results from various models, including k-ε, Spalart-Allmaras, k-ω, Menter's SST, and RSM, are compared with the experimental results to extract a better understanding of the strengths and limitations of the various models. In addition to the various pressure distributions along the S-duct and the shear stress development on the test surface, the normal stresses are compared for all the models, with some surprising results regarding the difficulty of predicting even such a simple 3D turbulent flow. Comparisons of other Reynolds stresses with models that predict them directly also reveal interesting results. In general, the models' predictions agree more with each other than with the experiment, suggesting that they suffer from common shortcomings. Also, the deviations of the predictions from the experiment grow to significant levels just beyond the development of the cross-over transverse velocity profile.

  20. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  1. Micro Data and General Equilibrium Models

    DEFF Research Database (Denmark)

    Browning, Martin; Hansen, Lars Peter; Heckman, James J.

    1999-01-01

    Dynamic general equilibrium models are required to evaluate policies applied at the national level. To use these models to make quantitative forecasts requires knowledge of an extensive array of parameter values for the economy at large. This essay describes the parameters required for different...... economic models, assesses the discordance between the macromodels used in policy evaluation and the microeconomic models used to generate the empirical evidence. For concreteness, we focus on two general equilibrium models: the stochastic growth model extended to include some forms of heterogeneity...

  2. Adjusting a cancer mortality-prediction model for disease status-related eligibility criteria

    Directory of Open Access Journals (Sweden)

    Kimmel Marek

    2011-05-01

    Full Text Available Abstract Background Volunteering participants in disease studies tend to be healthier than the general population, partially due to specific enrollment criteria. Using modeling to accurately predict outcomes of cohort studies enrolling volunteers requires adjusting for the bias introduced in this way. Here we propose a new method to account for the effect of a specific form of healthy-volunteer bias, resulting from imposing disease status-related eligibility criteria, on disease-specific mortality, by explicitly modeling the length of the time interval between the moment when the subject becomes ineligible for the study and the outcome. Methods Using survival time data from 1190 newly diagnosed lung cancer patients at MD Anderson Cancer Center, we model the time from clinical lung cancer diagnosis to death using an exponential distribution to approximate the length of this interval for a study where lung cancer death serves as the outcome. Incorporating this interval into our previously developed lung cancer risk model, we adjust for the effect of disease status-related eligibility criteria in predicting the number of lung cancer deaths in the control arm of CARET. The effect of the adjustment using the MD Anderson-derived approximation is compared to that based on SEER data. Results Using the adjustment developed in conjunction with our existing lung cancer model, we are able to accurately predict the number of lung cancer deaths observed in the control arm of CARET. Conclusions The resulting adjustment was accurate in predicting the lower rates of disease observed in the early years while still maintaining reasonable prediction ability in the later years of the trial. This method could be used to adjust for, or predict, the duration and relative effect of any possible biases related to disease-specific eligibility criteria in modeling studies of volunteer-based cohorts.
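    A minimal sketch of the core idea, with hypothetical numbers (the paper estimates the interval distribution from MD Anderson data): if the diagnosis-to-death interval is exponential, only a fraction of eventual deaths fall inside an early follow-up window, which shifts predicted death counts downward in the first years of a trial whose eligibility criteria exclude already-diagnosed subjects.

```python
import math

# Hypothetical numbers for illustration; the paper fits the interval
# distribution to MD Anderson survival data.
MEAN_INTERVAL_YEARS = 1.5                # mean diagnosis-to-death interval
RATE = 1.0 / MEAN_INTERVAL_YEARS         # exponential rate parameter

def fraction_dead_within(t_years):
    """P(death within t years of diagnosis) under the exponential model."""
    return 1.0 - math.exp(-RATE * t_years)

# Subjects already diagnosed at enrollment are excluded by the eligibility
# criteria, so fewer deaths than the unadjusted prediction appear early on.
unadjusted_deaths_year1 = 40.0           # made-up unadjusted prediction
adjusted_deaths_year1 = unadjusted_deaths_year1 * fraction_dead_within(1.0)
print(round(adjusted_deaths_year1, 1))
```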

  3. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    Science.gov (United States)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the « global » version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a « global » high-resolution two-layer quasi-geostrophic model driven by a low-resolution one; (2) similar simulations are conducted with a two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should in principle tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields; however, because the driving large-scale fields are generally available at much lower frequency than the model time step (e.g., 6-hourly analyses) with basic interpolation between them, the optimum nudging time differs from zero, while remaining smaller than the predictability time.
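    The effect of the relaxation (nudging) term can be sketched with a toy scalar system standing in for the quasi-geostrophic model; the dynamics and nudging time below are illustrative only, but they show the mechanism: a shorter nudging time pulls the regional state toward the driving field.

```python
import math

# Toy illustration of indiscriminate nudging (not the paper's QG model):
# the regional state x is relaxed toward a driving field with nudging
# time tau; tau=None disables nudging.
def driving_field(t):
    return math.sin(t)

def integrate(tau, steps=2000, dt=0.01):
    x, t = 0.5, 0.0
    for _ in range(steps):
        nudge = 0.0 if tau is None else (driving_field(t) - x) / tau
        x += dt * (-0.1 * x + nudge)   # made-up "model physics" plus relaxation
        t += dt
    return abs(x - driving_field(t))   # final error relative to the driver

err_free = integrate(None)     # no nudging: the regional model drifts
err_nudged = integrate(0.2)    # short nudging time: close tracking
print(err_nudged < err_free)
```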

  4. Glauber model and its generalizations

    International Nuclear Information System (INIS)

    Bialkowski, G.

    The physical aspects of the Glauber model problems are studied: potential model, profile function and Feynman diagrams approaches. Different generalizations of the Glauber model are discussed: particularly higher and lower energy processes and large angles [fr

  5. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis-associated fingerprint change is a significant problem that affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. This was a case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity cards. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression, and validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria or of one minor criterion predicts a high or low risk of verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected numbers (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
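    The decision rules described in the abstract translate directly into code; the sketch below is illustrative only (the risk labels paraphrase the abstract, and no fitted probabilities are implied).

```python
# Sketch of the derived decision rules as described in the abstract;
# risk labels are illustrative, not the paper's fitted probabilities.
def verification_risk(dystrophy_area_pct, long_horizontal_lines, long_vertical_lines):
    major = dystrophy_area_pct >= 25                     # major criterion
    minors = int(long_horizontal_lines) + int(long_vertical_lines)
    if major:
        return "almost always fails"
    if minors == 2:
        return "high risk of failure"
    if minors == 1:
        return "low risk of failure"
    return "almost always passes"

print(verification_risk(30, False, False))   # major criterion met
print(verification_risk(10, True, True))     # both minor criteria met
```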

  6. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.

    Science.gov (United States)

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need to transfer the latest results in the field of machine learning to biomedical researchers. We propose web-based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open-access and user-friendly option to obtain discrete-time predictive survival models at the individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN)-based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.
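    For the standard survival analysis part, the core computation behind a survival curve can be sketched with a hand-rolled Kaplan-Meier estimator (a minimal illustration with made-up data; OSA itself adds an ANN-based method and a full web interface).

```python
def kaplan_meier(times, events):
    """Return [(t, S(t))] at event times; events[i] = 1 if death, 0 if censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_this_time = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            deaths += data[i][1]
            at_this_time += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / n_at_risk   # product-limit update
            curve.append((t, survival))
        n_at_risk -= at_this_time
    return curve

# Times in months; 1 = event observed, 0 = censored (made-up data).
curve = kaplan_meier([3, 5, 5, 8, 12, 16], [1, 1, 0, 1, 0, 1])
print(curve)
```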

  7. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D) = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2) = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  8. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2.Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3.Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...
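    The economic-MPC idea described above (minimize the cost of consumption against forecast electricity prices, subject to operating constraints) can be sketched with a toy optimization of the kind an MPC would re-solve at every sample time; the prices, demand and admissible power levels below are made up for illustration.

```python
import itertools

# Toy economic-MPC sketch (illustrative; not the thesis code): a flexible
# unit chooses its hourly power level to meet a total energy demand at the
# lowest cost under a price forecast.
PRICES = [30, 10, 50, 20, 40]      # forecast electricity prices (made up)
DEMAND = 4.0                       # total energy that must be delivered
LEVELS = [0.0, 1.0, 2.0]           # admissible power levels (constraint)

def plan(prices, demand):
    """Brute-force optimization over the horizon; an MPC would re-solve
    this each sample time with updated forecasts (receding horizon)."""
    best, best_cost = None, float("inf")
    for u in itertools.product(LEVELS, repeat=len(prices)):
        if abs(sum(u) - demand) > 1e-9:   # energy-balance constraint
            continue
        cost = sum(p * ui for p, ui in zip(prices, u))
        if cost < best_cost:
            best, best_cost = u, cost
    return best, best_cost

schedule, cost = plan(PRICES, DEMAND)
print(schedule, cost)   # consumption shifts to the cheapest hours
```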

  9. Predictive value of the official cancer alarm symptoms in general practice

    DEFF Research Database (Denmark)

    Krasnik Huggenberger, Ivan; Andersen, John Sahl

    2015-01-01

    Introduction: The objective of this study was to investigate the evidence for positive predictive value (PPV) of alarm symptoms and combinations of symptoms for colorectal cancer, breast cancer, prostate cancer and lung cancer in general practice. Methods: This study is based on a literature search...

  10. The generalized circular model

    NARCIS (Netherlands)

    Webers, H.M.

    1995-01-01

    In this paper we present a generalization of the circular model. In this model there are two concentric circular markets, which enables us to study two types of markets simultaneously. There are switching costs involved for moving from one circle to the other circle, which can also be thought of as

  11. Predictive modeling of deep-sea fish distribution in the Azores

    Science.gov (United States)

    Parra, Hugo E.; Pham, Christopher K.; Menezes, Gui M.; Rosa, Alexandra; Tempera, Fernando; Morato, Telmo

    2017-11-01

    Understanding the link between fish and their habitat is essential for an ecosystem approach to fisheries management. However, determining such relationships is challenging, especially for deep-sea species. In this study, we applied generalized additive models (GAMs) to relate presence-absence and relative-abundance data of eight economically important fish species to environmental variables (depth, slope, aspect, substrate type, bottom temperature, salinity and oxygen saturation). We combined 13 years of catch data collected from systematic longline surveys performed across the region. Overall, presence-absence GAMs performed better than abundance models, and the models successfully predicted the occurrence of the eight deep-sea fish species in the observed data. Depth was the most influential predictor of all species' occurrence and abundance distributions, whereas other factors were significant for some species but did not show such a clear influence. Our results predict that, despite the extensive Azores EEZ, the habitats available for the studied deep-sea fish species are highly limited and patchy, restricted to seamount slopes and summits, offshore banks and island slopes. Despite some identified limitations, our GAMs provide improved knowledge of the spatial distribution of these commercially important fish species in the region.

  12. Transferring and generalizing deep-learning-based neural encoding models across subjects.

    Science.gov (United States)

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-08-01

    Recent studies have shown the value of using deep learning models for mapping and characterizing how the brain represents and organizes information for natural vision. However, modeling the relationship between deep learning models and the brain (i.e., building encoding models) requires measuring cortical responses to large and diverse sets of natural visual stimuli from single subjects. This requirement limits prior studies to few subjects, making it difficult to generalize findings across subjects or for a population. In this study, we developed new methods to transfer and generalize encoding models across subjects. To train encoding models specific to a target subject, the models trained for other subjects were used as the prior models and were refined efficiently using Bayesian inference with a limited amount of data from the target subject. To train encoding models for a population, the models were progressively trained and updated with incremental data from different subjects. For proof of principle, we applied these methods to functional magnetic resonance imaging (fMRI) data from three subjects watching tens of hours of naturalistic videos, while a deep residual neural network driven by image recognition was used to model visual cortical processing. The results demonstrate that the methods developed herein provide an efficient and effective strategy to establish both subject-specific and population-wide predictive models of cortical representations of high-dimensional and hierarchical visual features. Copyright © 2018 Elsevier Inc. All rights reserved.
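    The Bayesian refinement step can be sketched in one dimension (all numbers hypothetical; the study works with high-dimensional fMRI responses): the target subject's encoding weight is a precision-weighted compromise between the prior model transferred from other subjects and the limited new data.

```python
# One-dimensional sketch of Bayesian refinement; all numbers hypothetical.
# Posterior mean of the encoding weight under a Gaussian prior (mean
# w_prior, variance tau2) and Gaussian noise (variance sigma2):
#   w_post = (x'y / sigma2 + w_prior / tau2) / (x'x / sigma2 + 1 / tau2)

w_prior = 2.0    # weight transferred from other subjects' models
tau2 = 0.5       # prior variance: how much we trust the transferred model
sigma2 = 1.0     # observation noise variance

# A few target-subject samples (feature value, measured response).
xs = [1.0, 2.0, -1.0, 0.5]
ys = [3.1, 5.9, -3.2, 1.4]

xx = sum(x * x for x in xs)
xy = sum(x * y for x, y in zip(xs, ys))
w_mle = xy / xx                      # estimate from the new data alone
w_post = (xy / sigma2 + w_prior / tau2) / (xx / sigma2 + 1.0 / tau2)
print(round(w_post, 3))              # lies between w_prior and w_mle
```

With more target-subject data, the data term dominates and the posterior weight moves away from the prior toward the subject's own estimate.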

  13. Generic global regression models for growth prediction of Salmonella in ground pork and pork cuts

    DEFF Research Database (Denmark)

    Buschhardt, Tasja; Hansen, Tina Beck; Bahl, Martin Iain

    2017-01-01

    Introduction and Objectives Models for the prediction of bacterial growth in fresh pork are primarily developed using two-step regression (i.e. primary models followed by secondary models). These models are also generally based on experiments in liquids or ground meat and neglect surface growth....... It has been shown that one-step global regressions can result in more accurate models and that bacterial growth on intact surfaces can differ substantially from growth in liquid culture. Material and Methods We used a global-regression approach to develop predictive models for the growth of Salmonella....... One part of the obtained log-transformed cell counts was used for model development and another for model validation. The Ratkowsky square-root model and the relative lag time (RLT) model were integrated into the logistic model with delay. Fitted parameter estimates were compared to investigate the effect...
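    The two secondary models named above have simple closed forms; the sketch below uses illustrative (not fitted) parameter values, assuming the common definition of relative lag time as lag divided by doubling time.

```python
import math

# Illustrative parameter values only; the paper reports fitted estimates
# for Salmonella on pork.
B = 0.03        # slope of the square-root model (1/(sqrt(h)*degC))
T_MIN = 5.0     # notional minimum growth temperature (degC)
RLT = 3.0       # relative lag time: lag divided by doubling time

def mu_max(temp_c):
    """Ratkowsky square-root model: sqrt(mu_max) = B * (T - T_MIN)."""
    if temp_c <= T_MIN:
        return 0.0
    return (B * (temp_c - T_MIN)) ** 2

def lag_time(temp_c):
    """RLT model: lag = RLT * doubling time = RLT * ln(2) / mu_max."""
    return RLT * math.log(2) / mu_max(temp_c)

print(round(mu_max(25.0), 4), round(lag_time(25.0), 1))  # rate (1/h), lag (h)
```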

  14. Rotary balance data for a typical single-engine general aviation design for an angle-of-attack range of 20 to 90 deg. 3: Influence of control deflection on predicted model D spin modes

    Science.gov (United States)

    Ralston, J. N.; Barnhart, B. P.

    1984-01-01

    The influence of control deflections on the rotational flow aerodynamics and on predicted spin modes is discussed for a 1/6-scale general aviation airplane model. The model was tested for various control settings at both zero and ten degree sideslip angles. Data were measured, using a rotary balance, over an angle-of-attack range of 30 deg to 90 deg, and for clockwise and counter-clockwise rotations covering an Ωb/2V range of 0 to 0.5.

  15. Toward a general psychological model of tension and suspense.

    Science.gov (United States)

    Lehne, Moritz; Koelsch, Stefan

    2015-01-01

    Tension and suspense are powerful emotional experiences that occur in a wide variety of contexts (e.g., in music, film, literature, and everyday life). The omnipresence of tension and suspense suggests that they build on very basic cognitive and affective mechanisms. However, the psychological underpinnings of tension experiences remain largely unexplained, and tension and suspense are rarely discussed from a general, domain-independent perspective. In this paper, we argue that tension experiences in different contexts (e.g., musical tension or suspense in a movie) build on the same underlying psychological processes. We discuss key components of tension experiences and propose a domain-independent model of tension and suspense. According to this model, tension experiences originate from states of conflict, instability, dissonance, or uncertainty that trigger predictive processes directed at future events of emotional significance. We also discuss possible neural mechanisms underlying tension and suspense. The model provides a theoretical framework that can inform future empirical research on tension phenomena.

  16. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events can lead to severe floods in many countries worldwide. Advance prediction of their occurrence and spatial distribution is therefore essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models. The models considered are the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centers for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, albeit with a bias in spatial distribution and intensity. Statistical parameters such as the mean error (ME) or bias, root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts tend to under-predict. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the displacement and pattern errors contribute most of the total RMSE. The volume error increases from the 24 hr forecast to the 48 hr forecast in all three models.
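    The verification statistics named above (ME or bias, RMSE, and CC) have standard definitions, sketched here for paired forecast and observed rainfall series; the sample values are invented:

```python
import math

def verification_stats(forecast, observed):
    """Mean error (bias), RMSE and Pearson correlation coefficient
    between paired forecast and observed values."""
    n = len(forecast)
    me = sum(f - o for f, o in zip(forecast, observed)) / n
    rmse = math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n)
    fm = sum(forecast) / n
    om = sum(observed) / n
    cov = sum((f - fm) * (o - om) for f, o in zip(forecast, observed))
    var_f = sum((f - fm) ** 2 for f in forecast)
    var_o = sum((o - om) ** 2 for o in observed)
    cc = cov / math.sqrt(var_f * var_o)
    return me, rmse, cc

# Toy example with invented area-averaged rainfall amounts (mm/day)
me, rmse, cc = verification_stats([10.0, 22.0, 35.0], [12.0, 20.0, 40.0])
```

    A negative ME with a high CC is the signature of the under-prediction noted in the abstract: the spatial pattern is captured but the amounts are too low.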

  17. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

    Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model) that is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis and replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10^-6) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10^-12). Performance of the Potts model (r = -0.73, p = 9.7×10^-9) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion.
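    A minimal sketch of the Ising-type model's core idea: each site is either wild-type (0) or mutant (1), and the predicted fitness cost is an energy with per-site fields and pairwise couplings. The sign convention and all parameter values here are illustrative toys, not the fitted HIV-1 Gag landscape:

```python
def ising_fitness_energy(seq, fields, couplings):
    """Fitness cost of a binary sequence (0 = wild-type, 1 = mutant) under
    an Ising-like landscape: E = sum_i h_i * s_i + sum_{i<j} J_ij * s_i * s_j.
    Under this (assumed) convention, higher energy means a larger
    predicted fitness cost, i.e. lower replication capacity."""
    n = len(seq)
    e = sum(fields[i] * seq[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e += couplings[i][j] * seq[i] * seq[j]
    return e

# Toy 3-site landscape: site 2 is costly to mutate; sites 0 and 1 interact
h = [1.0, 0.5, 2.0]
J = [[0.0, -0.3, 0.0],
     [0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
single = ising_fitness_energy([1, 0, 0], h, J)   # field term only
double = ising_fitness_energy([1, 1, 0], h, J)   # fields plus coupling
```

    The Potts generalization replaces the binary s_i with the identity of the mutant amino acid, so h and J become indexed by amino acid as well as by site.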

  18. Generalized Reduced Order Modeling of Aeroservoelastic Systems

    Science.gov (United States)

    Gariffo, James Michael

    Transonic aeroelastic and aeroservoelastic (ASE) modeling presents a significant technical and computational challenge. Flow fields with a mixture of subsonic and supersonic flow, as well as moving shock waves, can only be captured through high-fidelity CFD analysis. With modern computing power, it is relatively straightforward to determine the flutter boundary for a single structural configuration at a single flight condition, but problems of larger scope remain quite costly. Some such problems include characterizing a vehicle's flutter boundary over its full flight envelope, optimizing its structural weight subject to aeroelastic constraints, and designing control laws for flutter suppression. For all of these applications, reduced-order models (ROMs) offer substantial computational savings. ROM techniques in general have existed for decades, and the methodology presented in this dissertation builds on successful previous techniques to create a powerful new scheme for modeling aeroelastic systems, and predicting and interpolating their transonic flutter boundaries. In this method, linear ASE state-space models are constructed from modal structural and actuator models coupled to state-space models of the linearized aerodynamic forces through feedback loops. Flutter predictions can be made from these models through simple eigenvalue analysis of their state-transition matrices for an appropriate set of dynamic pressures. Moreover, this analysis returns the frequency and damping trend of every aeroelastic branch. In contrast, determining the critical dynamic pressure by direct time-marching CFD requires a separate run for every dynamic pressure being analyzed simply to obtain the trend for the critical branch. The present ROM methodology also includes a new model interpolation technique that greatly enhances the benefits of these ROMs. This enables predictions of the dynamic behavior of the system for flight conditions where CFD analysis has not been explicitly
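    The eigenvalue-based flutter analysis described above can be sketched as follows: sweep dynamic pressure, extract the damping of each aeroelastic branch from the eigenvalues of the (here continuous-time) state matrix, and report the first pressure at which any branch goes unstable. The one-mode toy model and all its parameters are hypothetical, standing in for the full ASE state-space model:

```python
import numpy as np

def branch_damping(A):
    """Damping ratio of each aeroelastic branch of a continuous-time state
    matrix A (one entry per complex-conjugate eigenvalue pair)."""
    eig = np.linalg.eigvals(A)
    eig = eig[np.imag(eig) >= 0.0]          # keep one of each pair
    return [float(-np.real(l) / np.abs(l)) for l in eig]

def flutter_pressure(state_matrix_at, q_grid):
    """Sweep dynamic pressure q and return the first value at which any
    branch's damping turns negative (the flutter point on this grid)."""
    for q in q_grid:
        if min(branch_damping(state_matrix_at(q))) < 0.0:
            return q
    return None

# Hypothetical single-mode oscillator whose damping falls linearly with q
def A_of_q(q, q_flutter=50.0, wn=10.0):
    zeta = 0.05 * (1.0 - q / q_flutter)
    return np.array([[0.0, 1.0], [-wn ** 2, -2.0 * zeta * wn]])

q_star = flutter_pressure(A_of_q, np.linspace(0.0, 100.0, 101))  # near 50
```

    Each evaluation of `state_matrix_at` is cheap linear algebra, which is why the ROM sweep is so much faster than one time-marching CFD run per dynamic pressure.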

  19. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  20. Incremental validity of positive orientation: predictive efficiency beyond the five-factor model

    Directory of Open Access Journals (Sweden)

    Łukasz Roland Miciuk

    2016-05-01

    Full Text Available Background The relation of positive orientation (a basic predisposition to think positively of oneself, one’s life and one’s future) and personality traits is still disputable. The purpose of the described research was to verify the hypothesis that positive orientation has predictive efficiency beyond the five-factor model. Participants and procedure One hundred and thirty participants (mean age M = 24.84) completed the following questionnaires: the Self-Esteem Scale (SES), the Satisfaction with Life Scale (SWLS), the Life Orientation Test-Revised (LOT-R), the Positivity Scale (P-SCALE), the NEO Five Factor Inventory (NEO-FFI), the Self-Concept Clarity Scale (SCC), the Generalized Self-Efficacy Scale (GSES) and the Life Engagement Test (LET). Results The introduction of positive orientation as an additional predictor in the second step of regression analyses led to better prediction of the following variables: purpose in life, self-concept clarity and generalized self-efficacy. This effect was the strongest for predicting purpose in life (i.e., a 14% increment in the explained variance). Conclusions The results confirmed our hypothesis that positive orientation can be characterized by incremental validity – its inclusion in the regression model (in addition to the five main factors of personality) increases the amount of explained variance. These findings may provide further evidence for the legitimacy of measuring positive orientation and personality traits separately.

  1. Prediction and reconstruction of future and missing unobservable modified Weibull lifetime based on generalized order statistics

    Directory of Open Access Journals (Sweden)

    Amany E. Aly

    2016-04-01

    Full Text Available When a system consists of independent components of the same type, appropriate actions may be taken as soon as a portion of them have failed. It is, therefore, important to be able to predict later failure times from earlier ones. One of the well-known failure distributions commonly used to model component life is the modified Weibull distribution (MWD). In this paper, two pivotal quantities are proposed to construct prediction intervals for future unobservable lifetimes based on generalized order statistics (gos) from the MWD. Moreover, a pivotal quantity is developed to reconstruct missing observations at the beginning of the experiment. Furthermore, Monte Carlo simulation studies are conducted and numerical computations are carried out to investigate the efficiency of the presented results. Finally, two illustrative examples for real data sets are analyzed.

  2. Deep Belief Network Based Hybrid Model for Building Energy Consumption Prediction

    Directory of Open Access Journals (Sweden)

    Chengdong Li

    2018-01-01

    Full Text Available To enhance the prediction performance for building energy consumption, this paper presents a modified deep belief network (DBN) based hybrid model. The proposed hybrid model combines the outputs from the DBN model with the energy-consuming pattern to yield the final prediction results. The energy-consuming pattern in this study represents the periodicity property of building energy consumption and can be extracted from the observed historical energy consumption data. The residual data generated by removing the energy-consuming pattern from the original data are utilized to train the modified DBN model. The training of the modified DBN includes two steps, the first of which adopts the contrastive divergence (CD) algorithm to optimize the hidden parameters in a pre-training way, while the second determines the output weighting vector by the least squares method. The proposed hybrid model is applied to two kinds of building energy consumption data sets that have different energy-consuming patterns (daily periodicity and weekly periodicity). In order to examine the advantages of the proposed model, four popular artificial intelligence methods, the backward propagation neural network (BPNN), the generalized radial basis function neural network (GRBFNN), the extreme learning machine (ELM), and the support vector regressor (SVR), are chosen as the comparative approaches. Experimental results demonstrate that the proposed DBN based hybrid model has the best performance among the compared techniques. Notably, all predictors constructed using the energy-consuming patterns perform better than those designed only with the original data. This verifies the usefulness of incorporating the energy-consuming patterns.
The proposed approach can also be extended and applied to some other similar prediction problems that have periodicity patterns, e.g., the traffic flow forecasting and the electricity consumption
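    The hybrid scheme above (extract the periodic energy-consuming pattern, model the residual, then solve the output weights by least squares) can be sketched as follows. Here `H` stands in for the hidden-layer features a trained DBN would produce; all names and the hourly sampling assumption are illustrative:

```python
import numpy as np

def daily_pattern(y, period=24):
    """Energy-consuming pattern: the mean profile over one period,
    estimated from the observed historical consumption y."""
    y = np.asarray(y, dtype=float)
    n = len(y) // period
    return y[: n * period].reshape(n, period).mean(axis=0)

def fit_output_weights(H, r):
    """Second training step: given feature activations H for the residual
    series r (original minus pattern), solve the output weighting vector
    by least squares."""
    w, *_ = np.linalg.lstsq(H, r, rcond=None)
    return w

def hybrid_predict(H_new, w, pattern, hours):
    """Final prediction = residual-model output + periodic pattern."""
    return H_new @ w + pattern[np.asarray(hours) % len(pattern)]
```

    The same decomposition carries over to other series with periodic structure, which is the extension to traffic and electricity forecasting suggested in the abstract.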

  3. Generalized Veneziano model for pion scattering off isovector currents and the scaling limit

    CERN Document Server

    Rothe, H J; Rolhe, K D

    1972-01-01

    Starting from a local one-particle approximation scheme for the commutator of two conserved currents, the authors construct a generalized Veneziano model for pion scattering off neutral and charged isovector currents, satisfying the constraints of current conservation and current algebra. The model factorizes correctly on the leading Regge trajectories and incorporates the proper Regge behaviour for strong amplitudes. Fixed poles are found to be present in the s and t channels of the one- and two-current amplitudes. Furthermore, the model makes definite predictions about the structure of Schwinger terms and of the 'seagull' terms in the retarded commutator. (13 refs).

  4. Predictive Modeling of a Paradigm Mechanical Cooling Tower Model: II. Optimal Best-Estimate Results with Reduced Predicted Uncertainties

    Directory of Open Access Journals (Sweden)

    Ruixian Fang

    2016-09-01

    Full Text Available This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the “predictive modeling for coupled multi-physics systems” (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the “calibration” of the model’s parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.

  5. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  6. Validation of a risk prediction model for Barrett’s esophagus in an Australian population

    Directory of Open Access Journals (Sweden)

    Ireland CJ

    2018-03-01

    Full Text Available Colin J Ireland,1 Andrea L Gordon,2 Sarah K Thompson,3 David I Watson,4 David C Whiteman,5 Richard L Reed,6 Adrian Esterman1,7 1School of Nursing and Midwifery, Division of Health Sciences, University of South Australia, Adelaide, SA, Australia; 2School of Pharmacy and Medical Science, Division of Health Sciences, University of South Australia, Adelaide, SA, Australia; 3Discipline of Surgery, University of Adelaide, Adelaide, SA, Australia; 4Department of Surgery, Flinders University, Bedford Park, SA, Australia; 5Population Health Department, QIMR Berghofer Medical Research Institute, Herston, QLD, Australia; 6Discipline of General Practice, Flinders University, Bedford Park, SA, Australia; 7Australian Institute of Tropical Health and Medicine, James Cook University, Cairns, QLD, Australia Background: Esophageal adenocarcinoma is a disease that has a high mortality rate, the only known precursor being Barrett’s esophagus (BE). While screening for BE is not cost-effective at the population level, targeted screening might be beneficial. We have developed a risk prediction model to identify people with BE, and here we present the external validation of this model. Materials and methods: A cohort study was undertaken to validate a risk prediction model for BE. Individuals with endoscopy and histopathology proven BE completed a questionnaire containing variables previously identified as risk factors for this condition. Their responses were combined with data from a population sample for analysis. Risk scores were derived for each participant. Overall performance of the risk prediction model in terms of calibration and discrimination was assessed. Results: Scores from 95 individuals with BE and 636 individuals from the general population were analyzed. The Brier score was 0.118, suggesting reasonable overall performance. The area under the receiver operating characteristic curve was 0.83 (95% CI 0.78–0.87). The Hosmer–Lemeshow statistic was p=0
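    The two performance measures reported above, the Brier score and the area under the ROC curve, can be computed as follows. This is a generic sketch of the standard definitions, not the authors' code:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between predicted risk and the 0/1 outcome;
    lower is better (the study above reports 0.118)."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def auc(probs, outcomes):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a randomly chosen case scores higher than a
    randomly chosen control, counting ties as half."""
    pos = [p for p, y in zip(probs, outcomes) if y == 1]
    neg = [p for p, y in zip(probs, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    The Brier score captures calibration and sharpness together, while the AUC captures discrimination only, which is why validation studies such as this one report both.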

  7. Modeling of clouds and radiation for development of parameterizations for general circulation models

    International Nuclear Information System (INIS)

    Westphal, D.; Toon, B.; Jensen, E.; Kinne, S.; Ackerman, A.; Bergstrom, R.; Walker, A.

    1994-01-01

    Atmospheric Radiation Measurement (ARM) Program research at NASA Ames Research Center (ARC) includes radiative transfer modeling, cirrus cloud microphysics, and stratus cloud modeling. These efforts are designed to provide the basis for improving cloud and radiation parameterizations in our main effort: mesoscale cloud modeling. The range of non-convective cloud models used by the ARM modeling community can be crudely categorized based on the number of predicted hydrometeors such as cloud water, ice water, rain, snow, graupel, etc. The simplest model has no predicted hydrometeors and diagnoses the presence of clouds based on the predicted relative humidity. The vast majority of cloud models have two or more predictive bulk hydrometeors and are termed either bulk water (BW) or size-resolving (SR) schemes. This study compares the various cloud models within the same dynamical framework, and compares results with observations rather than climate statistics.

  8. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for the control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
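    The LQR step mentioned above can be sketched as a backward iteration of the discrete-time Riccati equation until the cost-to-go matrix converges. The double-integrator model below is a hypothetical stand-in for the robot dynamics, not the P2AT model from the paper:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati
    equation: P = Q + A'PA - A'PB (R + B'PB)^-1 B'PA.
    The optimal state feedback is u = -K x with K = (R + B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return K

# Hypothetical double-integrator plant (unit sampling time)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K = dlqr_gain(A, B, np.eye(2), np.array([[1.0]]))
closed_loop = A - B @ K   # should be stable (spectral radius < 1)
```

    In an MPC setting, the same quadratic cost is optimized over a finite horizon subject to constraints; without constraints the infinite-horizon solution reduces to this LQR gain.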

  9. A modelling study for long-term life prediction of carbon steel overpack for geological isolation of High-Level Radioactive Waste

    International Nuclear Information System (INIS)

    Taniguchi, Naoki; Honda, Akira; Ishikawa, Hirohisa

    1996-01-01

    Current plans for the geological disposal of High-Level Radioactive Waste (HLW) in Japan include metal overpacks which contain the HLW. Overpacks may be required to remain intact for more than several hundred years in order to provide containment of radionuclides. The main factor limiting the performance of overpacks is considered to be corrosion by groundwater. Carbon steel is one of the candidate materials for overpacks. A mathematical model for the life prediction of a carbon steel overpack has been developed based on the corrosion mechanisms. General corrosion and localized corrosion are considered because these are likely to initiate under repository conditions. In the general corrosion model, the reduction of oxygen and water is considered as the cathodic reaction. For localized corrosion, we have constructed a model which predicts the period of localized corrosion based on oxygen transport in bentonite. We also developed a model which predicts the propagation rate of localized corrosion based on the mass balance within the corroding cavity. (author)

  10. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  11. DC dynamic pull-in predictions for a generalized clamped–clamped micro-beam based on a continuous model and bifurcation analysis

    International Nuclear Information System (INIS)

    Chao, Paul C-P; Chiu, C W; Liu, Tsu-Hsien

    2008-01-01

    This study is devoted to providing precise predictions of the dc dynamic pull-in voltages of a clamped–clamped micro-beam based on a continuous model. A pull-in phenomenon occurs when the electrostatic force on the micro-beam exceeds the elastic restoring force exerted by beam deformation, leading to contact between the actuated beam and bottom electrode. DC dynamic pull-in refers to the case in which the voltage is applied instantaneously, as a step function. To derive the pull-in voltage, a dynamic model in partial differential equations is established based on the equilibrium among beam flexibility, inertia, residual stress, squeeze film, distributed electrostatic forces and its electrical field fringing effects. The method of Galerkin decomposition is then employed to convert the established system equations into reduced discrete modal equations. Considering lower-order modes and approximating the beam deflection by series of different orders, bifurcation analysis based on phase portraits is conducted to derive static and dynamic pull-in voltages. It is found that the static pull-in phenomenon follows dynamic instabilities, and the dc dynamic pull-in voltage is around 91–92% of the static counterpart. However, the derived dynamic pull-in voltage is found to be dependent on the varied beam parameters, different from a fixed predicted value derived in past works, where only lumped models are assumed. Furthermore, accurate closed-form predictions are provided for non-narrow beams. The predictions are finally validated by finite element analysis and available experimental data.
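    For contrast with the continuous model above, the classical lumped (mass-spring) parallel-plate approximation that past works relied on has a closed-form static pull-in voltage; the dynamic value is then scaled by the roughly 91-92% factor the study reports. The device parameters below are purely hypothetical:

```python
import math

def static_pull_in_voltage(k, gap, area, eps=8.854e-12):
    """Static pull-in voltage of a lumped parallel-plate actuator:
    V_pi = sqrt(8 k d^3 / (27 eps A)), reached at a deflection of d/3.
    k: spring constant (N/m), gap: initial gap d (m), area: plate area (m^2)."""
    return math.sqrt(8.0 * k * gap ** 3 / (27.0 * eps * area))

# Hypothetical device: k = 10 N/m, 2 um gap, 100 um x 100 um electrode
v_static = static_pull_in_voltage(10.0, 2e-6, 1e-8)
v_dynamic_est = 0.915 * v_static   # step-input estimate per the ~91-92% ratio
```

    The paper's point is that this fixed lumped-model ratio is only approximate: for the continuous clamped-clamped beam, the dynamic pull-in voltage varies with the beam parameters.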

  12. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly popular and have been used in numerous areas of study to complement clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, and the development and validation process of, a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was performed. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  13. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  14. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
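    A degree-day model of the kind described accumulates heat units above a developmental threshold. Below is a minimal sketch using the simple-average method with horizontal cutoffs; the 50/86 °F thresholds are common defaults, not necessarily those used for cranberry fruitworm:

```python
def degree_days(daily_min, daily_max, base=50.0, upper=86.0):
    """Accumulated degree-days over a sequence of days using the
    simple-average method: clamp each day's min/max to [base, upper],
    then add the excess of the daily mean over the base temperature."""
    total = 0.0
    for tmin, tmax in zip(daily_min, daily_max):
        tmin = min(max(tmin, base), upper)
        tmax = min(max(tmax, base), upper)
        total += (tmin + tmax) / 2.0 - base
    return total
```

    Management decisions then key off accumulated totals: once the running sum crosses a life-stage threshold established for the pest, the corresponding phenological event (e.g., egg hatch) is predicted.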

  15. Genomic prediction based on data from three layer lines using non-linear regression models.

    Science.gov (United States)

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional
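    The kernel view mentioned above can be sketched as follows: genomic prediction as kernel ridge regression, where a linear kernel built from genotypes recovers a GBLUP-like predictor and an RBF kernel gives the non-linear variant. Function names, the regularization parameter, and the toy data are illustrative, not the models fitted in this study:

```python
import numpy as np

def kernel_ridge_predict(K_train, y, K_cross, lam=1.0):
    """Kernel ridge prediction: alpha = (K + lam*I)^-1 y, then
    predictions for new animals are K_cross @ alpha.
    With a linear genotype kernel K = X X' this behaves like GBLUP."""
    alpha = np.linalg.solve(K_train + lam * np.eye(len(y)), y)
    return K_cross @ alpha

def rbf_kernel(X, Z, gamma=0.1):
    """Gaussian (RBF) kernel between genotype matrices X and Z,
    the non-linear alternative discussed in the abstract."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)
```

    Swapping the kernel is the only change between the linear and non-linear predictors, which is why their accuracies can be compared on identical training and validation splits.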

  16. Product unit neural network models for predicting the growth limits of Listeria monocytogenes.

    Science.gov (United States)

    Valero, A; Hervás, C; García-Gimeno, R M; Zurera, G

    2007-08-01

    A new approach to predict the growth/no growth interface of Listeria monocytogenes as a function of storage temperature, pH, citric acid (CA) and ascorbic acid (AA) is presented. A linear logistic regression procedure was performed and a non-linear model was obtained by adding new variables by means of a Neural Network model based on Product Units (PUNN). The classification efficiency on the training data set and the generalization data of the new Logistic Regression PUNN model (LRPU) was compared with Linear Logistic Regression (LLR) and Polynomial Logistic Regression (PLR) models. 92% of the total cases from the LRPU model were correctly classified, an improvement on the percentage obtained using the PLR model (90%) and significantly higher than the result obtained with the LLR model (80%). Moreover, the predictions of the LRPU model were closer to the observed data, which permits the design of proper formulations for minimally processed foods. This novel methodology can be applied in predictive microbiology to describe the growth/no growth interface of food-borne microorganisms such as L. monocytogenes. The optimal balance lies in finding models with acceptable interpretation capacity and good ability to fit the data on the boundaries of the variable range. The results suggest that these kinds of models may well be a very valuable tool for mathematical modeling.

  17. New models of droplet deposition and entrainment for prediction of CHF in cylindrical rod bundles

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Haibin, E-mail: hb-zhang@xjtu.edu.cn [School of Chemical Engineering and Technology, Xi’an Jiaotong University, Xi’an 710049 (China); Department of Chemical Engineering, Imperial College, London SW7 2BY (United Kingdom); Hewitt, G.F. [Department of Chemical Engineering, Imperial College, London SW7 2BY (United Kingdom)

    2016-08-15

    Highlights: • New models of droplet deposition and entrainment in rod bundles are developed. • A new phenomenological model to predict the CHF in rod bundles is described. • The present model is well able to predict CHF in rod bundles. - Abstract: In this paper, we present a new set of models of droplet deposition and entrainment in cylindrical rod bundles based on the previously proposed model for annuli (effectively a “one-rod” bundle) (2016a). These models make it possible to evaluate the differences in the rates of droplet deposition and entrainment for the respective rods and for the outer tube by taking into account the geometrical characteristics of the rod bundles. Using these models, a phenomenological model to predict the CHF (critical heat flux) for upward annular flow in vertical rod bundles is described. The performance of the model is tested against the experimental data of Becker et al. (1964) for CHF in 3-rod and 7-rod bundles. These data include tests in which only the rods were heated and data for simultaneous uniform and non-uniform heating of the rods and the outer tube. It was shown that the CHFs predicted by the present model agree well with the experimental data and with the experimental observation that dryout occurred first on the outer rods in 7-rod bundles. It is expected that the methodology used will be generally applicable to the prediction of CHF in rod bundles.

  18. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  19. Prediction of chronic critical illness in a general intensive care unit

    Directory of Open Access Journals (Sweden)

    Sérgio H. Loss

    2013-06-01

    Full Text Available OBJECTIVE: To assess the incidence, costs, and mortality associated with chronic critical illness (CCI), and to identify clinical predictors of CCI in a general intensive care unit. METHODS: This was a prospective observational cohort study. All patients receiving supportive treatment for over 20 days were considered chronically critically ill and eligible for the study. After applying the exclusion criteria, 453 patients were analyzed. RESULTS: There was an 11% incidence of CCI. Total length of hospital stay, costs, and mortality were significantly higher among patients with CCI. Mechanical ventilation, sepsis, Glasgow score < 15, inadequate calorie intake, and higher body mass index were independent predictors for CCI in the multivariate logistic regression model. CONCLUSIONS: CCI affects a distinctive population in intensive care units, with higher mortality, costs, and prolonged hospitalization. Factors identifiable at the time of admission or during the first week in the intensive care unit can be used to predict CCI.

  20. Force Control for a Pneumatic Cylinder Using Generalized Predictive Controller Approach

    Directory of Open Access Journals (Sweden)

    Ahmad ’Athif Mohd Faudzi

    2014-01-01

    Full Text Available The pneumatic cylinder is a well-known device because of its high power-to-weight ratio, ease of use, and environmental safety. A pneumatic cylinder uses air as its power source and converts it into linear or rotary movement. In order to control the pneumatic cylinder, a controller algorithm is needed to drive the on-off solenoid valve, with an encoder and a pressure sensor as the feedback inputs. In this paper, a generalized predictive controller (GPC) is proposed as the control strategy for pneumatic cylinder force control. To validate and compare the performance, a proportional-integral (PI) controller is also presented. Both controller algorithms, GPC and PI, are developed using an existing linear model of the cylinder from previous research. Results are presented for both simulation and experiment using MATLAB-Simulink as the platform. The results show that the GPC achieves a faster response with lower steady-state error and percentage overshoot than the PI controller.
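The PI baseline used for comparison above can be sketched in a few lines of Python; the first-order plant coefficients and gains here are invented for illustration and are not the identified cylinder model:

```python
# Discrete PI control of a first-order plant x[k+1] = a*x[k] + b*u[k].
# Plant coefficients and controller gains are illustrative, not identified.
a, b = 0.9, 0.1        # plant pole and input gain
kp, ki = 2.0, 0.5      # proportional and integral gains
setpoint = 1.0         # desired force (arbitrary units)

x, integral = 0.0, 0.0
for _ in range(200):
    error = setpoint - x
    integral += error              # accumulate error for the integral term
    u = kp * error + ki * integral # PI control law
    x = a * x + b * u              # plant update
```

The integral term is what drives the steady-state error to zero; a GPC would instead compute `u` by minimizing a cost over a prediction horizon of this same plant model.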

  1. Accounting for misclassification in electronic health records-derived exposures using generalized linear finite mixture models.

    Science.gov (United States)

    Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M

    2017-06-01

    Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
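The expectation-maximization idea behind fitting such a finite mixture can be illustrated on the simplest case, a two-component Gaussian mixture with known unit variances (toy data; this is not the paper's GLM mixture):

```python
import numpy as np

def em_two_gaussians(x, steps=100):
    """EM for a two-component Gaussian mixture (unit variances, for brevity)."""
    mu = np.array([x.min(), x.max()])   # crude initial means
    pi = np.array([0.5, 0.5])           # initial mixing weights
    for _ in range(steps):
        # E-step: responsibility of each component for each observation
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
        r = dens / dens.sum(1, keepdims=True)
        # M-step: re-estimate mixing weights and component means
        pi = r.mean(0)
        mu = (r * x[:, None]).sum(0) / r.sum(0)
    return pi, mu

# simulated data: 30% of observations from one class, 70% from the other
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 700)])
pi, mu = em_two_gaussians(x)
```

In the paper's setting the two components are the exposure classes (screening vs. diagnostic), the responsibilities come from the predicted class-membership probabilities, and the M-step fits a GLM instead of a mean.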

  2. Generalized One-Band Model Based on Zhang-Rice Singlets for Tetragonal CuO

    Science.gov (United States)

    Hamad, I. J.; Manuel, L. O.; Aligia, A. A.

    2018-04-01

    Tetragonal CuO (T-CuO) has attracted attention because of its structure similar to that of the cuprates. It has been recently proposed as a compound whose study can give an end to the long debate about the proper microscopic modeling for cuprates. In this work, we rigorously derive an effective one-band generalized t -J model for T-CuO, based on orthogonalized Zhang-Rice singlets, and make an estimative calculation of its parameters, based on previous ab initio calculations. By means of the self-consistent Born approximation, we then evaluate the spectral function and the quasiparticle dispersion for a single hole doped in antiferromagnetically ordered half filled T-CuO. Our predictions show very good agreement with angle-resolved photoemission spectra and with theoretical multiband results. We conclude that a generalized t -J model remains the minimal Hamiltonian for a correct description of single-hole dynamics in cuprates.

  3. Generalized Recovery

    DEFF Research Database (Denmark)

    Lando, David; Pedersen, Lasse Heje; Jensen, Christian Skov

    We characterize when physical probabilities, marginal utilities, and the discount rate can be recovered from observed state prices for several future time periods. We make no assumptions of the probability distribution, thus generalizing the time-homogeneous stationary model of Ross (2015...... our model empirically, testing the predictive power of the recovered expected return and other recovered statistics....

  4. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the proposed BWPHMs perform better than the Cox Proportional Hazard Model (Cox-PHM) because they consider a Weibull distribution for the baseline hazard function and account for model uncertainties. - Highlights: • Prioritize rehabilitation and replacements (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water mains failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
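The Weibull baseline hazard that distinguishes the BWPHM from the Cox-PHM has a simple closed form, which the following sketch evaluates directly (the shape and scale values are illustrative, not fitted to the Calgary data):

```python
import math

def weibull_hazard(t, shape, scale):
    """Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1); rises with t when shape > 1."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def weibull_survival(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t/lam)**k)."""
    return math.exp(-((t / scale) ** shape))

# illustrative parameters: shape > 1 gives a failure rate that rises with pipe age
shape, scale = 2.0, 20.0
h_young = weibull_hazard(10, shape, scale)
h_old = weibull_hazard(30, shape, scale)
s_at_scale = weibull_survival(scale, shape, scale)  # always exp(-1) at t = scale
```

In the proportional hazards setting, this baseline is multiplied by `exp(beta . x)` for the pipe-dependent covariates identified by BMA.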

  5. Design and Application of Offset-Free Model Predictive Control Disturbance Observation Method

    Directory of Open Access Journals (Sweden)

    Xue Wang

    2016-01-01

    Full Text Available Model predictive control (MPC), with its modest requirements on the mathematical model, excellent control performance, and convenient online calculation, has developed into a very important subdiscipline with a rich theoretical foundation and practical applications. However, unmeasurable disturbances are widespread in industrial processes and are difficult to deal with directly at present. In most implemented MPC strategies, a constant output disturbance is incorporated into the process model to solve this problem, but this fails to achieve offset-free control once unmeasured disturbances enter the process. Based on Kalman filter theory, the problem is solved by using a more general disturbance model which is superior to the constant output disturbance model. This paper presents the necessary conditions for offset-free model predictive control based on this model. By applying the disturbance model, the unmeasurable disturbance vectors are augmented as states of the control system, and the Kalman filter is used to estimate the unmeasurable disturbance and its effect on the output. Then, the dynamic matrix control (DMC) algorithm is improved by utilizing a feed-forward compensation control strategy with the estimated disturbance.

  6. Testing the generalized partial credit model

    NARCIS (Netherlands)

    Glas, Cornelis A.W.

    1996-01-01

    The partial credit model (PCM) (G.N. Masters, 1982) can be viewed as a generalization of the Rasch model for dichotomous items to the case of polytomous items. In many cases, the PCM is too restrictive to fit the data. Several generalizations of the PCM have been proposed. In this paper, a

  7. Development of a likelihood of survival scoring system for hospitalized equine neonates using generalized boosted regression modeling.

    Directory of Open Access Journals (Sweden)

    Katarzyna A Dembek

    Full Text Available BACKGROUND: Medical management of critically ill equine neonates (foals) can be expensive and labor intensive. Predicting the odds of foal survival using clinical information could facilitate the decision-making process for owners and clinicians. Numerous prognostic indicators and mathematical models to predict outcome in foals have been published; however, a validated scoring method to predict survival in sick foals has not been reported. The goal of this study was to develop and validate a scoring system that can be used by clinicians to predict the likelihood of survival of equine neonates based on clinical data obtained on admission. METHODS AND RESULTS: Data from 339 hospitalized foals of less than four days of age admitted to three equine hospitals were included to develop the model. Thirty-seven variables including historical information, physical examination and laboratory findings were analyzed by generalized boosted regression modeling (GBM) to determine which ones would be included in the survival score. Of these, six variables were retained in the final model. The weight for each variable was calculated using a generalized linear model, and the probability of survival for each total score was determined. The highest (7) and the lowest (0) scores represented 97% and 3% probability of survival, respectively. The accuracy of this survival score was validated in a prospective study on data from 283 hospitalized foals from the same three hospitals. Sensitivity, specificity, positive and negative predictive values for the survival score in the prospective population were 96%, 71%, 91%, and 85%, respectively. CONCLUSIONS: The survival score developed in our study was validated in a large number of foals with a wide range of diseases and can be easily implemented using data available in most equine hospitals. GBM was a useful tool to develop the survival score. Further evaluations of this scoring system in field conditions are needed.
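The structure of such a weighted score mapped to a survival probability can be sketched as follows. The variable names and weights are hypothetical stand-ins (the six retained variables are not listed in this abstract), and the logistic coefficients are chosen only so that scores 7 and 0 reproduce the reported 97% and 3% endpoints:

```python
import math

# Hypothetical integer weights for six admission findings (illustrative only;
# not the study's actual variables or weights). Weights sum to the maximum score of 7.
weights = {"able_to_stand": 2, "suckle_reflex": 1, "glucose_normal": 1,
           "wbc_normal": 1, "lactate_normal": 1, "warm_extremities": 1}

def survival_score(findings):
    """Sum the weights of the findings that are present; score ranges 0..7."""
    return sum(w for k, w in weights.items() if findings.get(k))

def survival_probability(score, b0=-3.5, b1=1.0):
    """Map a total score to a probability with a logistic link (illustrative coefficients)."""
    return 1 / (1 + math.exp(-(b0 + b1 * score)))
```

With these coefficients, a score of 7 maps to about 0.97 and a score of 0 to about 0.03, matching the probabilities quoted above.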

  8. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool

  9. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  10. Assessing the sensitivity and robustness of prediction models for apple firmness using spectral scattering technique

    Science.gov (United States)

    Spectral scattering is useful for nondestructive sensing of fruit firmness. Prediction models, however, are typically built using multivariate statistical methods such as partial least squares regression (PLSR), whose performance generally depends on the characteristics of the data. The aim of this ...

  11. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences follow a nonhomogeneous index trend, a novel grey forecasting model called the NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can precisely predict a pure nonhomogeneous index sequence. We used two case studies to verify the predictive effect of the NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.

  12. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  13. Generalized Nonlinear Yule Models

    OpenAIRE

    Lansky, Petr; Polito, Federico; Sacerdote, Laura

    2016-01-01

    With the aim of considering models with persistent memory, we propose a fractional nonlinear modification of the classical Yule model, often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth...

  14. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually...... and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.......A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision...

  15. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  16. Hot Temperatures, Hostile Affect, Hostile Cognition, and Arousal: Tests of a General Model of Affective Aggression.

    Science.gov (United States)

    Anderson, Craig A.; And Others

    1995-01-01

    Used a general model of affective aggression to generate predictions concerning hot temperatures. Results indicated that hot temperatures produced increases in hostile affect, hostile cognition, and physiological arousal. Concluded that hostile affect, hostile cognitions, and excitation transfer processes may all increase the likelihood of biased…

  17. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  18. Straw combustion on slow-moving grates - a comparison of model predictions with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Kaer, S.K. [Aalborg Univ. (Denmark). Inst. of Energy Technology

    2005-03-01

    Combustion of straw in grate-based boilers is often associated with high emission levels and relatively poor fuel burnout. A numerical grate combustion model was developed to assist in improving the combustion performance of these boilers. The model is based on a one-dimensional "walking-column" approach and includes the energy equations for both the fuel and the gas, accounting for heat transfer between the two phases. The model gives important insight into the combustion process and provides inlet conditions for a computational fluid dynamics analysis of the freeboard. The model predictions indicate the existence of two distinct combustion modes. Combustion air temperature and mass flow-rate are the two parameters determining the mode. There is a significant difference in reaction rates (ignition velocity) and temperature levels between the two modes. Model predictions were compared to measurements in terms of ignition velocity and temperatures for five different combinations of air mass flow and temperature. In general, the degree of correspondence with the experimental data is favorable. The largest difference between measurements and predictions occurs when the combustion mode changes. The applicability to full-scale is demonstrated by predictions made for an existing straw-fired boiler located in Denmark. (author)

  19. Why are predictions of general relativity theory for gravitational effects non-unique?

    International Nuclear Information System (INIS)

    Loskutov, Yu.M.

    1990-01-01

    The reasons for the non-uniqueness of the predictions of the general relativity theory (GRT) for gravitational effects are analyzed in detail. In the authors' opinion, the absence of a mechanism for comparing curved and flat metrics is the reason for the non-uniqueness

  20. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models

  1. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining the grey model and the Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. Work was done to improve the grey model, and an optimized unbiased grey model was obtained. This new model was used to predict the tendency of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)
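A minimal implementation of the classical GM(1,1) grey model underlying such combined grey-Markov predictors might look like this (the corrosion-rate series is invented for illustration; the unbiased optimization and the Markov residual correction of the paper are not included):

```python
import numpy as np

def gm11_fit(x0):
    """Fit the classical GM(1,1) grey model to a positive series x0."""
    x1 = np.cumsum(x0)                       # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background (mean-generated) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    return a, b

def gm11_predict(x0, a, b, n_ahead=1):
    """Restore predictions of the original series by differencing x1_hat."""
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

# roughly exponential corrosion-rate series (illustrative numbers)
x0 = np.array([1.0, 1.1, 1.22, 1.35, 1.49])
a, b = gm11_fit(x0)
pred = gm11_predict(x0, a, b, n_ahead=1)     # fitted values plus one step ahead
```

A grey-Markov scheme would then classify the residuals `x0 - pred[:len(x0)]` into states and use a Markov transition matrix to correct the next forecast.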

  2. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
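The core of the multi-model approach, averaging the predictions of two models and comparing RMSD against observations, can be sketched as follows (the sweat-loss numbers are invented; they illustrate how opposite biases in the two models partially cancel in the average):

```python
import numpy as np

def rmsd(pred, obs):
    """Root mean square deviation between predictions and observations."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

# observed sweat losses and two imperfect model predictions (illustrative, in grams)
obs = np.array([400.0, 500.0, 650.0, 800.0])
rational = obs + np.array([60.0, 80.0, 70.0, 90.0])      # model biased high
empirical = obs + np.array([-70.0, -60.0, -90.0, -80.0])  # model biased low

mma = (rational + empirical) / 2.0                        # multi-model average
errors = {"rational": rmsd(rational, obs),
          "empirical": rmsd(empirical, obs),
          "MMA": rmsd(mma, obs)}
```

When the component models err in opposite directions, as here, the averaged prediction has a much smaller RMSD than either model alone, which is the effect the study reports.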

  3. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately...... calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM...... is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper-bound to the true predictive distribution. As the global minimum of this upper-bound exists, the problem is reduced to seek an approximation to the true predictive distribution...

  4. Impact of rainstorm and runoff modeling on predicted consequences of atmospheric releases from nuclear reactor accidents

    International Nuclear Information System (INIS)

    Ritchie, L.T.; Brown, W.D.; Wayland, J.R.

    1980-05-01

    A general temperate latitude cyclonic rainstorm model is presented which describes the effects of washout and runoff on consequences of atmospheric releases of radioactive material from potential nuclear reactor accidents. The model treats the temporal and spatial variability of precipitation processes. Predicted air and ground concentrations of radioactive material and resultant health consequences for the new model are compared to those of the original WASH-1400 model under invariant meteorological conditions and for realistic weather events using observed meteorological sequences. For a specific accident under a particular set of meteorological conditions, the new model can give significantly different results from those predicted by the WASH-1400 model, but the aggregate consequences produced for a large number of meteorological conditions are similar

  5. Performance assessment of turbulence models for the prediction of moderator thermal flow inside CANDU calandria

    International Nuclear Information System (INIS)

    Lee, Gong Hee; Bang, Young Seok; Woo, Sweng Woong

    2012-01-01

    The moderator thermal flow in the CANDU calandria is generally complex and highly turbulent because of the interaction of the buoyancy force with the inlet jet inertia. In this study, the prediction performance of turbulence models for the accurate analysis of the moderator thermal flow are assessed by comparing the results calculated with various types of turbulence models in the commercial flow solver FLUENT with experimental data for the test vessel at Sheridan Park Engineering Laboratory (SPEL). Through this comparative study of turbulence models, it is concluded that turbulence models that include the source term to consider the effects of buoyancy on the turbulent flow should be used for the reliable prediction of the moderator thermal flow inside the CANDU calandria

  6. A system identification approach for developing model predictive controllers of antibody quality attributes in cell culture processes.

    Science.gov (United States)

    Downey, Brandon; Schmitt, John; Beller, Justin; Russell, Brian; Quach, Anthony; Hermann, Elizabeth; Lyon, David; Breit, Jeffrey

    2017-11-01

    As the biopharmaceutical industry evolves to include more diverse protein formats and processes, more robust control of Critical Quality Attributes (CQAs) is needed to maintain processing flexibility without compromising quality. Active control of CQAs has been demonstrated using model predictive control techniques, which allow development of processes which are robust against disturbances associated with raw material variability and other potentially flexible operating conditions. Wide adoption of model predictive control in biopharmaceutical cell culture processes has been hampered, however, in part due to the large amount of data and expertise required to make a predictive model of controlled CQAs, a requirement for model predictive control. Here we developed a highly automated, perfusion apparatus to systematically and efficiently generate predictive models using application of system identification approaches. We successfully created a predictive model of %galactosylation using data obtained by manipulating galactose concentration in the perfusion apparatus in serialized step change experiments. We then demonstrated the use of the model in a model predictive controller in a simulated control scenario to successfully achieve a %galactosylation set point in a simulated fed-batch culture. The automated model identification approach demonstrated here can potentially be generalized to many CQAs, and could be a more efficient, faster, and highly automated alternative to batch experiments for developing predictive models in cell culture processes, and allow the wider adoption of model predictive control in biopharmaceutical processes. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 33:1647-1661, 2017. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers.
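The serialized step-change experiments above are a classic system-identification setup: perturb an input, record the response, and fit a low-order dynamic model. The sketch below fits a first-order process model to step-response data using the 63.2%-rise-time heuristic; the function and its assumptions (settled response, uniform sampling) are an illustrative stand-in, not the paper's actual identification algorithm.

```python
import math

def fit_first_order_step(t, y, u_step):
    """Identify a first-order model y(t) = K*u_step*(1 - exp(-t/tau))
    from step-response data: gain K from the settled value, time
    constant tau from the 63.2% rise time (classic heuristic)."""
    y_ss = y[-1]                  # assume the response has settled
    K = y_ss / u_step             # process gain
    target = 0.632 * y_ss
    tau = next(ti for ti, yi in zip(t, y) if yi >= target)
    return K, tau

# Synthetic step response with K = 2, tau = 5 (unit step input)
t = [0.1 * i for i in range(500)]
y = [2.0 * (1.0 - math.exp(-ti / 5.0)) for ti in t]
print(fit_first_order_step(t, y, 1.0))
```

Once such a model is identified, it can serve as the internal prediction model of a model predictive controller, which is the role the authors' identified models play.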

  7. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2 = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  8. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

Predictive simulations for the Alcator C-Mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well, in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles.

  9. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests containing patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast, with a total response time of one second per patient prediction.

  10. Generalized Nonlinear Yule Models

    Science.gov (United States)

    Lansky, Petr; Polito, Federico; Sacerdote, Laura

    2016-11-01

    With the aim of considering models related to random graphs growth exhibiting persistent memory, we propose a fractional nonlinear modification of the classical Yule model often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth rates. Among the main results we derive the explicit distribution of the number of in-links of a webpage chosen uniformly at random recognizing the contribution to the asymptotics and the finite time correction. The mean value of the latter distribution is also calculated explicitly in the most general case. Furthermore, in order to show the usefulness of our results, we particularize them in the case of specific birth rates giving rise to a saturating behaviour, a property that is often observed in nature. The further specialization to the non-fractional case allows us to extend the Yule model accounting for a nonlinear growth.

  11. Climatology of the HOPE-G global ocean general circulation model - Sea ice general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Legutke, S. [Deutsches Klimarechenzentrum (DKRZ), Hamburg (Germany); Maier-Reimer, E. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany)

    1999-12-01

The HOPE-G global ocean general circulation model (OGCM) climatology, obtained in a long-term forced integration, is described. HOPE-G is a primitive-equation z-level ocean model which contains a dynamic-thermodynamic sea-ice model. It is formulated on a 2.8° grid with increased resolution in low latitudes in order to better resolve equatorial dynamics. The vertical resolution is 20 layers. The purpose of the integration was both to investigate the model's ability to reproduce the observed general circulation of the world ocean and to obtain an initial state for coupled atmosphere - ocean - sea-ice climate simulations. The model was driven with daily mean data of a 15-year integration of the atmosphere general circulation model ECHAM4, the atmospheric component in later coupled runs. Thereby, a maximum of the flux variability that is expected to appear in coupled simulations is already included in the ocean spin-up experiment described here. The model was run for more than 2000 years until a quasi-steady state was achieved. It reproduces the major current systems and the main features of the so-called conveyor-belt circulation. The observed distribution of water masses is reproduced reasonably well, although with a saline bias in the intermediate water masses and a warm bias in the deep and bottom water of the Atlantic and Indian Oceans. The model underestimates the meridional transport of heat in the Atlantic Ocean. The simulated heat transport in the other basins, though, is in good agreement with observations. (orig.)

  12. Hydrological-niche models predict water plant functional group distributions in diverse wetland types.

    Science.gov (United States)

    Deane, David C; Nicol, Jason M; Gehrig, Susan L; Harding, Claire; Aldridge, Kane T; Goodman, Abigail M; Brookes, Justin D

    2017-06-01

Human use of water resources threatens environmental water supplies. If resource managers are to develop policies that avoid unacceptable ecological impacts, some means to predict ecosystem response to changes in water availability is necessary. This is difficult to achieve at spatial scales relevant for water resource management because of the high natural variability in ecosystem hydrology and ecology. Water plant functional groups classify species with similar hydrological niche preferences together, allowing a qualitative means to generalize community responses to changes in hydrology. We tested the potential of functional groups for making quantitative predictions of water plant functional group distributions across diverse wetland types over a large geographical extent. We sampled wetlands covering a broad range of hydrogeomorphic and salinity conditions in South Australia, collecting both hydrological and floristic data from 687 quadrats across 28 wetland hydrological gradients. We built hydrological-niche models for eight water plant functional groups using a range of candidate models combining different surface inundation metrics. We then tested the predictive performance of top-ranked individual and averaged models for each functional group. Cross-validation showed that the models achieved acceptable predictive performance, with correct classification rates in the range 0.68-0.95. Model predictions can be made at any spatial scale for which hydrological data are available and could be implemented in a geographical information system. We show that the response of water plant functional groups to inundation is consistent enough across diverse wetland types to quantify the probability of hydrological impacts over regional spatial scales. © 2017 by the Ecological Society of America.

  13. Modeling of energy consumption and related GHG (greenhouse gas) intensity and emissions in Europe using general regression neural networks

    International Nuclear Information System (INIS)

    Antanasijević, Davor; Pocajt, Viktor; Ristić, Mirjana; Perić-Grujić, Aleksandra

    2015-01-01

This paper presents a new approach for the estimation of energy-related GHG (greenhouse gas) emissions at the national level that combines the simplicity of the concept of GHG intensity and the generalization capabilities of ANNs (artificial neural networks). The main objectives of this work include the determination of the accuracy of a GRNN (general regression neural network) model applied for the prediction of EC (energy consumption) and the GHG intensity of energy consumption, utilizing general country statistics as inputs, as well as an analysis of the accuracy of energy-related GHG emissions obtained by multiplying the two aforementioned outputs. The models were developed using historical data from the period 2004–2012, for a set of 26 European countries (EU Members). The obtained results demonstrate that the GRNN GHG intensity model provides a more accurate prediction, with a MAPE (mean absolute percentage error) of 4.5%, than the tested MLR (multiple linear regression) and second-order and third-order non-linear MPR (multiple polynomial regression) models. Also, the GRNN EC model has high accuracy (MAPE = 3.6%), and therefore both GRNN models and the proposed approach can be considered suitable for the calculation of GHG emissions. The predicted energy-related GHG emissions were very similar to the actual GHG emissions of the EU Members (MAPE = 6.4%). - Highlights: • ANN modeling of GHG intensity of energy consumption is presented. • ANN modeling of energy consumption at the national level is presented. • The GHG intensity concept was used for the estimation of energy-related GHG emissions. • The ANN models provide better results in comparison with conventional models. • A forecast of GHG emissions for 26 countries was made successfully with a MAPE of 6.4%
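At its core, a GRNN is Nadaraya-Watson kernel regression: each prediction is a Gaussian-weighted average of the training targets. The minimal sketch below, together with the MAPE metric quoted above, is an illustrative reduction of the idea (the toy inputs stand in for the country statistics the authors actually used):

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of the
    training targets (Nadaraya-Watson kernel regression)."""
    w = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
         for xi in train_x]
    return sum(wi * yi for wi, yi in zip(w, train_y)) / sum(w)

def mape(actual, predicted):
    """Mean absolute percentage error, the accuracy metric used above."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Toy example: predict EC from two generic country indicators
train_x = [(1.0, 2.0), (2.0, 1.0), (3.0, 3.0)]
train_ec = [100.0, 180.0, 320.0]
ec_hat = grnn_predict((2.0, 1.0), train_x, train_ec)
print(ec_hat)
```

Energy-related emissions would then follow the paper's scheme by multiplying the predicted EC by the separately predicted GHG intensity.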

  14. The General Education Collaboration Model: A Model for Successful Mainstreaming.

    Science.gov (United States)

    Simpson, Richard L.; Myles, Brenda Smith

    1990-01-01

    The General Education Collaboration Model is designed to support general educators teaching mainstreamed disabled students, through collaboration with special educators. The model is based on flexible departmentalization, program ownership, identification and development of supportive attitudes, student assessment as a measure of program…

  15. Model to predict the radiological consequences of transportation of radioactive material through an urban environment

    International Nuclear Information System (INIS)

    Taylor, J.M.; Daniel, S.L.; DuCharme, A.R.; Finley, N.N.

    1977-01-01

A model has been developed which predicts the radiological consequences of the transportation of radioactive material in and around urban environments. The discussion of the model covers the following general topics: health effects from radiation exposure, urban area characterization, computation of the dose resulting from normal transportation, computation of the dose resulting from vehicular accidents or sabotage, and preliminary results and conclusions.

  16. Primordial non-Gaussianities of gravitational waves in the most general single-field inflation model with second-order field equations.

    Science.gov (United States)

    Gao, Xian; Kobayashi, Tsutomu; Yamaguchi, Masahide; Yokoyama, Jun'ichi

    2011-11-18

We completely clarify the feature of primordial non-Gaussianities of tensor perturbations in the most general single-field inflation model with second-order field equations. It is shown that the most general cubic action for the tensor perturbation h(ij) is composed of only two contributions, one with two spatial derivatives and the other with one time derivative on each h(ij). The former is essentially identical to the cubic term that appears in Einstein gravity and predicts a squeezed shape, while the latter newly appears in the presence of the kinetic coupling to the Einstein tensor and predicts an equilateral shape. Thus, only two shapes appear in the graviton bispectrum of the most general single-field inflation model, which could offer a new clue to the identification of inflationary gravitational waves in observations of cosmic microwave background anisotropies as well as in direct detection experiments.

  17. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  18. Perceived Threat and Corroboration: Key Factors That Improve a Predictive Model of Trust in Internet-based Health Information and Advice

    Science.gov (United States)

    Harris, Peter R; Briggs, Pam

    2011-01-01

    Background How do people decide which sites to use when seeking health advice online? We can assume, from related work in e-commerce, that general design factors known to affect trust in the site are important, but in this paper we also address the impact of factors specific to the health domain. Objective The current study aimed to (1) assess the factorial structure of a general measure of Web trust, (2) model how the resultant factors predicted trust in, and readiness to act on, the advice found on health-related websites, and (3) test whether adding variables from social cognition models to capture elements of the response to threatening, online health-risk information enhanced the prediction of these outcomes. Methods Participants were asked to recall a site they had used to search for health-related information and to think of that site when answering an online questionnaire. The questionnaire consisted of a general Web trust questionnaire plus items assessing appraisals of the site, including threat appraisals, information checking, and corroboration. It was promoted on the hungersite.com website. The URL was distributed via Yahoo and local print media. We assessed the factorial structure of the measures using principal components analysis and modeled how well they predicted the outcome measures using structural equation modeling (SEM) with EQS software. Results We report an analysis of the responses of participants who searched for health advice for themselves (N = 561). Analysis of the general Web trust questionnaire revealed 4 factors: information quality, personalization, impartiality, and credible design. In the final SEM model, information quality and impartiality were direct predictors of trust. However, variables specific to eHealth (perceived threat, coping, and corroboration) added substantially to the ability of the model to predict variance in trust and readiness to act on advice on the site. 
The final model achieved a satisfactory fit: χ2 5 = 10

  19. A new General Lorentz Transformation model

    International Nuclear Information System (INIS)

    Novakovic, Branko; Novakovic, Alen; Novakovic, Dario

    2000-01-01

A new general structure of Lorentz Transformations, in the form of the General Lorentz Transformation model (GLT-model), has been derived. This structure includes both the Lorentz-Einstein and Galilean Transformations as its particular (special) realizations. Since the free parameters of the GLT-model have been identified in a gravitational field, the GLT-model can be employed in both Special and General Relativity. Consequently, the possibilities of a unification of Einstein's Special and General Theories of Relativity, as well as a unification of electromagnetic and gravitational fields, are opened. If the GLT-model is correct, then there exist four new observational phenomena (a length and time neutrality, and a length dilation and a time contraction). Besides, the well-known phenomena (a length contraction and a time dilation) are also constituents of the GLT-model. It means that there is a symmetry in the GLT-model, where the center of this symmetry is represented by a length and a time neutrality. A time and a length neutrality in a gravitational field can be realized if the velocity of a moving system is equal to the free-fall velocity. A time and a length neutrality include an observation of a particle mass neutrality. A special consideration has been devoted to the correlation between the GLT-model and the limitation on particle velocities, in order to investigate the possibility of a travel time reduction. It is found that an observation of a particle speed faster than c = 299 792 458 m/s is possible in a gravitational field, if certain conditions are fulfilled.

  20. Computational intelligence models to predict porosity of tablets using minimum features

    Directory of Open Access Journals (Sweden)

    Khalid MH

    2017-01-01

Mohammad Hassan Khalid,1 Pezhman Kazemi,1 Lucia Perez-Gandarillas,2 Abderrahim Michrafy,2 Jakub Szlęk,1 Renata Jachowicz,1 Aleksander Mendyk1 1Department of Pharmaceutical Technology and Biopharmaceutics, Faculty of Pharmacy, Jagiellonian University Medical College, Krakow, Poland; 2Centre National de la Recherche Scientifique, Centre RAPSODEE, Mines Albi, Université de Toulouse, Albi, France Abstract: The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have an in-depth understanding of the material properties and the governing parameters of its processes in response to different formulations. Understanding these aspects will allow tighter control of the process, leading to the implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking a deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created from roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression, trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data comprised the proportion of microcrystalline cellulose (MCC, in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] = 1%) and symbolic regression (NRMSE = 4%) as the best-performing methods, also exhibiting reliable predictive
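For reference, the NRMSE screening score quoted above can be computed as below; normalizing the RMSE by the observed range is one common convention (the authors may normalize differently, e.g. by the mean).

```python
def nrmse(actual, predicted):
    """Normalized root-mean-square error as a percentage:
    RMSE divided by the range of the observed values."""
    n = len(actual)
    rmse = (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5
    return 100.0 * rmse / (max(actual) - min(actual))

print(nrmse([0.0, 10.0], [1.0, 9.0]))  # RMSE = 1, range = 10
```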

  1. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

Survivability rates vary widely among the various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether there are differences in performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability. Using three different machine learning methods, we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to differ between stages. By evaluating the models separately on different stages we found that their performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
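The stage-specific training strategy reduces to partitioning the data by stage and fitting an independent model on each partition. The sketch below uses a trivial majority-class predictor as a hypothetical stand-in for the paper's actual learners:

```python
from collections import Counter, defaultdict

def train_stage_specific(records):
    """Fit one survivability model per stage; here each 'model' is
    just the majority outcome for that stage (illustrative stand-in)."""
    by_stage = defaultdict(list)
    for stage, outcome in records:
        by_stage[stage].append(outcome)
    return {stage: Counter(outcomes).most_common(1)[0][0]
            for stage, outcomes in by_stage.items()}

records = [("I", "survived")] * 3 + [("I", "died")] + \
          [("IV", "died")] * 3 + [("IV", "survived")]
models = train_stage_specific(records)
print(models["I"], models["IV"])
```

Evaluating each per-stage model only on held-out examples of its own stage mirrors the paper's point that pooled evaluation overestimates performance.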

  2. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the records needed for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their predictions in three steps, with additional information provided prior to each step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed widely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information.
For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  3. A multivariate model for predicting segmental body composition.

    Science.gov (United States)

    Tian, Simiao; Mioche, Laurence; Denis, Jean-Baptiste; Morio, Béatrice

    2013-12-01

The aims of the present study were to propose a multivariate model for simultaneously predicting body, trunk and appendicular fat and lean masses from easily measured variables, and to compare its predictive capacity with that of the available univariate models that predict body fat percentage (BF%). The dual-energy X-ray absorptiometry (DXA) dataset (52% men and 48% women) with White, Black and Hispanic ethnicities (1999-2004, National Health and Nutrition Examination Survey) was randomly divided into three sub-datasets: a training dataset (TRD), a test dataset (TED) and a validation dataset (VAD), comprising 3835, 1917 and 1917 subjects, respectively. For each sex, several multivariate prediction models were fitted from the TRD using age, weight, height and possibly waist circumference. The most accurate model was selected from the TED and then applied to the VAD and a French DXA dataset (French DB) (526 men and 529 women) to assess the prediction accuracy in comparison with that of five published univariate models, for which adjusted formulas were re-estimated using the TRD. Waist circumference was found to improve the prediction accuracy, especially in men. For BF%, the standard error of prediction (SEP) values were 3.26 (3.75)% for men and 3.47 (3.95)% for women in the VAD (French DB), as good as those of the adjusted univariate models. Moreover, the SEP values for the prediction of body and appendicular lean masses ranged from 1.39 to 2.75 kg for both sexes. The prediction accuracy was best for subjects aged < 65 years, with BMI < 30 kg/m² and of Hispanic ethnicity. The application of our multivariate model to large populations could be useful for addressing various public health issues.
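The SEP metric reported above can be computed as the sample standard deviation of the prediction residuals; this is one common definition, and the study's exact convention may differ slightly.

```python
def sep(actual, predicted):
    """Standard error of prediction: sample standard deviation of the
    residuals (predicted minus actual)."""
    errors = [p - a for a, p in zip(actual, predicted)]
    mean_e = sum(errors) / len(errors)
    return (sum((e - mean_e) ** 2 for e in errors) / (len(errors) - 1)) ** 0.5

# Toy BF% residuals: errors of +1, +2, +3 percentage points
print(sep([20.0, 25.0, 30.0], [21.0, 27.0, 33.0]))
```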

  4. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
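To illustrate the scheme's core step: after instantaneous linearization, each control move solves a small constrained quadratic program. For a scalar plant with a prediction horizon of one, that QP has a closed-form solution, sketched below (a toy reduction, not the paper's full neural-network formulation):

```python
def mpc_step(x, x_ref, a, b, u_min, u_max, r=0.1):
    """One-step MPC for the linearized plant x[k+1] = a*x[k] + b*u[k]:
    minimize (x[k+1] - x_ref)**2 + r*u**2 subject to u_min <= u <= u_max.
    For this convex 1-D QP, clipping the unconstrained minimizer to
    the bounds gives the exact constrained optimum."""
    u = b * (x_ref - a * x) / (b * b + r)   # stationary point of the cost
    return max(u_min, min(u_max, u))

# Drive x from 0 toward the reference 1.0 with no control penalty
print(mpc_step(x=0.0, x_ref=1.0, a=1.0, b=1.0, u_min=-10.0, u_max=10.0, r=0.0))
```

In the full scheme, the quadratic cost spans a multi-step horizon and the linearization (a, b) is refreshed from the neural plant model at each sample.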

  5. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which addresses the problem of overfitting the training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information, of 44 electronics and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
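The GA-over-hyperparameters idea can be sketched with a minimal genetic algorithm minimizing a validation loss over a single parameter; the real setup tunes several SVR parameters jointly, so everything below is an illustrative stand-in:

```python
import random

def ga_tune(loss, bounds, pop_size=20, generations=30, seed=0):
    """Minimal genetic algorithm: truncation selection on validation
    loss, blend crossover, Gaussian mutation, values clipped to bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        parents = pop[:pop_size // 2]              # keep the fittest half
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            child = (p1 + p2) / 2 + rng.gauss(0, (hi - lo) * 0.05)
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=loss)

# Toy validation loss with its optimum at parameter value 3
best_c = ga_tune(lambda c: (c - 3.0) ** 2, (0.0, 10.0))
print(best_c)
```

In the paper's setting, `loss` would train an SVR with the candidate parameters and return its validation error, which is far more expensive per evaluation but structurally identical.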

  6. Generating a robust prediction model for stage I lung adenocarcinoma recurrence after surgical resection.

    Science.gov (United States)

    Wu, Yu-Chung; Wei, Nien-Chih; Hung, Jung-Jyh; Yeh, Yi-Chen; Su, Li-Jen; Hsu, Wen-Hu; Chou, Teh-Ying

    2017-10-03

Lung cancer mortality remains high even after successful resection. Adjuvant treatment benefits stage II and III patients, but not stage I patients, and most studies fail to predict recurrence in stage I patients. Our study included 211 lung adenocarcinoma patients (stages I-IIIA; 81% stage I) who received curative resections at Taipei Veterans General Hospital between January 2001 and December 2012. We generated a prediction model using 153 samples, with validation using an additional 58 clinical-outcome-blinded samples. Gene expression profiles were generated using formalin-fixed, paraffin-embedded tissue samples and microarrays. Data analysis was performed using a supervised clustering method. The prediction model generated from mixed-stage samples successfully separated patients at high vs. low risk of recurrence. The validation set hazard ratio (HR = 4.38) was similar to that of the training set (HR = 4.53), indicating a robust training process. Our prediction model successfully distinguished high- from low-risk stage IA and IB patients, with a difference in 5-year disease-free survival between high- and low-risk patients of 42% for stage IA and 45% for stage IB ( p model for identifying lung adenocarcinoma patients at high risk for recurrence who may benefit from adjuvant therapy. Our prediction performance, measured as the difference in disease-free survival between the high-risk and low-risk groups, demonstrates a more than twofold improvement over earlier published results.

  7. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
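
    The predictive-control loop described above (predict ahead with an internal model, compare to the reference, optimize the input, apply the first move) can be illustrated with a toy linear plant. This sketch is not the paper's nine-DOF gait model: it tracks a step reference with a double integrator and an unconstrained least-squares MPC; all parameters are illustrative.

```python
import numpy as np

# Minimal unconstrained MPC for a double integrator (a toy stand-in for one
# joint; the paper's seven-segment plant and cost terms are not reproduced).
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [dt]])
N = 20                                   # prediction horizon

def mpc_step(x0, ref, q=1.0, r=0.01):
    """Choose the control sequence minimizing predicted tracking error."""
    # Prediction matrices: position at step k+1 is F[k] @ x0 + G[k] @ u.
    F = np.zeros((N, 2)); G = np.zeros((N, N))
    Ak = np.eye(2)
    for k in range(N):
        Ak = A @ Ak                      # Ak = A^(k+1)
        F[k] = Ak[0]
        for j in range(k + 1):
            G[k, j] = (np.linalg.matrix_power(A, k - j) @ B)[0, 0]
    # Least squares: min q*||F x0 + G u - ref||^2 + r*||u||^2
    H = q * G.T @ G + r * np.eye(N)
    u = np.linalg.solve(H, q * G.T @ (ref - F @ x0))
    return u[0]                          # receding horizon: apply first input

x = np.array([0.0, 0.0])
target = np.ones(N)                      # step reference over the horizon
for _ in range(100):
    u = mpc_step(x, target)
    x = A @ x + B.flatten() * u          # plant update
```

    The paper's controller additionally mixes MPC with PD control on two joints; a PD law would simply be `u = kp * (ref - x[0]) - kd * x[1]`.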

  8. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  9. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
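
    The measurement-based aggregation step (floor-corrected municipality means weighted by population size) reduces to a weighted average; a sketch with hypothetical numbers, since the Swiss municipality data are not reproduced here:

```python
# Population-weighted mean radon exposure across municipalities
# (all values hypothetical; units Bq/m3, means assumed floor-corrected).
municipality_mean_radon = [62.0, 95.0, 71.0, 110.0]
population = [12000, 3500, 48000, 900]

weighted_mean = sum(r * p for r, p in zip(municipality_mean_radon, population)) \
    / sum(population)
```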

  10. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2014-11-05

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of these distal regulatory elements remains a challenge for bioinformatics research. Although existing methodologies have increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell lines, class imbalance within the learning sets, and ad hoc rules for selecting enhancer candidates for supervised learning are key issues that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of an enhancer's properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data, where DEEP achieves 90.2% accuracy and a 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from the VISTA database. DEEP-VISTA, when tested on an independent test set, achieved a GM of 80.1% and accuracy of 89.64%. The DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.

  11. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.; Kalnis, Panos; Bajic, Vladimir B.

    2014-01-01

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of these distal regulatory elements remains a challenge for bioinformatics research. Although existing methodologies have increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell lines, class imbalance within the learning sets, and ad hoc rules for selecting enhancer candidates for supervised learning are key issues that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of an enhancer's properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data, where DEEP achieves 90.2% accuracy and a 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from the VISTA database. DEEP-VISTA, when tested on an independent test set, achieved a GM of 80.1% and accuracy of 89.64%. The DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.
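
    The geometric-mean summary statistic that DEEP reports, and a majority-vote combination of individual classifiers (one simple way to realise the "combine many individual models" step; DEEP's actual combination rule may differ), can be sketched as:

```python
import math

def geometric_mean_spec_sens(tp, fp, tn, fn):
    """Geometric mean (GM) of sensitivity and specificity from a confusion matrix."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return math.sqrt(sensitivity * specificity)

def ensemble_predict(votes):
    """Majority vote over many binary classifiers (1 = enhancer)."""
    return 1 if sum(votes) * 2 > len(votes) else 0
```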

  12. Prediction of tectonic stresses and fracture networks with geomechanical reservoir models

    International Nuclear Information System (INIS)

    Henk, A.; Fischer, K.

    2014-09-01

    This project evaluates the potential of geomechanical Finite Element (FE) models for the prediction of in situ stresses and fracture networks in faulted reservoirs. Modeling focuses on spatial variations of the in situ stress distribution resulting from faults and contrasts in mechanical rock properties. In a first methodological part, a workflow is developed for building such geomechanical reservoir models and calibrating them to field data. In the second part, this workflow was applied successfully to an intensively faulted gas reservoir in the North German Basin. A truly field-scale geomechanical model covering more than 400 km² was built and calibrated. It includes a mechanical stratigraphy as well as a network of 86 faults. The latter are implemented as distinct planes of weakness and allow the fault-specific evaluation of shear and normal stresses. A so-called static model describes the recent state of the reservoir and, thus, after calibration its results reveal the present-day in situ stress distribution. Further geodynamic modeling work considers the major stages in the tectonic history of the reservoir and provides insights into the paleo-stress distribution. These results are compared to fracture data and hydraulic fault behavior observed today. The outcome of this project confirms the potential of geomechanical FE models for robust stress and fracture predictions. The workflow is generally applicable and can be used for modeling of any stress-sensitive reservoir.

  13. Prediction of tectonic stresses and fracture networks with geomechanical reservoir models

    Energy Technology Data Exchange (ETDEWEB)

    Henk, A.; Fischer, K. [TU Darmstadt (Germany). Inst. fuer Angewandte Geowissenschaften

    2014-09-15

    This project evaluates the potential of geomechanical Finite Element (FE) models for the prediction of in situ stresses and fracture networks in faulted reservoirs. Modeling focuses on spatial variations of the in situ stress distribution resulting from faults and contrasts in mechanical rock properties. In a first methodological part, a workflow is developed for building such geomechanical reservoir models and calibrating them to field data. In the second part, this workflow was applied successfully to an intensively faulted gas reservoir in the North German Basin. A truly field-scale geomechanical model covering more than 400 km² was built and calibrated. It includes a mechanical stratigraphy as well as a network of 86 faults. The latter are implemented as distinct planes of weakness and allow the fault-specific evaluation of shear and normal stresses. A so-called static model describes the recent state of the reservoir and, thus, after calibration its results reveal the present-day in situ stress distribution. Further geodynamic modeling work considers the major stages in the tectonic history of the reservoir and provides insights into the paleo-stress distribution. These results are compared to fracture data and hydraulic fault behavior observed today. The outcome of this project confirms the potential of geomechanical FE models for robust stress and fracture predictions. The workflow is generally applicable and can be used for modeling of any stress-sensitive reservoir.

  14. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, and the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300 °C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data shows that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  15. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.
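
    The c-statistic reported by most of these models is the probability that, for a randomly chosen pair of patients with different outcomes, the patient who had the event received the higher predicted risk. A minimal pairwise implementation:

```python
from itertools import combinations

def c_statistic(probs, outcomes):
    """Concordance (c-) statistic: among all event/non-event pairs, the
    fraction where the event case got the higher predicted risk
    (ties count one half)."""
    concordant = ties = total = 0
    for i, j in combinations(range(len(probs)), 2):
        if outcomes[i] == outcomes[j]:
            continue                      # only discordant-outcome pairs count
        total += 1
        hi = i if outcomes[i] == 1 else j  # index of the event case
        lo = j if hi == i else i
        if probs[hi] > probs[lo]:
            concordant += 1
        elif probs[hi] == probs[lo]:
            ties += 1
    return (concordant + 0.5 * ties) / total
```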

  16. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
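
    The jackknife-plus-bootstrap machinery for confidence bounds on a total volume can be sketched on synthetic data; the Antrim Shale volumes and the local spatial predictor itself are not reproduced here, so the per-site estimates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical per-site recoverable-volume estimates; in the paper these
# come from local spatial nonparametric predictors at undrilled sites.
volumes = rng.lognormal(mean=1.0, sigma=0.6, size=150)
n = len(volumes)

# Jackknife replicates of the mean: leave each site out in turn.
jack = np.array([(volumes.sum() - v) / (n - 1) for v in volumes])
jack_se_total = n * np.sqrt((n - 1) / n * ((jack - jack.mean()) ** 2).sum())

# Bootstrap percentile bounds for the total volume.
boot_totals = np.array([
    rng.choice(volumes, size=n, replace=True).sum() for _ in range(2000)
])
lo, hi = np.percentile(boot_totals, [2.5, 97.5])
```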

  17. Prediction of health effects of cross-border atmospheric pollutants using an aerosol forecast model.

    Science.gov (United States)

    Onishi, Kazunari; Sekiyama, Tsuyoshi Thomas; Nojima, Masanori; Kurosaki, Yasunori; Fujitani, Yusuke; Otani, Shinji; Maki, Takashi; Shinoda, Masato; Kurozawa, Youichi; Yamagata, Zentaro

    2018-08-01

    Health effects of cross-border air pollutants and Asian dust are of significant concern in Japan. Currently, models predicting the arrival of aerosols have not been evaluated for the association between arrival predictions and health effects. We investigated the association between subjective health symptoms and unreleased aerosol data from the Model of Aerosol Species in the Global Atmosphere (MASINGAR) acquired from the Japan Meteorological Agency, with the objective of ascertaining whether these data could be applied to predicting health effects. Subjective symptom scores were collected via self-administered questionnaires and, along with modeled surface aerosol concentration data, were used to conduct a risk evaluation using generalized estimating equations between October and November 2011. Altogether, 29 individuals provided 1670 responses. Spearman's correlation coefficients were determined for the relationship between the proportion of the participants reporting the maximum score of two or more for each symptom and the surface concentrations for each considered aerosol species calculated using MASINGAR; the coefficients showed significant intermediate correlations between surface sulfate aerosol concentration and respiratory, throat, and fever symptoms (R = 0.557, 0.454, and 0.470, respectively; p < 0.01). In the generalized estimating equation (logit link) analyses, a significant linear association of surface sulfate aerosol concentration, with an endpoint determined by reported respiratory symptom scores of two or more, was observed (P trend = 0.001, odds ratio [OR] of the highest quartile [Q4] vs. the lowest [Q1] = 5.31, 95% CI = 2.18 to 12.96), with adjustment for potential confounding. The surface sulfate aerosol concentration was also associated with throat and fever symptoms. In conclusion, our findings suggest that modeled data are potentially useful for predicting health risks of cross-border aerosol arrivals. Copyright © 2018 Elsevier Ltd.
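
    As a simplified illustration of the Q4-vs-Q1 comparison, a crude odds ratio can be computed from a 2x2 table as below. All counts are hypothetical, and the study's GEE analysis additionally adjusts for confounders and within-person correlation, which this sketch omits.

```python
def odds_ratio(events_hi, n_hi, events_lo, n_lo):
    """Crude odds ratio of symptom reporting, highest vs lowest quartile."""
    odds_hi = events_hi / (n_hi - events_hi)
    odds_lo = events_lo / (n_lo - events_lo)
    return odds_hi / odds_lo

# Hypothetical counts: 60/400 responses with symptoms in Q4, 15/400 in Q1.
or_q4_vs_q1 = odds_ratio(60, 400, 15, 400)
```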

  18. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300 °C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300 °C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  19. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action. To detect this potential, a company can utilize a bankruptcy prediction model, which can be built using machine-learning methods. However, the choice of machine-learning method must be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine-learning methods for bankruptcy prediction. The comparison of models based on several machine-learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression) shows that the fuzzy k-NN method achieves the best performance, with 77.5% accuracy.

  20. Implementing general gauge mediation

    International Nuclear Information System (INIS)

    Carpenter, Linda M.; Dine, Michael; Festuccia, Guido; Mason, John D.

    2009-01-01

    Recently there has been much progress in building models of gauge mediation, often with predictions different than those of minimal gauge mediation. Meade, Seiberg, and Shih have characterized the most general spectrum which can arise in gauge-mediated models. We discuss some of the challenges of building models of general gauge mediation, especially the problem of messenger parity and issues connected with R symmetry breaking and CP violation. We build a variety of viable, weakly coupled models which exhibit some or all of the possible low energy parameters.

  1. To predict the niche, model colonization and extinction

    Science.gov (United States)

    Yackulic, Charles B.; Nichols, James D.; Reid, Janice; Der, Ricky

    2015-01-01

    Ecologists frequently try to predict the future geographic distributions of species. Most studies assume that the current distribution of a species reflects its environmental requirements (i.e., the species' niche). However, the current distributions of many species are unlikely to be at equilibrium with the current distribution of environmental conditions, both because of ongoing invasions and because the distribution of suitable environmental conditions is always changing. This mismatch between the equilibrium assumptions inherent in many analyses and the disequilibrium conditions in the real world leads to inaccurate predictions of species' geographic distributions and suggests the need for theory and analytical tools that avoid equilibrium assumptions. Here, we develop a general theory of environmental associations during periods of transient dynamics. We show that time-invariant relationships between environmental conditions and rates of local colonization and extinction can produce substantial temporal variation in occupancy–environment relationships. We then estimate occupancy–environment relationships during three avian invasions. Changes in occupancy–environment relationships over time differ among species but are predicted by dynamic occupancy models. Since estimates of the occupancy–environment relationships themselves are frequently poor predictors of future occupancy patterns, research should increasingly focus on characterizing how rates of local colonization and extinction vary with environmental conditions.

  2. A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although several prediction techniques exist, selecting the most appropriate one is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in energy systems, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1, k) model. The traditional grey model's weakness of being sensitive to initial values can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used for demonstration of the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.
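
    As background, the classic GM(1,1) model that NGM(1,1, k) extends can be sketched in a few lines. This is the plain grey model, without the paper's linear grey-action term or self-memory coupling: accumulate the series, fit the development coefficient and grey input by least squares on the mean (background) sequence, then difference the fitted exponential.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey forecast of the next `steps` values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # x0_k = -a*z1_k + b
    def x1_hat(k):                            # fitted accumulated series
        return (x0[0] - b / a) * np.exp(-a * k) + b / a
    n = len(x0)
    return np.array([x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)])
```

    On an exactly exponential input the fit is near-perfect, which is why grey models suit the "approximately nonhomogeneous exponential" sequences the abstract mentions.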

  3. Development of Models to Predict the Redox State of Nuclear Waste Containment Glass

    Energy Technology Data Exchange (ETDEWEB)

    Pinet, O.; Guirat, R.; Advocat, T. [Commissariat a l' Energie Atomique (CEA), Departement de Traitement et de Conditionnement des Dechets, Marcoule, BP 71171, 30207 Bagnols-sur-Ceze Cedex (France); Phalippou, J. [Universite de Montpellier II, Laboratoire des Colloides, Verres et Nanomateriaux, 34095 Montpellier Cedex 5 (France)

    2008-07-01

    Vitrification is one of the recommended immobilization routes for nuclear waste, and is currently implemented at industrial scale in several countries, notably for high-level waste. To optimize nuclear waste vitrification, research is conducted to specify suitable glass formulations and develop more effective processes. This research is based not only on experiments at laboratory or technological scale, but also on computer models. Vitrified nuclear waste often contains several multi-valent species whose oxidation state can impact the properties of the melt and of the final glass; these include iron, cerium, ruthenium, manganese, chromium and nickel. CEA is therefore also developing models to predict the final glass redox state. Given the raw materials and production conditions, the model predicts the oxygen fugacity at equilibrium in the melt. It can also estimate the ratios between the oxidation states of the multi-valent species contained in the molten glass. The oxidizing or reductive nature of the atmosphere above the glass melt is also taken into account. Unlike the models used in the conventional glass industry, which are based on empirical methods with a limited range of application, the proposed models are based on the thermodynamic properties of the redox species contained in the waste vitrification feed stream. The thermodynamic data on which the model is based concern the relationship between the glass redox state and the oxygen fugacity in the molten glass. The model predictions were compared with oxygen fugacity measurements for some fifty glasses, in experiments carried out at laboratory scale and, at industrial scale, with a cold crucible melter. The oxygen fugacity of the glass samples was measured by electrochemical methods and compared with the predicted value. The differences between the predicted and measured oxygen fugacity values were generally less than 0.5 log unit. (authors)

  4. Dark Radiation predictions from general Large Volume Scenarios

    Science.gov (United States)

    Hebecker, Arthur; Mangat, Patrick; Rompineve, Fabrizio; Witkowski, Lukas T.

    2014-09-01

    Recent observations constrain the amount of Dark Radiation (ΔNeff) and may even hint towards a non-zero value of ΔNeff. It is by now well-known that this puts stringent constraints on the sequestered Large Volume Scenario (LVS), i.e. on LVS realisations with the Standard Model at a singularity. We go beyond this setting by considering LVS models where SM fields are realised on 7-branes in the geometric regime. As we argue, this naturally goes together with high-scale supersymmetry. The abundance of Dark Radiation is determined by the competition between the decay of the lightest modulus to axions, to the SM Higgs and to gauge fields, and leads to strict constraints on these models. Nevertheless, these constructions can in principle meet current DR bounds due to decays into gauge bosons alone. Further, a rather robust prediction for a substantial amount of Dark Radiation can be made. This applies both to cases where the SM 4-cycles are stabilised by D-terms and are small `by accident', i.e. tuning, as well as to fibred models with the small cycles stabilised by loops. In these constructions the DR axion and the QCD axion are the same field and we require a tuning of the initial misalignment to avoid Dark Matter overproduction. Furthermore, we analyse a closely related setting where the SM lives at a singularity but couples to the volume modulus through flavour branes. We conclude that some of the most natural LVS settings with natural values of model parameters lead to Dark Radiation predictions just below the present observational limits. Barring a discovery, rather modest improvements of present Dark Radiation bounds can rule out many of these most simple and generic variants of the LVS.
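
    The "competition between decays" can be made quantitative. At leading order, if the lightest modulus decays with branching fraction B_a into light axions and the remainder into Standard Model fields, the dark-radiation abundance commonly quoted in this literature is (neglecting g*-dependent corrections at the reheating temperature):

```latex
\Delta N_{\mathrm{eff}} \;\simeq\; \frac{43}{7}\,\frac{B_a}{1 - B_a}
```

    so suppressing ΔNeff requires the SM channels (Higgs, gauge fields) to dominate over the axionic one.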

  5. Risk predictive modelling for diabetes and cardiovascular disease.

    Science.gov (United States)

    Kengne, Andre Pascal; Masconi, Katya; Mbanya, Vivian Nchanchou; Lekoubou, Alain; Echouffo-Tcheugui, Justin Basile; Matsha, Tandi E

    2014-02-01

    Absolute risk models or clinical prediction models have been incorporated in guidelines, and are increasingly advocated as tools to assist risk stratification and guide prevention and treatment decisions relating to common health conditions such as cardiovascular disease (CVD) and diabetes mellitus. We have reviewed the historical development and principles of prediction research, including their statistical underpinning, as well as implications for routine practice, with a focus on predictive modelling for CVD and diabetes. Predictive modelling for CVD risk, which has developed over the last five decades, has been largely influenced by the Framingham Heart Study investigators, while it is only ∼20 years ago that similar efforts were started in the field of diabetes. Identification of predictive factors is an important preliminary step which provides the knowledge base on potential predictors to be tested for inclusion during the statistical derivation of the final model. The derived models must then be tested both on the development sample (internal validation) and on other populations in different settings (external validation). Updating procedures (e.g. recalibration) should be used to improve the performance of models that fail the tests of external validation. Ultimately, the effect of introducing validated models in routine practice on the process and outcomes of care as well as its cost-effectiveness should be tested in impact studies before wide dissemination of models beyond the research context. Several prediction models have been developed for CVD or diabetes, but very few have been externally validated or tested in impact studies, and their comparative performance has yet to be fully assessed. A shift of focus from developing new CVD or diabetes prediction models to validating the existing ones will improve their adoption in routine practice.
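
    The simplest of the updating procedures mentioned above, calibration-in-the-large, only shifts the model's intercept so that the mean predicted risk matches the observed event rate in the new population. A sketch, solving for the shift by bisection (slope recalibration and full re-estimation are stronger variants not shown here):

```python
import math

def recalibrate_intercept(probs, outcomes):
    """Find the logit shift delta so that mean(sigmoid(logit(p) + delta))
    equals the observed event rate ('calibration-in-the-large')."""
    logit = lambda p: math.log(p / (1 - p))
    sigmoid = lambda z: 1 / (1 + math.exp(-z))
    observed_rate = sum(outcomes) / len(outcomes)
    lo_d, hi_d = -10.0, 10.0
    for _ in range(60):                       # bisection: mean pred is monotone in delta
        mid = (lo_d + hi_d) / 2
        mean_pred = sum(sigmoid(logit(p) + mid) for p in probs) / len(probs)
        if mean_pred < observed_rate:
            lo_d = mid
        else:
            hi_d = mid
    return (lo_d + hi_d) / 2

# Toy external-validation data: the model under-predicts (mean 0.25 vs rate 0.75).
delta = recalibrate_intercept([0.1, 0.2, 0.3, 0.4], [0, 1, 1, 1])
```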

  6. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  7. The General Aggression Model

    NARCIS (Netherlands)

    Allen, Johnie J.; Anderson, Craig A.; Bushman, Brad J.

    The General Aggression Model (GAM) is a comprehensive, integrative framework for understanding aggression. It considers the role of social, cognitive, personality, developmental, and biological factors in aggression. Proximate processes of GAM detail how person and situation factors influence

  8. Predicting the Direction of Stock Market Index Movement Using an Optimized Artificial Neural Network Model.

    Directory of Open Access Journals (Sweden)

    Mingyue Qiu

    Full Text Available In the business sector, it has always been a difficult task to predict the exact daily price of the stock market index; hence, there is a great deal of research being conducted regarding the prediction of the direction of stock price index movement. Many factors such as political events, general economic conditions, and traders' expectations may have an influence on the stock market index. There are numerous research studies that use similar indicators to forecast the direction of the stock market index. In this study, we compare two basic types of input variables to predict the direction of the daily stock market index. The main contribution of this study is the ability to predict the direction of the next day's price of the Japanese stock market index by using an optimized artificial neural network (ANN) model. To improve the prediction accuracy of the trend of the stock market index in the future, we optimize the ANN model using genetic algorithms (GA). We demonstrate and verify the predictability of stock price direction by using the hybrid GA-ANN model and then compare the performance with prior studies. Empirical results show that the Type 2 input variables can generate a higher forecast accuracy and that it is possible to enhance the performance of the optimized ANN model by selecting input variables appropriately.

  9. Predicting the Direction of Stock Market Index Movement Using an Optimized Artificial Neural Network Model.

    Science.gov (United States)

    Qiu, Mingyue; Song, Yu

    2016-01-01

    In the business sector, it has always been a difficult task to predict the exact daily price of the stock market index; hence, there is a great deal of research being conducted regarding the prediction of the direction of stock price index movement. Many factors such as political events, general economic conditions, and traders' expectations may have an influence on the stock market index. There are numerous research studies that use similar indicators to forecast the direction of the stock market index. In this study, we compare two basic types of input variables to predict the direction of the daily stock market index. The main contribution of this study is the ability to predict the direction of the next day's price of the Japanese stock market index by using an optimized artificial neural network (ANN) model. To improve the prediction accuracy of the trend of the stock market index in the future, we optimize the ANN model using genetic algorithms (GA). We demonstrate and verify the predictability of stock price direction by using the hybrid GA-ANN model and then compare the performance with prior studies. Empirical results show that the Type 2 input variables can generate a higher forecast accuracy and that it is possible to enhance the performance of the optimized ANN model by selecting input variables appropriately.
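
    The abstract does not give the network architecture or GA settings, so the sketch below is only a minimal illustration of the hybrid GA-ANN idea: a small one-hidden-layer network whose weights are evolved by a real-coded genetic algorithm on synthetic direction-of-movement data (the network size, GA parameters, and data are all assumptions, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's indicator inputs and next-day
# up/down labels (the actual Nikkei data are not reproduced here).
X = rng.normal(size=(200, 4))
w_true = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ w_true + 0.3 * rng.normal(size=200) > 0).astype(float)

def predict(weights, X):
    """One-hidden-layer ANN: 4 inputs -> 3 tanh units -> sigmoid output."""
    W1 = weights[:12].reshape(4, 3)
    b1 = weights[12:15]
    W2 = weights[15:18]
    b2 = weights[18]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def accuracy(weights):
    return np.mean((predict(weights, X) > 0.5) == y)

# Real-coded GA over the 19 network weights: truncation selection,
# uniform crossover, Gaussian mutation, with parents kept (elitism).
pop = rng.normal(size=(40, 19))
for generation in range(60):
    fitness = np.array([accuracy(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[::-1][:20]]
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(19) < 0.5
        kids.append(np.where(mask, a, b) + 0.1 * rng.normal(size=19))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([accuracy(ind) for ind in pop])]
best_acc = accuracy(best)
print(f"in-sample direction accuracy after GA search: {best_acc:.2f}")
```

    Because the parents are carried over each generation, the best fitness is non-decreasing; in practice one would evaluate the evolved network on held-out data, as the study does with its test period.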

  10. A generic analytical foot rollover model for predicting translational ankle kinematics in gait simulation studies.

    Science.gov (United States)

    Ren, Lei; Howard, David; Ren, Luquan; Nester, Chris; Tian, Limei

    2010-01-19

    The objective of this paper is to develop an analytical framework for representing the ankle-foot kinematics by modelling the foot as a rollover rocker, which can not only be used as a generic tool for general gait simulation but also allows for case-specific modelling if required. Previously, the rollover models used in gait simulation have often been based on specific functions that have usually been of a simple form. In contrast, the analytical model described here is in a general form in which the effective foot rollover shape can be represented by any polar function rho=rho(phi). Furthermore, a normalized generic foot rollover model has been established based on a normative foot rollover shape dataset of 12 normal healthy subjects. To evaluate model accuracy, the predicted ankle motions and the centre of pressure (CoP) were compared with measurement data for both subject-specific and general cases. The results demonstrated that the ankle joint motions in both vertical and horizontal directions (relative RMSE approximately 10%) and the CoP (relative RMSE approximately 15% for most of the subjects) are accurately predicted over most of the stance phase (from 10% to 90% of stance). However, we found that the foot cannot be very accurately represented by a rollover model just after heel strike (HS) and just before toe off (TO), probably due to shear deformation of the foot plantar tissues (ankle motion can occur without any foot rotation). The proposed foot rollover model can be used in both inverse and forward dynamics gait simulation studies and may also find applications in rehabilitation engineering. Copyright 2009 Elsevier Ltd. All rights reserved.

  11. Survival prediction model for postoperative hepatocellular carcinoma patients.

    Science.gov (United States)

    Ren, Zhihui; He, Shasha; Fan, Xiaotang; He, Fangping; Sang, Wei; Bao, Yongxing; Ren, Weixin; Zhao, Jinming; Ji, Xuewen; Wen, Hao

    2017-09-01

    This study aimed to establish a predictive index (PI) model of the 5-year survival rate for patients with hepatocellular carcinoma (HCC) after radical resection and to evaluate its prediction sensitivity, specificity, and accuracy. Patients who underwent HCC surgical resection were enrolled and randomly divided into a prediction model group (101 patients) and a model evaluation group (100 patients). The Cox regression model was used for univariate and multivariate survival analysis. A PI model was established based on the multivariate analysis and a receiver operating characteristic (ROC) curve was drawn accordingly. The area under the ROC curve (AUROC) and the PI cutoff value were identified. Multiple Cox regression analysis of the prediction model group showed that neutrophil to lymphocyte ratio (NLR), histological grade (HG), microvascular invasion (MVI), positive resection margin (PRM), number of tumors (NT), and postoperative transcatheter arterial chemoembolization (TACE) treatment were the independent predictors of the 5-year survival rate for HCC patients. The model was PI = 0.377 × NLR + 0.554 × HG + 0.927 × PRM + 0.778 × MVI + 0.740 × NT - 0.831 × TACE. In the prediction model group, the AUROC was 0.832 and the PI cutoff value was 3.38. The sensitivity, specificity, and accuracy were 78.0%, 80%, and 79.2%, respectively. In the model evaluation group, the AUROC was 0.822, and the PI cutoff value corresponded well to that of the prediction model group, with sensitivity, specificity, and accuracy of 85.0%, 83.3%, and 84.0%, respectively. The PI model can quantify the mortality risk of hepatitis B-related HCC with high sensitivity, specificity, and accuracy.
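
    The reported PI formula can be evaluated directly. A minimal sketch, assuming the categorical covariates (HG, PRM, MVI, NT, TACE) are coded as 0/1 indicators and NLR is continuous; the abstract does not specify the coding, so this is an illustrative assumption.

```python
def predictive_index(nlr, hg, prm, mvi, nt, tace):
    """Reported PI formula; the 0/1 indicator coding of HG, PRM, MVI, NT
    and TACE is an assumption here, since the abstract does not specify it."""
    return (0.377 * nlr + 0.554 * hg + 0.927 * prm
            + 0.778 * mvi + 0.740 * nt - 0.831 * tace)

CUTOFF = 3.38   # reported PI cutoff from the ROC analysis

def high_risk(**covariates):
    """Classify a patient as high mortality risk if PI exceeds the cutoff."""
    return predictive_index(**covariates) > CUTOFF

# Hypothetical patient: NLR 4.0, high grade, negative margin, microvascular
# invasion present, multiple tumors, no postoperative TACE.
pi = predictive_index(nlr=4.0, hg=1, prm=0, mvi=1, nt=1, tace=0)
print(round(pi, 2), pi > CUTOFF)   # -> 3.58 True
```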

  12. Effective and Robust Generalized Predictive Speed Control of Induction Motor

    Directory of Open Access Journals (Sweden)

    Patxi Alkorta

    2013-01-01

    Full Text Available This paper presents and validates a new proposal for effective speed vector control of induction motors based on a linear Generalized Predictive Control (GPC) law. The presented GPC-PI cascade configuration simplifies the design with regard to the GPC-GPC cascade configuration, maintaining the advantages of the predictive control algorithm. The robust stability of the closed-loop system is demonstrated by the pole placement method for several typical cases of uncertainties in induction motors. The controller has been tested in several simulations and experiments and has been compared with Proportional Integral Derivative (PID) and Sliding Mode (SM) control schemes, obtaining outstanding results in speed tracking even in the presence of parameter uncertainties, unknown load disturbances, and measurement noise in the loop signals, suggesting its use in industrial applications.

  13. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Abstract Background: Prediction models for infertility treatment success have been presented for 25 years. There are scientific principles for designing and applying prediction models, which are also used to predict the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  14. Development of a Predictive Model for Induction Success of Labour

    Directory of Open Access Journals (Sweden)

    Cristina Pruenza

    2018-03-01

    Full Text Available Induction of the labour process is an extraordinarily common procedure used in some pregnancies. Obstetricians face the need to end a pregnancy, usually for medical reasons (maternal or fetal requirements) or, less frequently, for social ones (elective inductions for convenience). The success of the induction procedure is conditioned by a multitude of maternal and fetal variables that appear before or during pregnancy or the birth process, each with low predictive value. Failure of the induction process involves performing a caesarean section. This project arises from the clinical need to resolve a situation of uncertainty that occurs frequently in our clinical practice. Since the weight of the clinical variables is not adequately quantified, we consider it very valuable to know a priori the probability of a successful induction, in order to dismiss those inductions with a high probability of failure, avoiding unnecessary procedures or postponing the end of pregnancy where possible. We developed a predictive model of induced labour success as a support tool in clinical decision making. Improving the predictability of a successful induction is one of the current challenges of obstetrics because of the negative impact of failure. The identification of those patients with high chances of failure will allow us to offer them better care, improving their health outcomes (adverse perinatal outcomes for mother and newborn), costs (medication, hospitalization, qualified staff), and patient-perceived quality. Therefore, a Clinical Decision Support System was developed to give support to obstetricians. In this article, we propose a robust method to explore and model a source of clinical information with the purpose of obtaining all possible knowledge. Generally, in classification models it is difficult to know the contribution that each attribute provides to the model. We have worked in this direction to offer transparency to models that may be considered black boxes. The positive results obtained from both the

  15. Data-Based Predictive Control with Multirate Prediction Step

    Science.gov (United States)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes the current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with a multirate prediction step. One result is a reduced influence of the prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.

  16. Relativistic theory of gravitation and nonuniqueness of the predictions of general relativity theory

    International Nuclear Information System (INIS)

    Logunov, A.A.; Loskutov, Yu.M.

    1986-01-01

    It is shown that while the predictions of the relativistic theory of gravitation (RTG) for gravitational effects are unique and consistent with the available experimental data, the corresponding predictions of general relativity theory (GRT) are not unique. This nonuniqueness manifests itself in some effects at first order in the gravitational interaction constant and in others at second order. The absence in GRT of energy-momentum and angular momentum conservation laws for the matter and gravitational field taken together, and its inability to give uniquely determined predictions for gravitational phenomena, compel us to reject GRT as a physical theory.

  17. Prognosis of patients with whiplash-associated disorders consulting physiotherapy: development of a predictive model for recovery

    Directory of Open Access Journals (Sweden)

    Bohman Tony

    2012-12-01

    Full Text Available Abstract Background Patients with whiplash-associated disorders (WAD) have a generally favourable prognosis, yet some develop longstanding pain and disability. Predicting who will recover from WAD shortly after a traffic collision is very challenging for health care providers such as physical therapists. Therefore, we aimed to develop a prediction model for the recovery of WAD in a cohort of patients who consulted physical therapists within six weeks after the injury. Methods Our cohort included 680 adult patients with WAD who were injured in Saskatchewan, Canada, between 1997 and 1999. All patients had consulted a physical therapist as a result of the injury. Baseline prognostic factors were collected from an injury questionnaire administered by Saskatchewan Government Insurance. The outcome, global self-perceived recovery, was assessed by telephone interviews six weeks, three and six months later. Twenty-five possible baseline prognostic factors were considered in the analyses. A prediction model was built using Cox regression. The predictive ability of the model was estimated with concordance statistics (c-index). Internal validity was checked using bootstrapping. Results Our final prediction model included: age, number of days to reporting the collision, neck pain intensity, low back pain intensity, pain other than neck and back pain, headache before collision and recovery expectations. The model had an acceptable level of predictive ability with a c-index of 0.68 (95% CI: 0.65, 0.71). Internal validation showed that our model was robust and had a good fit. Conclusions We developed a model predicting recovery from WAD, in a cohort of patients who consulted physical therapists. Our model has adequate predictive ability. However, to be fully incorporated in clinical practice the model needs to be validated in other populations and tested in clinical settings.

  18. Prognosis of patients with whiplash-associated disorders consulting physiotherapy: development of a predictive model for recovery

    Science.gov (United States)

    2012-01-01

    Background Patients with whiplash-associated disorders (WAD) have a generally favourable prognosis, yet some develop longstanding pain and disability. Predicting who will recover from WAD shortly after a traffic collision is very challenging for health care providers such as physical therapists. Therefore, we aimed to develop a prediction model for the recovery of WAD in a cohort of patients who consulted physical therapists within six weeks after the injury. Methods Our cohort included 680 adult patients with WAD who were injured in Saskatchewan, Canada, between 1997 and 1999. All patients had consulted a physical therapist as a result of the injury. Baseline prognostic factors were collected from an injury questionnaire administered by Saskatchewan Government Insurance. The outcome, global self-perceived recovery, was assessed by telephone interviews six weeks, three and six months later. Twenty-five possible baseline prognostic factors were considered in the analyses. A prediction model was built using Cox regression. The predictive ability of the model was estimated with concordance statistics (c-index). Internal validity was checked using bootstrapping. Results Our final prediction model included: age, number of days to reporting the collision, neck pain intensity, low back pain intensity, pain other than neck and back pain, headache before collision and recovery expectations. The model had an acceptable level of predictive ability with a c-index of 0.68 (95% CI: 0.65, 0.71). Internal validation showed that our model was robust and had a good fit. Conclusions We developed a model predicting recovery from WAD, in a cohort of patients who consulted physical therapists. Our model has adequate predictive ability. However, to be fully incorporated in clinical practice the model needs to be validated in other populations and tested in clinical settings. PMID:23273330
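
    Both of these records report the model's predictive ability as a concordance statistic (c-index of 0.68). A minimal numpy sketch of Harrell's c-index on synthetic time-to-recovery data (not the Saskatchewan cohort), assuming for simplicity that all events are observed:

```python
import numpy as np

def concordance_index(time, score, event):
    """Harrell's c-index: among comparable pairs, the fraction in which the
    subject with the higher risk score experiences the event earlier."""
    n = len(time)
    concordant = 0.0
    permissible = 0
    for i in range(n):
        for j in range(i + 1, n):
            if time[i] == time[j]:
                continue
            first, second = (i, j) if time[i] < time[j] else (j, i)
            if not event[first]:       # earlier time censored: not comparable
                continue
            permissible += 1
            if score[first] > score[second]:
                concordant += 1.0
            elif score[first] == score[second]:
                concordant += 0.5      # ties in score count half
    return concordant / permissible

# Synthetic cohort: higher risk score -> shorter time to the outcome.
rng = np.random.default_rng(1)
risk = rng.normal(size=300)
time_to_event = np.exp(-risk + 0.8 * rng.normal(size=300))
event = np.ones(300, dtype=bool)       # all outcomes observed here

c = concordance_index(time_to_event, risk, event)
print(f"c-index: {c:.2f}")
```

    A c-index of 0.5 is chance-level discrimination and 1.0 is perfect ordering, which is why the paper describes 0.68 as acceptable rather than strong.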

  19. A General Mathematical Algorithm for Predicting the Course of Unfused Tetanic Contractions of Motor Units in Rat Muscle.

    Directory of Open Access Journals (Sweden)

    Rositsa Raikova

    Full Text Available An unfused tetanus of a motor unit (MU) evoked by a train of pulses at variable interpulse intervals is the sum of non-equal twitch-like responses to these stimuli. A tool for a precise prediction of these successive contractions for MUs of different physiological types with different contractile properties is crucial for modeling the whole muscle behavior during various types of activity. The aim of this paper is to develop such a general mathematical algorithm for the MUs of the medial gastrocnemius muscle of rats. For this purpose, tetanic curves recorded for 30 MUs (10 slow, 10 fast fatigue-resistant and 10 fast fatigable) were mathematically decomposed into twitch-like contractions. Each contraction was modeled by the previously proposed 6-parameter analytical function, and the analysis of these six parameters allowed us to develop a prediction algorithm based on the following input data: parameters of the initial twitch, the maximum force of a MU and the series of pulses. A linear relationship was found between the normalized amplitudes of the successive contractions and the remainder between the actual force levels at which the contraction started and the maximum tetanic force. The normalization was made according to the amplitude of the first decomposed twitch. However, the respective approximation lines had different specific angles with respect to the ordinate. These angles had different and non-overlapping ranges for slow and fast MUs. A sensitivity analysis concerning this slope was performed and the dependence between the angles and the maximal fused tetanic force normalized to the amplitude of the first contraction was approximated by a power function. The normalized MU contraction and half-relaxation times were approximated by linear functions depending on the normalized actual force levels at which each contraction starts. The normalization was made according to the contraction time of the first contraction. The actual force levels

  20. Generalized bi-additive modelling for categorical data

    NARCIS (Netherlands)

    P.J.F. Groenen (Patrick); A.J. Koning (Alex)

    2004-01-01

    Generalized linear modelling (GLM) is a versatile technique, which may be viewed as a generalization of well-known techniques such as least squares regression, analysis of variance, loglinear modelling, and logistic regression. In many applications, low-order interaction (such as

  1. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  2. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real-world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine is calculated in real time.
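
    The defining HSMM ingredient, explicit state-duration densities, can be illustrated without the paper's learning and inference machinery. A minimal sketch, assuming a three-state degradation chain with gamma-distributed sojourn times (the state names and distributions are illustrative, not the paper's), with a Monte Carlo estimate of the Remaining Useful Lifetime:

```python
import numpy as np

rng = np.random.default_rng(2)

# Explicit-duration (semi-Markov) degradation chain. Unlike a plain HMM,
# whose sojourn times are implicitly geometric, each state here has its
# own duration distribution. States: 0 healthy -> 1 degraded -> 2 failed.
DURATION_MEAN = {0: 50.0, 1: 20.0}     # assumed mean sojourn times

def sample_time_to_failure(start_state=0):
    """Sample one trajectory's total time until absorption in the failed
    state, with gamma-distributed sojourn times (shape 4, mean as set)."""
    t, s = 0.0, start_state
    while s < 2:
        t += rng.gamma(shape=4.0, scale=DURATION_MEAN[s] / 4.0)
        s += 1
    return t

# Monte Carlo RUL estimate given that the machine is currently observed
# in the degraded state, analogous to the prediction the fitted model makes.
samples = np.array([sample_time_to_failure(start_state=1)
                    for _ in range(5000)])
rul_mean = samples.mean()
print(f"estimated RUL from degraded state: {rul_mean:.1f} time units")
```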

  3. Model Predictive Controller Combined with LQG Controller and Velocity Feedback to Control the Stewart Platform

    DEFF Research Database (Denmark)

    Nadimi, Esmaeil Sharak; Bak, Thomas; Izadi-Zamanabadi, Roozbeh

    2006-01-01

    The main objective of this paper is to investigate the performance and applicability of two GPC (generalized predictive control) based control methods on a complete benchmark model of the Stewart platform made in MATLAB V6.5. The first method involves an LQG controller (Linear Quadratic Gaussian...

  4. Evaluation of DNA bending models in their capacity to predict electrophoretic migration anomalies of satellite DNA sequences.

    Science.gov (United States)

    Matyášek, Roman; Fulneček, Jaroslav; Kovařík, Aleš

    2013-09-01

    DNA containing a sequence that generates a local curvature exhibits a pronounced retardation in electrophoretic mobility. Various theoretical models have been proposed to explain the relationship between DNA structural features and migration anomaly. Here, we studied the capacity of 15 static wedge-bending models to predict the electrophoretic behavior of 69 satellite monomers derived from four divergent families. All monomers exhibited retarded mobility in PAGE, corresponding to retardation factors ranging from 1.02 to 1.54. The curvature varied both within and across the groups and correlated with the number, position, and lengths of A-tracts. Two dinucleotide models provided a strong correlation between gel mobility and predicted curvature; two trinucleotide models were satisfactory, while the remaining dinucleotide models provided intermediate results with reliable predictions for subsets of sequences only. In some cases, similarly shaped molecules exhibited relatively large differences in mobility and vice versa. Generally, less accurate predictions were obtained in groups containing less homogeneous sequences possessing distinct structural features. In conclusion, relatively universal theoretical models were identified that are suitable for the analysis of natural sequences known to harbor relatively moderate curvature. These models could potentially be applied to genome-wide studies. However, in silico predictions should be viewed in the context of experimental measurement of intrinsic DNA curvature. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Gstat: a program for geostatistical modelling, prediction and simulation

    Science.gov (United States)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
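
    The variogram modelling that gstat supports starts from an empirical semivariogram. A minimal numpy sketch of the classical (Matheron) estimator on a synthetic field; this is independent of gstat's own implementation, and the field, domain, and bins are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic spatially correlated field: kernel-smoothed white noise at
# random locations in a 100 x 100 domain.
n = 400
coords = rng.uniform(0, 100, size=(n, 2))
base = rng.normal(size=n)
dist0 = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
w0 = np.exp(-dist0 / 15.0)
z = w0 @ base / w0.sum(axis=1)

def empirical_variogram(coords, z, bins):
    """Classical (Matheron) estimator: for each distance bin, the mean of
    0.5 * (z_i - z_j)^2 over all point pairs whose separation falls in it."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)          # each pair counted once
    dists, semivar = d[iu], sq[iu]
    gamma = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (dists >= lo) & (dists < hi)
        gamma.append(semivar[mask].mean())
    return np.array(gamma)

bins = np.linspace(0, 60, 7)
gamma = empirical_variogram(coords, z, bins)
print(np.round(gamma, 4))   # semivariance grows with distance
```

    Fitting a parametric model (spherical, exponential, etc.) to such binned semivariances is the step gstat's interactive variogram modelling automates, before using the fitted model for kriging prediction or simulation.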

  6. A predictive modeling approach to increasing the economic effectiveness of disease management programs.

    Science.gov (United States)

    Bayerstadler, Andreas; Benstetter, Franz; Heumann, Christian; Winter, Fabian

    2014-09-01

    Predictive Modeling (PM) techniques are gaining importance in the worldwide health insurance business. Modern PM methods are used for customer relationship management, risk evaluation or medical management. This article illustrates a PM approach that enables the economic potential of (cost-) effective disease management programs (DMPs) to be fully exploited by optimized candidate selection as an example of successful data-driven business management. The approach is based on a Generalized Linear Model (GLM) that is easy to apply for health insurance companies. By means of a small portfolio from an emerging country, we show that our GLM approach is stable compared to more sophisticated regression techniques in spite of the difficult data environment. Additionally, we demonstrate for this example of a setting that our model can compete with the expensive solutions offered by professional PM vendors and outperforms non-predictive standard approaches for DMP selection commonly used in the market.
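
    The GLM-based candidate scoring described here can be sketched with a numpy-only logistic GLM fitted by iteratively reweighted least squares; the portfolio, covariates, and enrolment rule below are invented for illustration and are not the authors' data or model specification:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented insured portfolio: intercept, scaled age, prior cost, chronic flag.
n = 1000
X = np.column_stack([
    np.ones(n),
    rng.normal(55.0, 10.0, n) / 10.0,      # age / 10
    rng.lognormal(0.0, 1.0, n),            # prior-cost proxy
    rng.integers(0, 2, n).astype(float),   # chronic-condition indicator
])
beta_true = np.array([-6.0, 0.6, 0.3, 1.2])
p = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = (rng.random(n) < p).astype(float)      # 1 = would benefit from the DMP

def fit_logistic_glm(X, y, iters=25):
    """Logistic GLM fitted by iteratively reweighted least squares (IRLS),
    i.e. Fisher scoring, the standard fitting algorithm for GLMs."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu)                          # GLM working weights
        beta = beta + np.linalg.solve((X.T * W) @ X, X.T @ (y - mu))
    return beta

beta_hat = fit_logistic_glm(X, y)
scores = 1.0 / (1.0 + np.exp(-(X @ beta_hat)))
enrol = np.argsort(scores)[::-1][:100]     # top-scored candidates for the DMP
print(np.round(beta_hat, 2))
```

    Ranking members by fitted probability and enrolling from the top is the kind of optimized candidate selection the article contrasts with non-predictive standard approaches.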

  7. On the Use of Generalized Volume Scattering Models for the Improvement of General Polarimetric Model-Based Decomposition

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2017-01-01

    Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations in previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it only employs four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or the simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al. which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there

  8. The Traditional Model Does Not Explain Attitudes Toward Euthanasia: A Web-Based Survey of the General Public in Finland.

    Science.gov (United States)

    Terkamo-Moisio, Anja; Kvist, Tarja; Laitila, Teuvo; Kangasniemi, Mari; Ryynänen, Olli-Pekka; Pietilä, Anna-Maija

    2017-08-01

    The debate about euthanasia is ongoing in several countries, including Finland. However, there is a lack of information on current attitudes toward euthanasia among the general Finnish public. The traditional model for predicting individuals' attitudes to euthanasia is based on their age, gender, educational level, and religiosity. However, a new evaluation of religiosity is needed due to the limited operationalization of this factor in previous studies. This study explores the connections between the factors of the traditional model and attitudes toward euthanasia among the general public in the Finnish context. The Finnish public's attitudes toward euthanasia have become remarkably more positive over the last decade. Further research is needed on the factors that predict euthanasia attitudes. We suggest two different explanatory models for consideration: one that emphasizes the value of individual autonomy and another that approaches euthanasia from the perspective of fears of death or the process of dying.

  9. Spatial generalized linear mixed models of electric power outages due to hurricanes and ice storms

    International Nuclear Information System (INIS)

    Liu Haibin; Davidson, Rachel A.; Apanasovich, Tatiyana V.

    2008-01-01

    This paper presents new statistical models that predict the number of hurricane- and ice storm-related electric power outages likely to occur in each 3 km × 3 km grid cell in a region. The models are based on a large database of recent outages experienced by three major East Coast power companies in six hurricanes and eight ice storms. A spatial generalized linear mixed modeling (GLMM) approach was used in which spatial correlation is incorporated through random effects. Models were fitted using a composite likelihood approach and the covariance matrix was estimated empirically. A simulation study was conducted to test the model estimation procedure, and model training, validation, and testing were done to select the best models and assess their predictive power. The final hurricane model includes number of protective devices, maximum gust wind speed, hurricane indicator, and company indicator covariates. The final ice storm model includes number of protective devices, ice thickness, and ice storm indicator covariates. The models should be useful for power companies as they plan for future storms. The statistical modeling approach offers a new way to assess the reliability of electric power and other infrastructure systems in extreme events

  10. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Full Text Available Predictive methodologies for testing expected returns models are widely diffused in the international academic environment. However, these methods have not been used in Brazil in a systematic way. Generally, empirical studies conducted with Brazilian stock market data concentrate only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, 3-factor, and 4-factor models using a predictive methodology, considering two steps – temporal and cross-sectional regressions – with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model as compared to the 3-factor model, and the superiority of the 3-factor model as compared to the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital markets, but there is evidence of the value effect and of the relevance of the market factor for explaining expected returns. These findings raise some questions, mainly due to the originality of the methodology in the local market and the fact that this subject is still incipient and polemic in the Brazilian academic environment.
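
The two-step predictive methodology cited above (Fama and MacBeth 1973) runs one cross-sectional regression per period and then treats the time series of estimated coefficients as draws of the risk premium. A toy sketch with simulated loadings and returns; the first-pass time-series regressions that would produce the loadings are assumed already done.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 120, 25                      # months, assets

# Illustrative factor loadings (in practice, from first-pass regressions)
betas = rng.normal(1.0, 0.3, N)
lam_true = 0.6                      # assumed monthly risk premium (%)

# Second pass: one cross-sectional regression of returns on loadings per period
lams = np.empty(T)
for t in range(T):
    r_t = 0.2 + lam_true * betas + rng.normal(0.0, 1.0, N)
    X = np.column_stack([np.ones(N), betas])
    coef = np.linalg.lstsq(X, r_t, rcond=None)[0]
    lams[t] = coef[1]

# Fama-MacBeth estimate and standard error: mean and std-error over periods
lam_hat = lams.mean()
se = lams.std(ddof=1) / np.sqrt(T)
print(round(lam_hat, 3), round(se, 3))
```

The standard error comes from the dispersion of the period-by-period coefficients, which is what makes the approach robust to cross-sectional correlation.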

  11. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

    This paper presents a predictive control strategy based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. In the paper, the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon are described, and some commen...

  12. Consensus models to predict endocrine disruption for all ...

    Science.gov (United States)

    Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). The Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to demonstrate the use of predictive computational models on HTS data, including ToxCast and Tox21 assays, to prioritize a large chemical universe of 32464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte

  13. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborate...... principles, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  14. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
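
Model comparison in the study rests on the area under the ROC curve. AUC can be computed directly from the Mann-Whitney rank statistic; the sketch below is generic and not tied to the NHATS data.

```python
import numpy as np

def auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive is scored above a random negative (ties count 1/2)."""
    y_true = np.asarray(y_true, bool)
    s = np.asarray(scores, float)
    order = np.argsort(s)
    ranks = np.empty(len(s), float)
    ranks[order] = np.arange(1, len(s) + 1)
    for v in np.unique(s):          # midranks for tied scores
        m = s == v
        ranks[m] = ranks[m].mean()
    n_pos, n_neg = y_true.sum(), (~y_true).sum()
    u = ranks[y_true].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

y = [0, 0, 1, 1, 1, 0]
s = [0.1, 0.4, 0.35, 0.8, 0.9, 0.2]
print(auc(y, s))
```

Here 8 of the 9 positive/negative pairs are correctly ordered, so the AUC is 8/9.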

  15. Simple implementation of general dark energy models

    International Nuclear Information System (INIS)

    Bloomfield, Jolyon K.; Pearson, Jonathan A.

    2014-01-01

    We present a formalism for the numerical implementation of general theories of dark energy, combining the computational simplicity of the equation of state for perturbations approach with the generality of the effective field theory approach. An effective fluid description is employed, based on a general action describing single-scalar field models. The formalism is developed from first principles, and constructed keeping the goal of a simple implementation into CAMB in mind. Benefits of this approach include its straightforward implementation, the generality of the underlying theory, the fact that the evolved variables are physical quantities, and that model-independent phenomenological descriptions may be straightforwardly investigated. We hope this formulation will provide a powerful tool for the comparison of theoretical models of dark energy with observational data

  16. [Treatment of cloud radiative effects in general circulation models

    International Nuclear Information System (INIS)

    Wang, W.C.

    1993-01-01

    This is a renewal proposal for an on-going project of the Department of Energy (DOE)/Atmospheric Radiation Measurement (ARM) Program. The objective of the ARM Program is to improve the treatment of radiation-cloud interactions in GCMs so that reliable predictions of the timing and magnitude of greenhouse gas-induced global warming and regional responses can be made. The ARM Program supports two research areas: (I) The modeling and analysis of data related to the parameterization of clouds and radiation in general circulation models (GCMs); and (II) the development of advanced instrumentation for both mapping the three-dimensional structure of the atmosphere and high accuracy/precision radiometric observations. The present project conducts research in area (I) and focuses on GCM treatment of cloud life cycle, optical properties, and vertical overlapping. The project has two tasks: (1) Development and Refinement of GCM Radiation-Cloud Treatment Using ARM Data; and (2) Validation of GCM Radiation-Cloud Treatment.

  17. Prostate-specific antigen and long-term prediction of prostate cancer incidence and mortality in the general population

    DEFF Research Database (Denmark)

    Ørsted, David Dynnes; Nordestgaard, Børge G; Jensen, Gorm B

    2012-01-01

    It is largely unknown whether prostate-specific antigen (PSA) level at first date of testing predicts long-term risk of prostate cancer (PCa) incidence and mortality in the general population.

  18. Preclinical models used for immunogenicity prediction of therapeutic proteins.

    Science.gov (United States)

    Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim

    2013-07-01

    All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and in rare cases to serious and sometimes life threatening side-effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity. For this, immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro and in vivo models are used to predict immunogenicity of drug leads, to modify potentially immunogenic properties and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited/insufficient knowledge on the immune mechanisms underlying immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize aspects of immunogenicity that these models predict and explore the merits and the limitations of each of the models.

  19. Python tools for rapid development, calibration, and analysis of generalized groundwater-flow models

    Science.gov (United States)

    Starn, J. J.; Belitz, K.

    2014-12-01

    National-scale water-quality data sets for the United States have been available for several decades; however, groundwater models to interpret these data are available for only a small percentage of the country. Generalized models may be adequate to explain and project groundwater-quality trends at the national scale by using regional scale models (defined as watersheds at or between the HUC-6 and HUC-8 levels). Coast-to-coast data such as the National Hydrologic Dataset Plus (NHD+) make it possible to extract the basic building blocks for a model anywhere in the country. IPython notebooks have been developed to automate the creation of generalized groundwater-flow models from the NHD+. The notebook format allows rapid testing of methods for model creation, calibration, and analysis. Capabilities within the Python ecosystem greatly speed up the development and testing of algorithms. GeoPandas is used for very efficient geospatial processing. Raster processing includes the Geospatial Data Abstraction Library and image processing tools. Model creation is made possible through Flopy, a versatile input and output writer for several MODFLOW-based flow and transport model codes. Interpolation, integration, and map plotting included in the standard Python tool stack also are used, making the notebook a comprehensive platform within which to build and evaluate general models. Models with alternative boundary conditions, number of layers, and cell spacing can be tested against one another and evaluated by using water-quality data. Novel calibration criteria were developed by comparing modeled heads to land-surface and surface-water elevations. Information, such as predicted age distributions, can be extracted from general models and tested for its ability to explain water-quality trends. Groundwater ages then can be correlated with horizontal and vertical hydrologic position, a relation that can be used for statistical assessment of likely groundwater-quality conditions.

  20. A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. As for the approximate nonhomogeneous exponential data sequence often emerging in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward in order to improve predictive performance. It achieves organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1, k) model. The traditional grey model's weakness of being sensitive to the initial value can be overcome by the self-memory principle. In this study, total energy, coal, and electricity consumption of China is adopted for demonstration by using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with the results from the literature. Its excellent prediction performance lies in that the proposed coupling model can take full advantage of the systematic multitime historical data and catch the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174
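
The NGM(1,1,k) model extends the classical GM(1,1) grey model with a nonhomogeneous grey input term and, in the paper, a self-memory coupling; neither extension is reproduced here. As background, a minimal GM(1,1) fit-and-forecast on a toy consumption series:

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Classical GM(1,1) grey model: fit on sequence x0, forecast `steps` ahead."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                          # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])               # mean generating sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
    return np.concatenate([[x0[0]], x0_hat])    # fitted values then forecasts

# Toy near-exponential "energy consumption" series (illustrative numbers)
x0 = [100.0, 108.0, 116.5, 125.9, 136.0]
print(np.round(gm11_forecast(x0, 2), 1))
```

Because the toy series grows almost exactly exponentially, the fitted values track the data closely and the forecast continues the growth; the self-memory coupling in the paper is aimed at sequences where this simple exponential assumption breaks down.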

  1. A general science-based framework for dynamical spatio-temporal models

    Science.gov (United States)

    Wikle, C.K.; Hooten, M.B.

    2010-01-01

    Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially-explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal to some extent with this issue by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been in the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case. We then develop a general nonlinear spatio-temporal framework that we call general quadratic

  2. Evaluating the predictive abilities of community occupancy models using AUC while accounting for imperfect detection

    Science.gov (United States)

    Zipkin, Elise F.; Grant, Evan H. Campbell; Fagan, William F.

    2012-01-01

    The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multi-species hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions of species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation dataset. We found that wetland hydroperiod (the length of time that a wetland holds water) as well as the occurrence state in the prior year were generally the most important factors in determining occupancy. The model with only habitat covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. Our results demonstrate the utility of multi-species models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for

  3. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using...... a wind speed distribution whose parameters are known or estimated, the parameters are considered as random, with variations that follow probability distributions. A Bayesian predictive model for the Rayleigh distribution, which has only a single scale parameter, has been proposed. Also, closed-form posterior...... and predictive inferences under different reasonable choices of prior distribution in sensitivity analysis have been presented....
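
The closed-form posterior and predictive results are derived in the paper; the sketch below instead approximates a Rayleigh posterior predictive by simulation, assuming a conjugate inverse-gamma prior on the squared scale. Hyperparameters and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated observed wind speeds (m/s), assumed Rayleigh with unknown scale
obs = rng.rayleigh(scale=8.0, size=200)

# Conjugate inverse-gamma prior on theta = sigma^2 (illustrative hyperparameters);
# Rayleigh likelihood is prop. to theta^-n * exp(-sum(x^2) / (2 theta))
a0, b0 = 2.0, 50.0
a_post = a0 + len(obs)
b_post = b0 + 0.5 * np.sum(obs ** 2)

# Posterior predictive by simulation: draw theta, then a Rayleigh wind speed
theta = 1.0 / rng.gamma(a_post, 1.0 / b_post, size=100_000)
pred = rng.rayleigh(scale=np.sqrt(theta))
print(round(pred.mean(), 2))
```

Averaging over the posterior of the scale, rather than plugging in a point estimate, is what lets the predictive distribution carry the parameter uncertainty the abstract emphasizes.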

  4. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
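
A common way to simulate the SDE models described above is the Euler-Maruyama scheme. The one-compartment elimination model with multiplicative noise below is an illustrative choice, not the paper's PK/PD model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama simulation of dX = -k*X dt + sigma*X dW:
# deterministic first-order elimination plus random system noise
k, sigma, x0 = 0.5, 0.1, 100.0
T, n_steps, paths = 10.0, 1000, 200
dt = T / n_steps

X = np.full(paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), paths)    # Brownian increments
    X = X + (-k * X) * dt + sigma * X * dW

print(round(X.mean(), 2))
```

The ensemble mean stays near the ODE solution x0*exp(-k*T), but individual paths spread around it; that spread is the model uncertainty an ODE setup cannot represent.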

  5. Privacy-Preserving Evaluation of Generalization Error and Its Application to Model and Attribute Selection

    Science.gov (United States)

    Sakuma, Jun; Wright, Rebecca N.

    Privacy-preserving classification is the task of learning or training a classifier on the union of privately distributed datasets without sharing the datasets. The emphasis of existing studies in privacy-preserving classification has primarily been put on the design of privacy-preserving versions of particular data mining algorithms, However, in classification problems, preprocessing and postprocessing— such as model selection or attribute selection—play a prominent role in achieving higher classification accuracy. In this paper, we show generalization error of classifiers in privacy-preserving classification can be securely evaluated without sharing prediction results. Our main technical contribution is a new generalized Hamming distance protocol that is universally applicable to preprocessing and postprocessing of various privacy-preserving classification problems, such as model selection in support vector machine and attribute selection in naive Bayes classification.

  6. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of the model for the current time provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for prediction of solar radiation is proposed. The framework started with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying pattern, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. Hence a procedure for pattern identification is developed to identify the proper pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction result of the proposed framework is then compared to other techniques. It is shown that the proposed framework provides superior performance as compared to others.
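
The framework's stages (segment, cluster, per-cluster model) can be sketched on synthetic data. Here the "model" for each cluster is simply its mean profile, and the two-regime series, window length, and 2-means clustering are all illustrative choices, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "solar radiation" series alternating between a clear-sky regime and a
# dimmer cloudy regime (period 100 samples, regime switch every 400)
t = np.arange(2000)
clear = np.maximum(np.sin(2 * np.pi * t / 100), 0)
scale = np.where((t // 400) % 2 == 0, 1.0, 0.4)
series = scale * clear + rng.normal(0, 0.02, t.size)

# 1) segment the series into one-period subsequences
w = 100
segs = series.reshape(-1, w)

# 2) cluster subsequences with a tiny 2-means (seeded with two far-apart segments)
cent = np.stack([segs[0], segs[np.argmax(((segs - segs[0]) ** 2).sum(1))]])
for _ in range(10):
    lab = np.argmin(((segs[:, None, :] - cent) ** 2).sum(-1), axis=1)
    cent = np.stack([segs[lab == j].mean(0) if (lab == j).any() else cent[j]
                     for j in range(2)])

# 3) per-cluster "model": predict each subsequence by its cluster's mean profile
rmse = np.sqrt(((segs - cent[lab]) ** 2).mean())
print(round(rmse, 3))
```

Because each regime has a stable profile, the per-cluster mean predicts far better than a single global mean would; the paper trains a proper forecasting model per cluster instead.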

  7. Revised predictive equations for salt intrusion modelling in estuaries

    NARCIS (Netherlands)

    Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.

    2015-01-01

    For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient $K$ and the dispersion

  8. Life Prediction Model for Grid-Connected Li-ion Battery Energy Storage System: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Kandler A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Saxon, Aron R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Keyser, Matthew A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Lundstrom, Blake R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cao, Ziwei [SunPower Corporation]; Roc, Albert [SunPower Corporation]

    2017-08-25

    Lithium-ion (Li-ion) batteries are being deployed on the electrical grid for a variety of purposes, such as to smooth fluctuations in solar renewable power generation. The lifetime of these batteries will vary depending on their thermal environment and how they are charged and discharged. Optimal utilization of a battery over its lifetime requires characterization of its performance degradation under different storage and cycling conditions. Aging tests were conducted on commercial graphite/nickel-manganese-cobalt (NMC) Li-ion cells. A general lifetime prognostic model framework is applied to model changes in capacity and resistance as the battery degrades. Across 9 aging test conditions from 0°C to 55°C, the model predicts capacity fade with 1.4 percent RMS error and resistance growth with 15 percent RMS error. The model, recast in state variable form with 8 states representing separate fade mechanisms, is used to extrapolate lifetime for example applications of the energy storage system integrated with renewable photovoltaic (PV) power generation.
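
The NREL model tracks 8 degradation states; as a much simpler illustration of the same idea (separating calendar fade, often proportional to the square root of time, from cycling fade), a two-term model can be fitted to toy aging data by least squares. All coefficients and data here are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy aging data: relative capacity vs calendar time (days) and cycle count
days = rng.uniform(0, 1000, 60)
cycles = rng.uniform(0, 2000, 60)
a_true, b_true = 0.002, 0.00003          # assumed fade coefficients
cap = 1.0 - a_true * np.sqrt(days) - b_true * cycles + rng.normal(0, 0.002, 60)

# Fit the two fade coefficients by linear least squares on the capacity loss
A = np.column_stack([np.sqrt(days), cycles])
coef = np.linalg.lstsq(A, 1.0 - cap, rcond=None)[0]

# Extrapolate capacity after 10 years of daily cycling
q10 = 1.0 - coef[0] * np.sqrt(3650) - coef[1] * 3650
print(round(q10, 3))
```

Fitting separate mechanisms and then extrapolating under an assumed duty cycle mirrors, in miniature, how the full model projects lifetime for a grid-connected PV application.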

  9. Preprocedural Prediction Model for Contrast-Induced Nephropathy Patients.

    Science.gov (United States)

    Yin, Wen-Jun; Yi, Yi-Hu; Guan, Xiao-Feng; Zhou, Ling-Yun; Wang, Jiang-Lin; Li, Dai-Yang; Zuo, Xiao-Cong

    2017-02-03

    Several models have been developed for prediction of contrast-induced nephropathy (CIN); however, they only contain patients receiving intra-arterial contrast media for coronary angiographic procedures, which represent a small proportion of all contrast procedures. In addition, most of them evaluate radiological interventional procedure-related variables. It is therefore necessary to develop a model for prediction of CIN before radiological procedures among patients administered contrast media. A total of 8800 patients undergoing contrast administration were randomly assigned in a 4:1 ratio to development and validation data sets. CIN was defined as an increase of 25% and/or 0.5 mg/dL in serum creatinine within 72 hours above the baseline value. Preprocedural clinical variables were used to develop the prediction model from the training data set by the machine learning method of random forest, and 5-fold cross-validation was used to evaluate the prediction accuracy of the model. Finally, we tested this model in the validation data set. The incidence of CIN was 13.38%. We built a prediction model with 13 preprocedural variables selected from 83 variables. The model obtained an area under the receiver-operating characteristic (ROC) curve (AUC) of 0.907 and gave a prediction accuracy of 80.8%, sensitivity of 82.7%, specificity of 78.8%, and Matthews correlation coefficient of 61.5%. For the first time, 3 new factors are included in the model: the decreased sodium concentration, the INR value, and the preprocedural glucose level. The newly established model shows excellent predictive ability for CIN development and thereby provides preventative measures for CIN. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
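
The reported sensitivity, specificity, and Matthews correlation coefficient all derive from the binary confusion matrix. A generic helper, not tied to the CIN data:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity and Matthews correlation coefficient."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return acc, sens, spec, mcc

# Tiny worked example (2 true positives, 1 false negative, 1 false positive)
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
print([round(v, 3) for v in binary_metrics(y_true, y_pred)])
```

MCC is often preferred over accuracy for imbalanced outcomes like the 13.38% CIN incidence, because it accounts for all four confusion-matrix cells at once.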

  10. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop the no-show models, using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined based on minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effectiveness of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the receiver operating characteristic curve gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
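
A no-show model of the kind described, with past appointment statuses as time-dependent predictors, can be sketched as a logistic regression. The features, coefficients, and data below are simulated, and plain gradient ascent stands in for a statistics package.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: each row holds a patient's last three appointment statuses
# (1 = no-show), the time-dependent component, plus a normalized lead time
n = 8000
past = rng.binomial(1, 0.2, (n, 3)).astype(float)
lead = rng.uniform(0.0, 1.0, n)
X = np.column_stack([np.ones(n), past, lead])
true_w = np.array([-2.0, 1.2, 0.8, 0.5, 1.0])     # assumed coefficients
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_w))))

# Fit logistic regression by plain gradient ascent on the log-likelihood
w = np.zeros(X.shape[1])
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += 0.5 * X.T @ (y - p) / n

print(np.round(w, 1))
```

The paper's 26 models differ in how many past statuses are available per patient; in this sketch that would mean varying the number of `past` columns.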

  11. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  12. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  13. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  14. Prognosis of patients with whiplash-associated disorders consulting physiotherapy: development of a predictive model for recovery

    OpenAIRE

    Bohman, Tony; Côté, Pierre; Boyle, Eleanor; Cassidy, J David; Carroll, Linda J; Skillgate, Eva

    2012-01-01

    Abstract Background Patients with whiplash-associated disorders (WAD) have a generally favourable prognosis, yet some develop longstanding pain and disability. Predicting who will recover from WAD shortly after a traffic collision is very challenging for health care providers such as physical therapists. Therefore, we aimed to develop a prediction model for the recovery of WAD in a cohort of patients who consulted physical therapists within six weeks after the injury. Methods Our cohort inclu...

  15. Nostradamus 2014 prediction, modeling and analysis of complex systems

    CERN Document Server

    Suganthan, Ponnuthurai; Chen, Guanrong; Snasel, Vaclav; Abraham, Ajith; Rössler, Otto

    2014-01-01

    The prediction of the behavior of complex systems, and the analysis and modeling of their structure, is a vitally important problem in engineering, economy and generally in science today. Examples of such systems can be seen in the world around us (including our bodies) and of course in almost every scientific discipline, including such “exotic” domains as the earth’s atmosphere, turbulent fluids, economics (exchange rates and stock markets), population growth, physics (control of plasma), information flow in social networks and its dynamics, chemistry and complex networks. To understand such complex dynamics, which often exhibit strange behavior, and to use them in research or industrial applications, it is paramount to create models of them. For this purpose there exists a rich spectrum of methods, from classical ones such as ARMA models or the Box-Jenkins method to modern ones like evolutionary computation, neural networks, fuzzy logic, geometry and deterministic chaos, amongst others. This proceedings book is a collection of accepted ...

  16. Seasonal prediction of East Asian summer rainfall using a multi-model ensemble system

    Science.gov (United States)

    Ahn, Joong-Bae; Lee, Doo-Young; Yoo, Jin‑Ho

    2015-04-01

    Using the retrospective forecasts of seven state-of-the-art coupled models and their multi-model ensemble (MME) for boreal summers, the prediction skills of climate models in the western tropical Pacific (WTP) and East Asian region are assessed. The prediction of summer rainfall anomalies in East Asia is difficult, while the WTP has a strong correlation between model prediction and observation. We focus on developing a new approach to further enhance the seasonal prediction skill for summer rainfall in East Asia and investigate the influence of convective activity in the WTP on East Asian summer rainfall. By analyzing the characteristics of the WTP convection, two distinct patterns associated with El Niño-Southern Oscillation developing and decaying modes are identified. Based on the multiple linear regression method, the East Asia Rainfall Index (EARI) is developed by using the interannual variability of the normalized Maritime continent-WTP Indices (MPIs), as potentially useful predictors for rainfall prediction over East Asia, obtained from the above two main patterns. For East Asian summer rainfall, the EARI has superior performance to the East Asia summer monsoon index or each MPI. Therefore, the regressed rainfall from EARI also shows a strong relationship with the observed East Asian summer rainfall pattern. In addition, we evaluate the prediction skill of the East Asia reconstructed rainfall obtained by hybrid dynamical-statistical approach using the cross-validated EARI from the individual models and their MME. The results show that the rainfalls reconstructed from simulations capture the general features of observed precipitation in East Asia quite well. This study convincingly demonstrates that rainfall prediction skill is considerably improved by using a hybrid dynamical-statistical approach compared to the dynamical forecast alone. Acknowledgements This work was carried out with the support of Rural Development Administration Cooperative Research

  17. A computational model that predicts behavioral sensitivity to intracortical microstimulation

    Science.gov (United States)

    Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.

    2017-02-01

    Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber’s law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.

  18. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  19. A general one-dimension nonlinear magneto-elastic coupled constitutive model for magnetostrictive materials

    International Nuclear Information System (INIS)

    Zhang, Da-Guang; Li, Meng-Han; Zhou, Hao-Miao

    2015-01-01

    For magnetostrictive rods under combined axial pre-stress and magnetic field, a general one-dimension nonlinear magneto-elastic coupled constitutive model was built in this paper. First, the elastic Gibbs free energy was expanded into a polynomial, and the stress–strain and magnetization–magnetic-field relations in polynomial form were obtained with the help of thermodynamic relations. Then, according to the microscopic magneto-elastic coupling mechanism and some physical facts of magnetostrictive materials, a nonlinear magneto-elastic constitutive model with a concise form was obtained by replacing the nonlinear strain and magnetization relations in the polynomial constitutive model with transcendental functions. Comparisons between the predictions and the experimental data of different magnetostrictive materials, such as Terfenol-D, Metglas and Ni, showed that the predicted magnetostrictive strain and magnetization curves were consistent with experimental results under different pre-stresses, whether in the region of low and moderate field or high field. Moreover, the model fully reflects the nonlinear magneto-mechanical coupling characteristics between magnetism, magnetostriction and elasticity, and it can effectively predict the changes of material parameters with pre-stress and bias field, which is useful in practical applications.

  20. Coal demand prediction based on a support vector machine model

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Cun-liang; Wu, Hai-shan; Gong, Dun-wei [China University of Mining & Technology, Xuzhou (China). School of Information and Electronic Engineering

    2007-01-15

    A forecasting model for the coal demand of China using support vector regression was constructed. With the selected embedding dimension, the output vectors and input vectors were constructed based on the coal demand of China from 1980 to 2002. After comparison with the linear kernel and the sigmoid kernel, a radial basis function (RBF) was adopted as the kernel function. By analyzing the relationship between the prediction error margin and the model parameters, the proper parameters were chosen. A support vector machine (SVM) model with multiple inputs and a single output was proposed. Compared with an RBF neural network predictor on test datasets, the results show that the SVM predictor has higher precision and greater generalization ability. In the end, the coal demand from 2003 to 2006 is accurately forecasted. 10 refs., 2 figs., 4 tabs.
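
    The pipeline the abstract describes (delay embedding of the series, then support vector regression with an RBF kernel) can be sketched with scikit-learn. The demand values, the embedding dimension of 3 and the hyperparameters C, gamma and epsilon below are illustrative placeholders, not the paper's figures:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in for the 1980-2002 coal-demand series (the paper's
# actual figures are not reproduced here).
demand = np.array([6.2, 6.6, 6.9, 7.4, 7.9, 8.5, 8.9, 9.3, 9.8, 10.4,
                   10.8, 11.1, 11.2, 11.5, 12.0, 12.9, 13.2, 13.0, 12.5,
                   12.8, 13.1, 13.8, 14.5])

# Build input/output vectors with an assumed embedding dimension of 3:
# predict x[t] from (x[t-3], x[t-2], x[t-1]).
dim = 3
X = np.array([demand[i:i + dim] for i in range(len(demand) - dim)])
y = demand[dim:]

# RBF kernel, as selected in the abstract after comparison with the
# linear and sigmoid kernels; C, gamma and epsilon are illustrative.
model = SVR(kernel="rbf", C=100.0, gamma=0.1, epsilon=0.01)
model.fit(X, y)

# One-step-ahead forecast from the last embedding window.
next_value = model.predict(demand[-dim:].reshape(1, -1))[0]
print(round(float(next_value), 2))
```

    Extending the forecast to 2003-2006, as the paper does, would mean feeding each prediction back into the embedding window for the next step.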

  1. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  2. General parenting, anti-smoking socialization and smoking onset

    NARCIS (Netherlands)

    Otten, R.; Engels, R.C.M.E.; Eijnden, R.J.J.M. van den

    2008-01-01

    A theoretical model was tested in which general parenting and parental smoking predicted anti-smoking socialization, which in turn predicted adolescent smoking onset. Participants were 4351 Dutch adolescents between 13 and 15 years of age. In the model, strictness and psychological autonomy granting

  3. Actuarial statistics with generalized linear mixed models

    NARCIS (Netherlands)

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  4. Seasonal climate prediction for North Eurasia

    International Nuclear Information System (INIS)

    Kryjov, Vladimir N

    2012-01-01

    An overview of the current status of the operational seasonal climate prediction for North Eurasia is presented. It is shown that the performance of existing climate models is rather poor in seasonal prediction for North Eurasia. Multi-model ensemble forecasts are more reliable than single-model ones; however, for North Eurasia they tend to be close to climatological ones. Application of downscaling methods may improve predictions for some locations (or regions). However, general improvement of the reliability of seasonal forecasts for North Eurasia requires improvement of the climate prediction models. (letter)

  5. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  6. Predictive models for fish assemblages in eastern USA streams: implications for assessing biodiversity

    Science.gov (United States)

    Meador, Michael R.; Carlisle, Daren M.

    2009-01-01

    Management and conservation of aquatic systems require the ability to assess biological conditions and identify changes in biodiversity. Predictive models for fish assemblages were constructed to assess biological condition and changes in biodiversity for streams sampled in the eastern United States as part of the U.S. Geological Survey's National Water Quality Assessment Program. Separate predictive models were developed for northern and southern regions. Reference sites were designated using land cover and local professional judgment. Taxonomic completeness was quantified as the ratio of the number of expected native fish species that were actually observed to the number of expected native fish species. Models for both regions accurately predicted fish species composition at reference sites with relatively high precision and low bias. In general, species that occurred less frequently than expected (decreasers) tended to prefer riffle areas and larger substrates, such as gravel and cobble, whereas increaser species (occurring more frequently than expected) tended to prefer pools, backwater areas, and vegetated and sand substrates. In the north, the percentage of species identified as increasers and the percentage identified as decreasers were equal, whereas in the south nearly two-thirds of the species examined were identified as decreasers. Predictive models of fish species can provide a standardized indicator for consistent assessments of biological condition at varying spatial scales and critical information for an improved understanding of fish species that are potentially at risk of loss with changing water quality conditions.

  7. Modeling and Predicting the Daily Equatorial Plasma Bubble Activity Using the TIEGCM

    Science.gov (United States)

    Carter, B. A.; Retterer, J. M.; Yizengaw, E.; Wiens, K. C.; Wing, S.; Groves, K. M.; Caton, R. G.; Bridgwood, C.; Francis, M. J.; Terkildsen, M. B.; Norman, R.; Zhang, K.

    2014-12-01

    Describing and understanding the daily variability of Equatorial Plasma Bubble (EPB) occurrence has remained a significant challenge over recent decades. In this study we use the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIEGCM), which is driven by solar (F10.7) and geomagnetic (Kp) activity indices, to study daily variations of the linear Rayleigh-Taylor (R-T) instability growth rate in relation to the measured scintillation strength at five longitudinally distributed stations. For locations characterized by generally favorable conditions for EPB growth (i.e., within the scintillation season for that location) we find that the TIEGCM is capable of identifying days when EPB development, determined from the calculated R-T growth rate, is suppressed as a result of geomagnetic activity. Both observed and modeled upward plasma drift indicate that the pre-reversal enhancement scales linearly with Kp from several hours prior, from which it is concluded that even small Kp changes cause significant variations in daily EPB growth. This control of Kp variations on EPB growth prompted an investigation into the use of predicted Kp values from the Wing Kp model over a 2-month equinoctial campaign in 2014. It is found that both the 1-hr and 4-hr predicted Kp values can be reliably used as inputs into the TIEGCM to forecast the EPB growth conditions during scintillation season, when daily EPB variability is governed by the suppression of EPBs on days with increased, but not necessarily high, geomagnetic activity.

  8. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  9. Development of a risk-prediction model for Middle East respiratory syndrome coronavirus infection in dialysis patients.

    Science.gov (United States)

    Ahmed, Anwar E; Alshukairi, Abeer N; Al-Jahdali, Hamdan; Alaqeel, Mody; Siddiq, Salma S; Alsaab, Hanan A; Sakr, Ezzeldin A; Alyahya, Hamed A; Alandonisi, Munzir M; Subedar, Alaa T; Aloudah, Nouf M; Baharoon, Salim; Alsalamah, Majid A; Al Johani, Sameera; Alghamdi, Mohammed G

    2018-04-14

    Introduction: The Middle East respiratory syndrome coronavirus (MERS-CoV) infection can cause transmission clusters and high mortality in hemodialysis facilities. We attempted to develop a risk-prediction model to assess the early risk of MERS-CoV infection in dialysis patients. Methods: This two-center retrospective cohort study included 104 dialysis patients who were suspected of MERS-CoV infection and diagnosed with rRT-PCR between September 2012 and June 2016 at King Fahd General Hospital in Jeddah and King Abdulaziz Medical City in Riyadh. We retrieved data on demographic, clinical, and radiological findings, and laboratory indices of each patient. Findings: A risk-prediction model to assess early risk for MERS-CoV in dialysis patients has been developed. Independent predictors of MERS-CoV infection were identified, including chest pain (OR = 24.194; P = 0.011), leukopenia (OR = 6.080; P = 0.049), and elevated aspartate aminotransferase (AST) (OR = 11.179; P = 0.013). The adequacy of this prediction model was good (P = 0.728), with a high predictive utility (area under curve [AUC] = 76.99%; 95% CI: 67.05% to 86.38%). The prediction of the model had an optimism-corrected bootstrap resampling AUC of 71.79%. The Youden index yielded a value of 0.439 or greater as the best cut-off for high risk of MERS infection. Discussion: This risk-prediction model in dialysis patients appears to depend markedly on chest pain, leukopenia, and elevated AST. The model accurately predicts the high risk of MERS-CoV infection in dialysis patients. This could be clinically useful in applying timely intervention and control measures to prevent clusters of infections in dialysis facilities or other health care settings. The predictive utility of the model warrants further validation in external samples and prospective studies. © 2018 International Society for Hemodialysis.
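
    The Youden-index cut-off selection the abstract mentions can be sketched as follows. J = sensitivity + specificity − 1 = TPR − FPR is the standard definition; the labels and risk scores below are invented for illustration, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Illustrative data: 1 = MERS-CoV positive, scores from a hypothetical
# fitted risk model (the study's real predictors were chest pain,
# leukopenia and elevated AST).
y_true  = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1])
y_score = np.array([0.1, 0.2, 0.15, 0.3, 0.7, 0.35, 0.8, 0.55,
                    0.25, 0.9, 0.6, 0.4, 0.75, 0.2, 0.65, 0.85])

auc = roc_auc_score(y_true, y_score)

# Youden's J = TPR - FPR; the optimal cut-off maximizes J over all
# ROC thresholds.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr
best_cutoff = thresholds[np.argmax(j)]
print(auc, best_cutoff)
```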

  10. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
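
    A minimal sketch of Pareto-based model identification: each candidate model is scored on two criteria for a new compound, and the non-dominated (Pareto-optimal) models are retained. The criteria, names and scores here are hypothetical; the paper's actual algorithm and its IGC50/LogP endpoints are not reproduced:

```python
# Each candidate model is scored on two criteria for a new compound:
# higher accuracy is better, lower distance to the model's training
# domain is better. (Criteria and values are illustrative.)
models = {
    "M1": (0.80, 0.9),
    "M2": (0.85, 0.4),
    "M3": (0.70, 0.2),
    "M4": (0.85, 0.7),
    "M5": (0.60, 0.5),
}

def dominates(a, b):
    """True if a dominates b: no worse on both criteria, strictly better on one."""
    acc_a, dist_a = a
    acc_b, dist_b = b
    return (acc_a >= acc_b and dist_a <= dist_b) and (acc_a > acc_b or dist_a < dist_b)

# Keep models not dominated by any other candidate.
pareto = [name for name, score in models.items()
          if not any(dominates(other, score)
                     for o_name, other in models.items() if o_name != name)]
print(sorted(pareto))
```

    A final selection rule (e.g. preferring accuracy among the Pareto set) would then pick one model for the new compound.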

  11. Nonlinear signal processing using neural networks: Prediction and system modelling

    Energy Technology Data Exchange (ETDEWEB)

    Lapedes, A.; Farber, R.

    1987-06-01

    The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Wiener Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Secondly, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global, approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs.

  12. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in the building field is one of the key issues from an environmental point of view, as well as in the industrial, transportation and residential fields. Half of the total energy consumption in a building is occupied by HVAC (Heating, Ventilating and Air Conditioning) systems. In order to realize energy conservation in HVAC systems, a thermal load prediction model for buildings is required. This paper proposes a hybrid modeling approach with physical and Just-in-Time (JIT) models for building thermal load prediction. The proposed method has features and benefits such as: (1) it is applicable to cases in which past operation data for load prediction model learning is poor, (2) it has a self-checking function, which constantly supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect if something is wrong in the load prediction procedure, and (3) it has the ability to adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement of load prediction performance is illustrated.

  13. Time-specific ecological niche modeling predicts spatial dynamics of vector insects and human dengue cases.

    Science.gov (United States)

    Peterson, A Townsend; Martínez-Campos, Carmen; Nakazawa, Yoshinori; Martínez-Meyer, Enrique

    2005-09-01

    Numerous human diseases (malaria, dengue, yellow fever and leishmaniasis, to name a few) are transmitted by insect vectors with brief life cycles and biting activity that varies in both space and time. Although the general geographic distributions of these epidemiologically important species are known, the spatiotemporal variation in their emergence and activity remains poorly understood. We used ecological niche modeling via a genetic algorithm to produce time-specific predictive models of monthly distributions of Aedes aegypti in Mexico in 1995. Significant predictions of monthly mosquito activity and distributions indicate that predicting the spatiotemporal dynamics of disease vector species is feasible; significant coincidence with human cases of dengue indicates that these dynamics probably translate directly into transmission of dengue virus to humans. This approach provides new potential for optimizing the use of resources for disease prevention and remediation via automated forecasting of disease transmission risk.

  14. Energy saving and prediction modeling of petrochemical industries: A novel ELM based on FAHP

    International Nuclear Information System (INIS)

    Geng, ZhiQiang; Qin, Lin; Han, YongMing; Zhu, QunXiong

    2017-01-01

    Extreme learning machine (ELM), a simple single-hidden-layer feed-forward neural network with fast implementation, has been widely applied in many engineering fields. However, it is difficult to enhance the modeling ability of extreme learning when handling high-dimensional noisy data. Therefore, a predictive modeling method based on the ELM integrated with fuzzy C-means and the analytic hierarchy process (FAHP), denoted FAHP-ELM, is proposed. The fuzzy C-means algorithm is used to cluster the input attributes of the high-dimensional data. The analytic hierarchy process (AHP) based on entropy weights is used to filter redundant information and extract characteristic components. Then, the fused data are used as the input of the ELM. Compared with the back-propagation (BP) neural network and the plain ELM, the proposed model has better performance in terms of speed of convergence, generalization and modeling accuracy on University of California Irvine (UCI) benchmark datasets. Finally, the proposed method was applied to build the energy saving and prediction model of the purified terephthalic acid (PTA) solvent system and the ethylene production system. The experimental results demonstrated the validity of the proposed method. Meanwhile, it could enhance the efficiency of energy utilization and achieve energy conservation and emission reduction. - Highlights: • The ELM integrated FAHP approach is proposed. • The FAHP-ELM prediction model is effectively verified through UCI datasets. • The energy saving and prediction model of petrochemical industries is obtained. • The method improves energy efficiency and supports emission reduction.
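
    The core ELM step (an untrained random hidden layer followed by an analytic least-squares solve for the output weights) can be sketched in a few lines of NumPy. The toy regression data stands in for the plant measurements, and the FAHP preprocessing stage is omitted:

```python
import numpy as np

# Minimal extreme learning machine: random hidden layer, output
# weights solved by least squares (no iterative training).
rng = np.random.default_rng(0)

# Toy regression task standing in for the plant energy data.
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2]

n_hidden = 50
W = rng.normal(size=(3, n_hidden))   # random input weights, never trained
b = rng.normal(size=n_hidden)        # random biases, never trained

H = np.tanh(X @ W + b)               # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights

y_hat = H @ beta
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
print(rmse)
```

    The single linear solve is what gives ELM its speed advantage over back-propagation, at the cost of needing enough random hidden units to span the target function.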

  15. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures, and evaluating the performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and second, theoretical, in terms of identifying factors responsible for the high growth of small and medium-sized companies.
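
    The quantities the paper discusses (odds ratios as effect measures, the ROC curve's area as a measure of model strength) can be sketched with scikit-learn on synthetic data. The predictor names and coefficients below are invented, not the study's financial ratios:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Toy stand-in for the financial-ratio data: two predictors, binary
# "high-growth" outcome, generated from a known logistic model.
rng = np.random.default_rng(42)
n = 500
liquidity = rng.normal(size=n)
leverage  = rng.normal(size=n)
logit = 0.8 * liquidity - 1.2 * leverage
growth = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([liquidity, leverage])
model = LogisticRegression().fit(X, growth)

# Odds ratios: exponentiated coefficients; OR > 1 raises, OR < 1
# lowers the odds of high growth per unit increase in the predictor.
odds_ratios = np.exp(model.coef_[0])

# Model strength via AUC (in-sample here; the paper's workflow would
# use a hold-out sample or cross-validation).
auc = roc_auc_score(growth, model.predict_proba(X)[:, 1])
print(odds_ratios, auc)
```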

  16. Predicting plant invasions under climate change: are species distribution models validated by field trials?

    Science.gov (United States)

    Sheppard, Christine S; Burns, Bruce R; Stanley, Margaret C

    2014-09-01

    Climate change may facilitate alien species invasion into new areas, particularly for species from warm native ranges introduced into areas currently marginal for temperature. Although conclusions from modelling approaches and experimental studies are generally similar, combining the two approaches has rarely occurred. The aim of this study was to validate species distribution models by conducting field trials in sites of differing suitability as predicted by the models, thus increasing confidence in their ability to assess invasion risk. Three recently naturalized alien plants in New Zealand were used as study species (Archontophoenix cunninghamiana, Psidium guajava and Schefflera actinophylla): they originate from warm native ranges, are woody bird-dispersed species and of concern as potential weeds. Seedlings were grown in six sites across the country, differing both in climate and suitability (as predicted by the species distribution models). Seedling growth and survival were recorded over two summers and one or two winter seasons, and temperature and precipitation were monitored hourly at each site. Additionally, alien seedling performances were compared to those of closely related native species (Rhopalostylis sapida, Lophomyrtus bullata and Schefflera digitata). Furthermore, half of the seedlings were sprayed with pesticide, to investigate whether enemy release may influence performance. The results showed large differences in growth and survival of the alien species among the six sites. In the more suitable sites, performance was frequently higher compared to the native species. Leaf damage from invertebrate herbivory was low for both alien and native seedlings, with little evidence that the alien species should have an advantage over the native species because of enemy release. Correlations between performance in the field and predicted suitability of species distribution models were generally high. The projected increase in minimum temperature and reduced...

  17. Predicting Vascular Plant Diversity in Anthropogenic Peatlands: Comparison of Modeling Methods with Free Satellite Data

    Directory of Open Access Journals (Sweden)

    Ivan Castillo-Riffart

    2017-07-01

    Full Text Available Peatlands are ecosystems of great relevance because they perform a number of ecological functions that provide many services to mankind. However, studies focusing on plant diversity, addressed from the remote sensing perspective, are still scarce in these environments. In the present study, predictions of vascular plant richness and diversity were performed in three anthropogenic peatlands on Chiloé Island, Chile, using free satellite data from the sensors OLI, ASTER, and MSI. Also, we compared the suitability of these sensors using two modeling methods: random forest (RF) and the generalized linear model (GLM). As predictors for the empirical models, we used the spectral bands, vegetation indices and textural metrics. Variable importance was estimated using recursive feature elimination (RFE). Fourteen out of the 17 predictors chosen by RFE were textural metrics, demonstrating the importance of the spatial context to predict species richness and diversity. Non-significant differences were found between the algorithms; however, the GLM models often showed slightly better results than the RF. Predictions obtained by the different satellite sensors did not show significant differences; nevertheless, the best models were obtained with ASTER (richness: R2 = 0.62 and %RMSE = 17.2; diversity: R2 = 0.71 and %RMSE = 20.2, obtained with RF and GLM, respectively), followed by OLI and MSI. Diversity was predicted with higher accuracy than richness; nonetheless, accurate predictions were achieved for both, demonstrating the potential of free satellite data for the prediction of relevant community characteristics in anthropogenic peatland ecosystems.

  18. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as better scheduling of fossil-fuelled power plants and a better position on electricity spot markets. In this paper, prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; when statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares, and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
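    The adaptive MOS correction described above can be sketched as recursive least squares with a forgetting factor; the bias/slope values and noise level below are illustrative stand-ins, not values from the paper:

    ```python
    import numpy as np

    def rls_update(theta, P, x, y, lam=0.98):
        """One recursive-least-squares step with forgetting factor lam.

        The MOS correction model is y ~ x @ theta, re-estimated online so
        that slow drifts in the NWP model bias are tracked."""
        Px = P @ x
        k = Px / (lam + x @ Px)              # gain vector
        theta = theta + k * (y - x @ theta)  # correct with prediction error
        P = (P - np.outer(k, Px)) / lam      # update inverse covariance
        return theta, P

    rng = np.random.default_rng(0)
    theta, P = np.zeros(2), np.eye(2) * 100.0

    # Illustrative setting: observed wind = 1.5 + 0.8 * raw NWP forecast.
    for _ in range(500):
        nwp = rng.uniform(2.0, 12.0)                # raw NWP wind speed
        obs = 1.5 + 0.8 * nwp + rng.normal(0, 0.1)  # local measurement
        theta, P = rls_update(theta, P, np.array([1.0, nwp]), obs)
    ```

    The forgetting factor (here 0.98, an assumed value) gives recent samples more weight, which is what lets the correction follow time-dependent NWP deviations.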

  19. Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.

    Science.gov (United States)

    Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei

    2016-02-01

    A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.
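    The receding-horizon idea behind NMPC can be sketched with a toy scalar plant and a generic numerical optimizer; the plant, horizon, and weights below are illustrative assumptions, not the paper's SR-RBF predictor or the WWTP dynamics:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def plant(x, u):
        # Toy scalar plant standing in for the identified process model.
        return 0.9 * x + 0.1 * u

    def horizon_cost(u_seq, x0, ref, lam=0.01):
        # Sum of tracking error and control effort over the horizon.
        x, cost = x0, 0.0
        for u in u_seq:
            x = plant(x, u)
            cost += (x - ref) ** 2 + lam * u ** 2
        return cost

    def nmpc_step(x0, ref, horizon=10):
        # Solve the finite-horizon problem and apply only the first input
        # (receding horizon); the paper instead uses an improved gradient
        # method tailored to the SR-RBF predictions.
        res = minimize(horizon_cost, np.zeros(horizon), args=(x0, ref))
        return res.x[0]

    x, ref = 0.0, 1.0
    for _ in range(30):
        x = plant(x, nmpc_step(x, ref))
    ```

    At every sampling instant the optimization is re-solved from the newly measured state, which is what distinguishes MPC from an open-loop optimal control computed once.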

  20. Different perceptions of stress, coping styles, and general well-being among pregnant Chinese women: a structural equation modeling approach.

    Science.gov (United States)

    Lau, Ying; Tha, Pyai Htun; Wong, Daniel Fu Keung; Wang, Yuqiong; Wang, Ying; Yobas, Piyanee Klainin

    2016-02-01

    Few studies have examined different perceptions of stress or explored the positive aspects of well-being among pregnant Chinese women, so there is a need to explore these phenomena to fill the research gap. The aim of this study was to examine the relationships among the different perceptions of stress, coping styles, and general well-being using a structural equation modeling approach. We examined a hypothetical model among 755 pregnant Chinese women based on the integration of theoretical models. The Perceived Stress Scale (PSS), the Trait Coping Styles Questionnaire (TCSQ), and the General Well-Being Schedule (GWB) were used to measure perceived stress, coping styles, and general well-being, respectively. A structural equation model showed that positive and negative perceptions of stress significantly influenced positive and negative coping styles, respectively. Different perceptions of stress were significantly associated with general well-being, but different coping styles had no significant effects on general well-being. The model had a good fit to the data (IFI = 0.910, TLI = 0.904, CFI = 0.910, and RMSEA = 0.038). Different perceptions of stress were able to predict significant differences in coping styles and general well-being.

  1. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the papers of the '90s, progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e., using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  2. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  3. Short-Term Wind Speed Prediction Using EEMD-LSSVM Model

    Directory of Open Access Journals (Sweden)

    Aiqing Kang

    2017-01-01

    Full Text Available A hybrid of Ensemble Empirical Mode Decomposition (EEMD) and Least Square Support Vector Machine (LSSVM) is proposed to improve short-term wind speed forecasting precision. The EEMD is firstly utilized to decompose the original wind speed time series into a set of subseries. Then the LSSVM models are established to forecast these subseries. The partial autocorrelation function is adopted to analyze the inner relationships between the historical wind speed series in order to determine the input variables of the LSSVM models for prediction of every subseries. Finally, the superposition principle is employed to sum the predicted values of every subseries as the final wind speed prediction. The performance of the hybrid model is evaluated based on six metrics. Compared with LSSVM, Back Propagation Neural Networks (BP), Auto-Regressive Integrated Moving Average (ARIMA), the combination of Empirical Mode Decomposition (EMD) with LSSVM, and hybrid EEMD with ARIMA models, the wind speed forecasting results show that the proposed hybrid model outperforms these models in terms of the six metrics. Furthermore, scatter diagrams of predicted versus actual wind speed and histograms of prediction errors are presented to verify the superiority of the hybrid model in short-term wind speed prediction.
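    The decompose-forecast-superpose pipeline can be sketched as follows; purely for illustration, a moving-average split stands in for EEMD and a least-squares AR(1) stands in for the LSSVM subseries models:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(200)
    wind = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(200)

    # Stand-in decomposition: a moving-average trend plus its residual.
    # EEMD would instead produce several intrinsic mode functions.
    kernel = np.ones(12) / 12
    trend = np.convolve(wind, kernel, mode="same")
    subseries = [trend, wind - trend]

    def ar1_forecast(y):
        # Least-squares AR(1) one-step forecast, standing in for LSSVM.
        a, b = np.polyfit(y[:-1], y[1:], 1)
        return a * y[-1] + b

    # Superposition principle: sum the forecasts of every subseries.
    prediction = sum(ar1_forecast(s) for s in subseries)
    ```

    The key property exploited by the hybrid is that the subseries sum back exactly to the original signal, so summing their individual forecasts yields a forecast of the original series.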

  4. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    HOD

    their low power requirements, are relatively cheap and are environment friendly. ... PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING ... The performance of direct evaporative coolers is a.

  5. A laboratory-scale comparison of rate of spread model predictions using chaparral fuel beds – preliminary results

    Science.gov (United States)

    D.R. Weise; E. Koo; X. Zhou; S. Mahalingam

    2011-01-01

    Observed fire spread rates from 240 laboratory fires in horizontally-oriented single-species live fuel beds were compared to predictions from various implementations and modifications of the Rothermel rate of spread model and a physical fire spread model developed by Pagni and Koo. Packing ratio of the laboratory fuel beds was generally greater than that observed in...

  6. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  7. A Generalized QMRA Beta-Poisson Dose-Response Model.

    Science.gov (United States)

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

    Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, K_min, is not fixed, but a random variable following a geometric distribution with parameter 0 < r* ≤ 1. The standard beta-Poisson model, PI(d|α,β), is a special case of the generalized model with K_min = 1 (which implies r* = 1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median r* estimates fall short of meeting the required condition of r* = 1 for the single-hit assumption. However, three out of four data sets fitted by the generalized models could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three-parameter generalized model provides a possibility to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model. © 2016 Society for Risk Analysis.
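    For reference, the widely used two-parameter approximate beta-Poisson dose-response curve, of which the generalized model is an extension, is PI(d) = 1 − (1 + d/β)^(−α); the parameter values below are arbitrary illustrations, not fitted values from the paper:

    ```python
    import numpy as np

    def beta_poisson(dose, alpha, beta):
        # Approximate beta-Poisson dose-response: probability of infection
        # at mean dose d, PI(d) = 1 - (1 + d/beta)^(-alpha).
        return 1.0 - (1.0 + np.asarray(dose, dtype=float) / beta) ** (-alpha)

    # Illustrative parameters only.
    doses = np.array([0.0, 1.0, 10.0, 100.0, 1e6])
    risk = beta_poisson(doses, alpha=0.5, beta=10.0)
    ```

    The curve starts at zero risk for zero dose, increases monotonically, and saturates toward 1 at high doses, which is the qualitative shape any single-hit dose-response model must have.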

  8. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises running in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE, and developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions, is necessary to improve BI implementation and ensure its success. The main study findings identified the critical prediction indicators of BISE that are important for forecasting BI performance, and highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis. These results can enable enterprises to improve BISE while effectively managing BI solution implementation, and offer theoretical insights for academics.

  9. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical

  10. [Application of ARIMA model on prediction of malaria incidence].

    Science.gov (United States)

    Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai

    2016-01-29

    To predict the incidence of local malaria in Hubei Province by applying the autoregressive integrated moving average (ARIMA) model. SPSS 13.0 software was applied to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA (1, 1, 1) (1, 1, 0) 12 model was selected as relatively the best, with an AIC of 76.085 and an SBC of 84.395. All the actual incidence data were within the 95% CI of the predicted values of the model. The prediction performance of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria in Hubei Province.
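    The core ARIMA idea (difference the series, then fit an autoregression) can be sketched in a few lines of NumPy; a full seasonal fit like the paper's ARIMA (1,1,1)(1,1,0)12 would normally use a statistics package such as statsmodels, and the series below is synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic trending series standing in for monthly incidence data.
    y = np.cumsum(0.5 + 0.1 * rng.standard_normal(120))

    d = np.diff(y)                       # "I": first differencing removes the trend
    a, b = np.polyfit(d[:-1], d[1:], 1)  # "AR(1)" fit on the differenced series
    next_diff = a * d[-1] + b            # one-step forecast of the next change
    forecast = y[-1] + next_diff         # invert the differencing
    ```

    Differencing turns the non-stationary series into one an autoregression can model; the forecast is then obtained by adding the predicted change back onto the last observation.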

  11. Generalized thick strip modelling for vortex-induced vibration of long flexible cylinders

    International Nuclear Information System (INIS)

    Bao, Y.; Palacios, R.; Graham, M.; Sherwin, S.

    2016-01-01

    We propose a generalized strip modelling method that is computationally efficient for the VIV prediction of long flexible cylinders in three-dimensional incompressible flow. In order to overcome the shortcomings of conventional strip-theory-based 2D models, the fluid domain is divided into “thick” strips, which are sufficiently thick to locally resolve the small scale turbulence effects and three dimensionality of the flow around the cylinder. An attractive feature of the model is that we independently construct a three-dimensional scale resolving model for individual strips, which have local spanwise scale along the cylinder's axial direction and are only coupled through the structural model of the cylinder. Therefore, this approach is able to cover the full spectrum from fully resolved 3D modelling to 2D strip theory. The connection between these strips is achieved through the calculation of a tensioned beam equation, which is used to represent the dynamics of the flexible body. In the limit, however, a single “thick” strip would fill the full 3D domain. A parallel Fourier spectral/hp element method is employed to solve the 3D flow dynamics in the strip-domain, and then the VIV response prediction is achieved through the strip–structure interactions. Numerical tests on both laminar and turbulent flows as well as the comparison against the fully resolved DNS are presented to demonstrate the applicability of this approach.

  12. Generalized thick strip modelling for vortex-induced vibration of long flexible cylinders

    Energy Technology Data Exchange (ETDEWEB)

    Bao, Y., E-mail: ybao@sjtu.edu.cn [Department of Civil Engineering, School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiaotong University, No. 800 Dongchuan Road, Shanghai (China); Department of Aeronautics, Imperial College London, South Kensington Campus, London (United Kingdom); Palacios, R., E-mail: r.palacios@imperial.ac.uk [Department of Aeronautics, Imperial College London, South Kensington Campus, London (United Kingdom); Graham, M., E-mail: m.graham@imperial.ac.uk [Department of Aeronautics, Imperial College London, South Kensington Campus, London (United Kingdom); Sherwin, S., E-mail: s.sherwin@imperial.ac.uk [Department of Aeronautics, Imperial College London, South Kensington Campus, London (United Kingdom)

    2016-09-15

    We propose a generalized strip modelling method that is computationally efficient for the VIV prediction of long flexible cylinders in three-dimensional incompressible flow. In order to overcome the shortcomings of conventional strip-theory-based 2D models, the fluid domain is divided into “thick” strips, which are sufficiently thick to locally resolve the small scale turbulence effects and three dimensionality of the flow around the cylinder. An attractive feature of the model is that we independently construct a three-dimensional scale resolving model for individual strips, which have local spanwise scale along the cylinder's axial direction and are only coupled through the structural model of the cylinder. Therefore, this approach is able to cover the full spectrum from fully resolved 3D modelling to 2D strip theory. The connection between these strips is achieved through the calculation of a tensioned beam equation, which is used to represent the dynamics of the flexible body. In the limit, however, a single “thick” strip would fill the full 3D domain. A parallel Fourier spectral/hp element method is employed to solve the 3D flow dynamics in the strip-domain, and then the VIV response prediction is achieved through the strip–structure interactions. Numerical tests on both laminar and turbulent flows as well as the comparison against the fully resolved DNS are presented to demonstrate the applicability of this approach.

  13. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com; Gupta, Shikha

    2014-03-15

    Ensemble learning approach based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish quantitative structure–toxicity relationship (QSTR) for the prediction of toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using Tanimoto similarity index. Stochastic gradient boosting and bagging algorithms supplemented DTB and DTF models were constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was drawn to prediction ability and robustness of the models, investigated both in external and 10-fold cross validation processes. In complete data, optimal DTB and DTF models rendered accuracies of 98.90%, 98.83% in two-category and 98.14%, 98.14% in four-category toxicity classifications. Both the models further yielded classification accuracies of 100% in external toxicity data of T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R{sup 2}) of 0.945, 0.944 between the measured and predicted toxicities with mean squared errors (MSEs) of 0.059, and 0.064 in complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R{sup 2} and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting toxicity of new chemicals for regulatory purposes. These approaches provide useful strategy and robust tools in the screening of ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the

  14. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    International Nuclear Information System (INIS)

    Singh, Kunwar P.; Gupta, Shikha

    2014-01-01

    Ensemble learning approach based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish quantitative structure–toxicity relationship (QSTR) for the prediction of toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using Tanimoto similarity index. Stochastic gradient boosting and bagging algorithms supplemented DTB and DTF models were constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was drawn to prediction ability and robustness of the models, investigated both in external and 10-fold cross validation processes. In complete data, optimal DTB and DTF models rendered accuracies of 98.90%, 98.83% in two-category and 98.14%, 98.14% in four-category toxicity classifications. Both the models further yielded classification accuracies of 100% in external toxicity data of T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R2) of 0.945, 0.944 between the measured and predicted toxicities with mean squared errors (MSEs) of 0.059, and 0.064 in complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R2 and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting toxicity of new chemicals for regulatory purposes. These approaches provide useful strategy and robust tools in the screening of ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the

  15. Reservoir Inflow Prediction under GCM Scenario Downscaled by Wavelet Transform and Support Vector Machine Hybrid Models

    Directory of Open Access Journals (Sweden)

    Gusfan Halik

    2015-01-01

    Full Text Available Climate change has significant impacts on changing precipitation patterns, causing variation of the reservoir inflow. Nowadays, Indonesian hydrologists perform reservoir inflow prediction according to the technical guideline Pd-T-25-2004-A. This technical guideline does not consider the climate variables directly, resulting in significant deviation from the observation results. This research intends to predict the reservoir inflow using statistical downscaling (SD) of General Circulation Model (GCM) outputs. The GCM outputs are obtained from the National Center for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) Reanalysis. A newly proposed hybrid SD model named Wavelet Support Vector Machine (WSVM) was utilized. It is a combination of the Multiscale Principal Components Analysis (MSPCA) and nonlinear Support Vector Machine regression. The model was validated at Sutami Reservoir, Indonesia. Training and testing were carried out using data of 1991–2008 and 2008–2012, respectively. The results showed that MSPCA produced better data extraction than PCA. The WSVM generated better reservoir inflow prediction than that of the technical guideline. Moreover, this research also applied WSVM for future reservoir inflow prediction based on GCM ECHAM5 and scenario SRES A1B.

  16. Microbiome Data Accurately Predicts the Postmortem Interval Using Random Forest Regression Models

    Directory of Open Access Journals (Sweden)

    Aeriel Belk

    2018-02-01

    Full Text Available Death investigations often include an effort to establish the postmortem interval (PMI) in cases in which the time of death is uncertain. The postmortem interval can lead to the identification of the deceased and the validation of witness statements and suspect alibis. Recent research has demonstrated that microbes provide an accurate clock that starts at death and relies on ecological change in the microbial communities that normally inhabit a body and its surrounding environment. Here, we explore how to build the most robust Random Forest regression models for prediction of PMI by testing models built on different sample types (gravesoil, skin of the torso, skin of the head), gene markers (16S ribosomal RNA (rRNA), 18S rRNA, internal transcribed spacer regions (ITS)), and taxonomic levels (sequence variants, species, genus, etc.). We also tested whether particular suites of indicator microbes were informative across different datasets. Generally, results indicate that the most accurate models for predicting PMI were built using gravesoil and skin data using the 16S rRNA genetic marker at the taxonomic level of phyla. Additionally, several phyla consistently contributed highly to model accuracy and may be candidate indicators of PMI.
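    A Random Forest regression of PMI on taxon abundances can be sketched as below; the data are synthetic stand-ins for the study's microbiome profiles, with only the first two "taxa" actually informative:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Synthetic data: columns mimic relative abundances of 20 taxa;
    # the (hypothetical) PMI in days depends on the first two only.
    X = rng.random((300, 20))
    pmi_days = 30 * X[:, 0] + 10 * X[:, 1] + rng.normal(0.0, 1.0, 300)

    X_tr, X_te, y_tr, y_te = train_test_split(X, pmi_days, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)
    r2 = model.score(X_te, y_te)  # R^2 on held-out samples
    ```

    The fitted forest's `feature_importances_` attribute is what supports statements like "several phyla consistently contributed highly to model accuracy": the informative taxa receive the largest importance scores.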

  17. A cost prediction model for machine operation in multi-field production systems

    Directory of Open Access Journals (Sweden)

    Alessandro Sopegno

    Full Text Available Capacity planning in agricultural field operations needs to give consideration to the operational system design, which involves the selection and dimensioning of production components such as machinery and equipment. Currently available capacity planning models are generally based on average norm data and not on specific farm data, which may vary from year to year. In this paper a model is presented for predicting the cost of in-field and transport operations for multiple-field and multiple-crop production systems. A case study from a real production system is presented in order to demonstrate the model's functionalities and its sensitivity to parameters known to be somewhat imprecise. It was shown that the proposed model can provide operation cost predictions for complex cropping systems where labor and machinery are shared between the various operations, which can be individually formulated for each individual crop. In this way, the model can be used as a decision support system at the strategic level of management of agricultural production systems, and specifically for the mid-term design process of systems in terms of labor/machinery and crop selection conforming to the criterion of profitability.

  18. Prediction Models for Licensure Examination Performance using Data Mining Classifiers for Online Test and Decision Support System

    Directory of Open Access Journals (Sweden)

    Ivy M. Tarun

    2017-05-01

    Full Text Available This study focused on two main points: the generation of licensure examination performance prediction models, and the development of a Decision Support System. In this study, data mining classifiers were used to generate the models using WEKA (Waikato Environment for Knowledge Analysis). These models were integrated into the Decision Support System as default models to support decision making on appropriate interventions during review sessions. The system developed mainly involves the repeated generation of MR models for performance prediction and also provides a Mock Board Exam for the reviewees to take. From the models generated, it is established that the General Weighted Average of the reviewees in their General Education subjects, the result of the Mock Board Exam, and whether the reviewee is conducting a self-review are good predictors of licensure examination performance. Further, it is concluded that the General Weighted Average of the reviewees in their Major or Content courses is the best predictor of licensure examination performance. Based on the evaluation results, the system satisfied its implied functions and is efficient, usable, reliable, and portable. Hence, it can already be used, not as a substitute for face-to-face review sessions, but to enhance the reviewees' licensure examination review and to allow early identification of those likely to have difficulty passing the licensure examination, thereby providing sufficient time and opportunities for appropriate interventions.
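
    The prediction task can be sketched with a decision-tree classifier (a common WEKA-style data mining classifier; the study's exact classifiers are not specified here). The feature names mirror the predictors reported above, but the data and the pass/fail rule are entirely synthetic.

    ```python
    # Hypothetical sketch of the licensure-outcome prediction task using a
    # decision tree. Features mirror the study's reported predictors; the
    # data and labels are synthetic, not the study's records.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    n = 200
    gwa_major = rng.uniform(75, 95, n)    # GWA in Major/Content courses
    gwa_gened = rng.uniform(75, 95, n)    # GWA in General Education subjects
    mock_board = rng.uniform(50, 95, n)   # Mock Board Exam score
    self_review = rng.integers(0, 2, n)   # 1 if conducting a self-review

    # Synthetic label: pass when major GWA and mock board score are
    # jointly high (an assumed rule for demonstration only).
    passed = ((0.6 * gwa_major + 0.4 * mock_board) > 82).astype(int)

    X = np.column_stack([gwa_major, gwa_gened, mock_board, self_review])
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, passed)
    print(f"training accuracy: {clf.score(X, passed):.2f}")
    ```

    In a deployed Decision Support System, the model would be refit as new reviewee records arrive, and its predicted probabilities used to flag reviewees needing intervention.
    
    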

  19. Prediction of lithium-ion battery capacity with metabolic grey model

    International Nuclear Information System (INIS)

    Chen, Lin; Lin, Weilong; Li, Junzi; Tian, Binbin; Pan, Haihong

    2016-01-01

    Given the popularity of lithium-ion batteries in EVs (electric vehicles), predicting capacity quickly and accurately throughout a battery's full lifetime remains a challenging issue for ensuring the reliability of EVs. This paper proposes an approach to predicting how capacity varies with discharge cycles based on metabolic grey theory, addressing two aspects: 1) three metabolic grey models are presented, including the MGM (metabolic grey model), MREGM (metabolic residual-error grey model), and MMREGM (metabolic Markov-residual-error grey model); 2) the universality of these models is explored under different conditions (such as various discharge rates and temperatures). The findings demonstrate good prediction performance for all three models, although the precision of the MREGM model is inferior to the others. We therefore conclude that the MGM and MMREGM models predict capacity well under a variety of load conditions, even when few data points are used for modeling. The universality of metabolic grey prediction theory is further verified by predicting battery capacity under different discharge rates and temperatures. - Highlights: • The metabolic mechanism is introduced in a grey system for capacity prediction. • Three metabolic grey models are presented and studied. • The universality of these models under different conditions is assessed. • Few data points are required for predicting capacity with these models.
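
    The simplest of the three variants, the MGM, can be sketched as the classic GM(1,1) grey model fitted on a short rolling window: predict one step ahead, then "metabolise" by dropping the oldest point and appending the prediction. This is a generic GM(1,1) sketch under that interpretation; the capacity values below are synthetic, not measurements from the paper.

    ```python
    # Sketch of a metabolic GM(1,1) grey model for capacity fade prediction.
    # Window size and capacity data are illustrative assumptions.
    import numpy as np

    def gm11_predict_next(x0):
        """One-step-ahead GM(1,1) prediction from a short sequence x0."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                 # accumulated generating operation
        z1 = 0.5 * (x1[:-1] + x1[1:])      # mean generating sequence
        B = np.column_stack([-z1, np.ones(len(z1))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        n = len(x0)
        # Time response of x1, restored to x0 via a first difference.
        xhat1 = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
        return xhat1(n) - xhat1(n - 1)

    def metabolic_gm11(x0, steps, window=5):
        """Multi-step prediction with a rolling (metabolic) data window."""
        seq = list(x0[-window:])
        out = []
        for _ in range(steps):
            nxt = gm11_predict_next(seq)
            out.append(nxt)
            seq = seq[1:] + [nxt]          # metabolism: renew the window
        return out

    capacity = [1.00, 0.985, 0.971, 0.958, 0.946, 0.935]  # synthetic fade
    print(metabolic_gm11(capacity, steps=3))
    ```

    The residual-error and Markov variants layer corrections on top of this base prediction; the metabolism step is what lets the model track changing fade rates with only a handful of recent points.
    
    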

  20. A Kinetic Model for Predicting the Relative Humidity in Modified Atmosphere Packaging and Its Application in Lentinula edodes Packages

    Directory of Open Access Journals (Sweden)

    Li-xin Lu

    2013-01-01

    Full Text Available Adjusting and controlling the relative humidity (RH) inside a package is crucial for ensuring the quality of modified atmosphere packaging (MAP) of fresh produce. In this paper, an improved kinetic model for predicting the RH in MAP was developed. The model was based on heat exchange and gas mass transport across the package, gas heat convection inside the package, and mass and heat balances accounting for the respiration and transpiration behavior of fresh produce. The model was then applied to predict the RH in MAP of fresh Lentinula edodes (a Chinese mushroom). The model equations were solved numerically using the Adams-Moulton method to predict the RH in model packages. In general, the model predictions agreed well with the experimental data, except that they were slightly high in the initial period. The effect of the initial gas composition on the RH in packages was notable: in MAP with lower oxygen and higher carbon dioxide concentrations, the RH rose more slowly and approached saturation more slowly during storage. The influence of the initial gas composition on the temperature inside the package was much less pronounced.
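
    The structure of such a model can be illustrated with a heavily simplified sketch (not the paper's full coupled heat-and-mass model): a single water-vapour mass balance on the headspace, with transpiration as the source and film permeation as the sink, integrated with an Adams-Bashforth predictor and Adams-Moulton (trapezoidal) corrector. All coefficients are assumed values for demonstration.

    ```python
    # Simplified RH kinetics in a package headspace: transpiration source
    # minus permeation sink, integrated with an Adams-Bashforth-Moulton
    # predictor-corrector. All coefficients are illustrative assumptions.
    K_TRANS = 4.0e-7    # transpiration coefficient, kg/s per unit RH deficit
    K_PERM = 6.0e-8     # film permeation coefficient, kg/s per unit RH gradient
    W_SAT = 1.2e-4      # saturation water mass of the headspace, kg
    RH_OUT = 0.60       # ambient relative humidity

    def dwdt(w):
        """Net water-vapour accumulation rate in the headspace (kg/s)."""
        rh = w / W_SAT
        return K_TRANS * (1.0 - rh) - K_PERM * (rh - RH_OUT)

    def simulate_rh(rh0=0.70, t_end=3600.0, dt=1.0):
        w = rh0 * W_SAT
        f_prev = dwdt(w)
        w = w + dt * f_prev                          # Euler start-up step
        for _ in range(int(t_end / dt) - 1):
            f = dwdt(w)
            w_pred = w + dt * (1.5 * f - 0.5 * f_prev)   # AB2 predictor
            w = w + dt * 0.5 * (f + dwdt(w_pred))        # AM corrector
            f_prev = f
            w = min(w, W_SAT)                        # clamp at saturation
        return w / W_SAT

    print(f"RH after 1 h: {simulate_rh():.3f}")
    ```

    With these assumed coefficients the headspace RH relaxes toward an equilibrium below saturation set by the balance of transpiration and permeation; the paper's full model additionally couples this balance to temperature and to O2/CO2 composition.
    
    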