WorldWideScience

Sample records for model predictions generally

  1. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
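
    The simple benchmark mentioned above regresses seasonal rainfall on a predicted ENSO index. A minimal sketch in Python, with made-up numbers standing in for the Niño3.4 forecasts and observed Kiremt totals (the record does not supply the actual series); the invented values carry the negative ENSO-rainfall relationship typically reported for Kiremt:

```python
import numpy as np

# Hypothetical training series: May-initialized Nino3.4 forecasts (degC anomaly)
# and observed June-September Kiremt rainfall totals (mm).
nino34 = np.array([-1.2, 0.3, 1.8, -0.5, 0.9, -1.6, 0.1, 2.1, -0.8, 0.5])
kiremt = np.array([620, 540, 430, 600, 500, 650, 560, 400, 610, 530])

# Ordinary least-squares line: rainfall = intercept + slope * index
slope, intercept = np.polyfit(nino34, kiremt, 1)

# Predict Kiremt rainfall from new Nino3.4 forecasts
print(intercept + slope * np.array([-1.0, 0.0, 1.5]))
```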

  2. Towards a generalized energy prediction model for machine tools.

    Science.gov (United States)

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan

    2017-04-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.
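
    A minimal sketch of the Gaussian Process regression step in Python using scikit-learn; the process parameters and the energy relation below are invented placeholders, not the NVD1500 data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Hypothetical machining parameters: feed rate, spindle speed, depth of cut
X = rng.uniform([50, 1000, 0.5], [500, 5000, 3.0], size=(120, 3))
energy = 0.02 * X[:, 0] * X[:, 2] + 1e-4 * X[:, 1] + rng.normal(0, 0.5, 120)

# WhiteKernel absorbs measurement noise; predict() also returns an uncertainty estimate
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[100, 1000, 1.0]) + WhiteKernel(),
                              normalize_y=True).fit(X, energy)
mean, std = gp.predict([[200, 3000, 1.5]], return_std=True)
print(f"predicted energy: {mean[0]:.2f} +/- {1.96 * std[0]:.2f} (~95% interval)")
```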

  3. Multi-year predictability in a coupled general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Power, Scott; Colman, Rob [Bureau of Meteorology Research Centre, Melbourne, VIC (Australia)]

    2006-02-01

    Multi-year to decadal variability in a 100-year integration of a BMRC coupled atmosphere-ocean general circulation model (CGCM) is examined. The fractional contribution made by the decadal component generally increases with depth and latitude away from surface waters in the equatorial Indo-Pacific Ocean. The relative importance of decadal variability is enhanced in off-equatorial "wings" in the subtropical eastern Pacific. The model and observations exhibit "ENSO-like" decadal patterns. Analytic results are derived, which show that the patterns can, in theory, occur in the absence of any predictability beyond ENSO time-scales. In practice, however, modification to this stochastic view is needed to account for robust differences between ENSO-like decadal patterns and their interannual counterparts. An analysis of variability in the CGCM, a wind-forced shallow water model, and a simple mixed layer model together with existing and new theoretical results are used to improve upon this stochastic paradigm and to provide a new theory for the origin of decadal ENSO-like patterns like the Interdecadal Pacific Oscillation and Pacific Decadal Oscillation. In this theory, ENSO-driven wind-stress variability forces internal equatorially-trapped Kelvin waves that propagate towards the eastern boundary. Kelvin waves can excite reflected internal westward propagating equatorially-trapped Rossby waves (RWs) and coastally-trapped waves (CTWs). CTWs have no impact on the off-equatorial sub-surface ocean outside the coastal wave guide, whereas the RWs do. If the frequency of the incident wave is too high, then only CTWs are excited. At lower frequencies, both CTWs and RWs can be excited. The lower the frequency, the greater the fraction of energy transmitted to RWs. This lowers the characteristic frequency of variability off the equator relative to its equatorial counterpart. Both the eastern boundary interactions and the accumulation of

  4. Prediction of cloud droplet number in a general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)]

    1996-04-01

    We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain and snow, and number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.
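
    The prognostic budget referred to above can be written schematically as follows; this is a generic form, since the record does not reproduce the exact source and sink terms of the CCM2 implementation:

```latex
\frac{\partial N_d}{\partial t}
  = -\nabla \cdot (\mathbf{u}\, N_d)
  + \left(\frac{\partial N_d}{\partial t}\right)_{\mathrm{nucl}}
  - \left(\frac{\partial N_d}{\partial t}\right)_{\mathrm{evap}}
  - \left(\frac{\partial N_d}{\partial t}\right)_{\mathrm{coll}}
```

    The nucleation source is where the dependence on aerosols enters, e.g. through an activation spectrum of the form $N = C s^k$ relating the number of activated droplets to supersaturation $s$ via aerosol-dependent parameters $C$ and $k$.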

  5. Explicit prediction of ice clouds in general circulation models

    Science.gov (United States)

    Kohler, Martin

    1999-11-01

    Although clouds play extremely important roles in the radiation budget and hydrological cycle of the Earth, there are large quantitative uncertainties in our understanding of their generation, maintenance and decay mechanisms, representing major obstacles in the development of reliable prognostic cloud water schemes for General Circulation Models (GCMs). Recognizing their relative neglect in the past, both observationally and theoretically, this work places special focus on ice clouds. A recent version of the UCLA - University of Utah Cloud Resolving Model (CRM) that includes interactive radiation is used to perform idealized experiments to study ice cloud maintenance and decay mechanisms under various conditions in terms of: (1) background static stability, (2) background relative humidity, (3) rate of cloud ice addition over a fixed initial time-period and (4) radiation: daytime, nighttime and no-radiation. Radiation is found to have major effects on the lifetime of layer clouds. Optically thick ice clouds decay significantly slower than expected from pure microphysical crystal fall-out (τ_cld = 0.9–1.4 h, as opposed to the no-motion τ_micro = 0.5–0.7 h). This is explained by the upward turbulent fluxes of water induced by IR destabilization, which partially balance the downward transport of water by snowfall. Solar radiation further slows the ice-water decay by destruction of the inversion above cloud-top and the resulting upward transport of water. Optically thin ice clouds, on the other hand, may exhibit even longer lifetimes (>1 day) in the presence of radiational cooling. The resulting saturation mixing ratio reduction provides for a constant cloud ice source. These CRM results are used to develop a prognostic cloud water scheme for the UCLA-GCM. The framework is based on the bulk water phase model of Ose (1993). The model predicts cloud liquid water and cloud ice separately, and is extended to split the ice phase into suspended cloud ice (predicted

  6. Generalized model for predicting methane conversion to syngas in ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... Abstract: The present work aims to provide a conceptual framework for predicting methane conversion efficiency and CO selectivity in a membrane reactor, which may assist in selecting the type of membrane and minimizing the cost of syngas production.

  7. Bayesian prediction of spatial count data using generalized linear mixed models

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

    2002-01-01

    Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...
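
    A minimal sketch of this kind of model in Python with PyMC; the site structure, priors, and counts below are synthetic stand-ins (the study's informative priors were elicited from a previously collected data set), and the true spatial GLMM uses a correlated random field rather than the iid site effect used here:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_sites, n_obs = 20, 200
site = rng.integers(0, n_sites, n_obs)          # sampling location of each count
x = rng.normal(size=n_obs)                      # a covariate, e.g. soil moisture
y = rng.poisson(np.exp(0.3 + 0.6 * x + rng.normal(0, 0.5, n_sites)[site]))

with pm.Model():
    beta0 = pm.Normal("beta0", 0, 2)            # informative priors would go here
    beta1 = pm.Normal("beta1", 0, 2)
    sigma = pm.HalfNormal("sigma", 1)
    u = pm.Normal("u", 0, sigma, shape=n_sites)  # site-level random effect
    pm.Poisson("y", mu=pm.math.exp(beta0 + beta1 * x + u[site]), observed=y)
    idata = pm.sample(1000, tune=1000, chains=2)  # MCMC posterior
```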

  8. Spatial downscaling of soil prediction models based on weighted generalized additive models in smallholder farm settings.

    Science.gov (United States)

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D

    2017-09-11

    Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communicating digital soil mapping information to diverse audiences becomes problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate coarse spatial resolution base maps of soil exchangeable potassium (Kex) and soil total nitrogen (TN) into fine spatial resolution downscaled soil maps using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine spatial resolution spectral indices in the downscaling process, the downscaled soil maps not only conserve the spatial information of the coarse spatial resolution soil maps but also depict the spatial details of soil properties at fine spatial resolution. The results of this study demonstrated that the difference between the fine spatial resolution downscaled maps and the fine spatial resolution base maps is smaller than the difference between the coarse spatial resolution base maps and the fine spatial resolution base maps. The appropriate and economical strategy to promote the DSM technique in smallholder farms is to develop relatively coarse spatial resolution soil prediction maps, or utilize available coarse spatial resolution soil maps at the regional scale, and to disaggregate these maps to fine spatial resolution downscaled soil maps at the farm scale.
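
    A minimal sketch of the weighted-GAM step in Python using the pygam package; the covariates, response, and weights are synthetic placeholders for the spectral indices and coarse-map values described above:

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(9)
# Fine-resolution predictors: a vegetation index, elevation, coarse-map soil value
X = np.column_stack([rng.uniform(0, 1, 500),
                     rng.uniform(300, 600, 500),
                     rng.uniform(50, 150, 500)])
k_ex = 0.4 * X[:, 2] + 20 * X[:, 0] + rng.normal(0, 5, 500)   # soil K (mg/kg)
w = rng.uniform(0.5, 1.0, 500)   # weights, e.g. areal overlap with coarse pixels

# One smooth term per predictor, observations weighted in the fit
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, k_ex, weights=w)
downscaled = gam.predict(X)      # fine-resolution soil map values
```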

  9. A statistical intercomparison of temperature and precipitation predicted by four general circulation models with historical data

    International Nuclear Information System (INIS)

    Grotch, S.L.

    1991-01-01

    This study is a detailed intercomparison of the results produced by four general circulation models (GCMs) that have been used to estimate the climatic consequences of a doubling of the CO2 concentration. Two variables, surface air temperature and precipitation, annually and seasonally averaged, are compared for both the current climate and for the predicted equilibrium changes after a doubling of the atmospheric CO2 concentration. The major question considered here is: how well do the predictions from different GCMs agree with each other and with historical climatology over different areal extents, from the global scale down to the range of only several gridpoints? Although the models often agree well when estimating averages over large areas, substantial disagreements become apparent as the spatial scale is reduced. At scales below continental, the correlations observed between different model predictions are often very poor. The implications of this work for investigation of climatic impacts on a regional scale are profound. For these two important variables, at least, the poor agreement between model simulations of the current climate on the regional scale calls into question the ability of these models to quantitatively estimate future climatic change on anything approaching the scale of a few (< 10) gridpoints, which is essential if these results are to be used in meaningful resource-assessment studies. A stronger cooperative effort among the different modeling groups will be necessary to assure that we are getting better agreement for the right reasons, a prerequisite for improving confidence in model projections. 11 refs.; 10 figs

  10. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    Science.gov (United States)

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

    A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e., CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e., ambient temperature, relative humidity (RH) and wind speed) for the city of Barreiro, Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as a dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best performance for modelled results against the measured data was achieved for the model with values of air temperature above 25°C, compared with the model considering all ranges of air temperatures and with the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and the methodology could be adopted for other cities to predict PM10 concentrations when measurements from air quality monitoring stations or other acquisition means are not available.
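
    A minimal sketch of the GLM described above (Poisson family, logarithmic link) in Python with statsmodels, using synthetic data in place of the Portuguese monitoring-network records:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 400
df = pd.DataFrame({"co": rng.uniform(0.2, 1.5, n), "no2": rng.uniform(5, 60, n),
                   "so2": rng.uniform(1, 15, n), "temp": rng.uniform(5, 35, n),
                   "rh": rng.uniform(30, 90, n), "wind": rng.uniform(0.5, 8, n)})
df["pm10"] = rng.poisson(np.exp(2.0 + 0.3 * df.co + 0.02 * df.no2 - 0.05 * df.wind))

warm = df[df.temp > 25]          # the study found the T > 25 degC model performed best
fit = smf.glm("pm10 ~ co + no2 + so2 + rh + wind", data=warm,
              family=sm.families.Poisson()).fit()   # log link is the Poisson default
print(fit.summary())
```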

  11. Protein structure validation by generalized linear model root-mean-square deviation prediction.

    Science.gov (United States)

    Bagaria, Anurag; Jaravine, Victor; Huang, Yuanpeng J; Montelione, Gaetano T; Güntert, Peter

    2012-02-01

    Large-scale initiatives for obtaining spatial protein structures by experimental or computational means have accentuated the need for the critical assessment of protein structure determination and prediction methods. These include blind test projects such as the critical assessment of protein structure prediction (CASP) and the critical assessment of protein structure determination by nuclear magnetic resonance (CASD-NMR). An important aim is to establish structure validation criteria that can reliably assess the accuracy of a new protein structure. Various quality measures derived from the coordinates have been proposed. A universal structural quality assessment method should combine multiple individual scores in a meaningful way, which is challenging because of their different measurement units. Here, we present a method based on a generalized linear model (GLM) that combines diverse protein structure quality scores into a single quantity with intuitive meaning, namely the predicted coordinate root-mean-square deviation (RMSD) value between the present structure and the (unavailable) "true" structure (GLM-RMSD). For two sets of structural models from the CASD-NMR and CASP projects, this GLM-RMSD value was compared with the actual accuracy given by the RMSD value to the corresponding, experimentally determined reference structure from the Protein Data Bank (PDB). The correlation coefficients between actual (model vs. reference from PDB) and predicted (model vs. "true") heavy-atom RMSDs were 0.69 and 0.76, for the two datasets from CASD-NMR and CASP, respectively, which is considerably higher than those for the individual scores (-0.24 to 0.68). The GLM-RMSD can thus predict the accuracy of protein structures more reliably than individual coordinate-based quality scores. Copyright © 2011 The Protein Society.
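
    The core of the approach is a regression that maps several coordinate-derived quality scores to one predicted RMSD. A minimal sketch in Python; the five scores and their weights are synthetic placeholders, not the published GLM-RMSD coefficients or link function:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
scores = rng.normal(size=(300, 5))   # per-model quality scores (standardized)
rmsd = np.abs(1.5 + scores @ np.array([0.4, -0.3, 0.2, 0.5, -0.1])
              + rng.normal(0, 0.4, 300))   # "true" RMSD to a reference structure

glm = LinearRegression().fit(scores, rmsd)   # Gaussian GLM with identity link
print(np.corrcoef(glm.predict(scores), rmsd)[0, 1])   # cf. 0.69-0.76 in the paper
```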

  12. Sparse generalized functional linear model for predicting remission status of depression patients.

    Science.gov (United States)

    Liu, Yashu; Nie, Zhi; Zhou, Jiayu; Farnum, Michael; Narayan, Vaibhav A; Wittenberg, Gayle; Ye, Jieping

    2014-01-01

    Complex diseases such as major depression affect people over time in complicated patterns. Longitudinal data analysis is thus crucial for understanding and prognosis of such diseases and has received considerable attention in the biomedical research community. Traditional classification and regression methods have been commonly applied in a simple (controlled) clinical setting with a small number of time points. However, these methods cannot be easily extended to the more general setting for longitudinal analysis, as they are not inherently built for time-dependent data. Functional regression, in contrast, is capable of identifying the relationship between features and outcomes along with time information by assuming features and/or outcomes as random functions over time rather than independent random variables. In this paper, we propose a novel sparse generalized functional linear model for the prediction of treatment remission status of depression patients with longitudinal features. Compared to traditional functional regression models, our model enables high-dimensional learning, smoothness of functional coefficients, longitudinal feature selection and interpretable estimation of functional coefficients. Extensive experiments have been conducted on the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) data set and the results show that the proposed sparse functional regression method achieves significantly higher prediction power than existing approaches.

  13. Improving the predictive accuracy of hurricane power outage forecasts using generalized additive models.

    Science.gov (United States)

    Han, Seung-Ryong; Guikema, Seth D; Quiring, Steven M

    2009-10-01

    Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.
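
    A minimal sketch of a Poisson GAM for outage counts in Python with pygam; the gridded covariates below are invented stand-ins for the hurricane and environmental predictors used in the study:

```python
import numpy as np
from pygam import PoissonGAM, s

rng = np.random.default_rng(8)
# Hypothetical per-grid-cell covariates: peak gust (m/s), soil moisture, tree density
X = rng.uniform([20, 0.1, 0.0], [60, 0.5, 1.0], size=(400, 3))
outages = rng.poisson(np.exp(-3 + 0.12 * X[:, 0] + 2.0 * X[:, 1] + 1.0 * X[:, 2]))

# Smooth (possibly nonlinear) terms for each covariate -- the GAM advantage
gam = PoissonGAM(s(0) + s(1) + s(2)).fit(X, outages)
print(gam.predict(X[:5]))        # expected outage counts for the first five cells
```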

  14. Generalized Concentration Addition Modeling Predicts Mixture Effects of Environmental PPARγ Agonists.

    Science.gov (United States)

    Watt, James; Webster, Thomas F; Schlezinger, Jennifer J

    2016-09-01

    The vast array of potential environmental toxicant combinations necessitates the development of efficient strategies for predicting toxic effects of mixtures. Current practices emphasize the use of concentration addition to predict joint effects of endocrine disrupting chemicals in coexposures. Generalized concentration addition (GCA) is one such method for predicting joint effects of coexposures to chemicals and has the advantage of allowing for mixture components to have differences in efficacy (i.e., dose-response curve maxima). Peroxisome proliferator-activated receptor gamma (PPARγ) is a nuclear receptor that plays a central role in regulating lipid homeostasis, insulin sensitivity, and bone quality and is the target of an increasing number of environmental toxicants. Here, we tested the applicability of GCA in predicting mixture effects of therapeutic (rosiglitazone and a nonthiazolidinedione partial agonist) and environmental PPARγ ligands (phthalate compounds identified using EPA's ToxCast database). Transcriptional activation of human PPARγ1 by individual compounds and mixtures was assessed using a peroxisome proliferator response element-driven luciferase reporter. Using individual dose-response parameters and GCA, we generated predictions of PPARγ activation by the mixtures, and we compared these predictions with the empirical data. At high concentrations, GCA provided a better estimation of the experimental response compared with 3 alternative models: toxic equivalency factor, effect summation and independent action. These alternatives provided reasonable fits to the data at low concentrations in this system. These experiments support the implementation of GCA in mixtures analysis with endocrine disrupting compounds and establish PPARγ as an important target for further studies of chemical mixtures. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved.
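
    For agonists with Hill slope 1, GCA has a closed form: solving sum_i c_i / f_i^{-1}(E) = 1 with f_i(c) = alpha_i*c/(EC50_i + c) gives the expression below, which accommodates partial agonists (differing maxima alpha_i). A sketch in Python with illustrative, not experimental, parameters:

```python
import numpy as np

def gca_effect(conc, alpha, ec50):
    """GCA-predicted mixture effect for Hill-slope-1 dose-response curves."""
    conc, alpha, ec50 = map(np.asarray, (conc, alpha, ec50))
    r = conc / ec50                       # concentration scaled by potency
    return np.sum(alpha * r) / (1.0 + np.sum(r))

# Full agonist (alpha=1.0) mixed with a weaker partial agonist (alpha=0.4)
print(gca_effect(conc=[1e-7, 1e-5], alpha=[1.0, 0.4], ec50=[3e-8, 2e-6]))
```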

  15. The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun

    2017-01-01

    Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients, and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchal GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.

  16. Spatially explicit models, generalized reproduction numbers and the prediction of patterns of waterborne disease

    Science.gov (United States)

    Rinaldo, A.; Gatto, M.; Mari, L.; Casagrandi, R.; Righetto, L.; Bertuzzo, E.; Rodriguez-Iturbe, I.

    2012-12-01

    Metacommunity and individual-based theoretical models are studied in the context of the spreading of infections of water-borne diseases along the ecological corridors defined by river basins and networks of human mobility. The overarching claim is that mathematical models can indeed provide predictive insight into the course of an ongoing epidemic, potentially aiding real-time emergency management in allocating health care resources and by anticipating the impact of alternative interventions. To support the claim, we examine the ex-post reliability of published predictions of the 2010-2011 Haiti cholera outbreak from four independent modeling studies that appeared almost simultaneously during the unfolding epidemic. For each modeled epidemic trajectory, it is assessed how well predictions reproduced the observed spatial and temporal features of the outbreak to date. The impact of different approaches to modeling the spatial spread of V. cholerae, the mechanics of cholera transmission, and the dynamics of susceptible and infected individuals within different local human communities is considered. A generalized model for Haitian epidemic cholera and the related uncertainty is thus constructed and applied to the year-long dataset of reported cases now available. Specific emphasis is dedicated to models of human mobility, a fundamental infection mechanism. Lessons learned and open issues are discussed and placed in perspective, supporting the conclusion that, despite differences in methods that can be tested through model-guided field validation, mathematical modeling of large-scale outbreaks emerges as an essential component of future cholera epidemic control. Although explicit spatial modeling is made routinely possible by widespread data mapping of hydrology, transportation infrastructure, population distribution, and sanitation, the precise condition under which a waterborne disease epidemic can start in a spatially explicit setting is
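
    At the core of such metacommunity models is a local susceptible-infected-bacteria (SIB) module of the kind introduced by Codeço; a single-community sketch in Python with illustrative parameters (the studies above couple many such nodes through river networks and human mobility):

```python
import numpy as np
from scipy.integrate import solve_ivp

H, mu = 1e5, 1e-4        # population size, birth/death rate (1/day)
beta, K = 1.0, 1e6       # exposure rate, half-saturation bacterial dose
gamma, xi, delta = 0.2, 10.0, 0.33   # recovery, shedding, bacterial decay

def sib(t, state):
    S, I, B = state
    force = beta * B / (K + B)         # dose-response force of infection
    return [mu * (H - S) - force * S,
            force * S - (gamma + mu) * I,
            xi * I - delta * B]        # shedding into, and decay in, the water

sol = solve_ivp(sib, (0, 365), [H - 10, 10, 0])
print(sol.y[1].max())                  # peak number infected
```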

  17. General fugacity-based model to predict the environmental fate of multiple chemical species.

    Science.gov (United States)

    Cahill, Thomas M; Cousins, Ian; Mackay, Donald

    2003-03-01

    A general multimedia environmental fate model is presented that is capable of simulating the fate of up to four interconverting chemical species. It is an extension of the existing equilibrium criterion (EQC) fugacity model, which is limited to single-species assessments. It is suggested that multispecies chemical assessments are warranted when a degradation product of a released chemical is either more toxic or more persistent than the parent chemical or where there is cycling between species, as occurs with association, dissociation, or ionization. The model is illustratively applied to three chemicals, namely chlorpyrifos, pentachlorophenol, and perfluorooctane sulfonate, for which multispecies assessments are advisable. The model results compare favorably with field data for chlorpyrifos and pentachlorophenol, while the perfluorooctane sulfonate simulation is more speculative due to uncertainty in input parameters and the paucity of field data to validate the predictions. The model thus provides a tool for assessing the environmental fate and behavior of a group of chemicals that hitherto have not been addressed by evaluative models such as EQC.

  18. Development and Validation of a Risk Model for Prediction of Hazardous Alcohol Consumption in General Practice Attendees: The PredictAL Study

    Science.gov (United States)

    King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I.; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin

    2011-01-01

    Background Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. Methods A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. Results 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). Conclusions The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse. PMID:21853028
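
    The reported c-index equals the area under the ROC curve for a binary outcome, so the pipeline can be sketched with standard tools. A minimal illustration in Python with synthetic risk factors loosely mirroring those in the final model (sex, age, baseline AUDIT, panic syndrome, lifetime alcohol problem); the coefficients below are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
X = np.column_stack([rng.integers(0, 2, n),      # sex
                     rng.uniform(18, 75, n),     # age
                     rng.integers(0, 8, n),      # baseline AUDIT (below cutoff)
                     rng.integers(0, 2, n),      # panic syndrome
                     rng.integers(0, 2, n)])     # lifetime alcohol problem
logit = -3 + 0.5*X[:, 0] - 0.02*(X[:, 1] - 40) + 0.3*X[:, 2] + 0.8*X[:, 4]
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # progression to hazardous use

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("c-index:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```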

  1. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    2009-10-01

    Full Text Available Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement as to which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address what effect the choice of learning technique, feature set or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are
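
    The factorial comparison described above can be reproduced in outline with a two-way ANOVA over cross-validation scores. A sketch in Python; the AUC values are randomly generated placeholders, so the point is the analysis structure rather than the numbers:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(7)
rows = [{"technique": t, "features": f, "auc": rng.normal(0.75, 0.05)}
        for t in ["ANN", "GLM", "SVM"]           # 3 learning techniques
        for f in ["F1", "F2", "F3", "F4", "F5"]  # x 5 feature mappings
        for _ in range(10)]                      # cross-validation replicates
df = pd.DataFrame(rows)

fit = ols("auc ~ C(technique) * C(features)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))   # main effects and interaction
```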

  2. Generalized additive models used to predict species abundance in the Gulf of Mexico: an ecosystem modeling tool.

    Directory of Open Access Journals (Sweden)

    Michael Drexler

    Full Text Available Spatially explicit ecosystem models of all types require an initial allocation of biomass, often in areas where fisheries independent abundance estimates do not exist. A generalized additive modelling (GAM) approach is used to describe the abundance of 40 species groups (i.e., functional groups) across the Gulf of Mexico (GoM) using a large fisheries independent data set (SEAMAP) and climate-scale oceanographic conditions. Predictor variables included in the model are chlorophyll a, sediment type, dissolved oxygen, temperature, and depth. Despite the presence of a large number of zeros in the data, a single GAM using a negative binomial distribution was suitable to make predictions of abundance for multiple functional groups. We present an example case study using pink shrimp (Farfantepenaeus duorarum) and compare the results to known distributions. The model successfully predicts the known areas of high abundance in the GoM, including those areas where no data were input into the model fitting. Overall, the model reliably captures areas of high and low abundance for the large majority of functional groups observed in SEAMAP. The result of this method allows for the objective setting of spatial distributions for numerous functional groups across a modeling domain, even where abundance data may not exist.
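
    A minimal sketch of a negative binomial GAM in Python using statsmodels; the depth and temperature covariates and the dispersion value are invented placeholders for the SEAMAP predictors:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(0)
depth = rng.uniform(5, 200, 500)
temp = rng.uniform(12, 30, 500)
mu = np.exp(1.5 - 0.01 * depth + 0.05 * (temp - 20))
data = pd.DataFrame({"depth": depth, "temp": temp,
                     "count": rng.negative_binomial(2, 2 / (2 + mu))})  # overdispersed

# Spline smoothers for both covariates, negative binomial family
bs = BSplines(data[["depth", "temp"]], df=[6, 6], degree=[3, 3])
res = GLMGam.from_formula("count ~ 1", data=data, smoother=bs,
                          family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(res.summary())
```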

  3. Generalized Predictive and Neural Generalized Predictive Control of Aerospace Systems

    Science.gov (United States)

    Kelkar, Atul G.

    2000-01-01

    The research work presented in this thesis addresses the problem of robust control of uncertain linear and nonlinear systems using a neural network-based Generalized Predictive Control (NGPC) methodology. A brief overview of predictive control and its comparison with Linear Quadratic (LQ) control is given to emphasize advantages and drawbacks of predictive control methods. It is shown that the Generalized Predictive Control (GPC) methodology overcomes the drawbacks associated with traditional LQ control as well as conventional predictive control methods. It is shown that, in spite of the model-based nature of GPC, it has good robustness properties, being a special case of receding horizon control. The conditions for choosing tuning parameters for GPC to ensure closed-loop stability are derived. A neural network-based GPC architecture is proposed for the control of linear and nonlinear uncertain systems. A methodology to account for parametric uncertainty in the system is proposed using the on-line training capability of a multi-layer neural network. Several simulation examples and results from real-time experiments are given to demonstrate the effectiveness of the proposed methodology.
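
    A minimal receding-horizon sketch in Python for a first-order plant, showing the GPC mechanics (dynamic matrix, free response, penalized least-squares move). The plant and tuning parameters are invented, and a real GPC/NGPC would identify the model (or train a neural network) online:

```python
import numpy as np

a, b = 0.9, 0.1                  # assumed plant: y[k+1] = a*y[k] + b*u[k]
N, Nu, lam = 10, 3, 0.01         # prediction horizon, control horizon, move penalty

# Dynamic matrix G: effect of future input moves on future outputs (step response)
step = np.array([b * sum(a**j for j in range(i + 1)) for i in range(N)])
G = np.zeros((N, Nu))
for col in range(Nu):
    G[col:, col] = step[:N - col]

def gpc_move(y, u_prev, w):
    free = np.empty(N)           # free response: outputs if input stays at u_prev
    yk = y
    for i in range(N):
        yk = a * yk + b * u_prev
        free[i] = yk
    du = np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T @ (w - free))
    return u_prev + du[0]        # receding horizon: apply only the first move

y = u = 0.0
for k in range(30):              # closed loop toward a unit setpoint
    u = gpc_move(y, u, np.ones(N))
    y = a * y + b * u
print(round(y, 3))               # settles near 1.0
```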

  4. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    International Nuclear Information System (INIS)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

    2014-01-01

    Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography
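
    The "power fit relationship" in model (2) can be sketched as a log-log regression of day-d volume on initial volume. A minimal Python illustration with invented volumes (the study's 35 tumor series are not reproduced here):

```python
import numpy as np

v0 = np.array([10.0, 25.0, 40.0, 60.0])   # pretreatment volumes (cc)
vd = np.array([8.1, 18.9, 28.6, 41.0])    # volumes on treatment day d

# Fit V_d = a * V_0**b by least squares in log space
b, log_a = np.polyfit(np.log(v0), np.log(vd), 1)
a = np.exp(log_a)
print(a, b, a * 30.0**b)                  # predicted day-d volume for V_0 = 30 cc
```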

  5. Predicting the multi-domain progression of Parkinson's disease: a Bayesian multivariate generalized linear mixed-effect model.

    Science.gov (United States)

    Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei

    2017-09-25

    It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. The multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. The dynamic prediction was performed for both internal and external subjects using the samples from the posterior distributions of the parameter estimates and random effects, and the predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB) and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for non-motor scores (RMSE and AB: 2.89 and 2.20), compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, the individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate general mixed models hold promise to predict clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection into the Data Management Repository (DMR), which is publicly available (https://pdbp.ninds.nih.gov/data-management).

  6. Predictability of 2-year La Niña events in a coupled general circulation model

    Science.gov (United States)

    DiNezio, Pedro N.; Deser, Clara; Okumura, Yuko; Karspeck, Alicia

    2017-12-01

    The predictability of the duration of La Niña is assessed using the Community Earth System Model Version 1 (CESM1), a coupled climate model capable of simulating key features of the El Niño/Southern Oscillation (ENSO) phenomenon, including the multi-year duration of La Niña. Statistical analysis of an 1800-year-long control simulation indicates that a strong thermocline discharge or a strong El Niño can lead to La Niña conditions that last 2 years (henceforth termed 2-year LN). This relationship suggests that 2-year LN may be predictable 18 to 24 months in advance. Perfect model forecasts performed with CESM1 are used to further explore the link between 2-year LN and the "Discharge" and "Peak El Niño" predictors. Ensemble forecasts are initialized in January and July, coinciding with ocean states characterized by peak El Niño amplitudes and peak thermocline discharge respectively. Three cases with different magnitudes of these predictors are considered, resulting in a total of six ensembles. Each "Peak El Niño" and "Discharge" ensemble forecast consists of 30 or 20 members respectively, generated by adding an infinitesimally small perturbation to the atmospheric initial conditions unique to each member. The forecasts show that the predictability of 2-year LN, measured by the potential prediction utility (PPU) of the Niño-3.4 SST index during the second year, is related to the magnitude of the initial conditions. Forecasts initialized with strong thermocline discharge or strong peak El Niño amplitude show higher PPU than those with initial conditions of weaker magnitude. Forecasts initialized from states characterized by weaker predictors are less predictable, mainly because the ensemble-mean signal is smaller, and therefore PPU is reduced due to the influence of forecast spread. The error growth of the forecasts, measured by the spread of the Niño-3.4 SST index, is independent of the initial conditions and appears to be driven by wind variability over the

  7. Prediction of periodontal disease: modelling and validation in different general German populations.

    Science.gov (United States)

    Zhan, Yiqiang; Holtfreter, Birte; Meisel, Peter; Hoffmann, Thomas; Micheelis, Wolfgang; Dietrich, Thomas; Kocher, Thomas

    2014-03-01

    To develop models for periodontitis using self-reported questions and to validate them externally. The Study of Health in Pomerania (SHIP-0) was used for model development. Periodontitis was defined according to the definitions of the Centers for Disease Control and Prevention-American Academy of Periodontology, the 5th European Workshop in Periodontology, and Dietrich et al. (≥2 teeth with inter-proximal clinical attachment loss of ≥4 mm and ≥6 mm for moderate and severe periodontitis, respectively). These models were validated in SHIP-Trend and the Fourth German Oral Health Study (DMS IV). Final models included age, gender, education, smoking, bleeding on brushing and self-reported presence of mobile teeth. Concordance statistics (C-statistics) of the final models from SHIP-0 were 0.84, 0.82 and 0.85 for the three definitions respectively. Validation in SHIP-Trend revealed C-statistics of 0.82, 0.81 and 0.82 respectively. As bleeding on brushing and presence of mobile teeth were unavailable in DMS IV, reduced models were developed. C-statistics of reduced models were 0.82, 0.81 and 0.83 respectively. Validation in DMS IV revealed C-statistics of 0.72, 0.78 and 0.72 for the three definitions respectively. All p values of the goodness-of-fit tests were >0.05. The models yielded a moderate usefulness for prediction of periodontitis. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

    2018-03-15

    Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces self-adaptive shrinkage amount on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method on several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.

  9. A mechanistic model for predicting flow-assisted and general corrosion of carbon steel in reactor primary coolants

    International Nuclear Information System (INIS)

    Lister, D.

    2002-01-01

    Flow-assisted corrosion (FAC) of carbon steel in high-temperature lithiated water can be described with a model that invokes dissolution of the protective oxide film and erosion of oxide particles that are loosened as a result. General corrosion under coolant conditions where oxide is not dissolved is described as well. In the model, the electrochemistry of magnetite dissolution and precipitation and the effect of particle size on solubility move the dependence on film thickness of the diffusion processes (and therefore the corrosion rate) away from reciprocal. Particle erosion under dissolving conditions is treated stochastically and depends upon the fluid shear stress at the surface. The corrosion rate dependence on coolant flow under FAC conditions then becomes somewhat less than that arising purely from fluid shear (proportional to the velocity squared). Under non-dissolving conditions, particle erosion occurs infrequently and general corrosion is almost unaffected by flow. For application to a CANDU primary circuit and its feeders, the model was benchmarked against the outlet feeder S08 removed from the Point Lepreau reactor, which furnished one value of film thickness and one of corrosion rate for a computed average coolant velocity. Several constants and parameters in the model had to be assumed or were optimised, since values for them were not available. These uncertainties are no doubt responsible for the rather high values of potential that evolved as steps in the computation. The model predicts film thickness development and corrosion rate for the whole range of coolant velocities in outlet feeders very well. In particular, the detailed modelling of FAC in the complex geometry of one outlet feeder (F11) is in good agreement with measurements. When the particle erosion computations are inserted in the balance equations for the circuit, realistic values of crud level are obtained. The model also predicts low corrosion rates and thick oxide films for inlet

  10. Predictability and interpretability of hybrid link-level crash frequency models for urban arterials compared to cluster-based and general negative binomial regression models.

    Science.gov (United States)

    Najaf, Pooya; Duddu, Venkata R; Pulugurtha, Srinivas S

    2018-03-01

    Machine learning (ML) techniques have higher prediction accuracy compared to conventional statistical methods for crash frequency modelling. However, their black-box nature limits the interpretability. The objective of this research is to combine both ML and statistical methods to develop hybrid link-level crash frequency models with high predictability and interpretability. For this purpose, the M5' model trees method (M5') is introduced and applied to classify the crash data and then calibrate a model for each homogeneous class. The data for 1134 and 345 randomly selected links on urban arterials in the city of Charlotte, North Carolina were used to develop and validate models, respectively. The outputs from the hybrid approach are compared with the outputs from cluster-based negative binomial regression (NBR) and general NBR models. Findings indicate that M5' has high predictability and is very reliable for interpreting the role of different attributes on crash frequency compared to other developed models.
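
    The hybrid idea (tree-based partitioning, then a count regression per homogeneous class) can be outlined with standard libraries; M5' itself is not in scikit-learn, so a CART tree stands in below, and the link attributes are hypothetical:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
# Hypothetical link attributes: lanes, length (mi), AADT (thousands of vehicles)
X = np.column_stack([rng.integers(1, 6, 800),
                     rng.uniform(0.1, 2.0, 800),
                     rng.uniform(5, 60, 800)])
y = rng.poisson(np.exp(-1 + 0.5 * X[:, 1] + 0.04 * X[:, 2]))  # crash counts

tree = DecisionTreeRegressor(max_leaf_nodes=4, min_samples_leaf=100).fit(X, y)
leaf = tree.apply(X)             # homogeneous link classes from the tree
models = {lf: sm.NegativeBinomial(y[leaf == lf],
                                  sm.add_constant(X[leaf == lf])).fit(disp=0)
          for lf in np.unique(leaf)}   # one count regression per class
```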

  11. Comparison of mid-Pliocene climate predictions produced by the HadAM3 and GCMAM3 General Circulation Models

    Science.gov (United States)

    Haywood, A.M.; Chandler, M.A.; Valdes, P.J.; Salzmann, U.; Lunt, D.J.; Dowsett, H.J.

    2009-01-01

    The mid-Pliocene warm period (ca. 3 to 3.3 million years ago) has become an important interval of time for palaeoclimate modelling exercises, with a large number of studies published during the last decade. However, there has been no attempt to assess the degree of model dependency of the results obtained. Here we present an initial comparison of mid-Pliocene climatologies produced by the Goddard Institute for Space Studies and Hadley Centre for Climate Prediction and Research atmosphere-only General Circulation Models (GCMAM3 and HadAM3). Whilst both models are consistent in the simulation of broad-scale differences in mid-Pliocene surface air temperature and total precipitation rates, significant variation is noted on regional and local scales. There are also significant differences in the model predictions of total cloud cover. A terrestrial data/model comparison, facilitated by the BIOME 4 model and a new data set of Piacenzian Stage land cover [Salzmann, U., Haywood, A.M., Lunt, D.J., Valdes, P.J., Hill, D.J., (2008). A new global biome reconstruction and data model comparison for the Middle Pliocene. Global Ecology and Biogeography 17, 432-447, doi:10.1111/j.1466-8238.2007.00381.x] and combined with the use of Kappa statistics, indicates that HadAM3-based biome predictions provide a closer fit to proxy data in the mid to high-latitudes. However, GCMAM3-based biomes in the tropics provide the closest fit to proxy data. © 2008 Elsevier B.V.
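
    The Kappa comparison amounts to measuring categorical agreement between each model-derived biome map and the proxy reconstruction. A toy illustration in Python (the biome labels are invented, not the Salzmann et al. data):

```python
from sklearn.metrics import cohen_kappa_score

# Biome class per grid cell: proxy reconstruction vs. two model-driven maps
proxy  = ["tundra", "taiga", "taiga", "grass", "forest", "forest"]
hadam3 = ["tundra", "taiga", "grass", "grass", "forest", "forest"]
gcmam3 = ["taiga",  "taiga", "grass", "grass", "forest", "tundra"]

print("HadAM3 vs proxy:", cohen_kappa_score(proxy, hadam3))
print("GCMAM3 vs proxy:", cohen_kappa_score(proxy, gcmam3))
```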

  12. Improving groundwater predictions utilizing seasonal precipitation forecasts from general circulation models forced with sea surface temperature forecasts

    Science.gov (United States)

    Almanaseer, Naser; Sankarasubramanian, A.; Bales, Jerad

    2014-01-01

    Recent studies have found a significant association between climatic variability and basin hydroclimatology, particularly groundwater levels, over the southeast United States. The research reported in this paper evaluates the potential for developing 6-month-ahead groundwater-level forecasts based on the precipitation forecasts from the ECHAM 4.5 General Circulation Model Forced with Sea Surface Temperature forecasts. Ten groundwater wells and nine streamgauges from the USGS Groundwater Climate Response Network and Hydro-Climatic Data Network were selected to represent groundwater and surface water flows, respectively, having minimal anthropogenic influences within the Flint River Basin in Georgia, United States. The writers employ two low-dimensional models [principal component regression (PCR) and canonical correlation analysis (CCA)] for predicting groundwater and streamflow at both seasonal and monthly timescales. Three modeling schemes are considered at the beginning of January to predict winter (January, February, and March) and spring (April, May, and June) streamflow and groundwater for the selected sites within the Flint River Basin. The first scheme (model 1) is a null model and is developed using PCR for every streamflow and groundwater site using previous 3-month observations (October, November, and December) available at that particular site as predictors. Modeling schemes 2 and 3 are developed using PCR and CCA, respectively, to evaluate the role of precipitation forecasts in improving monthly and seasonal groundwater predictions. Modeling scheme 3, which employs a CCA approach, is developed for each site by considering observed groundwater levels from nearby sites as predictands. The performance of these three schemes is evaluated using two metrics (correlation coefficient and relative RMS error) by developing groundwater-level forecasts based on leave-five-out cross-validation. Results from the research reported in this paper show that using
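
    A minimal sketch of a PCR scheme with leave-five-out cross-validation in Python; the predictor matrix is synthetic, standing in for the forecast and antecedent-observation predictors described above:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 9))                   # e.g. forecast/antecedent predictors
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=60)   # groundwater level

pcr = make_pipeline(PCA(n_components=3), LinearRegression())
errs = []
for train, test in KFold(n_splits=12).split(X):   # 60 samples / 12 folds = 5 left out
    pcr.fit(X[train], y[train])
    errs.append(y[test] - pcr.predict(X[test]))
errs = np.concatenate(errs)
print("cross-validated RMSE:", np.sqrt(np.mean(errs**2)))
```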

  13. A Generalized Process Model of Human Action Selection and Error and its Application to Error Prediction

    Science.gov (United States)

    2014-07-01

    In recent times, issues of interruptions and multitasking have become mainstream concerns. For example, Time magazine (Wallis, 2006) and the New York Times...(Thompson, 2005) both reported stories about interruptions and multitasking and how they affect performance. The information technology research firm...talking to a friend, it is easy to collect data. Second, providing ordered information to another person is a general class of problems that include

  14. Evaluation of the uniformity of fit of general outcome prediction models

    NARCIS (Netherlands)

    Moreno, R; Apolone, G; Miranda, DR

    Objective: To compare the performance of the New Simplified Acute Physiology Score (SAPS II) and the New Admission Mortality Probability Model (MPM II0) within relevant subgroups using formal statistical assessment (uniformity of fit). Design: Analysis of the database of a multi-centre,

  15. How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models.

    Science.gov (United States)

    Nuthmann, Antje; Einhäuser, Wolfgang; Schütz, Immo

    2017-01-01

    Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead ("central bias"). This problem is further exacerbated in the context of model comparisons, because some-but not all-models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox "GridFix" available.
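
    A minimal fixed-effects sketch of this analysis idea, on synthetic data: each grid cell of a scene is scored as fixated or not and modeled with a central-bias predictor plus a saliency predictor. The by-subject and by-item random effects of the full GLMM are omitted here for brevity, so this is a plain logistic GLM, not the authors' complete model.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 4000                              # grid cells pooled over scenes and subjects (synthetic)
        dist_center = rng.uniform(0, 1, n)    # normalized distance of the cell from image center
        saliency = rng.uniform(0, 1, n)       # mean model saliency within the cell
        eta = -1.0 - 2.5 * dist_center + 1.5 * saliency
        fixated = rng.binomial(1, 1 / (1 + np.exp(-eta)))

        # Logistic GLM: the saliency effect above and beyond the central bias.
        X = sm.add_constant(np.column_stack([dist_center, saliency]))
        fit = sm.GLM(fixated, X, family=sm.families.Binomial()).fit()
        print(fit.params)                     # [intercept, central bias, saliency]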

  16. How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models

    Directory of Open Access Journals (Sweden)

    Antje Nuthmann

    2017-10-01

    Full Text Available Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead (“central bias”). This problem is further exacerbated in the context of model comparisons, because some—but not all—models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox “GridFix” available.

  17. A general method for assessing the effects of uncertainty in individual-tree volume model predictions on large-area volume estimates with a subtropical forest illustration

    Science.gov (United States)

    Ronald E. McRoberts; Paolo Moser; Laio Zimermann Oliveira; Alexander C. Vibrans

    2015-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding the model predictions of volumes for individual trees at the plot level, calculating the mean over plots, and expressing the result on a per unit area basis. The uncertainty in the model predictions is generally ignored, with the result that the precision of the large-area...

  18. Applying Regression Models with Mixed Frequency Data in Modeling and Prediction of Iran's Wheat Import Value (Generalized OLS-based ARDL Approach)

    Directory of Open Access Journals (Sweden)

    Mitra Jalerajabi

    2014-10-01

    Full Text Available Due to the importance of import management, this study applies a generalized ARDL approach to estimate a MIDAS regression for wheat import value and to compare the accuracy of its forecasts with those computed by a regression with frequency-adjusted data. Mixed-frequency sampling models aim to extract information from high-frequency indicators so that variables observed at lower frequencies can be modeled and forecasted. Due to a more precise identification of the relationships among the variables, more accurate prediction is expected. Based on the results of both the frequency-adjusted regression models and MIDAS estimated for the years 1978-2003 as a training period, wheat import value was positively related to internal products and the exchange rate, while the relative price variable had an adverse relation with Iran's wheat import value. Based on conventional statistics such as RMSE, MAD and MAPE and on statistical significance, MIDAS models using data sets of annual wheat import value, internal products, relative price and seasonal exchange rate significantly improve prediction of annual wheat import value for the years 2004-2008 as a testing period. Hence, it is recommended that applying prediction approaches with mixed data improves modeling and prediction of agricultural import value, especially for strategic import products.
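
    A hedged sketch of the MIDAS idea on synthetic data: quarterly (higher-frequency) values are aggregated into an annual regressor with exponential Almon weights, whose shape parameter is profiled by grid search before ordinary least squares. All series and values below are illustrative stand-ins.

        import numpy as np

        rng = np.random.default_rng(2)
        years, q = 26, 4
        fx = rng.normal(size=(years, q))      # synthetic quarterly exchange-rate series
        y = 0.7 * fx @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(scale=0.2, size=years)

        def almon_weights(theta, m=4):
            j = np.arange(1, m + 1)           # exponential Almon lag weights
            w = np.exp(theta * j)
            return w / w.sum()

        best = None
        for theta in np.linspace(-2, 2, 81):  # profile the weight parameter
            z = fx @ almon_weights(theta)     # weighted annual aggregate of quarterly data
            A = np.column_stack([np.ones(years), z])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            sse = ((A @ beta - y) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, theta, beta)
        print("theta = %.2f, slope = %.3f" % (best[1], best[2][1]))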

  19. Sparse generalized linear model with L0 approximation for feature selection and prediction with big omics data.

    Science.gov (United States)

    Liu, Zhenqiu; Sun, Fengzhu; McGovern, Dermot P

    2017-01-01

    Feature selection and prediction are the most important tasks for big data mining. The common strategies for feature selection in big data mining are L1, SCAD and MC+. However, none of the existing algorithms optimizes L0, which penalizes the number of nonzero features directly. In this paper, we develop a novel sparse generalized linear model (GLM) with L0 approximation for feature selection and prediction with big omics data. The proposed approach approximates the L0 optimization directly. Even though the original L0 problem is non-convex, it is approximated by sequential convex optimizations with the proposed algorithm. The proposed method is easy to implement with only several lines of code. Novel adaptive ridge algorithms (L0ADRIDGE) for L0-penalized GLM with ultra-high-dimensional big data are developed. The proposed approach outperforms other cutting-edge regularization methods, including SCAD and MC+, in simulations. When it is applied to integrated analysis of mRNA, microRNA, and methylation data from TCGA ovarian cancer, multilevel gene signatures associated with suboptimal debulking are identified simultaneously. The biological significance and potential clinical importance of those genes are further explored. The developed software, L0ADRIDGE, in MATLAB is available at https://github.com/liuzqx/L0adridge.
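
    The adaptive-ridge idea can be sketched in a few lines for the linear-Gaussian case (the paper's method covers full GLMs); the data and tuning values below are illustrative, not taken from the paper:

        import numpy as np

        rng = np.random.default_rng(3)
        n, p = 100, 30
        X = rng.normal(size=(n, p))
        beta_true = np.zeros(p)
        beta_true[:3] = [2.0, -1.5, 1.0]      # only three truly active features
        y = X @ beta_true + rng.normal(scale=0.5, size=n)

        lam, eps = 1.0, 1e-6
        w = np.ones(p)
        for _ in range(50):                   # sequential convex (reweighted ridge) steps
            beta = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
            w = 1.0 / (beta ** 2 + eps)       # reweighting drives small coefficients toward zero

        print(np.flatnonzero(np.abs(beta) > 1e-2))   # expected selection: [0 1 2]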

  20. The general dynamic model

    DEFF Research Database (Denmark)

    Borregaard, Michael K.; Matthews, Thomas J.; Whittaker, Robert James

    2016-01-01

    Aim: Island biogeography focuses on understanding the processes that underlie a set of well-described patterns on islands, but it lacks a unified theoretical framework for integrating these processes. The recently proposed general dynamic model (GDM) of oceanic island biogeography offers a step...... towards this goal. Here, we present an analysis of causality within the GDM and investigate its potential for the further development of island biogeographical theory. Further, we extend the GDM to include subduction-based island arcs and continental fragment islands. Location: A conceptual analysis...... dynamics of distinct island types are predicted to lead to markedly different evolutionary dynamics. This sets the stage for a more predictive theory incorporating the processes governing temporal dynamics of species diversity on islands....

  1. A new general dynamic model predicting radionuclide concentrations and fluxes in coastal areas from readily accessible driving variables

    International Nuclear Information System (INIS)

    Håkanson, Lars

    2004-01-01

    This paper presents a general, process-based dynamic model for coastal areas for radionuclides (metals, organics and nutrients) from both single pulse fallout and continuous deposition. The model gives radionuclide concentrations in water (total, dissolved and particulate phases) and in sediments and fish for entire defined coastal areas. The model gives monthly variations. It accounts for inflow from tributaries, direct fallout to the coastal area, internal fluxes (sedimentation, resuspension, diffusion, burial, mixing and biouptake and retention in fish) and fluxes to and from the sea outside the defined coastal area and/or adjacent coastal areas. The fluxes of water and substances between the sea and the coastal area are differentiated into three categories of coast types: (i) areas where the water exchange is regulated by tidal effects; (ii) open coastal areas where the water exchange is regulated by coastal currents; and (iii) semi-enclosed archipelago coasts. The coastal model gives the fluxes to and from the following four abiotic compartments: surface water, deep water, ET areas (i.e., areas where fine sediment erosion and transport processes dominate the bottom dynamic conditions and resuspension appears) and A-areas (i.e., areas of continuous fine sediment accumulation). Criteria to define the boundaries for the given coastal area towards the sea, and to define whether a coastal area is open or closed, are given in operational terms. The model is simple to apply since all driving variables may be readily accessed from maps and standard monitoring programs. The driving variables are: latitude, catchment area, mean annual precipitation, fallout and month of fallout, and parameters expressing coastal size and form as determined from, e.g., digitized bathymetric maps using a GIS program. Selected results: the predictions of radionuclide concentrations in water and fish largely depend on two factors, the concentration in the sea outside the given

  2. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro

    DEFF Research Database (Denmark)

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael

    2013-01-01

    and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast...... was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects...

  3. Tapping generalized essentialism to predict outgroup prejudices.

    Science.gov (United States)

    Hodson, Gordon; Skorska, Malvina N

    2015-06-01

    Psychological essentialism, the perception that groups possess inherent properties binding them and differentiating them from others, is theoretically relevant to predicting prejudice. Recent developments isolate two key dimensions: essentialistic entitativity (EE; groups as unitary, whole, entity-like) and essentialistic naturalness (EN; groups as fixed and immutable). We introduce a novel question: does tapping the covariance between EE and EN, rather than pitting them against each other, boost prejudice prediction? In Study 1 (re-analysis of Roets & Van Hiel, 2011b, Samples 1-3, in Belgium) and Study 2 (new Canadian data) their common/shared variance, modelled as generalized essentialism, doubles the predictive power relative to regression-based approaches with regard to racism (but not anti-gay or -schizophrenic prejudices). Theoretical implications are discussed. © 2014 The British Psychological Society.

  4. General distress, hopelessness-suicidal ideation and worrying in adolescence: concurrent and predictive validity of a symptom-level bifactor model for clinical diagnoses.

    Science.gov (United States)

    Brodbeck, J; Goodyer, I M; Abbott, R A; Dunn, V J; St Clair, M C; Owens, M; Jones, P B; Croudace, T J

    2014-01-01

    Clinical disorders often share common symptoms and aetiological factors. Bifactor models acknowledge the role of an underlying general distress component and more specific sub-domains of psychopathology which specify the unique components of disorders over and above a general factor. A bifactor model jointly calibrated data on subjective distress from The Mood and Feelings Questionnaire and the Revised Children's Manifest Anxiety Scale. The bifactor model encompassed a general distress factor, and specific factors for (a) hopelessness-suicidal ideation, (b) generalised worrying and (c) restlessness-fatigue at age 14, which were related to lifetime clinical diagnoses established by interviews at age 14 (concurrent validity) and to current diagnoses at 17 years (predictive validity) in a British population sample of 1159 adolescents. Diagnostic interviews confirmed the validity of a symptom-level bifactor model. The underlying general distress factor was a powerful but non-specific predictor of affective, anxiety and behaviour disorders. The specific factors for hopelessness-suicidal ideation and generalised worrying contributed to predictive specificity. Hopelessness-suicidal ideation predicted concurrent and future affective disorder; generalised worrying predicted concurrent and future anxiety, specifically concurrent generalised anxiety disorders. Generalised worrying was negatively associated with behaviour disorders. The analyses of gender differences and the prediction of specific disorders were limited due to a low frequency of disorders other than depression. The bifactor model was able to differentiate concurrent and predict future clinical diagnoses. This can inform the development of targeted as well as non-specific interventions for prevention and treatment of different disorders. © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  5. River-flow predictions for the South African mid-summer using a coupled general circulation model

    CSIR Research Space (South Africa)

    Olivier, C

    2013-09-01

    Full Text Available There are limited sources of streamflow data...

  6. The generalized circular model

    NARCIS (Netherlands)

    Webers, H.M.

    1995-01-01

    In this paper we present a generalization of the circular model. In this model there are two concentric circular markets, which enables us to study two types of markets simultaneously. There are switching costs involved for moving from one circle to the other circle, which can also be thought of as

  7. The 2009–2010 Arctic stratospheric winter – general evolution, mountain waves and predictability of an operational weather forecast model

    Directory of Open Access Journals (Sweden)

    A. Dörnbrack

    2012-04-01

    Full Text Available The relatively warm 2009–2010 Arctic winter was an exceptional one as the North Atlantic Oscillation index attained persistent extreme negative values. Here, selected aspects of the Arctic stratosphere during this winter inspired by the analysis of the international field experiment RECONCILE are presented. First of all, and as a kind of reference, the evolution of the polar vortex in its different phases is documented. Special emphasis is put on explaining the formation of the exceptionally cold vortex in mid winter after a sequence of stratospheric disturbances which were caused by upward propagating planetary waves. A major sudden stratospheric warming (SSW occurring near the end of January 2010 concluded the anomalous cold vortex period. Wave ice polar stratospheric clouds were frequently observed by spaceborne remote-sensing instruments over the Arctic during the cold period in January 2010. Here, one such case observed over Greenland is analysed in more detail and an attempt is made to correlate flow information of an operational numerical weather prediction model to the magnitude of the mountain-wave induced temperature fluctuations. Finally, it is shown that the forecasts of the ECMWF ensemble prediction system for the onset of the major SSW were very skilful and the ensemble spread was very small. However, the ensemble spread increased dramatically after the major SSW, displaying the strong non-linearity and internal variability involved in the SSW event.

  8. Spatial prediction of landslide susceptibility using an adaptive neuro-fuzzy inference system combined with frequency ratio, generalized additive model, and support vector machine techniques

    Science.gov (United States)

    Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo

    2017-11-01

    The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques, namely, an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and correlation analysis between the conditioning factors and landslides were applied. In the third step, we used three advanced methods, namely, ANFIS-FR, GAM, and SVM, for landslide susceptibility modeling. Subsequently, the results of their accuracy were validated using a receiver operating characteristic curve. The results showed that all three models have good prediction capabilities, while the SVM model has the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced in the study area can be applied for management of hazards and risks in landslide-prone Hanyuan County.
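
    The validation step can be illustrated with a toy stand-in: an SVM fitted to synthetic "conditioning factors" and scored by ROC AUC, mirroring the prediction-rate comparison above. None of this reuses the study's data; every value is simulated.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        n = 500
        X = rng.normal(size=(n, 12))          # 12 synthetic conditioning factors
        p = 1 / (1 + np.exp(-(1.2 * X[:, 0] - 0.8 * X[:, 1])))
        y = rng.binomial(1, p)                # 1 = landslide cell, 0 = stable cell

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        svm = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1]))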

  9. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant...geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This...system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is

  10. General predictive control using the delta operator

    DEFF Research Database (Denmark)

    Jensen, Morten Rostgaard; Poulsen, Niels Kjølstad; Ravn, Ole

    1993-01-01

    This paper deals with two discrete-time operators, the conventional forward shift-operator and the δ-operator. Both operators are treated in view of construction of suitable solutions to the Diophantine equation for the purpose of prediction. A general step-recursive scheme is presented. Finally...... a general predictive control (GPC) is formulated and applied adaptively to a continuous-time plant...

  11. Development of a general model to predict the rate of radionuclide release (source term) from a low-level waste shallow land burial facility

    International Nuclear Information System (INIS)

    Sullivan, T.M.; Kempf, C.R.; Suen, C.J.; Mughabghab, S.M.

    1988-01-01

    The Code of Federal Regulations (10 CFR 61) requires that any near surface disposal site be capable of being characterized, analyzed, and modeled. The objective of this program is to assist NRC in developing the ability to model a disposal site that conforms to these regulations. In particular, a general computer model capable of predicting the quantity and rate of radionuclide release from a shallow land burial trench, i.e., the source term, is being developed. The framework for this general model has been developed and consists of four basic compartments that represent the major processes that influence release. These compartments are: water flow, container degradation, release from the waste packages, and radionuclide transport. Models for water flow and radionuclide transport rely on the use of the computer codes FEMWATER and FEMWASTE. These codes are generally regarded as being state-of-the-art and required little modification for their application to this project. Models for container degradation and release from waste packages have been specifically developed for this project. This paper provides a brief description of the models being used in the source term project and examples of their use over a range of potential conditions. 13 refs

  12. Differential Prediction Generalization in College Admissions Testing

    Science.gov (United States)

    Aguinis, Herman; Culpepper, Steven A.; Pierce, Charles A.

    2016-01-01

    We introduce the concept of "differential prediction generalization" in the context of college admissions testing. Specifically, we assess the extent to which predicted first-year college grade point average (GPA) based on high-school grade point average (HSGPA) and SAT scores depends on a student's ethnicity and gender and whether this…

  13. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  14. A general unified non-equilibrium model for predicting saturated and subcooled critical two-phase flow rates through short and long tubes

    Energy Technology Data Exchange (ETDEWEB)

    Fraser, D.W.H. [Univ. of British Columbia (Canada); Abdelmessih, A.H. [Univ. of Toronto, Ontario (Canada)

    1995-09-01

    A general unified model is developed to predict one-component critical two-phase pipe flow. Modelling of the two-phase flow is accomplished by describing the evolution of the flow between the location of flashing inception and the exit (critical) plane. The model approximates the nonequilibrium phase change process via thermodynamic equilibrium paths. Included are the relative effects of varying the location of flashing inception, pipe geometry, fluid properties and length to diameter ratio. The model predicts that a range of critical mass fluxes exists, bounded by a maximum and minimum value for a given thermodynamic state. This range is more pronounced at lower subcooled stagnation states and can be attributed to the variation in the location of flashing inception. The model is based on the results of an experimental study of the critical two-phase flow of saturated and subcooled water through long tubes. In that study, the location of flashing inception was accurately controlled and adjusted through the use of a new device. The data obtained revealed that for fixed stagnation conditions, the maximum critical mass flux occurred with flashing inception located near the pipe exit, while minimum critical mass fluxes occurred with the flashing front located further upstream. Available data since 1970 for both short and long tubes over a wide range of conditions are compared with the model predictions. This includes test section L/D ratios from 25 to 300 and covers a temperature and pressure range of 110 to 280°C and 0.16 to 6.9 MPa, respectively. The predicted maximum and minimum critical mass fluxes show an excellent agreement with the range observed in the experimental data.

  15. Ocean General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jin-Ho; Ma, Po-Lun

    2012-09-30

    1. Definition of Subject The purpose of this text is to provide an introduction to aspects of oceanic general circulation models (OGCMs), an important component of a Climate System or Earth System Model (ESM). The role of the ocean in ESMs is described in Chapter XX (EDITOR: PLEASE FIND THE COUPLED CLIMATE or EARTH SYSTEM MODELING CHAPTERS). The emerging need for understanding the Earth’s climate system and especially projecting its future evolution has encouraged scientists to explore the dynamical, physical, and biogeochemical processes in the ocean. Understanding the role of these processes in the climate system is an interesting and challenging scientific subject. For example, a research question of how much extra heat or CO2 generated by anthropogenic activities can be stored in the deep ocean is not only scientifically interesting but also important in projecting the future climate of the earth. Thus, OGCMs have been developed and applied to investigate the various oceanic processes and their role in the climate system.

  16. General predictive model of friction behavior regimes for metal contacts based on the formation stability and evolution of nanocrystalline surface films.

    Energy Technology Data Exchange (ETDEWEB)

    Argibay, Nicolas [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Cheng, Shengfeng [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Sawyer, W. G. [Univ. of Florida, Gainesville, FL (United States); Michael, Joseph R. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Chandross, Michael E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-09-01

    The prediction of macro-scale friction and wear behavior based on first principles and material properties has remained an elusive but highly desirable target for tribologists and material scientists alike. Stochastic processes (e.g. wear), statistically described parameters (e.g. surface topography) and their evolution tend to defeat attempts to establish practical general correlations between fundamental nanoscale processes and macro-scale behaviors. We present a model based on microstructural stability and evolution for the prediction of metal friction regimes, founded on recently established microstructural deformation mechanisms of nanocrystalline metals, that relies exclusively on material properties and contact stress models. We show through complementary experimental and simulation results that this model overcomes longstanding practical challenges and successfully makes accurate and consistent predictions of friction transitions for a wide range of contact conditions. This framework not only challenges the assumptions of conventional causal relationships between hardness and friction, and between friction and wear, but also suggests a pathway for the design of higher performance metal alloys.

  17. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  18. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
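
    One way to realize such a stochastic simulator, sketched under assumed parameters (the Weibull shape/scale and hourly autocorrelation below are placeholders, not the Goldstone estimates): an AR(1) process supplies the temporal correlation, and a quantile transform imposes the target speed distribution.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        phi, hours = 0.9, 24 * 30                      # assumed hourly autocorrelation; one month
        z = np.empty(hours)
        z[0] = rng.normal()
        for t in range(1, hours):                      # correlated Gaussian driver (AR(1))
            z[t] = phi * z[t - 1] + np.sqrt(1 - phi ** 2) * rng.normal()

        u = stats.norm.cdf(z)                          # map to uniform marginals
        speeds = stats.weibull_min.ppf(u, c=2.0, scale=7.0)   # assumed Weibull wind climate, m/s
        print(speeds[:5].round(2))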

  19. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.

  20. Link prediction via generalized coupled tensor factorisation

    DEFF Research Database (Denmark)

    Ermiş, Beyza; Evrim, Acar Ataman; Taylan Cemgil, A.

    2012-01-01

    and higher-order tensors. We propose to use an approach based on probabilistic interpretation of tensor factorisation models, i.e., Generalised Coupled Tensor Factorisation, which can simultaneously fit a large class of tensor models to higher-order tensors/matrices with common latent factors using...... different loss functions. Numerical experiments demonstrate that joint analysis of data from multiple sources via coupled factorisation improves the link prediction performance, and the selection of the right loss function and tensor model is crucial for accurately predicting missing links.

  1. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  2. Using the Job-Demands-Resources model to predict turnover in the information technology workforce – General effects and gender

    Directory of Open Access Journals (Sweden)

    Peter Hoonakker

    2014-01-01

    Full Text Available High employee turnover has always been a major issue for Information Technology (IT). In particular, turnover of women is very high. In this study, we used the Job Demands-Resources (JD-R) model to examine the relationship between job demands and job resources, stress/burnout and job satisfaction/commitment, and turnover intention, and tested the model for gender differences. Data were collected in five IT companies. A sample of 624 respondents (return rate: 56%; 54% males; mean age: 39.7 years) was available for statistical analyses. Results of our study show that relationships between job demands and turnover intention are mediated by emotional exhaustion (burnout) and relationships between job resources and turnover intention are mediated by job satisfaction. We found noticeable gender differences in these relationships, which can explain differences in turnover intention between male and female employees. The results of our study have consequences for organizational retention strategies to keep men and women in the IT workforce.

  3. Glauber model and its generalizations

    International Nuclear Information System (INIS)

    Bialkowski, G.

    The physical aspects of the Glauber model are studied: the potential model, profile function and Feynman diagram approaches. Different generalizations of the Glauber model are discussed, particularly higher- and lower-energy processes and large angles. [fr]

  4. Generalized instrumental variable models

    OpenAIRE

    Andrew Chesher; Adam Rosen

    2014-01-01

    This paper develops characterizations of identified sets of structures and structural features for complete and incomplete models involving continuous or discrete variables. Multiple values of unobserved variables can be associated with particular combinations of observed variables. This can arise when there are multiple sources of heterogeneity, censored or discrete endogenous variables, or inequality restrictions on functions of observed and unobserved variables. The models g...

  5. Generalized Predictive Control for Non-Stationary Systems

    DEFF Research Database (Denmark)

    Palsson, Olafur Petur; Madsen, Henrik; Søgaard, Henning Tangen

    1994-01-01

    This paper shows how the generalized predictive control (GPC) can be extended to non-stationary (time-varying) systems. If the time-variation is slow, then the classical GPC can be used in context with an adaptive estimation procedure of a time-invariant ARIMAX model. However, in this paper prior...... knowledge concerning the nature of the parameter variations is assumed available. The GPC is based on the assumption that the prediction of the system output can be expressed as a linear combination of present and future controls. Since the Diophantine equation cannot be used due to the time......-variation of the parameters, the optimal prediction is found as the general conditional expectation of the system output. The underlying model is of an ARMAX-type instead of an ARIMAX-type as in the original version of the GPC (Clarke, D. W., C. Mohtadi and P. S. Tuffs (1987). Automatica, 23, 137-148) and almost all later...
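
    The linear-combination assumption mentioned above has a compact closed form: writing the predicted outputs as yhat = G u + f, with u the future control moves and f the free response, the quadratic GPC cost |w - yhat|^2 + lambda |u|^2 is minimized by the expression below. The numbers are purely illustrative.

        import numpy as np

        G = np.array([[0.5, 0.0, 0.0],    # illustrative step-response (dynamic) matrix
                      [0.8, 0.5, 0.0],
                      [0.9, 0.8, 0.5]])
        f = np.array([0.2, 0.3, 0.35])    # free response from past inputs and outputs
        w = np.ones(3)                    # reference trajectory over the horizon
        lam = 0.1                         # control weighting

        # Closed-form minimizer of the GPC cost.
        u = np.linalg.solve(G.T @ G + lam * np.eye(3), G.T @ (w - f))
        print("first control move:", u[0])  # receding horizon: apply u[0] and repeat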

  6. The impacts of data constraints on the predictive performance of a general process-based crop model (PeakN-crop v1.0)

    Science.gov (United States)

    Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.

    2017-04-01

    Improving international food security under a changing climate and increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0) which we parametrise using a Bayesian model-fitting algorithm to three different data sources: space-based vegetation indices, eddy covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but the data-constrained model both greatly improves the model fit and reduces prediction uncertainty. We investigate the extent to which each dataset contributes to the model performance and show that while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that there are still significant knowledge gaps in terms of available data for model parametrisation, but our study can help indicate the necessary data collection to improve our predictions of crop yields and crop responses to environmental changes.
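
    The Bayesian model-fitting step can be caricatured with a toy logistic-growth "crop" model constrained by noisy observations via random-walk Metropolis; the model form, priors and data below are all assumptions for illustration, not the PeakN-crop model.

        import numpy as np

        rng = np.random.default_rng(6)
        t = np.linspace(0, 120, 25)                     # days after sowing

        def biomass(theta):
            K, r = theta                                # carrying capacity and growth rate
            return K / (1 + np.exp(-r * (t - 60.0)))

        obs = biomass((8.0, 0.1)) + rng.normal(scale=0.4, size=t.size)

        def log_post(theta):
            K, r = theta
            if not (0 < K < 20 and 0 < r < 1):          # flat priors with bounds
                return -np.inf
            return -0.5 * np.sum((biomass(theta) - obs) ** 2) / 0.4 ** 2

        theta = np.array([5.0, 0.05])
        lp = log_post(theta)
        samples = []
        for _ in range(5000):                           # random-walk Metropolis sampler
            prop = theta + rng.normal(scale=[0.2, 0.005])
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta)
        print(np.mean(samples[1000:], axis=0))          # posterior mean, roughly [8.0, 0.1]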

  7. Predicting Melting Points of Organic Molecules: Applications to Aqueous Solubility Prediction Using the General Solubility Equation.

    Science.gov (United States)

    McDonagh, J L; van Mourik, T; Mitchell, J B O

    2015-11-01

    In this work we make predictions of several important molecular properties of academic and industrial importance to seek answers to two questions: 1) Can we apply efficient machine learning techniques, using inexpensive descriptors, to predict melting points to a reasonable level of accuracy? 2) Can values of this level of accuracy be usefully applied to predicting aqueous solubility? We present predictions of melting points made by several novel machine learning models, previously applied to solubility prediction. Additionally, we make predictions of solubility via the General Solubility Equation (GSE) and monitor the impact of varying the logP prediction model (AlogP and XlogP) on the GSE. We note that the machine learning models presented, using a modest number of 2D descriptors, can make melting point predictions in line with the current state-of-the-art prediction methods (RMSE ≥ 40 °C). We also find that predicted melting points, with an RMSE of tens of degrees Celsius, can be usefully applied to the GSE to yield accurate solubility predictions (log10 S RMSE < 1) over a small dataset of drug-like molecules. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
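
    The GSE itself is a one-line formula, which makes the paper's second question easy to see: log10 S = 0.5 - 0.01(MP - 25) - logP, with MP in °C and S in mol/L, so a melting-point error of 40 °C shifts log10 S by only 0.4 log units. A worked example with illustrative (not paper-specific) values:

        def gse_log_solubility(melting_point_c, logp):
            """General Solubility Equation estimate of log10 aqueous solubility (mol/L)."""
            return 0.5 - 0.01 * (melting_point_c - 25.0) - logp

        # Illustrative drug-like values: MP = 150 C, logP = 2.5.
        print(gse_log_solubility(150.0, 2.5))   # -3.25, i.e. S ~ 10**-3.25 mol/L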

  8. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  9. Pilots Rate Augmented Generalized Predictive Control for Reconfiguration

    Science.gov (United States)

    Soloway, Don; Haley, Pam

    2004-01-01

    The objective of this paper is to report the results from the research being conducted in reconfigurable flight controls at NASA Ames. A study was conducted with three NASA Dryden test pilots to evaluate two approaches to reconfiguring an aircraft's control system when failures occur in the control surfaces and engine. NASA Ames is investigating both a Neural Generalized Predictive Control scheme and a Neural Network based Dynamic Inverse controller. This paper highlights the Predictive Control scheme, where a simple augmentation to achieve zero steady-state error led to the neural network predictor model becoming redundant for the task. Instead of using a neural network predictor model, a nominal single point linear model was used and then augmented with an error corrector. This paper shows that the Generalized Predictive Controller and the Dynamic Inverse Neural Network controller perform equally well at reconfiguration, but with lower rate requirements from the actuators. Also presented are the pilot ratings for each controller for various failure scenarios and two samples of the required control actuation during reconfiguration. Finally, the paper concludes by stepping through the Generalized Predictive Control's reconfiguration process for an elevator failure.

  10. Neural Generalized Predictive Control of a non-linear Process

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1998-01-01

    The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller based on a neural network model which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which in the present case has to be done numerically. Therefore, we develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.

  11. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m

  12. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  13. The general NFP hospital model.

    Science.gov (United States)

    Al-Amin, Mona

    2012-01-01

    Throughout the past 30 years, there has been a lot of controversy surrounding the proliferation of new forms of health care delivery organizations that challenge and compete with general NFP community hospitals. Traditionally, the health care system in the United States has been dominated by general not-for-profit (NFP) voluntary hospitals. With the number of for-profit general hospitals, physician-owned specialty hospitals, and ambulatory surgical centers increasing, a question arises: “Why is the general NFP community hospital the dominant model?” In order to address this question, this paper reexamines the history of the hospital industry. By understanding how the “general NFP hospital” model emerged and dominated, we attempt to explain the current dominance of general NFP hospitals in the ever changing hospital industry in the United States.

  14. Introduction to generalized linear models

    CERN Document Server

    Dobson, Annette J

    2008-01-01

    Table of contents: Introduction (Background; Scope; Notation; Distributions Related to the Normal Distribution; Quadratic Forms; Estimation); Model Fitting (Introduction; Examples; Some Principles of Statistical Modeling; Notation and Coding for Explanatory Variables); Exponential Family and Generalized Linear Models (Introduction; Exponential Family of Distributions; Properties of Distributions in the Exponential Family; Generalized Linear Models; Examples); Estimation (Introduction; Example: Failure Times for Pressure Vessels; Maximum Likelihood Estimation; Poisson Regression Example); Inference (Introduction; Sampling Distribution for Score Statistics; Taylor Series Approximations; Sampling Distribution for MLEs; Log-Likelihood Ratio Statistic; Sampling Distribution for the Deviance; Hypothesis Testing); Normal Linear Models (Introduction; Basic Results; Multiple Linear Regression; Analysis of Variance; Analysis of Covariance; General Linear Models); Binary Variables and Logistic Regression (Probability Distributions; ...)

  15. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screening of populations at risk. Identifying individuals at high risk should allow targeted screening and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds); solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage); hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair); the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931); the number of dysplastic naevi (from 1 to 10 dysplastic naevi, OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi, OR was 6.487; 95% CI 1.993-21.119); Fitzpatrick's phototype; and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were
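
    The odds ratios reported above translate directly into a logistic risk score; the sketch below uses the published ORs as log-odds weights but an invented intercept, so the output is illustrative only and is not the study's MRS.

        import numpy as np

        coef = {"sunbeds": np.log(4.018),
                "severe_solar_damage": np.log(8.274),
                "light_hair": np.log(3.222),
                "naevi_over_100": np.log(3.57)}
        intercept = -3.0                     # hypothetical baseline log-odds, not from the paper

        def melanoma_risk(features):
            """Probability of melanoma from binary risk-factor indicators."""
            z = intercept + sum(coef[k] for k, v in features.items() if v)
            return 1.0 / (1.0 + np.exp(-z))

        print(melanoma_risk({"sunbeds": 1, "severe_solar_damage": 0,
                             "light_hair": 1, "naevi_over_100": 1}))   # ~0.70 here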

  16. Predicting the temporal and spatial probability of orographic cloud cover in the Luquillo Experimental Forest in Puerto Rico using generalized linear (mixed) models.

    Science.gov (United States)

    Wei Wu; Charlesb Hall; Lianjun Zhang

    2006-01-01

    We predicted the spatial pattern of hourly probability of cloud cover in the Luquillo Experimental Forest (LEF) in North-Eastern Puerto Rico using four different models. The probability of cloud cover (defined as “the percentage of the area covered by clouds in each pixel on the map” in this paper) at any hour and any place is a function of three topographic variables...

  17. Implementation of routine ash predictions using a general purpose atmospheric dispersion model (HYSPLIT) adapted for calculating ash thickness on the ground.

    Science.gov (United States)

    Hurst, Tony; Davis, Cory; Deligne, Natalia

    2016-04-01

    GNS Science currently produces twice-daily forecasts of the likely ash deposition if any of the active or recently active volcanoes in New Zealand were to erupt, with a number of alternative possible eruptions for each volcano. These use our ASHFALL program for calculating ash thickness, which uses 1-D wind profiles at the location of each volcano derived from Numerical Weather Prediction (NWP) model output supplied by MetService. HYSPLIT is a hybrid Lagrangian dispersion model, developed by NOAA/ARL, which is used by MetService in its role as a Volcanic Ash Advisory Centre to model airborne volcanic ash, with meteorological data provided by external and in-house NWP models. A by-product of the HYSPLIT volcanic ash dispersion simulations is the deposition rate at the ground surface. Comparison of HYSPLIT with ASHFALL showed that alterations to the standard fall velocity model were required to deal with ash particles larger than about 50 microns, which make up the bulk of ash deposits near a volcano. It also required the ash injected into the dispersion model to have a concentration based on a typical umbrella-shaped eruption column, rather than uniform across all levels. The different parameters used in HYSPLIT also caused us to revisit what possible combinations of eruption size and column height were appropriate to model as a likely eruption. We are now running HYSPLIT to produce alternative ash forecasts. It is apparent that there are many times at which the 3-D wind model used in HYSPLIT gives a substantially different ash deposition pattern from the 1-D wind model of ASHFALL, and the use of HYSPLIT will give more accurate predictions. ASHFALL is likely still to be used for probabilistic hazard forecasting, in which very large numbers of runs are required, as HYSPLIT takes much more computer time.

  18. Adaptive Generalized Predictive Control for Mechatronic Systems

    Czech Academy of Sciences Publication Activity Database

    Belda, Květoslav; Böhm, Josef

    2006-01-01

    Roč. 5, č. 8 (2006), s. 1830-1837 ISSN 1109-2777 R&D Projects: GA ČR GP102/06/P275; GA ČR GA102/05/0271 Institutional research plan: CEZ:AV0Z10750506 Keywords : on-line identification * predictive control * input/output equations of predictions * real-time control Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/historie/belda-0040149.pdf

  19. Analyzing the capacity of the Daphnia magna and Pseudokirchneriella subcapitata bioavailability models to predict chronic zinc toxicity at high pH and low calcium concentrations and formulation of a generalized bioavailability model for D. magna.

    Science.gov (United States)

    Van Regenmortel, Tina; Berteloot, Olivier; Janssen, Colin R; De Schamphelaere, Karel A C

    2017-10-01

    Risk assessment in the European Union implements Zn bioavailability models to derive predicted-no-effect concentrations for Zn. These models are validated within certain boundaries (i.e., pH ≤ 8 and Ca concentrations ≥ 5 mg/L), but a substantial fraction of the European surface waters falls outside these boundaries. Therefore, we evaluated whether the chronic Zn biotic ligand model (BLM) for Daphnia magna and the chronic bioavailability model for Pseudokirchneriella subcapitata could be extrapolated to pH > 8 and Ca concentrations < 5 mg/L. Results of D. magna experiments suggested that the BLM is not able to reflect the pH effect over a broad pH range (5.5-8.5). In addition, because of Ca deficiency of D. magna in the soft water tests, we cannot conclude whether the BLM is applicable below its Ca boundary. Results for P. subcapitata experiments showed that the bioavailability model can accurately predict Zn toxicity for Ca concentrations down to 0.8 mg/L and pH values up to 8.5. Because the chronic Zn BLM for D. magna could not be extrapolated beyond its validity boundaries for pH, a generalized bioavailability model (gBAM) was developed. Of 4 gBAMs developed, we recommend the use of gBAM-D, which combines a log-linear relation between the 21-d median effective concentrations (expressed as free Zn2+ ion activity) and pH with more conventional BLM-type competition constants for Na, Ca, and Mg. This model is a first step in further improving the accuracy of chronic toxicity predictions of Zn as a function of water chemistry, which can decrease the uncertainty in implementing the bioavailability-based predicted-no-effect concentration in the risk assessment of high-pH and low-Ca concentration regions in Europe. Environ Toxicol Chem 2017;36:2781-2798. © 2017 SETAC.
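
    The structure of gBAM-D described above can be written down compactly; every numeric value in this sketch is a placeholder for illustration, not a fitted constant from the paper.

        import numpy as np

        a, b = -2.0, -0.45                        # placeholder log-linear pH coefficients
        K = {"Na": 0.8, "Ca": 60.0, "Mg": 40.0}   # placeholder competition constants (L/mol)

        def log10_ec50_zn(pH, na, ca, mg):
            """log10 of the 21-d EC50 as free Zn2+ activity (mol/L) vs. water chemistry."""
            competition = 1.0 + K["Na"] * na + K["Ca"] * ca + K["Mg"] * mg
            return a + b * pH + np.log10(competition)

        # Example: soft, high-pH water (cation concentrations in mol/L).
        print(log10_ec50_zn(pH=8.2, na=1e-3, ca=2e-5, mg=4e-5))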

  20. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation......, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar and it is thus difficult to decide for one model. Here, we investigate the variability of the prediction models that results when the same...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  1. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link...... function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...... measures and longitudinal structures, and the third involves a spatiotemporal analysis of rainfall data. The models take non-normality into account in the conventional way by means of a variance function, and the mean structure is modelled by means of a link function and a linear predictor. The models...

  2. Real-time Adaptive Control Using Neural Generalized Predictive Control

    Science.gov (United States)

    Haley, Pam; Soloway, Don; Gold, Brian

    1999-01-01

    The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time-constants. Generalized Predictive Control has classically been used in process control where linear control laws were formulated for plants with relatively slow time-constants. The plant of interest for this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network that has an embedded nominal linear model in the network weights. The control based on the linear model provides initial stability at the beginning of network training. In using a neural network the control laws are nonlinear and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense the low iteration rate makes this a viable algorithm for real-time control.
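
    The Newton-Raphson step at the heart of the controller can be shown on a toy one-step horizon, with a tanh map standing in for the neural network predictor (an assumption for illustration, not the paper's plant model):

        import numpy as np

        def model(u):                        # toy one-step predictor (stand-in for the NN)
            return np.tanh(1.5 * u)

        def cost(u, w=0.6, lam=0.05):        # tracking error plus control effort
            return (w - model(u)) ** 2 + lam * u ** 2

        u, h = 0.0, 1e-4
        for _ in range(20):                  # Newton-Raphson on dJ/du = 0
            g = (cost(u + h) - cost(u - h)) / (2 * h)               # numeric gradient
            H = (cost(u + h) - 2 * cost(u) + cost(u - h)) / h ** 2  # numeric Hessian
            u -= g / H
        print("control that minimizes the GPC cost:", round(u, 4))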

  3. Cosmological models in general relativity

    Indian Academy of Sciences (India)

    Cosmological models in general relativity. B B PAUL. Department of Physics, Nowgong College, Nagaon, Assam, India. MS received 4 October 2002; revised 6 March 2003; accepted 21 May 2003. Abstract. LRS Bianchi type-I space-time filled with perfect fluid is considered here with deceleration parameter as variable.

  4. Experimental Investigations of Generalized Predictive Control for Tiltrotor Stability Augmentation

    Science.gov (United States)

    Nixon, Mark W.; Langston, Chester W.; Singleton, Jeffrey D.; Piatak, David J.; Kvaternik, Raymond G.; Bennett, Richard L.; Brown, Ross K.

    2001-01-01

    A team of researchers from the Army Research Laboratory, NASA Langley Research Center (LaRC), and Bell Helicopter-Textron, Inc. has completed hover-cell and wind-tunnel testing of a 1/5-size aeroelastically scaled tiltrotor model using a new active control system for stability augmentation. The active system is based on a generalized predictive control (GPC) algorithm originally developed at NASA LaRC in 1997 for unknown disturbance rejection. Results of these investigations show that GPC combined with an active swashplate can significantly augment the damping and stability of tiltrotors in both hover and high-speed flight.

  5. Multiple Steps Prediction with Nonlinear ARX Models

    OpenAIRE

    Zhang, Qinghua; Ljung, Lennart

    2007-01-01

    NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one-step-ahead predictions with such models, multiple-step prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned by past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. The pur...
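
    A common workaround for the difficulty described above is Monte Carlo approximation of the conditional expectation: simulate many noise realizations through the model and average. The sketch below illustrates this for a toy NLARX structure; the regression function g() and noise level are assumptions, not the paper's method.

```python
import numpy as np

# Monte Carlo k-step prediction for a toy NLARX model
# y(t) = g(y(t-1), u(t-1)) + e(t). The k-step conditional mean has no
# closed form in general, so we average simulated noise trajectories.

rng = np.random.default_rng(0)

def g(y_prev, u_prev):             # assumed nonlinear regression function
    return 0.7 * y_prev + 0.5 * np.sin(u_prev)

def predict_k_steps(y0, u_future, k, sigma_e=0.1, n_mc=2000):
    y = np.full(n_mc, y0, dtype=float)
    for step in range(k):          # propagate each noise realization forward
        y = g(y, u_future[step]) + rng.normal(0.0, sigma_e, size=n_mc)
    return y.mean()                # MC estimate of E[y(t+k) | data]

print(predict_k_steps(y0=1.0, u_future=[0.2, 0.4, 0.1], k=3))
```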

  6. Predicting student success in General Chemistry

    Science.gov (United States)

    Figueroa, Daphne Elizabeth

    The goal of this research was to determine the predictors of student success in college-level General Chemistry. The potential predictors were categorized as cognitive, non-cognitive, affective, or demographic factors. A broader goal of the study was to provide a reference for academic personnel to better judge the prerequisite skills, knowledge and attitudes that students should attain before enrolling in General Chemistry. Therefore, the study is relevant to chemical educators who are attempting to matriculate candidates for the scientific workforce and to chemical education researchers who are interested in student success, student retention and curricular reform. The major hypotheses were that several factors from each category would emerge as significant predictors and that these would differ for students enrolled at three different post-secondary institutions: a community college, a private university and a public university. These hypotheses were tested using multiple regression techniques to analyze grade, student survey and post-test data collected from General Chemistry students at the three institutions. Overall, twelve factors (six demographic, three cognitive and three affective) emerged as strong, significant predictors of student success. In addition, there were marked differences in which factors emerged based on the type of institution and on how student success was defined. Thus, the major hypotheses of the study were supported. Overall, this study has significant implications for educational policy, theory, and practice. With regard to policy, there is a need for institutions and departments that offer General Chemistry to provide support for a diverse population of students. And, at the community college level in particular, there is a need for better academic advising and more institutional support for underprepared students. In the classroom, the professor plays a critical role in influencing students' academic self-concept, which in turn

  7. Stability analysis of embedded nonlinear predictor neural generalized predictive controller

    Directory of Open Access Journals (Sweden)

    Hesham F. Abdel Ghaffar

    2014-03-01

    Full Text Available Nonlinear Predictor-Neural Generalized Predictive Control (NGPC) is one of the most advanced control techniques used with severely nonlinear processes. In this paper, a hybrid solution combining NGPC and the Internal Model Principle (IMP) is implemented to stabilize nonlinear, non-minimum phase, variable dead time processes under high disturbance values over a wide range of operation. The superiority of NGPC over linear predictive controllers, like GPC, is also demonstrated for severely nonlinear processes over a wide range of operation. The conditions necessary to stabilize NGPC are derived using Lyapunov stability analysis for nonlinear processes. The NGPC stability conditions and the improvement in disturbance suppression are verified both by simulation using Duffing's nonlinear equation and in real time using a continuous stirred tank reactor. To our knowledge, this paper offers the first hardware-embedded neural GPC, which has been used to verify the NGPC-IMP improvement in real time.

  8. Generalized predictive control in the delta-domain

    DEFF Research Database (Denmark)

    Lauritsen, Morten Bach; Jensen, Morten Rostgaard; Poulsen, Niels Kjølstad

    1995-01-01

    This paper describes new approaches to generalized predictive control formulated in the delta (δ) domain. A new δ-domain version of the continuous-time emulator-based predictor is presented. It produces the optimal estimate in the deterministic case whenever the predictor order is chosen greater...... than or equal to the number of future predicted samples; however, a “good” estimate is usually obtained over a much longer range of samples. This is particularly advantageous at fast sampling rates, where a “conventional” predictor is bound to become very computationally demanding. Two controllers...... are considered: one having a well-defined limit as the sampling period tends to zero, the other being a close approximation to the conventional discrete-time GPC. Both algorithms are discrete in nature and well-suited for adaptive control. The fact that δ-domain models are used does not introduce...

  9. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  10. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  11. Topics in the generalized vector dominance model

    International Nuclear Information System (INIS)

    Chavin, S.

    1976-01-01

    Two topics are covered in the generalized vector dominance model. In the first topic a model is constructed for dilepton production in hadron-hadron interactions based on the idea of generalized vector-dominance. It is argued that in the high mass region the generalized vector-dominance model and the Drell-Yan parton model are alternative descriptions of the same underlying physics. In the low mass regions the models differ; the vector-dominance approach predicts a greater production of dileptons. It is found that the high mass vector mesons which are the hallmark of the generalized vector-dominance model make little contribution to the large yield of leptons observed in the transverse-momentum range 1 less than p/sub perpendicular/ less than 6 GeV. The recently measured hadronic parameters lead one to believe that detailed fits to the data are possible under the model. The possibility was expected, and illustrated with a simple model the extreme sensitivity of the large-p/sub perpendicular/ lepton yield to the large-transverse-momentum tail of vector-meson production. The second topic is an attempt to explain the mysterious phenomenon of photon shadowing in nuclei utilizing the contribution of the longitudinally polarized photon. It is argued that if the scalar photon anti-shadows, it could compensate for the transverse photon, which is presumed to shadow. It is found in a very simple model that the scalar photon could indeed anti-shadow. The principal feature of the model is a cancellation of amplitudes. The scheme is consistent with scalar photon-nucleon data as well. The idea is tested with two simple GVDM models and finds that the anti-shadowing contribution of the scalar photon is not sufficient to compensate for the contribution of the transverse photon. It is found doubtful that the scalar photon makes a significant contribution to the total photon-nuclear cross section

  12. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigatethe performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM prediction to the wind farm. The features of the terrain, specially the topography, influence...... the performance of HIRLAM in particular with respect to wind predictions. To estimate the performance of the model two spatial resolutions (0,5 Deg. and 0.2 Deg.) and different sets of HIRLAM variables were used to predict wind speed and energy production. The predictions of energy production for the wind farms...... are calculated using on-line measurements of power production as well as HIRLAM predictions as input thus taking advantage of the auto-correlation, which is present in the power production for shorter pediction horizons. Statistical models are used to discribe the relationship between observed energy production...

  13. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... Linear MPC: (1) uses a linear model, ẋ = Ax + Bu; (2) quadratic cost function, F = xᵀQx + uᵀRu; (3) linear constraints, Hx + Gu < 0; (4) solved as a quadratic program. Nonlinear MPC: (1) nonlinear model, ẋ = f(x, u); (2) cost function can be nonquadratic, F = F(x, u); (3) nonlinear constraints, h(x, u) < 0; (4) solved as a nonlinear program.
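
    The linear half of this recipe can be written down directly as a finite-horizon quadratic program. The sketch below, using the cvxpy modeling library with toy matrices, is one possible rendering; all numerical values are illustrative.

```python
import cvxpy as cp
import numpy as np

# Minimal linear MPC following the abstract's recipe: linear model
# x+ = Ax + Bu, quadratic cost x'Qx + u'Ru, linear input constraints,
# solved as a quadratic program over a finite horizon. Toy matrices.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = 0.1 * np.eye(1)
T = 20                                   # prediction horizon
x0 = np.array([1.0, 0.0])

x = cp.Variable((2, T + 1))
u = cp.Variable((1, T))
cost, constraints = 0, [x[:, 0] == x0]
for t in range(T):
    cost += cp.quad_form(x[:, t], Q) + cp.quad_form(u[:, t], R)
    constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                    cp.abs(u[:, t]) <= 1.0]        # linear input constraint
cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", u.value[:, 0])       # applied, then re-solved
```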

  14. Fermions as generalized Ising models

    Directory of Open Access Journals (Sweden)

    C. Wetterich

    2017-04-01

    Full Text Available We establish a general map between Grassmann functionals for fermions and probability or weight distributions for Ising spins. The equivalence between the two formulations is based on identical transfer matrices and expectation values of products of observables. The map preserves locality properties and can be realized for arbitrary dimensions. We present a simple example where a quantum field theory for free massless Dirac fermions in two-dimensional Minkowski space is represented by an asymmetric Ising model on a Euclidean square lattice.

  15. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. Despite the numerous studies forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend toward machine learning models (support vector machines, bagging, boosting, and random forest) for predicting bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their predictive ability under specific conditions. Furthermore, these models will be updated in line with new trends by calculating the influence of eliminating selected variables on their overall predictive ability.
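
    As a hedged illustration of the kind of comparison described, the sketch below cross-validates a classical statistical model against a random forest on synthetic, imbalanced data standing in for the Slovak company dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Compare a classical statistical model with an ensemble learner on the
# same bankruptcy-style binary outcome. Synthetic, imbalanced data only.
X, y = make_classification(n_samples=1000, n_features=15, weights=[0.9],
                           random_state=0)     # ~10% "bankrupt" class
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```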

  16. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
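
    A minimal sketch of a degree-day accumulation model of this kind follows; the base temperature and stage threshold are invented for illustration and are not the calibrated cranberry fruitworm values.

```python
# Degree-day accumulation as used in such phenology models: daily heat
# units above a base temperature are summed until a life-stage threshold
# is reached. Both thresholds below are illustrative placeholders.
BASE_TEMP_C = 10.0          # assumed lower developmental threshold
STAGE_DD = 400.0            # assumed degree-days to stage completion

def degree_days(t_min, t_max, base=BASE_TEMP_C):
    """Daily degree-days from min/max temperature (simple average method)."""
    return max(0.0, (t_min + t_max) / 2.0 - base)

def emergence_day(daily_min_max):
    total = 0.0
    for day, (tmin, tmax) in enumerate(daily_min_max, start=1):
        total += degree_days(tmin, tmax)
        if total >= STAGE_DD:
            return day          # predicted day of stage completion
    return None                 # threshold not reached in the record

weather = [(8 + 0.1 * d, 18 + 0.1 * d) for d in range(120)]  # toy series
print(emergence_day(weather))
```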

  18. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  19. Self-Tuning of Design Variables for Generalized Predictive Control

    Science.gov (United States)

    Lin, Chaung; Juang, Jer-Nan

    2000-01-01

    Three techniques are introduced to determine the order and control weighting for the design of a generalized predictive controller. These techniques are based on the application of fuzzy logic, genetic algorithms, and simulated annealing to conduct an optimal search on specific performance indexes or objective functions. Fuzzy logic is found to be feasible for real-time and on-line implementation due to its smooth and quick convergence. On the other hand, genetic algorithms and simulated annealing are applicable for initial estimation of the model order and control weighting, and for final fine-tuning within a small region of the solution space. Several numerical simulations for a multiple-input and multiple-output system are given to illustrate the techniques developed in this paper.

  20. Predictions models with neural nets

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2008-01-01

    Full Text Available This contribution addresses the prediction of basic trends in economic indicators using neural networks. The problems include the choice of a suitable model, and consequently the configuration of the neural net, the choice of the neurons' computational functions, and the way prediction learning is performed. The contribution presents two basic models that use multilayer neural net structures and a way of determining their configuration. A simple rule is postulated for the training period of the neural net in order to obtain the most credible prediction. Experiments are carried out on real data on the evolution of the Kč/Euro exchange rate. The main reason for choosing this time series is its availability over a sufficiently long period. The experiments verify both basic kinds of prediction models with the most frequently used neuron functions. The prediction results achieved are presented in both numerical and graphical form.

  1. Employment of the generalized adsorption model for the prediction of the solid-water distribution of radiocesium in the river-estuary-ocean system

    International Nuclear Information System (INIS)

    Fan, Qiaohui; Takahashi, Yoshio

    2017-01-01

    Since the last century, a large amount of radiocesium (RCs) released from atomic weapon tests and nuclear accidents, such as those at Chernobyl and Fukushima, has been introduced into the environment: transported through the atmosphere and deposited on land surface soil, discharged into river systems by erosion during rainfall, and finally released into the ocean. In this study, a generalized adsorption model (GAM) for Cs+ was employed to estimate the solid-water distribution of Cs+ in the river-estuary-ocean system. The results confirmed that the capacity of each adsorption site of river sediments, i.e., the interlayer site, type II site, and planar site, can be precisely optimized through the adsorption isotherm of Cs+ on the river sediments combined with the radiocesium interception potential (RIP) and cation exchange capacity (CEC). According to the GAM, the main contributor to Cs+ adsorption is the frayed edge site rather than the others, owing to the very low concentration of Cs+ in the river-estuary-ocean system. The variation in the solid-water distribution of Cs+ across the river-estuary-ocean system was dominantly controlled by the salinity of the aqueous phase. Therefore, Cs+ should be highly reactive, with strong adsorption to particulate matter, in the river system, whereas a conservative distribution must dominate in the ocean, where the affinity to particulate matter is much weaker because of the high salinity. - Highlights: • A new method to extend the utility range of the GAM from illite to natural samples. • The GAM was adapted to quantitatively explore the transport of radiocesium in the river-estuary-ocean system. • High reactivity in river water and conservative behavior in seawater were clarified.

  2. Which method predicts recidivism best?: A comparison of statistical, machine learning, and data mining predictive models

    OpenAIRE

    Tollenaar, N.; van der Heijden, P.G.M.

    2012-01-01

    Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...

  3. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and the traditional statistical prediction method suffers from low precision and poor interpretability: it can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that produce large volumes of cargo and further predicts the static logistics generation of Zhuanghe and its hinterlands. By integrating the various factors that affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  4. Model complexity control for hydrologic prediction

    Science.gov (United States)

    Schoups, G.; van de Giesen, N. C.; Savenije, H. H. G.

    2008-12-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed. We compare three model complexity control methods for hydrologic prediction, namely, cross validation (CV), Akaike's information criterion (AIC), and structural risk minimization (SRM). Results show that simulation of water flow using non-physically-based models (polynomials in this case) leads to increasingly better calibration fits as the model complexity (polynomial order) increases. However, prediction uncertainty worsens for complex non-physically-based models because of overfitting of noisy data. Incorporation of physically based constraints into the model (e.g., storage-discharge relationship) effectively bounds prediction uncertainty, even as the number of parameters increases. The conclusion is that overparameterization and equifinality do not lead to a continued increase in prediction uncertainty, as long as models are constrained by such physical principles. Complexity control of hydrologic models reduces parameter equifinality and identifies the simplest model that adequately explains the data, thereby providing a means of hydrologic generalization and classification. SRM is a promising technique for this purpose, as it (1) provides analytic upper bounds on prediction uncertainty, hence avoiding the computational burden of CV, and (2) extends the applicability of classic methods such as AIC to finite data. The main hurdle in applying SRM is the need for an a priori estimation of the complexity of the hydrologic model, as measured by its Vapnik-Chervonenkis (VC) dimension. Further research is needed in this area.
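
    As a small illustration of the polynomial experiment described, the sketch below scores increasing model orders with a Gaussian-likelihood AIC on noisy synthetic data (SRM's analytic bounds are not reproduced here).

```python
import numpy as np

# Fit increasing polynomial orders to noisy data and score each with AIC;
# the selected order trades calibration fit against complexity.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)   # toy signal

def aic_for_order(order):
    coeffs = np.polyfit(x, y, order)
    resid = y - np.polyval(coeffs, x)
    n, k = y.size, order + 1
    sigma2 = np.mean(resid ** 2)
    return n * np.log(sigma2) + 2 * k        # Gaussian-likelihood AIC

scores = {p: aic_for_order(p) for p in range(1, 10)}
print("AIC-selected polynomial order:", min(scores, key=scores.get))
```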

  5. Thermodynamic modeling of activity coefficient and prediction of solubility: Part 1. Predictive models.

    Science.gov (United States)

    Mirmehrabi, Mahmoud; Rohani, Sohrab; Perry, Luisa

    2006-04-01

    A new activity coefficient model was developed from the excess Gibbs free energy in the form G^ex = cA^a x_1^b ... x_n^b. The constants of the proposed model were considered to be functions of the solute and solvent dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. The proposed model obeys the Gibbs-Duhem condition for activity coefficient models. To generalize the model and make it purely predictive, without any adjustable parameters, its constants were found using the experimental activity coefficients and physical properties of 20 vapor-liquid systems. The predictive capability of the proposed model was tested by calculating the activity coefficients of 41 binary vapor-liquid equilibrium systems; it showed good agreement with the experimental data in comparison with two other predictive models, the UNIFAC and Hildebrand models. The only data used for the prediction of activity coefficients were the dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. Furthermore, the proposed model was used to predict the activity coefficient of an organic compound, stearic acid, whose physical properties were available, in methanol and 2-butanone. The predicted activity coefficient, along with the thermal properties of stearic acid, was used to calculate the solubility of stearic acid in these two solvents and resulted in better agreement with the experimental data compared to the UNIFAC and Hildebrand predictive models.
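
    For reference, the standard thermodynamic relations that any such excess-Gibbs-energy model must satisfy can be stated compactly; the specific functional form is the one quoted in the abstract, reset in conventional notation.

```latex
% Activity coefficients follow from the excess Gibbs energy, and the
% Gibbs-Duhem condition constrains them; the model form is as quoted.
G^{ex} = c\,A^{a}\,x_1^{b}\cdots x_n^{b},
\qquad
\ln \gamma_i = \left[\frac{\partial\!\left(n\,G^{ex}/RT\right)}{\partial n_i}\right]_{T,\,P,\,n_{j\neq i}},
\qquad
\sum_i x_i \, d\ln\gamma_i = 0 \quad (\text{const. } T,\,P).
```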

  6. What do saliency models predict?

    Science.gov (United States)

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might be also limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  7. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating the impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating the impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  8. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating the impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating the impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  9. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  10. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models under eight different statistical probability distributions. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence shows that, in general, the competing models are highly homogeneous in their predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
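
    A sketch of fitting one member of this family is given below, using the Python arch package on synthetic fat-tailed returns standing in for the index data; GARCH(1,1) with a Student-t distribution is one model/distribution combination of the kind the study compares.

```python
import numpy as np
from arch import arch_model

# Fit a GARCH(1,1) with Student-t innovations to a toy log-return series
# and produce a short-horizon conditional-variance forecast.
rng = np.random.default_rng(2)
returns = rng.standard_t(df=5, size=1500)   # toy, roughly percent-scale

am = arch_model(returns, vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
forecast = res.forecast(horizon=5)
print(forecast.variance.iloc[-1])           # predicted conditional variances
```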

  11. Metacognition Beliefs and General Health in Predicting Alexithymia in Students.

    Science.gov (United States)

    Babaei, Samaneh; Ranjbar Varandi, Shahryar; Hatami, Zohre; Gharechahi, Maryam

    2015-06-12

    The present study was conducted to investigate the role of metacognition beliefs and general health in alexithymia in Iranian students. This descriptive, correlational study included 200 high-school students, selected randomly from two cities (Sari and Dargaz), Iran. The Metacognitive Strategies Questionnaire (MCQ-30), the General Health Questionnaire (GHQ) and the Farsi version of the Toronto Alexithymia Scale (TAS-20) were used for gathering the data, which were analyzed using Pearson's correlation and regression. The findings indicated significant positive relationships between alexithymia and all subscales of general health. The highest correlation was between alexithymia and the anxiety subscale (r=0.36, P<…) … metacognitive strategies. The highest significant negative relationship was seen between alexithymia and the subscale of risk uncontrollability (r=-0.359, P<…). Metacognition beliefs predicted about 8% of the variance of alexithymia (β=-0.028, P<…). Metacognition beliefs and general health had an important role in predicting alexithymia in students.

  12. A general phenomenological model for work function

    Science.gov (United States)

    Brodie, I.; Chou, S. H.; Yuan, H.

    2014-07-01

    A general phenomenological model is presented for obtaining the zero Kelvin work function of any crystal facet of metals and semiconductors, both clean and covered with a monolayer of electropositive atoms. It utilizes the known physical structure of the crystal and the Fermi energy of the two-dimensional electron gas assumed to form on the surface. A key parameter is the number of electrons donated to the surface electron gas per surface lattice site or adsorbed atom, which is taken to be an integer. Initially this is found by trial and later justified by examining the state of the valence electrons of the relevant atoms. In the case of adsorbed monolayers of electropositive atoms a satisfactory justification could not always be found, particularly for cesium, but a trial value always predicted work functions close to the experimental values. The model can also predict the variation of work function with temperature for clean crystal facets. The model is applied to various crystal faces of tungsten, aluminium, silver, and select metal oxides, and most demonstrate good fits compared to available experimental values.

  13. Predicting glycated hemoglobin levels in the non-diabetic general population

    DEFF Research Database (Denmark)

    Rauh, Simone P; Heymans, Martijn W; Koopman, Anitra D M

    2017-01-01

    AIMS/HYPOTHESIS: To develop a prediction model that can predict HbA1c levels after six years in the non-diabetic general population, including previously used readily available predictors. METHODS: Data from 5,762 initially non-diabetic subjects from three population-based cohorts (Hoorn Study, I...

  14. Webinar of paper 2013, Which method predicts recidivism best? A comparison of statistical, machine learning and data mining predictive models

    NARCIS (Netherlands)

    Tollenaar, N.; Van der Heijden, P.G.M.

    2013-01-01

    Using criminal population criminal conviction history information, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining

  15. Simulation modelling in agriculture: General considerations. | R.I. ...

    African Journals Online (AJOL)

    The computer does all the necessary arithmetic when the hypothesis is invoked to predict the future behaviour of the simulated system under given conditions.A general ... in the advisory service. Keywords: agriculture; botany; computer simulation; modelling; simulation model; simulation modelling; south africa; techniques ...

  16. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  17. An international risk prediction algorithm for the onset of generalized anxiety and panic syndromes in general practice attendees : predictA

    NARCIS (Netherlands)

    King, M.; Bottomley, C.; Bellon-Saameno, J. A.; Torres-Gonzalez, F.; Svab, I.; Rifel, J.; Maaroos, H. -I.; Aluoja, A.; Geerlings, M. I.; Xavier, M.; Carraca, I.; Vicente, B.; Saldivia, S.; Nazareth, I.

    Background. There are no risk models for the prediction of anxiety that may help in prevention. We aimed to develop a risk algorithm for the onset of generalized anxiety and panic syndromes. Method. Family practice attendees were recruited between April 2003 and February 2005 and followed over 24 months.

  18. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared...... on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We...... demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models....
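
    The core of the proposed algorithm, the concordance index, is easy to state in code: among patient pairs with different outcomes, count the fraction in which the higher stage accompanies the worse outcome. The sketch below is a minimal illustration with invented data.

```python
import itertools

# Pairwise concordance index for an ordinal staging system: the fraction
# of outcome-discordant patient pairs that the system ranks correctly.
def concordance(stages, outcomes):
    usable = concordant = 0
    for (s1, o1), (s2, o2) in itertools.combinations(zip(stages, outcomes), 2):
        if o1 == o2:
            continue                    # outcome ties are uninformative
        usable += 1
        if s1 == s2:
            concordant += 0.5           # tied stages count half
        elif (s1 > s2) == (o1 > o2):
            concordant += 1
    return concordant / usable

old = [1, 2, 2, 3, 3, 4]; new = [1, 1, 2, 2, 3, 4]   # two rival systems
outcome = [0, 1, 2, 2, 3, 5]                          # e.g. severity rank
print("old system:", concordance(old, outcome))
print("new system:", concordance(new, outcome))
```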

  19. Generalized ESO and Predictive Control Based Robust Autopilot Design

    Directory of Open Access Journals (Sweden)

    Bhavnesh Panchal

    2016-01-01

    Full Text Available A novel continuous-time predictive control and generalized extended state observer (GESO) based acceleration-tracking pitch autopilot design is proposed for a tail-controlled, skid-to-turn tactical missile. As the dynamics of the missile are significantly uncertain, with mismatched uncertainty, a GESO is employed to estimate the state and uncertainty in an integrated manner. The estimates are used to meet the state requirement and to robustify the output-tracking predictive controller designed for the nominal system. Closed-loop stability for the controller-observer structure is established. An important feature of the proposed design is that it does not require any specific information about the uncertainty. The predictive control design also yields the feedback control gain and the disturbance compensation gain simultaneously. The effectiveness of the GESO in estimating the states and uncertainties, and in robustifying the predictive controller in the presence of parametric uncertainties, external disturbances, unmodeled dynamics, and measurement noise, is illustrated by simulation.

  20. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  1. Spatial prediction of Soil Organic Carbon contents in croplands, grasslands and forests using environmental covariates and Generalized Additive Models (Southern Belgium)

    Science.gov (United States)

    Chartin, Caroline; Stevens, Antoine; van Wesemael, Bas

    2015-04-01

    Providing spatially continuous Soil Organic Carbon (SOC) data is needed to support decisions regarding soil management, and to inform the political debate with quantified estimates of the status and change of the soil resource. Digital Soil Mapping (DSM) techniques are based on relations existing between a soil parameter (measured at different locations in space at a defined period) and relevant covariates (spatially continuous data) that are factors controlling soil formation and explaining the spatial variability of the target variable. This study aimed to apply DSM techniques to recent SOC content measurements (2005-2013) in three different land uses, i.e. cropland, grassland, and forest, in the Walloon region (Southern Belgium). For this purpose, the SOC databases of two regional Soil Monitoring Networks (CARBOSOL for croplands and grasslands, and IPRFW for forests) were first harmonized, totalling about 1,220 observations. Median values of SOC content for croplands, grasslands, and forests are, respectively, 12.8, 29.0, and 43.1 g C kg-1. Then, a set of spatial layers was prepared with a resolution of 40 meters and with the same grid topology, containing environmental covariates such as land use, a Digital Elevation Model and its derivatives, soil texture, C factor, carbon inputs by manure, and climate. Here, in addition to the three classical texture classes (clay, silt, and sand), we tested the use of clay + fine silt content (particles < …) … agricultural soils and forests was for the first time computed for the Walloon region.

  2. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  3. A Simple General Model of Evolutionary Dynamics

    Science.gov (United States)

    Thurner, Stefan

    Evolution is a process in which some variations that emerge within a population (of, e.g., biological species or industrial goods) get selected, survive, and proliferate, whereas others vanish. Survival probability, proliferation, or production rates are associated with the "fitness" of a particular variation. We argue that the notion of fitness is an a posteriori concept in the sense that one can assign higher fitness to species or goods that survive but one can generally not derive or predict fitness per se. Whereas proliferation rates can be measured, fitness landscapes, that is, the inter-dependence of proliferation rates, cannot. For this reason we think that in a physical theory of evolution such notions should be avoided. Here we review a recent quantitative formulation of evolutionary dynamics that provides a framework for the co-evolution of species and their fitness landscapes (Thurner et al., 2010, Physica A 389, 747; Thurner et al., 2010, New J. Phys. 12, 075029; Klimek et al., 2010, Phys. Rev. E 82, 011901). The corresponding model leads to a generic evolutionary dynamics characterized by phases of relative stability in terms of diversity, followed by phases of massive restructuring. These dynamical modes can be interpreted as punctuated equilibria in biology, or Schumpeterian business cycles (Schumpeter, 1939, Business Cycles, McGraw-Hill, London) in economics. We show that phase transitions that separate phases of high and low diversity can be approximated surprisingly well by mean-field methods. We demonstrate that the mathematical framework is suited to understand systemic properties of evolutionary systems, such as their proneness to collapse, or their potential for diversification. The framework suggests that evolutionary processes are naturally linked to self-organized criticality and to properties of production matrices, such as their eigenvalue spectra. Even though the model is phrased in general terms it is also practical in the sense

  4. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  6. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    Monte Carlo simulation can be defined as a representation of real-life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and fie...... as support for decision making. However, several other factors affect decision making, such as ethics, politics and economics. Furthermore, the insight gained when models are built helps point out areas where knowledge is lacking.... of FMD spread that can provide useful and trustworthy advice, there are four important issues which the model should represent: 1) the herd structure of the country in question, 2) the dynamics of animal movements and contacts between herds, 3) the biology of the disease, and 4) the regulations...

  7. Gravitational redshift of galaxies in clusters as predicted by general relativity.

    Science.gov (United States)

    Wojtak, Radosław; Hansen, Steen H; Hjorth, Jens

    2011-09-28

    The theoretical framework of cosmology is mainly defined by gravity, of which general relativity is the current model. Recent tests of general relativity within the Lambda Cold Dark Matter (ΛCDM) model have found a concordance between predictions and the observations of the growth rate and clustering of the cosmic web. General relativity has not hitherto been tested on cosmological scales independently of the assumptions of the ΛCDM model. Here we report an observation of the gravitational redshift of light coming from galaxies in clusters at the 99 per cent confidence level, based on archival data. Our measurement agrees with the predictions of general relativity and its modification created to explain cosmic acceleration without the need for dark energy (the f(R) theory), but is inconsistent with alternative models designed to avoid the presence of dark matter. © 2011 Macmillan Publishers Limited. All rights reserved

  8. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  9. Cosmological models in the generalized Einstein action

    International Nuclear Information System (INIS)

    Arbab, A.I.

    2007-12-01

    We have studied the evolution of the Universe in the generalized Einstein action of the form R + βR², where R is the scalar curvature and β = const. We have found exact cosmological solutions that predict the present cosmic acceleration. These models predict an inflationary de-Sitter era occurring in the early Universe. The cosmological constant (Λ) is found to decay with the Hubble constant (H) as Λ ∝ H⁴. In this scenario the cosmological constant varies quadratically with the energy density (ρ), i.e., Λ ∝ ρ². Such a variation is found to describe a two-component cosmic fluid in the Universe. One of the components accelerated the Universe in the early era, and the other in the present era. The scale factor of the Universe varies as a ∼ tⁿ with n = 1/2 in the radiation era. The cosmological constant vanishes when n = 4/3 and n = 1/2. We have found that the inclusion of the R² term mimics a cosmic matter that could substitute for ordinary matter. (author)

  10. General Computational Model for Human Musculoskeletal System of Spine

    Directory of Open Access Journals (Sweden)

    Kyungsoo Kim

    2012-01-01

    Full Text Available A general computational model of the human lumbar spine and trunk muscles including optimization formulations was provided. For a given condition, the trunk muscle forces could be predicted considering the human physiology including the follower load concept. The feasibility of the solution could be indirectly validated by comparing the compressive force, the shear force, and the joint moment. The presented general computational model and optimization technology can be fundamental tools to understand the control principle of human trunk muscles.

  11. Predicting metal toxicity revisited: general properties vs. specific effects.

    Science.gov (United States)

    Wolterbeek, H T; Verburg, T G

    2001-11-12

    The present paper addresses the prediction of metal toxicity through evaluation of the relationships between general metal properties and toxic effects. Metal toxicity data were taken from 30 literature data sets, which varied widely in exposure times, organisms, effects and effect levels. General metal properties were selected on the basis of a literature review of basic metal property classifications: the electrochemical potential ΔE0, the ionization potential IP, the ratio between atomic radius and atomic weight AR/AW, and the electronegativity Xm. The results suggest that toxicity prediction may be performed on the basis of these fixed metal properties without any adaptation to specific organisms, without any division of metals into classes, and without grouping of toxicity tests. The results further indicate that metal properties contribute to the observed effects with relative importances that depend on the specific effects, effect levels, exposure times, selected organisms and ambient conditions. The discussion strongly suggests that prediction should be by interpolation rather than by extrapolation of calibrated toxicity data: the concept here is that unknown metal toxicities are predicted on the basis of observed metal toxicities in calibration experiments. Considering the metal properties used, the calibration covers the largest number of metals through the simultaneous use of Ge(IV), Cs(I), Li(I), Mn(VII), Sc and Bi in toxicity studies. Based on the data from the 30 studies considered, metal toxicities could be ordered in a relative way. This ordering indicates that the natural abundance of metals or metal ions in the Earth's crust may be regarded as a general comparative measure of metal toxicities. The problems encountered in toxicity interpretation and in the ordering of toxicities indicate that control of the solution acidity, the metal's solubility and the metal's oxidation state may be key problems to overcome in future metal-ion toxicity studies.

  12. Predictive Modeling of Partitioned Systems: Implementation and Applications

    OpenAIRE

    Latten, Christine

    2014-01-01

    A general mathematical methodology for predictive modeling of coupled multi-physics systems is implemented and has been applied without change to an illustrative heat conduction example and reactor physics benchmarks.

  13. Actuarial statistics with generalized linear mixed models

    NARCIS (Netherlands)

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  14. A model to predict the beginning of the pollen season

    DEFF Research Database (Denmark)

    Toldam-Andersen, Torben Bo

    1991-01-01

    In order to predict the beginning of the pollen season, a model comprising the Utah phenoclimatography Chill Unit (CU) and ASYMCUR-Growing Degree Hour (GDH) submodels was used to predict the first bloom in Alnus, Ulmus and Betula. The model relates environmental temperatures to rest completion...... and bud development. As the phenologic parameter, 14 years of pollen counts were used. The observed dates for the beginning of the pollen seasons were defined from the pollen counts and compared with the model prediction. The CU and GDH submodels were used as: 1. A fixed day model, using only the GDH model...... for fruit trees are generally applicable, and give a reasonable description of the growth processes of other trees. This type of model can therefore be of value in predicting the start of the pollen season. The predicted dates were generally within 3-5 days of the observed. Finally the possibility of frost...
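
    A minimal sketch of the growing-degree-hour accumulation that such submodels use (the base temperature and bloom threshold below are hypothetical, not the paper's calibrated values):

        # Growing Degree Hour (GDH) accumulation: bloom is predicted when the
        # accumulated heat sum after rest completion crosses a calibrated threshold.
        BASE_TEMP = 4.0       # deg C; hypothetical base temperature
        GDH_THRESHOLD = 6000  # hypothetical bloom threshold

        def first_bloom_hour(hourly_temps):
            gdh = 0.0
            for hour, temp in enumerate(hourly_temps):
                gdh += max(0.0, temp - BASE_TEMP)  # only warmth above base counts
                if gdh >= GDH_THRESHOLD:
                    return hour  # hours since rest completion
            return None  # threshold not reached

        # Example: constant 10 deg C -> 6 GDH per hour -> bloom after 1000 hours.
        print(first_bloom_hour([10.0] * 2000))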

  15. Generalized Linear Models in Family Studies

    Science.gov (United States)

    Wu, Zheng

    2005-01-01

    Generalized linear models (GLMs), as defined by J. A. Nelder and R. W. M. Wedderburn (1972), unify a class of regression models for categorical, discrete, and continuous response variables. As an extension of classical linear models, GLMs provide a common body of theory and methodology for some seemingly unrelated models and procedures, such as…

  16. Micro Data and General Equilibrium Models

    DEFF Research Database (Denmark)

    Browning, Martin; Hansen, Lars Peter; Heckman, James J.

    1999-01-01

    Dynamic general equilibrium models are required to evaluate policies applied at the national level. To use these models to make quantitative forecasts requires knowledge of an extensive array of parameter values for the economy at large. This essay describes the parameters required for different...... economic models, assesses the discordance between the macromodels used in policy evaluation and the microeconomic models used to generate the empirical evidence. For concreteness, we focus on two general equilibrium models: the stochastic growth model extended to include some forms of heterogeneity...

  17. Effective and Robust Generalized Predictive Speed Control of Induction Motor

    Directory of Open Access Journals (Sweden)

    Patxi Alkorta

    2013-01-01

    Full Text Available This paper presents and validates a new proposal for effective speed vector control of induction motors based on a linear Generalized Predictive Control (GPC) law. The presented GPC-PI cascade configuration simplifies the design relative to the GPC-GPC cascade configuration while maintaining the advantages of the predictive control algorithm. The robust stability of the closed-loop system is demonstrated by the pole placement method for several typical cases of uncertainties in induction motors. The controller has been tested using several simulations and experiments and has been compared with Proportional Integral Derivative (PID) and Sliding Mode (SM) control schemes, obtaining outstanding results in speed tracking even in the presence of parameter uncertainties, unknown load disturbance, and measurement noise in the loop signals, suggesting its use in industrial applications.
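
    A minimal numerical sketch of the unconstrained GPC law underlying such controllers (the step-response samples, horizons and weight below are hypothetical; in receding-horizon fashion only the first computed increment is applied at each sample):

        import numpy as np

        # Unconstrained GPC: minimize sum (w - y_pred)^2 + lam * sum du^2, with
        # y_pred = G @ du + f, where G is the step-response (dynamic) matrix and
        # f the free response. The optimal increments follow in closed form:
        #   du = (G'G + lam*I)^{-1} G' (w - f)
        g = np.array([0.2, 0.5, 0.8, 1.0, 1.1])  # hypothetical step-response samples
        N, Nu, lam = 5, 2, 0.1                   # horizons and control weight

        G = np.zeros((N, Nu))
        for i in range(N):
            for j in range(Nu):
                if i - j >= 0:
                    G[i, j] = g[i - j]

        w = np.ones(N)   # setpoint over the prediction horizon
        f = np.zeros(N)  # free response (plant at rest here)
        du = np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T @ (w - f))
        print("optimal control increments:", du)  # apply du[0] only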

  18. Predictive capabilities of various constitutive models for arterial tissue.

    Science.gov (United States)

    Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel

    2018-02-01

    Aim of this study is to validate some constitutive models by assessing their capabilities in describing and predicting uniaxial and biaxial behavior of porcine aortic tissue. 14 samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests. Transversal strains were furthermore stored for uniaxial data. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed to uniaxial and biaxial data sets separately and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. First each model was fitted to biaxial data and its accuracy (in terms of R² and NRMSE) in predicting both uniaxial responses was evaluated. Then this procedure was performed conversely: each model was fitted to both uniaxial tests and its accuracy in predicting the 5 biaxial responses was observed. Descriptive capabilities of all models were excellent. In predicting uniaxial response from biaxial data, the Microfiber model was the most accurate while the other models also showed reasonable accuracy. The Microfiber and FFF models were capable of reasonably predicting biaxial responses from uniaxial data while the HGO and GST models failed completely in this task. The HGO and GST models are not capable of predicting biaxial arterial wall behavior while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models. Copyright © 2017 Elsevier Ltd. All rights reserved.
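
    For reference, the HGO strain-energy density mentioned above is commonly written as follows (standard textbook form with two symmetric fiber families, quoted for orientation rather than from this paper; μ, k₁, k₂ are material parameters and I₄, I₆ are fiber-direction invariants):

        \[ \Psi = \frac{\mu}{2}\,(I_1 - 3) + \frac{k_1}{2k_2} \sum_{i=4,6} \left\{ \exp\!\left[ k_2 (I_i - 1)^2 \right] - 1 \right\} \]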

  19. A general consumer-resource population model

    Science.gov (United States)

    Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.

    2015-01-01

    Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.

  20. Conformity and Dissonance in Generalized Voter Models

    Science.gov (United States)

    Page, Scott E.; Sander, Leonard M.; Schneider-Mizell, Casey M.

    2007-09-01

    We generalize the voter model to include social forces that produce conformity among voters and avoidance of cognitive dissonance of opinions within a voter. The time for both conformity and consistency (which we call the exit time) is, in general, much longer than for either process alone. We show that our generalized model can be applied quite widely: it is a form of Wright's island model of population genetics, and is related to problems in the physical sciences. We give scaling arguments, numerical simulations, and analytic estimates for the exit time for a range of relative strengths in the tendency to conform and to avoid dissonance.

  1. A Generalized Random Regret Minimization Model

    NARCIS (Netherlands)

    Chorus, C.G.

    2013-01-01

    This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM
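
    For orientation, the attribute-level regret in this family of models can be sketched as follows (notation assumed from the RRM literature, not quoted from the paper; γₘ is the regret-weight, with γₘ = 1 recovering the classical RRM form):

        \[ R_i = \sum_{j \neq i} \sum_{m} \ln\!\left( \gamma_m + \exp\!\left[ \beta_m \,( x_{jm} - x_{im} ) \right] \right) \]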

  2. A general circulation model (GCM) parameterization of Pinatubo aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Lacis, A.A.; Carlson, B.E.; Mishchenko, M.I. [NASA Goddard Institute for Space Studies, New York, NY (United States)

    1996-04-01

    The June 1991 volcanic eruption of Mt. Pinatubo is the largest and best documented global climate forcing experiment in recorded history. The time development and geographical dispersion of the aerosol has been closely monitored and sampled. Based on preliminary estimates of the Pinatubo aerosol loading, general circulation model predictions of the impact on global climate have been made.

  3. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  4. EOP MIT General Circulation Model (MITgcm)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data contains a regional implementation of the Massachusetts Institute of Technology general circulation model (MITgcm) at a 1-km spatial resolution for the...

  5. Generalized Reduced Order Model Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  6. Generalized Reduced Order Model Generation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  7. Empirical generalization assessment of neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1995-01-01

    This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest to use the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model. This enables the formulation of a bulk of new generalization performance measures. Numerical results demonstrate the viability of the approach compared to the standard technique of using algebraic estimates like the FPE. Moreover, we consider the problem of comparing the generalization performance of different competing models. Since all models are trained on the same data, a key issue is to take this dependency into account. The optimal split of the data set of size N into a cross-validation set of size Nγ and a training set of size N(1-γ) is discussed. Asymptotically (large data sets), γopt→1...
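
    A minimal sketch of the cross-validation-with-resampling idea (toy linear-regression data; the split fraction γ follows the notation above, and all numbers are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        N, gamma, repeats = 200, 0.25, 100  # gamma: fraction held out for validation

        # Hypothetical regression data: y = 2x + noise.
        x = rng.uniform(-1, 1, N)
        y = 2 * x + rng.normal(0, 0.3, N)

        errors = []
        for _ in range(repeats):
            idx = rng.permutation(N)
            n_val = int(gamma * N)
            val, train = idx[:n_val], idx[n_val:]
            # Fit a simple linear model on the training split.
            slope, intercept = np.polyfit(x[train], y[train], 1)
            resid = y[val] - (slope * x[val] + intercept)
            errors.append(np.mean(resid ** 2))

        # The spread of the resampled errors estimates the generalization
        # performance *distribution*, not just its mean.
        print(np.mean(errors), np.std(errors))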

  8. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  9. Predictive Modeling by the Cerebellum Improves Proprioception

    Science.gov (United States)

    Bhanpuri, Nasir H.; Okamura, Allison M.

    2013-01-01

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance. PMID:24005283

  10. Perturbed generalized multicritical one-matrix models

    Science.gov (United States)

    Ambjørn, J.; Chekhov, L.; Makeenko, Y.

    2018-03-01

    We study perturbations around the generalized Kazakov multicritical one-matrix model. The multicritical matrix model has a potential where the coefficients of zⁿ fall off only as a power 1/nˢ⁺¹. This implies that the potential and its derivatives have a cut along the real axis, leading to technical problems when one performs perturbations away from the generalized Kazakov model. Nevertheless it is possible to relate the perturbed partition function to the tau-function of a KdV hierarchy and solve the model by a genus expansion in the double scaling limit.

  11. GENERALIZED VISCOPLASTIC MODELING OF DEBRIS FLOW.

    Science.gov (United States)

    Chen, Cheng-lung

    1988-01-01

    The earliest model, developed by R. A. Bagnold, was based on the concept of the 'dispersive' pressure generated by grain collisions. Some efforts have recently been made by theoreticians in non-Newtonian fluid mechanics to modify or improve Bagnold's concept or model. A viable rheological model should consist of both a rate-independent part and a rate-dependent part. A generalized viscoplastic fluid (GVF) model that has both parts as well as two major rheological properties (i.e., the normal stress effect and a soil yield criterion) is shown to be sufficiently accurate, yet practical for general use in debris-flow modeling. In fact, Bagnold's model is found to be only a particular case of the GVF model. Analytical solutions for (steady) uniform debris flows in wide channels are obtained from the GVF model based on Bagnold's simplified assumption of constant grain concentration.
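
    Schematically, such a yield-plus-rate-dependent constitutive law can be written as follows (a generic viscoplastic form for illustration, not the paper's exact GVF equations; τ_y is the yield stress and γ̇ the shear rate):

        \[ \tau = \tau_y + \eta_1 \dot{\gamma} + \eta_2 \dot{\gamma}^2 \]

    Bagnold's grain-inertia regime then corresponds to the purely quadratic, rate-dependent limit of such a law.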

  12. Reduced order modelling and predictive control of multivariable ...

    Indian Academy of Sciences (India)

    Anuj Abraham

    2018-03-16

    Mar 16, 2018 ... The performance of the constrained generalized predictive control scheme is found to be superior to that of the conventional PID controller in terms of overshoot, settling time and performance indices, mainly ISE, IAE and MSE. Keywords. Predictive control; distillation column; reduced order model; dominant pole; ...

  13. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...

  14. Generalization of the quark rearrangement model

    International Nuclear Information System (INIS)

    Fields, T.; Chen, C.K.

    1976-01-01

    An extension and generalization of the quark rearrangement model of baryon annihilation is described which can be applied to all annihilation reactions and which incorporates some of the features of the highly successful quark parton model. Some p anti-p interactions are discussed

  15. Geometrical efficiency in computerized tomography: generalized model

    International Nuclear Information System (INIS)

    Costa, P.R.; Robilotta, C.C.

    1992-01-01

    A simplified model for producing sensitivity and exposure profiles in computerized tomographic systems was recently developed, allowing the forecast of profile behaviour at the rotation center of the system. The generalization of this model to an arbitrary point of the image plane is described, from which the geometrical efficiency can be evaluated. (C.G.C.)

  16. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  17. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  18. Use of a Machine-learning Method for Predicting Highly Cited Articles Within General Radiology Journals.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Doshi, Ankur M; Ginocchio, Luke A; Aphinyanaphongs, Yindalon

    2016-12-01

    This study aimed to assess the performance of a text classification machine-learning model in predicting highly cited articles within the recent radiological literature and to identify the model's most influential article features. We downloaded from PubMed the title, abstract, and medical subject heading terms for 10,065 articles published in 25 general radiology journals in 2012 and 2013. Three machine-learning models were applied to predict the top 10% of included articles in terms of the number of citations to the article in 2014 (reflecting the 2-year time window in conventional impact factor calculations). The model having the highest area under the curve was selected to derive a list of article features (words) predicting high citation volume, which was iteratively reduced to identify the smallest possible core feature list maintaining predictive power. Overall themes were qualitatively assigned to the core features. The regularized logistic regression (Bayesian binary regression) model had highest performance, achieving an area under the curve of 0.814 in predicting articles in the top 10% of citation volume. We reduced the initial 14,083 features to 210 features that maintain predictivity. These features corresponded with topics relating to various imaging techniques (eg, diffusion-weighted magnetic resonance imaging, hyperpolarized magnetic resonance imaging, dual-energy computed tomography, computed tomography reconstruction algorithms, tomosynthesis, elastography, and computer-aided diagnosis), particular pathologies (prostate cancer; thyroid nodules; hepatic adenoma, hepatocellular carcinoma, non-alcoholic fatty liver disease), and other topics (radiation dose, electroporation, education, general oncology, gadolinium, statistics). Machine learning can be successfully applied to create specific feature-based models for predicting articles likely to achieve high influence within the radiological literature. Copyright © 2016 The Association of University
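
    A minimal sketch of this kind of text-based citation classifier (hypothetical toy data; the study's actual model was a regularized Bayesian logistic regression over title, abstract and MeSH features, for which sklearn's LogisticRegression is used here as a stand-in):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Hypothetical toy corpus: article text and whether it reached the
        # top 10% of 2-year citation counts.
        texts = [
            "dual-energy computed tomography reconstruction algorithm",
            "case report of a rare benign finding",
            "diffusion-weighted MRI of prostate cancer",
            "departmental workflow survey results",
        ]
        top_cited = [1, 0, 1, 0]

        model = make_pipeline(
            TfidfVectorizer(),          # word features from title/abstract
            LogisticRegression(C=1.0),  # L2-regularized logistic regression
        )
        model.fit(texts, top_cited)
        print(model.predict(["tomosynthesis with computer-aided diagnosis"]))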

  19. Generalized linear model for partially ordered data.

    Science.gov (United States)

    Zhang, Qiang; Ip, Edward Haksing

    2012-01-13

    Within the rich literature on generalized linear models, substantial efforts have been devoted to models for categorical responses that are either completely ordered or completely unordered. Few studies have focused on the analysis of partially ordered outcomes, which arise in practically every area of study, including medicine, the social sciences, and education. To fill this gap, we propose a new class of generalized linear models--the partitioned conditional model--that includes models for both ordinal and unordered categorical data as special cases. We discuss the specification of the partitioned conditional model and its estimation. We use an application of the method to a sample of the National Longitudinal Study of Youth to illustrate how the new method is able to extract from partially ordered data useful information about smoking youths that is not possible using traditional methods. Copyright © 2011 John Wiley & Sons, Ltd.

  20. Troponin I and cardiovascular risk prediction in the general population

    DEFF Research Database (Denmark)

    Blankenberg, Stefan; Salomaa, Veikko; Makarova, Nataliya

    2016-01-01

    population-based studies including 74 738 participants. We investigated the value of adding troponin I levels to conventional risk factors for prediction of cardiovascular disease by calculating measures of discrimination (C-index) and net reclassification improvement (NRI). We further tested the clinical....... The addition of troponin I information to a prognostic model for cardiovascular death constructed of ESC SCORE variables increased the C-index discrimination measure by 0.007 and yielded an NRI of 0.048, whereas the addition to prognostic models for cardiovascular disease and total mortality led to lesser C-index discrimination and NRI increment. In individuals above 6 ng/L of troponin I, a concentration near the upper quintile in BiomarCaRE (5.9 ng/L) and JUPITER (5.8 ng/L), rosuvastatin therapy resulted in higher absolute risk reduction compared with individuals...

  1. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2014-11-05

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for bioinformatics research. Although existing methodologies have increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell lines, class imbalance within the learning sets and ad hoc rules for selecting enhancer candidates for supervised learning are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancers' properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from the VISTA database. DEEP-VISTA, when tested on an independent test set, achieved GM of 80.1% and accuracy of 89.64%. The DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.

  2. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

    In the field of environmental impact assessment, models are used for estimating source terms, environmental dispersion and transfer of radionuclides, exposure pathways, radiation doses and the risk for human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. The model intercomparison exercise supplied incompatible results for ¹³⁷Cs and ⁶⁰Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common basis of comparison. The results of the intercomparison exercise are presented briefly. (author)

  3. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  4. A general model for membrane-based separation processes

    DEFF Research Database (Denmark)

    Soni, Vipasha; Abildskov, Jens; Jonsson, Gunnar Eigil

    2009-01-01

    A separation process could be defined as a process that transforms a given mixture of chemicals into two or more compositionally distinct end-use products. One way to design these separation processes is to employ a model-based approach, where mathematical models that reliably predict the process...... behaviour will play an important role. In this paper, modelling of membrane-based processes for the separation of gas and liquid mixtures is considered. Two general models, one for membrane-based liquid separation processes (with phase change) and another for membrane-based gas separation, are presented..... The separation processes covered are: membrane-based gas separation processes, pervaporation and various types of membrane distillation processes. The specific model for each type of membrane-based process is generated from the two general models by applying the specific system descriptions and the corresponding......

  5. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
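
    A minimal sketch of the one-step Kalman predictor on which such prediction-error criteria are built (scalar state-space model with hypothetical parameters):

        import numpy as np

        # Scalar linear stochastic state-space model (hypothetical parameters):
        #   x[k+1] = a x[k] + w[k],   y[k] = c x[k] + v[k]
        a, c, q, r = 0.9, 1.0, 0.05, 0.2

        rng = np.random.default_rng(1)
        x, xhat, p = 0.0, 0.0, 1.0
        pred_errors = []
        for _ in range(200):
            # Simulate the true system one step.
            x = a * x + rng.normal(0, np.sqrt(q))
            y = c * x + rng.normal(0, np.sqrt(r))
            # One-step prediction error (innovation) used by the criterion.
            e = y - c * xhat
            pred_errors.append(e)
            # Kalman measurement update, then time update to the next step.
            k_gain = p * c / (c * p * c + r)
            xhat = xhat + k_gain * e
            p = (1 - k_gain * c) * p
            xhat = a * xhat
            p = a * p * a + q

        # A least-squares prediction-error criterion sums the squared innovations;
        # maximum likelihood additionally weights by the innovation variance.
        print("PE criterion (LS):", np.sum(np.square(pred_errors)))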

  6. Generalizations of the noisy-or model

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří

    2015-01-01

    Roč. 51, č. 3 (2015), s. 508-524 ISSN 0023-5954 R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords : Bayesian networks * noisy-or model * classification * generalized linear models Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.628, year: 2015 http://library.utia.cas.cz/separaty/2015/MTR/vomlel-0447357.pdf

  7. Toward General Analysis of Recursive Probability Models

    OpenAIRE

    Pless, Daniel; Luger, George

    2013-01-01

    There is increasing interest within the research community in the design and use of recursive probability models. Although there still remains concern about computational complexity costs and the fact that computing exact solutions can be intractable for many nonrecursive models and impossible in the general case for recursive problems, several research groups are actively developing computational techniques for recursive stochastic languages. We have developed an extension to the traditional...

  8. Modeling for prediction of restrained shrinkage effect in concrete repair

    International Nuclear Information System (INIS)

    Yuan Yingshu; Li Guo; Cai Yue

    2003-01-01

    A general model of autogenous shrinkage caused by chemical reaction (chemical shrinkage) is developed by means of Arrhenius' law and a degree of chemical reaction. Models of tensile creep and relaxation modulus are built based on a viscoelastic three-element model. Tests of free shrinkage and tensile creep were carried out to determine some coefficients in the models. Two-dimensional FEM analysis based on these models and other constitutive relations can predict the development of tensile strength and cracking. Three groups of patch-repaired beams were designed for analysis and testing. The prediction from the analysis shows agreement with the test results. The cracking mechanism after repair is discussed.

  9. Generalized Joint Hypermobility Is Predictive of Hip Capsular Thickness.

    Science.gov (United States)

    Devitt, Brian M; Smith, Bjorn N; Stapf, Robert; Tacey, Mark; O'Donnell, John M

    2017-04-01

    The pathomechanics of hip microinstability are not clearly defined but are thought to involve anatomical abnormalities, repetitive forces across the hip, and ligamentous laxity. The purpose of this study was to explore the relationship between generalized joint hypermobility (GJH) and hip capsular thickness. The hypothesis was that GJH would be predictive of a thin hip capsule. Cross-sectional study; Level of evidence, 3. A prospective study was performed on 100 consecutive patients undergoing primary hip arthroscopy for the treatment of hip pain. A Beighton test score (BTS) was obtained prior to each procedure. The maximum score was 9, and a score of ≥4 was defined as hypermobile. Capsular thickness at the level of the anterior portal, corresponding to the location of the iliofemoral ligament, was measured arthroscopically using a calibrated probe. The presence of ligamentum teres (LT) pathology was also recorded. Fifty-five women and 45 men were included in the study. The mean age was 32 years (range, 18-45 years). The median hip capsule thickness was statistically greater in men than women (12.5 and 7.5 mm, respectively). The median BTS for men was 1 compared with 4 for women. GJH was predictive of hip capsular thickness: a BTS of <4 correlated significantly with a capsular thickness of ≥10 mm, while a BTS of ≥4 correlated significantly with a thickness of <10 mm.

  10. Generalized Joint Hypermobility Is Predictive of Hip Capsular Thickness

    Science.gov (United States)

    Devitt, Brian M.; Smith, Bjorn N.; Stapf, Robert; Tacey, Mark; O’Donnell, John M.

    2017-01-01

    Background: The pathomechanics of hip microinstability are not clearly defined but are thought to involve anatomical abnormalities, repetitive forces across the hip, and ligamentous laxity. Purpose/Hypothesis: The purpose of this study was to explore the relationship between generalized joint hypermobility (GJH) and hip capsular thickness. The hypothesis was that GJH would be predictive of a thin hip capsule. Study Design: Cross-sectional study; Level of evidence, 3. Methods: A prospective study was performed on 100 consecutive patients undergoing primary hip arthroscopy for the treatment of hip pain. A Beighton test score (BTS) was obtained prior to each procedure. The maximum score was 9, and a score of ≥4 was defined as hypermobile. Capsular thickness at the level of the anterior portal, corresponding to the location of the iliofemoral ligament, was measured arthroscopically using a calibrated probe. The presence of ligamentum teres (LT) pathology was also recorded. Results: Fifty-five women and 45 men were included in the study. The mean age was 32 years (range, 18-45 years). The median hip capsule thickness was statistically greater in men than women (12.5 and 7.5 mm, respectively). The median BTS for men was 1 compared with 4 for women. GJH was predictive of hip capsular thickness. A BTS of <4 correlates significantly with a capsular thickness of ≥10 mm, while a BTS ≥4 correlates significantly with a thickness of <10 mm. PMID:28451620

  11. General Equilibrium Models: Improving the Microeconomics Classroom

    Science.gov (United States)

    Nicholson, Walter; Westhoff, Frank

    2009-01-01

    General equilibrium models now play important roles in many fields of economics including tax policy, environmental regulation, international trade, and economic development. The intermediate microeconomics classroom has not kept pace with these trends, however. Microeconomics textbooks primarily focus on the insights that can be drawn from the…

  12. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  13. Predictive Model Assessment for Count Data

    National Research Council Canada - National Science Library

    Czado, Claudia; Gneiting, Tilmann; Held, Leonhard

    2007-01-01

    .... In case studies, we critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. Key words: Calibration...

  14. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    ...that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore the prediction of the states is given as the solution to the ODEs and hence assumed...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs......) for modeling and forecasting. It is argued that this gives models and predictions which better reflect reality. The SDE approach also offers a more adequate framework for modeling and a number of efficient tools for model building. A software package (CTSM-R) for SDE-based modeling is briefly described.

  15. Recent and past musical activity predicts cognitive aging variability: direct comparison with general lifestyle activities.

    Science.gov (United States)

    Hanna-Pladdy, Brenda; Gajewski, Byron

    2012-01-01

    Studies evaluating the impact of modifiable lifestyle factors on cognition offer potential insights into sources of cognitive aging variability. Recently, we reported an association between extent of musical instrumental practice throughout the life span (greater than 10 years) and preserved cognitive functioning in advanced age. These findings raise the question of whether there are training-induced brain changes in musicians that can transfer to non-musical cognitive abilities to allow for compensation of age-related cognitive declines. However, because of the relationship between engagement in general lifestyle activities and preserved cognition, it remains unclear whether these findings are specifically driven by musical training or by the types of individuals likely to engage in greater activities in general. The current study controlled for general activity level in evaluating cognition between musicians and non-musicians. Also, the timing of engagement (age of acquisition, past versus recent) was assessed in predictive models of successful cognitive aging. Seventy age- and education-matched older musicians (>10 years) and non-musicians (ages 59-80) were evaluated on neuropsychological tests and general lifestyle activities. Musicians scored higher on tests of phonemic fluency, verbal working memory, verbal immediate recall, visuospatial judgment, and motor dexterity, but did not differ in other general leisure activities. Partition analyses were conducted on significant cognitive measures to determine aspects of musical training predictive of enhanced cognition. The first partition analysis revealed that education best predicted visuospatial functions in musicians, followed by recent musical engagement, which offset low education. In the second partition analysis, early age of musical acquisition predicted verbal working memory in musicians, while analyses for other measures were not predictive. Recent and past musical activity, but not general lifestyle activities, predicted variability...

  16. Proposal of computation chart for general use for diffusion prediction of discharged warm water

    International Nuclear Information System (INIS)

    Wada, Akira; Kadoyu, Masatake

    1976-01-01

    The authors have developed a unique simulation analysis method using numerical models for the prediction of discharged warm water diffusion. At the present stage, the method is adopted for precise analysis in order to predict the diffusion of discharged warm water at each survey point; however, there is a strong demand for simple and easy prediction methods to complement it. To meet this demand, this report provides a computation chart for general use to predict simply the diffusion range of discharged warm water, prepared by classifying the semi-infinite sea region into several flow patterns according to sea conditions and conducting systematic simulation analysis with the numerical model of each pattern. (1) Establishment of the computation conditions: The sea region investigated is semi-infinite, facing the open sea along a rectilinear coastline, selected from the many sea regions surrounding Japan; from the viewpoint of flow and diffusion characteristics, it was classified into three patterns. In total, 51 cases with various parameters were computed, and the systematic simulation analysis was performed. (2) Drawing up the general-use chart: 28 sheets of the computation chart were drawn, which can be used to compute the approximate temperature rise caused by discharged warm water diffusion. The example of the Anegasaki Thermal Power Station is given. (Kako, I.)

  17. Predictive models for arteriovenous fistula maturation.

    Science.gov (United States)

    Al Shakarchi, Julien; McGrogan, Damian; Van der Veer, Sabine; Sperrin, Matthew; Inston, Nicholas

    2016-05-07

    Haemodialysis (HD) is a lifeline therapy for patients with end-stage renal disease (ESRD). A critical factor in the survival of renal dialysis patients is the surgical creation of vascular access, and international guidelines recommend arteriovenous fistulas (AVF) as the gold standard of vascular access for haemodialysis. Despite this, AVFs have been associated with high failure rates. Although risk factors for AVF failure have been identified, their utility for predicting AVF failure through predictive models remains unclear. The objectives of this review are to systematically and critically assess the methodology and reporting of studies developing prognostic predictive models for AVF outcomes and to assess their suitability for clinical practice. Electronic databases were searched for studies reporting prognostic predictive models for AVF outcomes. Dual review was conducted to identify studies that reported on the development or validation of a model constructed to predict AVF outcome following creation. Data were extracted on study characteristics, risk predictors, statistical methodology, model type, as well as validation process. We included four different studies reporting five different predictive models. Parameters identified that were common to all scoring systems were age and cardiovascular disease. This review has found a small number of predictive models in vascular access. The disparity between each study limits the development of a unified predictive model.

  18. Model Predictive Control Fundamentals | Orukpe | Nigerian Journal ...

    African Journals Online (AJOL)

    Model Predictive Control (MPC) has developed considerably over the last two decades, both within the research control community and in industries. MPC strategy involves the optimization of a performance index with respect to some future control sequence, using predictions of the output signal based on a process model, ...

  19. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optim...

  20. Testing General Relativistic Predictions with the LAGEOS Satellites

    Directory of Open Access Journals (Sweden)

    Roberto Peron

    2014-01-01

    Full Text Available The spacetime around Earth is a good environment in which to perform tests of gravitational theories. According to Einstein’s view of gravitational phenomena, the Earth’s mass-energy content curves the surrounding spacetime in a peculiar way. This relatively quiet dynamical environment enables a good reconstruction of the orbits of geodetic satellites (test masses), provided that high-quality tracking data are available. This is the case for the LAGEOS satellites, built and launched mainly for geodetic and geodynamical purposes, but equally good for fundamental physics studies. A review of these studies is presented, focusing on data, models, and analysis strategies. Some recent and less recent results are presented. All of them indicate general relativity theory as a very good description of gravitational phenomena, at least in the studied environment.

  1. Models Predicting Success of Infertility Treatment: A Systematic Review

    Science.gov (United States)

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive and time consuming and is sometimes simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to develop a general picture of the applicability of the models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH key words. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. Papers covered years after 1986, and studies were designed retrospectively and prospectively. IVF prediction models account for the largest share of the papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be clinically applied if it can be statistically evaluated and has good validation for treatment success. To achieve better results, the physician's and the couple's estimation of the treatment success rate should be based on history, examination and clinical tests. Models must be checked for theoretical soundness and appropriate validation. The advantages of applying prediction models are decreased cost and time, avoidance of painful treatment for patients, assessment of the treatment approach for physicians, and support for decision making by health managers. The selection of an appropriate approach for designing and using these models is therefore essential. PMID:27141461

  2. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  3. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
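
    A minimal sketch of the residual-correction flavor of such a hybrid (hypothetical first-principles predictor and toy data; the paper combined a physiological model with autoregressive and neural-network components):

        import numpy as np

        def physics_model(t):
            # Hypothetical first-principles core-temperature prediction (deg C).
            return 37.0 + 0.4 * np.sin(t / 30.0)

        rng = np.random.default_rng(2)
        t = np.arange(120.0)  # minutes
        observed = physics_model(t) + 0.15 * np.sin(t / 7.0) + rng.normal(0, 0.02, t.size)

        # Fit an AR(2) model to the residuals of the first-principles model.
        resid = observed - physics_model(t)
        Y = resid[2:]
        X = np.column_stack([resid[1:-1], resid[:-2]])
        a1, a2 = np.linalg.lstsq(X, Y, rcond=None)[0]

        # Hybrid one-step-ahead prediction: physics plus data-driven residual term.
        t_next = 120.0
        hybrid = physics_model(t_next) + a1 * resid[-1] + a2 * resid[-2]
        print("hybrid prediction:", hybrid)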

  4. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using a Poisson mixture regression model. PMID:27999611
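
    A minimal sketch of the two-component Poisson mixture idea (a plain mixture without covariates, fitted by EM on hypothetical counts; the paper's models additionally include regression terms and concomitant variables):

        import numpy as np
        from scipy.stats import poisson

        # Hypothetical event counts drawn from a low-risk and a high-risk group.
        rng = np.random.default_rng(3)
        counts = np.concatenate([rng.poisson(1.0, 150), rng.poisson(6.0, 50)])

        # EM for a two-component Poisson mixture.
        pi, lam = 0.5, np.array([0.5, 5.0])  # initial guesses
        for _ in range(100):
            # E-step: posterior probability of the high-risk component.
            like0 = (1 - pi) * poisson.pmf(counts, lam[0])
            like1 = pi * poisson.pmf(counts, lam[1])
            resp = like1 / (like0 + like1)
            # M-step: update mixing weight and component rates.
            pi = resp.mean()
            lam[1] = (resp * counts).sum() / resp.sum()
            lam[0] = ((1 - resp) * counts).sum() / (1 - resp).sum()

        print("mixing weight:", round(pi, 3), "rates:", lam.round(2))
        # Each individual is assigned to the high- or low-risk cluster via resp.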

  5. Improved Generalized Force Model considering the Comfortable Driving Behavior

    Directory of Open Access Journals (Sweden)

    De-Jie Xu

    2015-01-01

    Full Text Available This paper presents an improved generalized force model (IGFM) that considers the driver’s comfortable driving behavior. Through theoretical analysis, we propose calculation methods for the comfortable driving distance and velocity. The stability condition of the model is then obtained by linear stability analysis. The problem of unrealistic acceleration of the leading car found in previous models is thereby solved. Furthermore, the simulation results show that the IGFM can predict the correct delay time of car motion and kinematic wave speed at jam density, and it can exactly describe the driver’s behavior in an urgent case, where no collision occurs. The dynamic properties of the IGFM also indicate that stability is improved compared to the generalized force model.

  6. Using a Prediction Model to Manage Cyber Security Threats.

    Science.gov (United States)

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  7. Using a Prediction Model to Manage Cyber Security Threats

    Directory of Open Access Journals (Sweden)

    Venkatesh Jaganathan

    2015-01-01

    Full Text Available Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  8. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  9. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  10. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, the regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (weight matrix of image pixels) to enhance the classification performance. GRR uses the generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.

  11. Current definition and a generalized federbush model

    International Nuclear Information System (INIS)

    Singh, L.P.S.; Hagen, C.R.

    1978-01-01

    The Federbush model is studied, with particular attention being given to the definition of currents. Inasmuch as there is no a priori restriction of local gauge invariance, the currents in the interacting case can be defined more generally than in Q.E.D. It is found that two arbitrary parameters are thereby introduced into the theory. Lowest order perturbation calculations for the current correlation functions and the Fermion propagators indicate that the theory admits a whole class of solutions dependent upon these parameters with the closed solution of Federbush emerging as a special case. The theory is shown to be locally covariant, and a conserved energy--momentum tensor is displayed. One finds in addition that the generators of gauge transformations for the fields are conserved. Finally it is shown that the general theory yields the Federbush solution if suitable Thirring model type counterterms are added

  12. Generalized Additive Models for Nowcasting Cloud Shading

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Paulescu, M.; Badescu, V.

    2014-01-01

    Roč. 101, March (2014), s. 272-282 ISSN 0038-092X R&D Projects: GA MŠk LD12009 Grant - others:European Cooperation in Science and Technology(XE) COST ES1002 Institutional support: RVO:67985807 Keywords : sunshine number * nowcasting * generalized additive model * Markov chain Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014

  13. Generalized data stacking programming model with applications

    OpenAIRE

    Hala Samir Elhadidy; Rawya Yehia Rizk; Hassen Taher Dorrah

    2016-01-01

    Recent research has shown that systems across various sciences follow stack-based stored-change behavior when subjected to events or varying environments “on and above” their normal situations. This paper presents a generalized data stack programming (GDSP) model developed to describe system changes under a varying environment. These changes, captured in different ways such as sensor readings, are stored in matrices. An extraction algorithm and identif...

  14. A new general model for predicting melting thermodynamics of complementary and mismatched B-form duplexes containing locked nucleic acids: application to probe design for digital PCR detection of somatic mutations.

    Science.gov (United States)

    Hughesman, Curtis; Fakhfakh, Kareem; Bidshahri, Roza; Lund, H Louise; Haynes, Charles

    2015-02-17

    Advances in real-time polymerase chain reaction (PCR), as well as the emergence of digital PCR (dPCR) and useful modified nucleotide chemistries, including locked nucleic acids (LNAs), have created the potential to improve and expand clinical applications of PCR through their ability to better quantify and differentiate amplification products, but fully realizing this potential will require robust methods for designing dual-labeled hydrolysis probes and predicting their hybridization thermodynamics as a function of their sequence, chemistry, and template complementarity. We present here a nearest-neighbor thermodynamic model that accurately predicts the melting thermodynamics of a short oligonucleotide duplexed either to its perfect complement or to a template containing mismatched base pairs. The model may be applied to pure-DNA duplexes or to duplexes for which one strand contains any number and pattern of LNA substitutions. Perturbations to duplex stability arising from mismatched DNA:DNA or LNA:DNA base pairs are treated at the Gibbs energy level to maintain statistical significance in the regressed model parameters. This approach, when combined with the model's accounting of the temperature dependencies of the melting enthalpy and entropy, permits accurate prediction of T(m) values for pure-DNA homoduplexes or LNA-substituted heteroduplexes containing one or two independent mismatched base pairs. Terms accounting for changes in solution conditions and terminal addition of fluorescent dyes and quenchers are then introduced so that the model may be used to accurately predict and thereby tailor the T(m) of a pure-DNA or LNA-substituted hydrolysis probe when duplexed either to its perfect-match template or to a template harboring a noncomplementary base. The model, which builds on classic nearest-neighbor thermodynamics, should therefore be of use to clinicians and biologists who require probes that distinguish and quantify two closely related alleles in either a
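
    For orientation, the two-state nearest-neighbor arithmetic underlying such models can be sketched in a few lines. This is a simplified illustration, not the paper's LNA-aware model: it sums stack enthalpies and entropies from the unified DNA parameters of SantaLucia (1998), quoted from memory, omits initiation, mismatch, LNA, dye and salt terms, and uses the standard two-state Tm expression for a non-self-complementary duplex.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

# Illustrative nearest-neighbor parameters: dH in kcal/mol, dS in
# kcal/(mol*K). The 10 unique DNA stacks cover all 16 dinucleotides
# via reverse complementation.
NN = {"AA": (-7.9, -0.0222), "AT": (-7.2, -0.0204), "TA": (-7.2, -0.0213),
      "CA": (-8.5, -0.0227), "GT": (-8.4, -0.0224), "CT": (-7.8, -0.0210),
      "GA": (-8.2, -0.0222), "CG": (-10.6, -0.0272), "GC": (-9.8, -0.0244),
      "GG": (-8.0, -0.0199)}

RC = str.maketrans("ACGT", "TGCA")

def melting_temp(seq, total_strand_conc=0.25e-6):
    """Two-state Tm (deg C) for a non-self-complementary duplex from
    summed nearest-neighbor dH and dS; initiation terms omitted."""
    dH = dS = 0.0
    for i in range(len(seq) - 1):
        step = seq[i:i + 2]
        h, s = NN[step] if step in NN else NN[step[::-1].translate(RC)]
        dH += h
        dS += s
    return dH / (dS + R * math.log(total_strand_conc / 4.0)) - 273.15

print(round(melting_temp("AGCGTACCTGA"), 1))
```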

  15. A General Business Model for Marine Reserves

    Science.gov (United States)

    Sala, Enric; Costello, Christopher; Dougherty, Dawn; Heal, Geoffrey; Kelleher, Kieran; Murray, Jason H.; Rosenberg, Andrew A.; Sumaila, Rashid

    2013-01-01

    Marine reserves are an effective tool for protecting biodiversity locally, with potential economic benefits including enhancement of local fisheries, increased tourism, and maintenance of ecosystem services. However, fishing communities often fear short-term income losses associated with closures, and thus may oppose marine reserves. Here we review empirical data and develop bioeconomic models to show that the value of marine reserves (enhanced adjacent fishing + tourism) may often exceed the pre-reserve value, and that economic benefits can offset the costs in as little as five years. These results suggest the need for a new business model for creating and managing reserves, which could pay for themselves and turn a profit for stakeholder groups. Our model could be expanded to include ecosystem services and other benefits, and it provides a general framework to estimate costs and benefits of reserves and to develop such business models. PMID:23573192

  16. A Global Model for Bankruptcy Prediction.

    Science.gov (United States)

    Alaminos, David; Del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy.
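
    A minimal sketch of the core methodology (logistic regression on pooled firm data). The data here are synthetic and the feature columns hypothetical, standing in for the financial ratios such studies use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical design: rows are firm-years pooled across regions, columns
# are financial ratios (liquidity, leverage, profitability, ...);
# y = 1 if the firm went bankrupt within the horizon.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(size=1000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
global_model = LogisticRegression().fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, global_model.predict(X_te)))
```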

  17. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis-associated fingerprint changes are a significant problem and affect fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria and the presence of one minor criterion predict a high and a low risk of verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
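
    The derived decision rule translates directly into code. A sketch of the criteria exactly as reported in the abstract; the function name and return labels are illustrative.

```python
def verification_risk(dystrophy_pct, long_horizontal, long_vertical):
    """Risk stratification following the derived major/minor criteria:
    major = fingerprint dystrophy area >= 25%;
    minors = long horizontal lines, long vertical lines."""
    if dystrophy_pct >= 25:
        return "almost always fails verification"
    minors = int(long_horizontal) + int(long_vertical)
    if minors == 2:
        return "high risk of verification failure"
    if minors == 1:
        return "low risk of verification failure"
    return "almost always passes verification"

print(verification_risk(dystrophy_pct=10, long_horizontal=True, long_vertical=False))
```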

  18. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  19. Prediction of Periodontitis Occurrence: Influence of Classification and Sociodemographic and General Health Information

    DEFF Research Database (Denmark)

    Manzolli Leite, Fabio Renato; Peres, Karen Glazer; Do, Loc Giang

    2017-01-01

    BACKGROUND: Prediction of periodontitis development is challenging. Use of oral health-related data alone, especially in a young population, might underestimate disease risk. This study investigates accuracy of oral, systemic, and socioeconomic data on estimating periodontitis development...... in a population-based prospective cohort. METHODS: General health history and sociodemographic information were collected throughout the life-course of individuals. Oral examinations were performed at ages 24 and 31 years in the Pelotas 1982 birth cohort. Periodontitis at age 31 years according to six...... classifications was used as the gold standard to compute area under the receiver operating characteristic curve (AUC). Multivariable binomial regression models were used to evaluate the effects of oral health, general health, and socioeconomic characteristics on accuracy of periodontitis development prediction...

  20. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  1. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  9. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. A generalization of the bond fluctuation model to viscoelastic environments

    International Nuclear Information System (INIS)

    Fritsch, Christian C

    2014-01-01

    A lattice-based simulation method for polymer diffusion in a viscoelastic medium is presented. This method combines the eight-site bond fluctuation model with an algorithm for the simulation of fractional Brownian motion on the lattice. The method applies to unentangled self-avoiding chains and is probed for anomalous diffusion exponents α between 0.7 and 1.0. The simulation results are in very good agreement with the predictions of the generalized Rouse model of a self-avoiding chain polymer in a viscoelastic medium. (paper)

  13. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  14. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models

  15. Generalized data stacking programming model with applications

    Directory of Open Access Journals (Sweden)

    Hala Samir Elhadidy

    2016-09-01

    Full Text Available Recent research has shown that systems across various sciences follow stack-based stored-change behavior when subjected to events or varying environments “on and above” their normal situations. This paper presents a generalized data stack programming (GDSP) model developed to describe system changes under a varying environment. These changes, captured in different ways such as sensor readings, are stored in matrices. An extraction algorithm and an identification technique are proposed to extract the different layers between images and to identify the stack class that the object follows, respectively. The general multi-stacking network is presented, including the interaction between various stack-based layerings in some applications. The experiments prove that the concept of the stack matrix gives an average accuracy of 99.45%.

  16. Predictive models of prolonged mechanical ventilation yield moderate accuracy.

    Science.gov (United States)

    Figueroa-Casas, Juan B; Dwivedi, Alok K; Connery, Sean M; Quansah, Raphael; Ellerbrook, Lowell; Galvis, Juan

    2015-06-01

    To develop a model to predict prolonged mechanical ventilation within 48 hours of its initiation. In 282 general intensive care unit patients, multiple variables from the first 2 days on mechanical ventilation and their total ventilation duration were prospectively collected. Three models accounting for early deaths were developed using different analyses: (a) multinomial logistic regression to predict duration > 7 days vs duration ≤ 7 days alive vs duration ≤ 7 days death; (b) binary logistic regression to predict duration > 7 days for the entire cohort and for survivors only, separately; and (c) Cox regression to predict time to being free of mechanical ventilation alive. Positive end-expiratory pressure, postoperative state (negatively), and Sequential Organ Failure Assessment score were independently associated with prolonged mechanical ventilation. The multinomial regression model yielded an accuracy (95% confidence interval) of 60% (53%-64%). The binary regression models yielded accuracies of 67% (61%-72%) and 69% (63%-75%) for the entire cohort and for survivors, respectively. The Cox regression model showed an equivalent to area under the curve of 0.67 (0.62-0.71). Different predictive models of prolonged mechanical ventilation in general intensive care unit patients achieve a moderate level of overall accuracy, likely insufficient to assist in clinical decisions. Copyright © 2015 Elsevier Inc. All rights reserved.
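
    Of the three analyses, the Cox model of time to being free of mechanical ventilation alive is the least routine to set up; below is a minimal sketch using the lifelines package. The column names mirror the reported predictors (PEEP, postoperative state, SOFA score), but all values are invented.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical day-2 predictors and outcomes for eight patients; the
# event is being freed from mechanical ventilation alive, so deaths and
# ongoing ventilation are treated as censored.
df = pd.DataFrame({
    "peep":   [5, 8, 10, 12, 5, 14, 6, 10],
    "postop": [1, 0, 0, 1, 1, 0, 0, 1],
    "sofa":   [4, 9, 11, 7, 3, 13, 6, 10],
    "days_to_extubation_alive": [3, 9, 14, 6, 2, 21, 5, 12],
    "extubated_alive":          [1, 1, 0, 1, 1, 0, 1, 1],
})

cph = CoxPHFitter().fit(df, duration_col="days_to_extubation_alive",
                        event_col="extubated_alive")
cph.print_summary()  # hazard ratios for PEEP, postoperative state, SOFA
```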

  17. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  18. Modelling debris flows down general channels

    Directory of Open Access Journals (Sweden)

    S. P. Pudasaini

    2005-01-01

    Full Text Available This paper is an extension of the single-phase cohesionless dry granular avalanche model over curved and twisted channels proposed by Pudasaini and Hutter (2003). It is a generalisation of the Savage and Hutter (1989, 1991) equations based on simple channel topography to a two-phase fluid-solid mixture of debris material. Important terms emerging from the correct treatment of the kinematic and dynamic boundary condition, and the variable basal topography are systematically taken into account. For vanishing fluid contribution and torsion-free channel topography our new model equations exactly degenerate to the previous Savage-Hutter model equations, while such a degeneration was not possible with the Iverson and Denlinger (2001) model, which, in fact, also aimed to extend the Savage and Hutter model. The model equations of this paper have been rigorously derived; they include the effects of the curvature and torsion of the topography, generally for arbitrarily curved and twisted channels of variable channel width. The equations are put into a standard conservative form of partial differential equations. From these one can easily infer the importance and influence of the pore-fluid-pressure distribution in debris flow dynamics. The solid phase is modelled by applying a Coulomb dry friction law, whereas the fluid phase is assumed to be an incompressible Newtonian fluid. Input parameters of the equations are the internal and bed friction angles of the solid particles, the viscosity and volume fraction of the fluid, the total mixture density and the pore pressure distribution of the fluid at the bed. Given the bed topography and initial geometry and the initial velocity profile of the debris mixture, the model equations are able to describe the dynamics of the depth profile and bed parallel depth-averaged velocity distribution from the initial position to the final deposit. A shock capturing, total variation diminishing numerical scheme is implemented to

  19. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Full Text Available Abstract Background Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in

  20. Quantifying predictive accuracy in survival models.

    Science.gov (United States)

    Lirette, Seth T; Aban, Inmaculada

    2017-12-01

    For time-to-event outcomes in medical research, survival models are the most appropriate to use. Unlike logistic regression models, quantifying the predictive accuracy of these models is not a trivial task. We present the classes of concordance (C) statistics and R² statistics often used to assess the predictive ability of these models. The discussion focuses on Harrell's C, Kent and O'Quigley's R², and Royston and Sauerbrei's R². We present similarities and differences between the statistics, discuss the software options from the most widely used statistical analysis packages, and give a practical example using the Worcester Heart Attack Study dataset.
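
    Harrell's C has a simple pairwise definition that can be computed directly. A minimal sketch (pairs with tied survival times are handled crudely; production code would use a vetted survival package):

```python
import itertools

def harrells_c(times, events, risk_scores):
    """Pairwise concordance: a pair is usable when the earlier time is an
    observed event; it is concordant when the higher risk score belongs
    to the shorter survival time. Score ties count 1/2."""
    concordant, usable = 0.0, 0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[j] < times[i]:          # order so i is the earlier time
            i, j = j, i
        if not events[i]:                # earlier time censored -> unusable
            continue
        usable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1.0
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5
    return concordant / usable

print(harrells_c([2, 5, 7, 9], [1, 1, 0, 1], [0.9, 0.6, 0.5, 0.1]))  # 1.0
```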

  1. Predictive power of nuclear-mass models

    Directory of Open Access Journals (Sweden)

    Yu. A. Litvinov

    2013-12-01

    Full Text Available Ten different theoretical models are tested for their predictive power in the description of nuclear masses. Two sets of experimental masses are used for the test: the older set of 2003 and the newer one of 2011. The predictive power is studied in two regions of nuclei: the global region (Z, N ≥ 8) and the heavy-nuclei region (Z ≥ 82, N ≥ 126). No clear correlation is found between the predictive power of a model and the accuracy of its description of the masses.
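
    The usual figure of merit in such comparisons is the root-mean-square deviation between predicted and evaluated masses over a region. A sketch with invented residuals in MeV:

```python
import numpy as np

def rms_deviation(predicted_mev, measured_mev):
    """RMS deviation between model masses and an evaluated mass table,
    the standard figure of merit for nuclear mass models."""
    d = np.asarray(predicted_mev) - np.asarray(measured_mev)
    return np.sqrt(np.mean(d ** 2))

# Hypothetical check: a model fitted to the 2003 table evaluated against
# masses that were new in the 2011 table (values invented).
print(rms_deviation([0.1, -0.4, 0.9, 0.3], [0.0, 0.0, 0.0, 0.0]))
```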

  2. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...... find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor...... allocates a much lower share of wealth to stocks compared to a standard investor....

  3. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...... linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R-package sensR (http://www.cran.r-project.org/package=sensR/). This includes the basic power and sample size calculations for these four discrimination tests...
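
    For the monadic A-Not A method mentioned, d' is the probit-scale contrast between the hit rate and the false-alarm rate, with a delta-method standard error. The sensR package computes this directly in R; the sketch below is an independent illustration with invented counts.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical A-Not A counts: "A" responses to A samples (hits) and to
# Not-A samples (false alarms).
hits, n_signal = 62, 80
false_alarms, n_noise = 22, 80

pH, pF = hits / n_signal, false_alarms / n_noise
d_prime = norm.ppf(pH) - norm.ppf(pF)   # probit-scale linear contrast

# Delta-method variance of d'
var = (pH * (1 - pH) / (n_signal * norm.pdf(norm.ppf(pH)) ** 2)
       + pF * (1 - pF) / (n_noise * norm.pdf(norm.ppf(pF)) ** 2))
print(f"d' = {d_prime:.3f}, se = {np.sqrt(var):.3f}")
```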

  4. The Generalized Quantum Episodic Memory Model.

    Science.gov (United States)

    Trueblood, Jennifer S; Hemmer, Pernille

    2017-11-01

    Recent evidence suggests that experienced events are often mapped to too many episodic states, including those that are logically or experimentally incompatible with one another. For example, episodic over-distribution patterns show that the probability of accepting an item under different mutually exclusive conditions violates the disjunction rule. A related example, called subadditivity, occurs when the probability of accepting an item under mutually exclusive and exhaustive instruction conditions sums to a number >1. Both the over-distribution effect and subadditivity have been widely observed in item and source-memory paradigms. These phenomena are difficult to explain using standard memory frameworks, such as signal-detection theory. A dual-trace model called the over-distribution (OD) model (Brainerd & Reyna, 2008) can explain the episodic over-distribution effect, but not subadditivity. Our goal is to develop a model that can explain both effects. In this paper, we propose the Generalized Quantum Episodic Memory (GQEM) model, which extends the Quantum Episodic Memory (QEM) model developed by Brainerd, Wang, and Reyna (2013). We test GQEM by comparing it to the OD model using data from a novel item-memory experiment and a previously published source-memory experiment (Kellen, Singmann, & Klauer, 2014) examining the over-distribution effect. Using the best-fit parameters from the over-distribution experiments, we conclude by showing that the GQEM model can also account for subadditivity. Overall these results add to a growing body of evidence suggesting that quantum probability theory is a valuable tool in modeling recognition memory. Copyright © 2016 Cognitive Science Society, Inc.

  5. Hierarchical models for informing general biomass equations with felled tree data

    Science.gov (United States)

    Brian J. Clough; Matthew B. Russell; Christopher W. Woodall; Grant M. Domke; Philip J. Radtke

    2015-01-01

    We present a hierarchical framework that uses a large multispecies felled tree database to inform a set of general models for predicting tree foliage biomass, with accompanying uncertainty, within the FIA database. Results suggest significant prediction uncertainty for individual trees and reveal higher errors when predicting foliage biomass for larger trees and for...

  6. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, and model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e., slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones

  7. The epistemological status of general circulation models

    Science.gov (United States)

    Loehle, Craig

    2017-05-01

    Forecasts of both likely anthropogenic effects on climate and consequent effects on nature and society are based on large, complex software tools called general circulation models (GCMs). Forecasts generated by GCMs have been used extensively in policy decisions related to climate change. However, the relation between underlying physical theories and results produced by GCMs is unclear. In the case of GCMs, many discretizations and approximations are made, and simulating Earth system processes is far from simple and currently leads to some results with unknown energy balance implications. Statistical testing of GCM forecasts for degree of agreement with data would facilitate assessment of fitness for use. If model results need to be put on an anomaly basis due to model bias, then both visual and quantitative measures of model fit depend strongly on the reference period used for normalization, making testing problematic. Epistemology is here applied to problems of statistical inference during testing, the relationship between the underlying physics and the models, the epistemic meaning of ensemble statistics, problems of spatial and temporal scale, the existence or not of an unforced null for climate fluctuations, the meaning of existing uncertainty estimates, and other issues. Rigorous reasoning entails carefully quantifying levels of uncertainty.

  8. Generalized continuous linear model of international trade

    Directory of Open Access Journals (Sweden)

    Kostenko Elena

    2014-01-01

    Full Text Available The probability-based approach to the linear model of international trade based on the theory of Markov processes with continuous time is analysed. A generalized continuous model of international trade is built, in which the transition of the system from state to state is described by linear differential equations. The methodology for obtaining the intensity matrices, which are differential in nature, is shown, and the same is done for their corresponding transition matrices for processes of purchasing and selling. In the process of creating the continuous model, functions and operations of matrices were used in addition to the Laplace transform, which gave the analytical form of the transition matrices, and therefore the expressions for the state vectors of the system. The obtained expressions simplify analysis and calculations in comparison to other methods. The values of the continuous transition matrices incorporate the results of the discrete model of international trade at moments in time proportional to the time step. The continuous model improves the quality of planning and the effectiveness of control of international trade agreements.
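
    The continuous-time machinery can be illustrated in a few lines: given an intensity (generator) matrix Q, the transition matrix over an interval t is the matrix exponential P(t) = exp(Qt), which is the closed form the Laplace-transform derivation produces. A sketch with an invented three-state generator:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical intensity matrix Q for a 3-state trade system: rows sum to
# zero; off-diagonal entries are transition rates per unit time.
Q = np.array([[-0.30,  0.20,  0.10],
              [ 0.05, -0.15,  0.10],
              [ 0.10,  0.25, -0.35]])

P_t = expm(Q * 2.0)                # transition matrix over t = 2 time units
state0 = np.array([1.0, 0.0, 0.0]) # system starts in state 1
print(state0 @ P_t)                # state vector of the system at time t
```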

  9. The epistemological status of general circulation models

    Science.gov (United States)

    Loehle, Craig

    2018-03-01

    Forecasts of both likely anthropogenic effects on climate and consequent effects on nature and society are based on large, complex software tools called general circulation models (GCMs). Forecasts generated by GCMs have been used extensively in policy decisions related to climate change. However, the relation between underlying physical theories and results produced by GCMs is unclear. In the case of GCMs, many discretizations and approximations are made, and simulating Earth system processes is far from simple and currently leads to some results with unknown energy balance implications. Statistical testing of GCM forecasts for degree of agreement with data would facilitate assessment of fitness for use. If model results need to be put on an anomaly basis due to model bias, then both visual and quantitative measures of model fit depend strongly on the reference period used for normalization, making testing problematic. Epistemology is here applied to problems of statistical inference during testing, the relationship between the underlying physics and the models, the epistemic meaning of ensemble statistics, problems of spatial and temporal scale, the existence or not of an unforced null for climate fluctuations, the meaning of existing uncertainty estimates, and other issues. Rigorous reasoning entails carefully quantifying levels of uncertainty.

  10. Prediction of RNA secondary structure using generalized centroid estimators.

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Sato, Kengo; Mituyama, Toutai; Asai, Kiyoshi

    2009-02-15

    Recent studies have shown that methods for predicting secondary structures of RNAs on the basis of posterior decoding of the base-pairing probabilities have an advantage with respect to prediction accuracy over the conventionally utilized minimum free energy methods. However, there is room for improvement in the objective functions presented in previous studies, which are maximized in the posterior decoding with respect to the accuracy measures for secondary structures. We propose novel estimators which improve the accuracy of secondary structure prediction of RNAs. The proposed estimators maximize an objective function which is the weighted sum of the expected number of the true positives and that of the true negatives of the base pairs. The proposed estimators are also improved versions of the ones used in previous works, namely CONTRAfold for secondary structure prediction from a single RNA sequence and McCaskill-MEA for common secondary structure prediction from multiple alignments of RNA sequences. We clarify the relations between the proposed estimators and the estimators presented in previous works, and theoretically show that the previous estimators include additional unnecessary terms in the evaluation measures with respect to the accuracy. Furthermore, computational experiments confirm the theoretical analysis by indicating improvement in the empirical accuracy. The proposed estimators represent extensions of the centroid estimators proposed in Ding et al. and Carvalho and Lawrence, and are applicable to a wide variety of problems in bioinformatics. Supporting information and the CentroidFold software are available online at: http://www.ncrna.org/software/centroidfold/.
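
    The key property of such estimators is that a base pair (i, j) can only improve the objective when its posterior probability exceeds 1/(γ+1); a Nussinov-style dynamic program then selects the best nested set of such pairs. A simplified sketch (score only; minimum loop length and traceback omitted), not the CentroidFold implementation:

```python
import numpy as np

def gamma_centroid_score(P, gamma=2.0):
    """Nussinov-style DP over a symmetric base-pair probability matrix P.
    A pair (i, k) contributes (gamma + 1) * P[i, k] - 1, which is positive
    only when P[i, k] > 1 / (gamma + 1) -- the hallmark of gamma-centroid
    estimators. M[i][j] holds the best score on the interval [i, j)."""
    n = len(P)
    M = np.zeros((n + 1, n + 1))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            best = M[i + 1][j]                 # position i left unpaired
            for k in range(i + 1, j):          # i paired with k
                gain = (gamma + 1) * P[i][k] - 1
                if gain > 0:
                    best = max(best, gain + M[i + 1][k] + M[k + 1][j])
            M[i][j] = best
    return M[0][n]   # traceback over the same recursion recovers the pairs

# Tiny example with an invented 5x5 probability matrix
P = np.zeros((5, 5))
P[0, 4] = P[4, 0] = 0.8
P[1, 3] = P[3, 1] = 0.4
print(gamma_centroid_score(P, gamma=2.0))  # (0.8*3 - 1) + (0.4*3 - 1) = 1.6
```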

  11. A stratiform cloud parameterization for General Circulation Models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  12. A stratiform cloud parameterization for general circulation models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop in GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various clouds variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  13. Aspects of general linear modelling of migration.

    Science.gov (United States)

    Congdon, P

    1992-01-01

    "This paper investigates the application of general linear modelling principles to analysing migration flows between areas. Particular attention is paid to specifying the form of the regression and error components, and the nature of departures from Poisson randomness. Extensions to take account of spatial and temporal correlation are discussed as well as constrained estimation. The issue of specification bears on the testing of migration theories, and assessing the role migration plays in job and housing markets: the direction and significance of the effects of economic variates on migration depends on the specification of the statistical model. The application is in the context of migration in London and South East England in the 1970s and 1980s." excerpt

  14. Superconductivity in a generalized Hubbard model

    Science.gov (United States)

    Arrachea, Liliana; Aligia, A. A.

    1997-02-01

    We consider a Hubbard model in the square lattice, with a generalized hopping between nearest-neighbor sites for spin up (down), which depends on the total occupation nb of spin down (up) electrons on both sites. We call the hopping parameters tAA, tAB, and tBB for nb = 0, 1 or 2 respectively. Using the Hartree-Fock and Bardeen-Cooper-Schrieffer mean-field approximations to decouple the two-body and three-body interactions, we find that the model exhibits extended s-wave superconductivity in the electron-hole symmetric case tAB > tAA = tBB for small values of the Coulomb repulsion U or small band fillings. For moderate values of U, the antiferromagnetic normal (AFN) state has lower energy. The translationally invariant d-wave superconducting state has always larger energy than the AFN state.

  15. Functional methods in the generalized Dicke model

    International Nuclear Information System (INIS)

    Alcalde, M. Aparicio; Lemos, A.L.L. de; Svaiter, N.F.

    2007-01-01

    The Dicke model describes an ensemble of N identical two-level atoms (qubits) coupled to a single quantized mode of a bosonic field. The fermion Dicke model is obtained by replacing the atomic pseudo-spin operators with a linear combination of Fermi operators. The generalized fermion Dicke model is defined by introducing different coupling constants between the single mode of the bosonic field and the reservoir, g1 and g2 for the rotating and counter-rotating terms respectively. In the limit N → ∞, the thermodynamics of the fermion Dicke model can be analyzed using the path integral approach with functional methods. The system exhibits a second-order phase transition from normal to superradiance at some critical temperature with the presence of a condensate. We evaluate the critical transition temperature and present the spectrum of the collective bosonic excitations for the general case (g1 ≠ 0 and g2 ≠ 0). There is quantum critical behavior when the coupling constants g1 and g2 satisfy g1 + g2 = (ω0 Ω)^(1/2), where ω0 is the frequency of the mode of the field and Ω is the energy gap between energy eigenstates of the qubits. Two particular situations are analyzed. First, we present the spectrum of the collective bosonic excitations in the case g1 ≠ 0 and g2 = 0, recovering the well-known results. Second, the case g1 = 0 and g2 ≠ 0 is studied. In this last case, it is possible to have a superradiant phase when only virtual processes are introduced in the interaction Hamiltonian. Here also a quantum phase transition appears at the critical coupling g2 = (ω0 Ω)^(1/2), and for larger values of the critical coupling the system enters this superradiant phase with a Goldstone mode. (author)

  16. Chaos in generalized Jaynes-Cummings model

    Energy Technology Data Exchange (ETDEWEB)

    Chotorlishvili, L. [Institute for Physik, Martin-Luther-University Halle-Wittenberg, Heinrich-Damerow-Str. 4, 06120 Halle (Germany)], E-mail: lchotor33@yahoo.com; Toklikishvili, Z. [Physics Department of the Tbilisi State University, Chavchavadze av. 3, 0128 Tbilisi (Georgia)

    2008-04-14

    The possibility of chaos formation is studied in terms of a generalized Jaynes-Cummings model, which is a key model in the quantum electrodynamics of resonators. In particular, the dynamics of a three-level optical atom under the action of the resonator field is considered. The specific feature of the problem considered is that not all transitions between the atomic levels are permitted. This asymmetry of the system accounts for the complexity of the problem and makes it different from the three-level systems studied previously. We consider the most general case, where the interaction of the system with the resonator depends on the system coordinate inside the resonator. It is shown that, contrary to the commonly accepted opinion, the absence of resonance detuning does not guarantee the controllability of the system state. In the course of evolution, the system performs an irreversible transition from the purely quantum-mechanical state to a mixed state. It is shown that the asymmetry of the system levels accounts for the fact that the upper excited level turns out to be the most populated one.

  17. The Internal/External Frame of Reference Model Revisited: Incorporating General Cognitive Ability and General Academic Self-Concept.

    Science.gov (United States)

    Brunner, Martin; Lüdtke, Oliver; Trautwein, Ulrich

    2008-01-01

    The internal/external frame of reference model (I/E model; Marsh, 1986) is a highly influential model of self-concept formation, which predicts that domain-specific abilities have positive effects on academic self-concepts in the corresponding domain and negative effects across domains. Investigations of the I/E model do not typically incorporate general cognitive ability or general academic self-concept. This article investigates alternative measurement models for domain-specific and domain-general cognitive abilities and academic self-concepts within an extended I/E model framework using representative data from 25,301 9th-grade students. Empirical support was found for the external validity of a new measurement model for academic self-concepts with respect to key student characteristics (gender, school satisfaction, educational aspirations, domain-specific interests, grades). Moreover, the basic predictions of the I/E model were confirmed, and the new extension of the traditional I/E model permitted meaningful relations to be drawn between domain-general cognitive ability and domain-general academic self-concept as well as between the domain-specific elements of the model.

  18. Posterior predictive checking of multiple imputation models.

    Science.gov (United States)

    Nguyen, Cattram D; Lee, Katherine J; Carlin, John B

    2015-07-01

    Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
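
    The mechanics of a posterior predictive p-value are easy to demonstrate. A minimal sketch that deliberately fits a normal model to skewed data and uses skewness as the test quantity; for brevity it plugs in point estimates rather than drawing parameters from a full posterior, as a complete PPC would.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed (skewed) data and a deliberately questionable normal model
observed = rng.standard_normal(200) ** 2
mu, sigma = observed.mean(), observed.std()

def skewness(x):
    return np.mean(((x - x.mean()) / x.std()) ** 3)

# Replicated datasets from the model under scrutiny
t_obs = skewness(observed)
t_rep = np.array([skewness(rng.normal(mu, sigma, size=observed.size))
                  for _ in range(2000)])

ppp = np.mean(t_rep >= t_obs)   # posterior predictive p-value
print(f"ppp = {ppp:.3f}  (values near 0 or 1 flag model-data misfit)")
```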

  19. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....

  20. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principle are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  1. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If companies are aware of their potential bankruptcy, they can take preventive action to anticipate it. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), it can be shown that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%

  2. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20 years of measured data of global solar radiation. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with three other models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and long. 29°34′E), and then the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. The most common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulas of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
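
    Temperature-based models of this family generally follow the Hargreaves-Samani form, estimating radiation from the diurnal temperature range scaled by extraterrestrial radiation. A sketch of that general form (the coefficient values follow the commonly cited FAO-56 guidance, not this paper's fitted formulae):

```python
import math

def hargreaves_radiation(t_max, t_min, ra, k_rs=0.17):
    """Hargreaves-Samani-type estimate of daily global radiation on a
    horizontal surface from the diurnal temperature range:
        Rs = k_rs * sqrt(Tmax - Tmin) * Ra
    where Ra is extraterrestrial radiation (MJ m-2 day-1) and k_rs is
    roughly 0.16 for interior sites and 0.19 for coastal sites."""
    return k_rs * math.sqrt(t_max - t_min) * ra

# Example: a warm day with a 9 degC diurnal range
print(hargreaves_radiation(t_max=31.0, t_min=22.0, ra=38.0))
```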

  3. Verification and improvement of a predictive model for radionuclide migration

    International Nuclear Information System (INIS)

    Miller, C.W.; Benson, L.V.; Carnahan, C.L.

    1982-01-01

    Prediction of the rates of migration of contaminant chemical species in groundwater flowing through toxic waste repositories is essential to the assessment of a repository's capability of meeting standards for release rates. A large number of chemical transport models, of varying degrees of complexity, have been devised for the purpose of providing this predictive capability. In general, the transport of dissolved chemical species through a water-saturated porous medium is influenced by convection, diffusion/dispersion, sorption, formation of complexes in the aqueous phase, and chemical precipitation. The reliability of predictions made with the models which omit certain of these processes is difficult to assess. A numerical model, CHEMTRN, has been developed to determine which chemical processes govern radionuclide migration. CHEMTRN builds on a model called MCCTM developed previously by Lichtner and Benson

  4. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
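
    The simple averaging baseline the authors find competitive can be stated in a few lines. A sketch, assuming a contiguous 15-min load series; the function name and parameter choices are invented.

```python
import numpy as np

def average_of_past_days(load_15min, horizon_slots=4, n_days=5):
    """Very-short-term baseline: predict each of the next horizon_slots
    15-min slots as the average of the same slot over the previous
    n_days (the kind of simple averaging model utilities use).
    Requires len(load_15min) >= n_days * 96 and horizon_slots < 96."""
    slots_per_day = 96          # 24 h at 15-min resolution
    t = len(load_15min)
    preds = []
    for h in range(horizon_slots):
        same_slot = [load_15min[t + h - d * slots_per_day]
                     for d in range(1, n_days + 1)]
        preds.append(float(np.mean(same_slot)))
    return preds

history = np.sin(np.arange(7 * 96) * 2 * np.pi / 96) + 2  # toy daily cycle
print(average_of_past_days(history))  # next hour, four 15-min slots
```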

  5. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
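
    The bootstrap-plus-residual-noise recipe described can be demonstrated end to end on synthetic data. A sketch with an invented one-parameter exponential dose-response model, not the paper's salivary-function model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical records: mean parotid dose (Gy) vs. fractional salivary
# function at follow-up, generated from an exponential dose response.
dose = rng.uniform(5, 50, size=60)
func = np.exp(-0.05 * dose) + rng.normal(0, 0.1, size=60)

def fit_rate(d, f):
    # crude one-parameter fit: least-squares slope of log-function vs. dose
    return -np.polyfit(d, np.log(np.clip(f, 1e-3, None)), 1)[0]

# Bootstrap the fitted parameter to capture parameter uncertainty
boot_rates = []
for _ in range(1000):
    idx = rng.integers(0, len(dose), len(dose))
    boot_rates.append(fit_rate(dose[idx], func[idx]))

# Plan-specific prediction: propagate each bootstrap parameter and add
# residual noise, yielding a histogram of alternative outcomes
plan_dose, residual_sd = 28.0, 0.1
pred = [np.exp(-r * plan_dose) + rng.normal(0, residual_sd) for r in boot_rates]
print(f"predicted function: median {np.median(pred):.2f}, "
      f"90% interval [{np.percentile(pred, 5):.2f}, {np.percentile(pred, 95):.2f}]")
```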

  6. Are animal models predictive for humans?

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2009-01-01

    Full Text Available It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.

  7. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  8. Physical/chemical modeling for photovoltaic module life prediction

    Science.gov (United States)

    Moacanin, J.; Carroll, W. F.; Gupta, A.

    1979-01-01

    The paper presents a generalized methodology for identification and evaluation of potential degradation and failure of terrestrial photovoltaic encapsulation. Failure progression modeling and an interaction matrix are utilized to complement the conventional approach to failure degradation mode identification. Comparison of the predicted performance based on these models can produce: (1) constraints on system or component design, materials or operating conditions, (2) qualification (predicted satisfactory function), and (3) uncertainty. The approach has been applied to an investigation of an unexpected delamination failure; it is being used to evaluate thermomechanical interactions in photovoltaic modules and to study corrosion of contacts and interconnects.

  9. A general model of learning design objects

    Directory of Open Access Journals (Sweden)

    Azeddine Chikh

    2014-01-01

    Full Text Available Previous research on the development of learning objects has targeted either learners, as consumers of these objects, or instructors, as designers who reuse these objects in building new online courses. There is currently an urgent need for the sharing and reuse of both theoretical knowledge (literature reviews) and practical knowledge (best practice) in learning design. The primary aim of this paper is to develop a strategy for constructing a more powerful set of learning objects targeted at supporting instructors in designing their curricula. A key challenge in this work is the definition of a new class of learning design objects that combine two types of knowledge: (1) reusable knowledge, consisting of theoretical and practical information on education design, and (2) knowledge of reuse, which is necessary to describe the reusable knowledge using an extended learning object metadata language. In addition, we introduce a general model of learning design object repositories based on the Unified Modeling Language, and a learning design support framework is proposed based on the repository model. Finally, a first prototype is developed to provide a subjective evaluation of the new framework.

  10. A swarm intelligence-based tuning method for the Sliding Mode Generalized Predictive Control.

    Science.gov (United States)

    Oliveira, J B; Boaventura-Cunha, J; Moura Oliveira, P B; Freire, H

    2014-09-01

    This work presents an automatic tuning method for the discontinuous component of the Sliding Mode Generalized Predictive Controller (SMGPC) subject to constraints. The strategy employs Particle Swarm Optimization (PSO) to minimize a second, aggregated cost function. The continuous component is obtained by the standard procedure, by Quadratic Programming (QP), thus yielding an online dual optimization scheme. Simulations and performance indexes for process models common in industry, such as nonminimum-phase and time-delayed systems, show improved performance, robustness, and tracking accuracy. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
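
    As an illustration of the optimization layer, here is a bare-bones particle swarm minimizing a generic cost function; the SMGPC aggregated cost and its constraints are not reproduced, so a simple quadratic stands in, and all swarm hyperparameters are assumptions:

        import numpy as np

        def pso(cost, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
            """Minimize `cost` over `dim` variables with a canonical PSO."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))    # positions
            v = np.zeros_like(x)                           # velocities
            pbest, pbest_val = x.copy(), np.apply_along_axis(cost, 1, x)
            g = pbest[np.argmin(pbest_val)].copy()         # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.apply_along_axis(cost, 1, x)
                better = val < pbest_val
                pbest[better], pbest_val[better] = x[better], val[better]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, float(cost(g))

        best, val = pso(lambda z: float(np.sum(z ** 2)), dim=3)
        print(best, val)   # should approach the origin with cost near zero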

  11. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2014-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) in Tüpraş, İzmit Refinery Hydrocracker Unit Reactors. The hydrocracking process, in which heavy vacuum gasoil is converted into lighter and more valuable products at high temperature and pressure, is described briefly. Controller design description, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulated...

  12. Symplectic models for general insertion devices

    International Nuclear Information System (INIS)

    Wu, Y.; Forest, E.; Robin, D. S.; Nishimura, H.; Wolski, A.; Litvinenko, V. N.

    2001-01-01

    A variety of insertion devices (IDs), wigglers and undulators, linearly or elliptically polarized, are widely used as high brightness radiation sources at modern light source rings. Long and high-field wigglers have also been proposed as the main source of radiation damping at next generation damping rings. As a result, it becomes increasingly important to understand the impact of IDs on the charged particle dynamics in the storage ring. In this paper, we report our recent development of a general explicit symplectic model for IDs with the paraxial ray approximation. High-order explicit symplectic integrators are developed to study real-world insertion devices with a number of wiggler harmonics and arbitrary polarizations.

  13. New model for nucleon generalized parton distributions

    Energy Technology Data Exchange (ETDEWEB)

    Radyushkin, Anatoly V. [JLAB, Newport News, VA (United States)

    2014-01-01

    We describe a new type of model for nucleon generalized parton distributions (GPDs) H and E. It is based on the fact that nucleon GPDs require the use of two forms of double distribution (DD) representations. The outcome of the new treatment is that the usual DD+D-term construction should be amended by an extra term, ξ E₊¹(x, ξ), which has the DD structure (α/β) e(β, α), with e(β, α) being the DD that generates the GPD E(x, ξ). We found that this function, unlike the D-term, has support in the whole −1 ≤ x ≤ 1 region. Furthermore, it does not vanish at the border points |x| = ξ.

  14. Generalized Laplacian eigenmaps for modeling and tracking human motions.

    Science.gov (United States)

    Martinez-del-Rincon, Jesus; Lewandowski, Michal; Nebel, Jean-Christophe; Makris, Dimitrios

    2014-09-01

    This paper presents generalized Laplacian eigenmaps, a novel dimensionality reduction approach designed to address stylistic variations in time series. It generates compact and coherent continuous spaces whose geometry is data-driven. This paper also introduces graph-based particle filter, a novel methodology conceived for efficient tracking in low dimensional space derived from a spectral dimensionality reduction method. Its strengths are a propagation scheme, which facilitates the prediction in time and style, and a noise model coherent with the manifold, which prevents divergence, and increases robustness. Experiments show that a combination of both techniques achieves state-of-the-art performance for human pose tracking in underconstrained scenarios.
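
    A compact sketch of standard Laplacian eigenmaps, the base technique the generalized method extends (the stylistic-variation handling of the paper is not reproduced; k, sigma, and the data are assumptions):

        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.linalg import eigh

        def laplacian_eigenmaps(X, n_components=2, k=10, sigma=1.0):
            """Embed rows of X via the graph Laplacian of a k-NN heat-kernel graph."""
            D = cdist(X, X)
            W = np.exp(-D ** 2 / (2 * sigma ** 2))
            idx = np.argsort(D, axis=1)[:, k + 1:]      # drop all but k nearest
            for i, far in enumerate(idx):
                W[i, far] = 0.0
            W = np.maximum(W, W.T)                      # symmetrize the graph
            deg = W.sum(axis=1)
            L = np.diag(deg) - W
            # generalized eigenproblem L y = lambda * Deg * y; skip trivial solution
            vals, vecs = eigh(L, np.diag(deg))
            return vecs[:, 1:n_components + 1]

        t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
        X = np.c_[np.cos(t), np.sin(t)]
        X += 0.05 * np.random.default_rng(2).normal(size=X.shape)
        print(laplacian_eigenmaps(X).shape)   # (200, 2) low-dimensional coordinates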

  15. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between the National Aeronautics and Space Administration and the Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  16. A Statistical Evaluation of Atmosphere-Ocean General Circulation Models: Complexity vs. Simplicity

    OpenAIRE

    Robert K. Kaufmann; David I. Stern

    2004-01-01

    The principal tools used to model future climate change are General Circulation Models which are deterministic high resolution bottom-up models of the global atmosphere-ocean system that require large amounts of supercomputer time to generate results. But are these models a cost-effective way of predicting future climate change at the global level? In this paper we use modern econometric techniques to evaluate the statistical adequacy of three general circulation models (GCMs) by testing three...

  17. Prediction and reconstruction of future and missing unobservable modified Weibull lifetime based on generalized order statistics

    Directory of Open Access Journals (Sweden)

    Amany E. Aly

    2016-04-01

    Full Text Available When a system consists of independent components of the same type, appropriate actions may be taken as soon as a portion of them has failed. It is, therefore, important to be able to predict later failure times from earlier ones. One of the well-known failure distributions commonly used to model component life is the modified Weibull distribution (MWD). In this paper, two pivotal quantities are proposed to construct prediction intervals for future unobservable lifetimes based on generalized order statistics (gos) from the MWD. Moreover, a pivotal quantity is developed to reconstruct missing observations at the beginning of the experiment. Furthermore, Monte Carlo simulation studies are conducted and numerical computations are carried out to investigate the efficiency of the presented results. Finally, two illustrative examples for real data sets are analyzed.
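
    The paper's pivotal-quantity construction is not reproduced here; as a rough stand-in, the sketch below builds a prediction interval for the next failure time by parametric bootstrap under an ordinary two-parameter Weibull (synthetic data, assumed shape and scale):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        data = stats.weibull_min.rvs(1.8, scale=100.0, size=30, random_state=rng)

        future = np.empty(2000)
        for b in range(future.size):
            boot = rng.choice(data, size=data.size, replace=True)
            c, _, scale = stats.weibull_min.fit(boot, floc=0)   # refit shape/scale
            future[b] = stats.weibull_min.rvs(c, scale=scale, random_state=rng)

        lo, hi = np.percentile(future, [5, 95])
        print(f"90% prediction interval for the next lifetime: ({lo:.1f}, {hi:.1f})")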

  18. A revised prediction model for natural conception.

    Science.gov (United States)

    Bensdorp, Alexandra J; van der Steeg, Jan Willem; Steures, Pieternel; Habbema, J Dik F; Hompes, Peter G A; Bossuyt, Patrick M M; van der Veen, Fulco; Mol, Ben W J; Eijkemans, Marinus J C

    2017-06-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis was to assess whether additional predictors can refine the Hunault model and extend its applicability. Consecutive subfertile couples with unexplained and mild male subfertility presenting in fertility clinics were asked to participate in a prospective cohort study. We constructed a multivariable prediction model with the predictors from the Hunault model and new potential predictors. The primary outcome, natural conception leading to an ongoing pregnancy, was observed in 1053 women of the 5184 included couples (20%). All predictors of the Hunault model were selected into the revised model, plus an additional seven (woman's body mass index, cycle length, basal FSH levels, tubal status, history of previous pregnancies in the current relationship (ongoing pregnancies after natural conception, fertility treatment or miscarriages), semen volume, and semen morphology). Predictions from the revised model seem to concur better with observed pregnancy rates compared with the Hunault model; c-statistic of 0.71 (95% CI 0.69 to 0.73) compared with 0.59 (95% CI 0.57 to 0.61). Copyright © 2017. Published by Elsevier Ltd.
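
    The reported c-statistic is simply the area under the ROC curve of the fitted logistic model; the outline below shows the computation on synthetic couples data (predictor names follow the abstract, but every coefficient and distribution is invented):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        n = 5000
        X = np.c_[rng.normal(30, 4, n),    # female age (years)
                  rng.normal(24, 4, n),    # body mass index
                  rng.normal(7, 2, n)]     # basal FSH (IU/L)
        logit = -1.5 - 0.08 * (X[:, 0] - 30) - 0.04 * (X[:, 1] - 24) - 0.05 * (X[:, 2] - 7)
        y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic natural conception

        model = LogisticRegression().fit(X, y)
        c_stat = roc_auc_score(y, model.predict_proba(X)[:, 1])
        print(f"c-statistic: {c_stat:.2f}")   # cf. 0.71 for the revised model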

  19. A General Role for Medial Prefrontal Cortex in Event Prediction

    Science.gov (United States)

    2014-07-11

    (Only disjoint text fragments are indexed for this record. The recoverable content describes an experiment using two classes of stimuli, in which two specific colors within each class could be combined to generate Stroop stimuli; model activity averaged over the first 20 model iterations following the onset of the target and flanker cues; and a second simulation contrasting item-specific versus global effects.)

  20. MODEL OF BRAZILIAN URBANIZATION: GENERAL NOTES

    Directory of Open Access Journals (Sweden)

    Leandro da Silva Guimarães

    2016-07-01

    Full Text Available This text analyzes social inequality in Brazil through the spatial expression of that inequality, outlining what is known of the Brazilian urbanization model and how this model has produced gentrified and exclusionary cities. The text discusses the country's urban exclusion through the consolidation of what are conventionally called peripheral areas or, more generally, peripheries. The text is the result of Masters-level research carried out at the Federal Fluminense University, which sought to understand the genesis of an urban housing development located in São Gonçalo, Rio de Janeiro, called Jardim Catarina, and the socio-spatial problem that originated it. In this sense, its analysis is essential to understanding social and spatial inequalities in Brazil, as well as the role of the state as manager of socio-spatial planning and principal agent in the solution of such problems. It is expected that this study, of which this article is just a small part, can contribute to the formation and crystallization of public policies that address social inequalities and help bring about fairer and more equitable cities.

  1. Comparison of the models of financial distress prediction

    Directory of Open Access Journals (Sweden)

    Jiří Omelka

    2013-01-01

    Full Text Available Prediction of financial distress generally estimates whether a business entity is close to bankruptcy or at least to serious financial problems. Financial distress is defined as a situation in which a company is not able to satisfy its liabilities in any form, or in which its liabilities exceed its assets. Classification of the financial situation of business entities represents a multidisciplinary scientific issue that draws not only on economic theory but also on statistical and econometric approaches. The first models of financial distress prediction originated in the sixties of the 20th century. One of the best known is Altman's model, followed by a range of others constructed on more or less comparable bases. In many existing models it is possible to find common elements which could be marked as elementary indicators of potential financial distress of a company. The objective of this article is, based on a comparison of existing models of prediction of financial distress, to define a set of basic indicators of a company's financial distress and to identify their critical aspects. The sample defined this way will be the background for future research focused on the determination of a one-dimensional model of financial distress prediction, which would subsequently become the basis for the construction of a multi-dimensional prediction model.

  2. Statistical models for expert judgement and wear prediction

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1994-01-01

    This thesis studies the statistical analysis of expert judgements and the prediction of wear. The point of view adopted is that of information theory and Bayesian statistics. A general Bayesian framework for analyzing both expert judgements and wear prediction is presented. Information-theoretic interpretations are given for some averaging techniques used in the determination of consensus distributions. Further, information-theoretic models are compared with a Bayesian model. The general Bayesian framework is then applied in analyzing expert judgements based on ordinal comparisons. In this context, the value of information lost in the ordinal comparison process is analyzed by applying decision-theoretic concepts. As a generalization of the Bayesian framework, stochastic filtering models for wear prediction are formulated. These models utilize the information from condition monitoring measurements in updating the residual life distribution of mechanical components. Finally, the application of stochastic control models in optimizing operational strategies for inspected components is studied. Monte-Carlo simulation methods, such as the Gibbs sampler and the stochastic quasi-gradient method, are applied in the determination of posterior distributions and in the solution of stochastic optimization problems. (orig.) (57 refs., 7 figs., 1 tab.)

  3. Predictive modeling of coupled multi-physics systems: I. Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2014-01-01

    Highlights: • We developed “predictive modeling of coupled multi-physics systems (PMCMPS)”. • PMCMPS reduces uncertainties in predicted model responses and parameters. • PMCMPS treats very large coupled systems efficiently. - Abstract: This work presents an innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS).” This methodology takes into account fully the coupling terms between the systems but requires only the computational resources that would be needed to perform predictive modeling on each system separately. The PMCMPS methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution based on a priori known mean values and uncertainties characterizing the parameters and responses for both multi-physics models. This “maximum entropy” approximate a priori distribution is combined, using Bayes’ theorem, with the “likelihood” provided by the multi-physics simulation models. Subsequently, the posterior distribution thus obtained is evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values of the multi-physics models’ parameters and responses, along with correspondingly reduced uncertainties. Notably, the predictive modeling methodology for the coupled systems is constructed such that the systems can be considered sequentially rather than simultaneously, while preserving exactly the same results as if the systems were treated simultaneously. Consequently, very large coupled systems, which could perhaps exceed available computational resources if treated simultaneously, can be treated with the PMCMPS methodology presented in this work sequentially and without any loss of generality or information, requiring just the resources that would be needed if the systems were treated sequentially.

  4. General single phase wellbore flow model

    Energy Technology Data Exchange (ETDEWEB)

    Ouyang, Liang-Biao; Arbabi, S.; Aziz, K.

    1997-02-05

    A general wellbore flow model, which incorporates not only frictional, accelerational and gravitational pressure drops, but also the pressure drop caused by inflow, is presented in this report. The new wellbore model is readily applicable to any wellbore perforation patterns and well completions, and can be easily incorporated in reservoir simulators or analytical reservoir inflow models. Three dimensionless numbers, the accelerational to frictional pressure gradient ratio R_af, the gravitational to frictional pressure gradient ratio R_gf, and the inflow-directional to accelerational pressure gradient ratio R_da, have been introduced to quantitatively describe the relative importance of different pressure gradient components. For fluid flow in a production well, it is expected that there may exist up to three different regions of the wellbore: the laminar flow region, the partially-developed turbulent flow region, and the fully-developed turbulent flow region. The laminar flow region is located near the well toe, the partially-turbulent flow region lies in the middle of the wellbore, while the fully-developed turbulent flow region is at the downstream end or the heel of the wellbore. Length of each region depends on fluid properties, wellbore geometry and flow rate. As the distance from the well toe increases, flow rate in the wellbore increases and the ratios R_af and R_da decrease. Consequently accelerational and inflow-directional pressure drops have the greatest impact in the toe region of the wellbore. Near the well heel the local wellbore flow rate becomes large and close to the total well production rate; here R_af and R_da are small, therefore, both the accelerational and inflow-directional pressure drops can be neglected.
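
    The three dimensionless ratios follow directly from the component pressure gradients; a small helper makes the definitions explicit (symbol names follow the report, the numbers below are placeholders):

        def pressure_gradient_ratios(dp_friction, dp_acceleration, dp_gravity, dp_inflow):
            """Return (R_af, R_gf, R_da) from the four pressure-gradient components."""
            R_af = dp_acceleration / dp_friction   # accelerational / frictional
            R_gf = dp_gravity / dp_friction        # gravitational / frictional
            R_da = dp_inflow / dp_acceleration     # inflow-directional / accelerational
            return R_af, R_gf, R_da

        # near the well heel frictional losses dominate, so R_af and R_da are small
        print(pressure_gradient_ratios(dp_friction=12.0, dp_acceleration=0.6,
                                       dp_gravity=4.0, dp_inflow=0.15))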

  5. Evaluating the double Poisson generalized linear model.

    Science.gov (United States)

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
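
    The normalizing-constant issue is easy to see numerically. The sketch below normalizes Efron's double Poisson kernel by truncated summation; this is a brute-force baseline, not the approximation method the paper proposes:

        import math

        def dp_kernel(y, mu, theta):
            """Efron's double Poisson kernel, without the normalizing constant."""
            if y == 0:
                return math.sqrt(theta) * math.exp(-theta * mu)
            log_k = (-y + y * math.log(y) - math.lgamma(y + 1)
                     + theta * y * (1 + math.log(mu) - math.log(y)))
            return math.sqrt(theta) * math.exp(-theta * mu + log_k)

        def dp_normalizing_constant(mu, theta, ymax=1000):
            """Approximate c(mu, theta) so that the pmf sums to one."""
            return 1.0 / sum(dp_kernel(y, mu, theta) for y in range(ymax + 1))

        mu, theta = 4.0, 0.7    # theta < 1 corresponds to over-dispersion
        print(dp_normalizing_constant(mu, theta))   # close to, but not exactly, 1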

  6. A Note on the Identifiability of Generalized Linear Mixed Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization.

  7. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    Science.gov (United States)

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
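
    In MLBMA, posterior model probabilities are typically computed from a likelihood-based model-selection criterion (e.g., KIC or BIC) evaluated at the maximum-likelihood estimates; a minimal sketch with purely illustrative numbers:

        import numpy as np

        def bma_weights(criteria, prior=None):
            """Posterior model probabilities from information-criterion values
            (smaller is better), via exp(-delta/2) weighting."""
            c = np.asarray(criteria, dtype=float)
            p = np.ones_like(c) / c.size if prior is None else np.asarray(prior, float)
            w = np.exp(-0.5 * (c - c.min())) * p
            return w / w.sum()

        # three alternative conceptual models and their U(VI) concentration forecasts
        kic = [210.3, 212.9, 218.4]          # hypothetical criterion values
        pred = np.array([4.1, 3.6, 5.0])     # hypothetical model predictions
        w = bma_weights(kic)
        print(w, float(w @ pred))            # weights and the averaged prediction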

  8. Intra prediction based on Markov process modeling of images.

    Science.gov (United States)

    Kamisli, Fatih

    2013-10-01

    In recent video coding standards, intraprediction of a block of pixels is performed by copying neighbor pixels of the block along an angular direction inside the block. Each block pixel is predicted from only one or few directionally aligned neighbor pixels of the block. Although this is a computationally efficient approach, it ignores potentially useful correlation of other neighbor pixels of the block. To use this correlation, a general linear prediction approach is proposed, where each block pixel is predicted using a weighted sum of all neighbor pixels of the block. The disadvantage of this approach is the increased complexity because of the large number of weights. In this paper, we propose an alternative approach to intraprediction, where we model image pixels with a Markov process. The Markov process model accounts for the ignored correlation in standard intraprediction methods, but uses few neighbor pixels and enables a computationally efficient recursive prediction algorithm. Compared with the general linear prediction approach that has a large number of independent weights, the Markov process modeling approach uses a much smaller number of independent parameters and thus offers significantly reduced memory or computation requirements, while achieving similar coding gains with offline computed parameters.
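
    A toy version of the recursive idea: under a separable first-order Markov model with horizontal and vertical correlation coefficients rho_h and rho_v (values assumed here), each pixel is predicted from its reconstructed left, top, and top-left neighbours, and the prediction itself feeds the recursion:

        import numpy as np

        def markov_intra_predict(shape, top, left, corner, rho_h=0.95, rho_v=0.95):
            """Predict a block from causal neighbours:
            x[i,j] ~ rho_v*top + rho_h*left - rho_h*rho_v*diagonal."""
            h, w = shape
            ext = np.zeros((h + 1, w + 1))       # reconstructed causal border
            ext[0, 0], ext[0, 1:], ext[1:, 0] = corner, top, left
            pred = np.zeros((h, w))
            for i in range(h):
                for j in range(w):
                    up, lf, dg = ext[i, j + 1], ext[i + 1, j], ext[i, j]
                    pred[i, j] = rho_v * up + rho_h * lf - rho_h * rho_v * dg
                    ext[i + 1, j + 1] = pred[i, j]   # recurse on the prediction
            return pred

        print(markov_intra_predict((4, 4), top=np.full(4, 100.0),
                                   left=np.full(4, 100.0), corner=100.0))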

  9. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristic curves in examining the robustness of the predictive power of these factors.

  10. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored and controlled have thus become essential factors for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and controlling of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  11. Implementation of a PETN failure model using ARIA's general chemistry framework

    Energy Technology Data Exchange (ETDEWEB)

    Hobbs, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-01

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.

  12. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    Science.gov (United States)

    Kanevski, Mikhail

    2015-04-01

    The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can be also applied for features selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into 13 dimensional space constructed by geographical coordinates and geo-features calculated from digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out applying a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN with their ability to select features and efficient modelling of complex high dimensional data can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
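
    GRNN is essentially Nadaraya-Watson kernel regression; the sketch below uses per-feature (anisotropic) Gaussian bandwidths, which is what makes the feature-relevance ranking described above possible (data and bandwidth values are synthetic):

        import numpy as np

        def grnn_predict(X_train, y_train, X_query, bandwidths):
            """Nadaraya-Watson / GRNN prediction with anisotropic kernels;
            a large bandwidth on a feature effectively switches it off."""
            h = np.asarray(bandwidths, dtype=float)
            preds = []
            for q in np.atleast_2d(X_query):
                d2 = np.sum(((X_train - q) / h) ** 2, axis=1)
                w = np.exp(-0.5 * d2)
                preds.append(np.sum(w * y_train) / (np.sum(w) + 1e-12))
            return np.array(preds)

        rng = np.random.default_rng(5)
        X = rng.uniform(-1, 1, (300, 2))
        y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=300)  # feature 1 irrelevant
        print(grnn_predict(X, y, [[0.5, 0.0]], bandwidths=[0.1, 10.0]))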

  13. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    One of the major challenges with the increase in wind power generation is the uncertain nature of wind speed. So far the uncertainty about wind speed has been presented through probability distributions. Also, the existing models that consider the uncertainty of the wind speed primarily view the distribution parameters as known or estimated from data. The proposed Bayesian predictive model of the wind speed instead aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are considered as random, with variations according to probability distributions. A Bayesian predictive model for the Rayleigh distribution, which has only a single scale parameter, has been proposed, and closed-form posterior and predictive distributions are derived.
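
    For a Rayleigh likelihood an inverse-gamma prior on the squared scale is conjugate, which is what makes a closed-form posterior possible; a Monte Carlo sketch of the resulting predictive distribution (prior hyperparameters and data are assumptions):

        import numpy as np

        rng = np.random.default_rng(6)
        wind = rng.rayleigh(scale=7.0, size=200)   # synthetic wind-speed record (m/s)

        # Rayleigh: f(x|v) = (x/v) exp(-x^2/(2v)) with v = sigma^2.
        # Inverse-gamma(a, b) prior on v -> posterior Inv-Gamma(a + n, b + sum(x^2)/2)
        a0, b0 = 2.0, 50.0                         # assumed prior hyperparameters
        a_post = a0 + wind.size
        b_post = b0 + 0.5 * np.sum(wind ** 2)

        # draw v from the posterior (inverse-gamma via reciprocal gamma), then speeds
        v = 1.0 / rng.gamma(shape=a_post, scale=1.0 / b_post, size=10000)
        x_pred = rng.rayleigh(scale=np.sqrt(v))
        print(np.percentile(x_pred, [5, 50, 95]))  # predictive wind-speed quantiles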

  14. Contextual interactions in a generalized energy model of complex cells.

    Science.gov (United States)

    Dellen, Babette K; Clark, John W; Wessel, Ralf

    2009-01-01

    We propose a generalized energy model of complex cells to describe modulatory contextual influences on the responses of neurons in the primary visual cortex (V1). Many orientation-selective cells in V1 respond to contrast of orientation and motion of stimuli exciting the classical receptive field (CRF) and the non-CRF, or surround. In the proposed model, a central spatiotemporal filter, defining the CRF, is nonlinearly combined with a spatiotemporal filter extending into the non-CRF. These filters are assumed to describe simple-cell responses, while the nonlinear combination of their responses describes the responses of complex cells. This mathematical operation accounts for the inherent nonlinearity of complex cells, such as phase independence and frequency doubling, and for nonlinear interactions between stimuli in the CRF and surround of the cell, including sensitivity to feature contrast. If only the CRF of the generalized complex cell is stimulated by a drifting grating, the model reduces to the standard energy model. The theoretical predictions of the model are supported by computer simulations and compared with experimental data from V1.

  15. A generalized methodology to characterize composite materials for pyrolysis models

    Science.gov (United States)

    McKinnon, Mark B.

    The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to

  16. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  17. A general mixture model for sediment laden flows

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping; Bombardelli, Fabián

    2017-09-01

    A mixture model for general description of sediment-laden flows is developed based on an Eulerian-Eulerian two-phase flow theory, with the aim of gaining computational speed in the prediction, but preserving the accuracy of the complete two-fluid model. The basic equations of the model include the mass and momentum conservation equations for the sediment-water mixture, and the mass conservation equation for sediment. However, a newly-obtained expression for the slip velocity between phases allows for the computation of the sediment motion, without the need of solving the momentum equation for sediment. The turbulent motion is represented for both the fluid and the particulate phases. A modified k-ε model is used to describe the fluid turbulence while an algebraic model is adopted for turbulent motion of particles. A two-dimensional finite difference method based on the SMAC scheme was used to numerically solve the mathematical model. The model is validated through simulations of fluid and suspended sediment motion in steady open-channel flows, both in equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulent kinetic energy of the mixture are all shown to be in good agreement with available experimental data, and importantly, this is done at a fraction of the computational efforts required by the complete two-fluid model.

  18. Predictive modeling in homogeneous catalysis: a tutorial

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2010-01-01

    Predictive modeling has become a practical research tool in homogeneous catalysis. It can help to pinpoint ‘good regions’ in the catalyst space, narrowing the search for the optimal catalyst for a given reaction. Just like any other new idea, in silico catalyst optimization is accepted by some

  19. Model predictive control of smart microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Guerrero, Josep M.

    2014-01-01

    Innovative control techniques are required to realise high performance of distributed generations; this work develops such techniques utilising model predictive control (MPC) to assist in coordinating the plethora of generation and load combinations, thus enabling the effective exploitation of the clean renewable energy sources...

  20. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately, but with arbitrarily high accuracy.

  1. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  2. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchichal model predictive control (MPC) of distributed systems. A three level hierachical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...

  3. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...
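
    A stripped-down version of such a controller, with the input and input-rate constraints and the output-disturbance filter omitted so only the regularized l2 FIR prediction step remains (impulse response, horizon, and weights are placeholders):

        import numpy as np
        from scipy.linalg import toeplitz

        h = 0.4 * 0.7 ** np.arange(20)      # FIR impulse-response coefficients
        N = 30                              # prediction horizon
        Gamma = toeplitz(np.r_[h, np.zeros(N - h.size)], np.zeros(N))

        r = np.ones(N)                      # set-point trajectory over the horizon
        lam = 0.1                           # input-rate regularization weight
        D = np.eye(N) - np.eye(N, k=-1)     # first-difference operator

        # unconstrained regularized l2 problem: min ||r - Gamma u||^2 + lam ||D u||^2
        u = np.linalg.solve(Gamma.T @ Gamma + lam * D.T @ D, Gamma.T @ r)
        print(u[:3])   # receding horizon: apply u[0], then re-solve at the next step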

  4. General analysis of dark radiation in sequestered string models

    Energy Technology Data Exchange (ETDEWEB)

    Cicoli, Michele [ICTP,Strada Costiera 11, Trieste 34014 (Italy); Dipartimento di Fisica e Astronomia, Università di Bologna,via Irnerio 46, 40126 Bologna (Italy); INFN, Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy); Muia, Francesco [Dipartimento di Fisica e Astronomia, Università di Bologna,via Irnerio 46, 40126 Bologna (Italy); INFN, Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy)

    2015-12-22

    We perform a general analysis of axionic dark radiation produced from the decay of the lightest modulus in the sequestered LARGE Volume Scenario. We discuss several cases depending on the form of the Kähler metric for visible sector matter fields and the mechanism responsible for achieving a de Sitter vacuum. The leading decay channels which determine dark radiation predictions are to hidden sector axions, visible sector Higgses and SUSY scalars depending on their mass. We show that in most of the parameter space of split SUSY-like models squarks and sleptons are heavier than the lightest modulus. Hence dark radiation predictions previously obtained for MSSM-like cases hold more generally also for split SUSY-like cases since the decay channel to SUSY scalars is kinematically forbidden. However the inclusion of string loop corrections to the Kähler potential gives rise to a parameter space region where the decay channel to SUSY scalars opens up, leading to a significant reduction of dark radiation production. In this case, the simplest model with a shift-symmetric Higgs sector can suppress the excess of dark radiation ΔN_eff to values as small as 0.14, in perfect agreement with current experimental bounds. Depending on the exact mass of the SUSY scalars all values in the range 0.14 ≲ ΔN_eff ≲ 1.6 are allowed. Interestingly dark radiation overproduction can be avoided also in the absence of a Giudice-Masiero coupling.

  5. Multivariate generalized linear model for genetic pleiotropy.

    Science.gov (United States)

    Schaid, Daniel J; Tong, Xingwei; Batzler, Anthony; Sinnwell, Jason P; Qing, Jiang; Biernacka, Joanna M

    2017-12-16

    When a single gene influences more than one trait, known as pleiotropy, it is important to detect pleiotropy to improve the biological understanding of a gene. This can lead to improved screening, diagnosis, and treatment of diseases. Yet, most current multivariate methods to evaluate pleiotropy test the null hypothesis that none of the traits are associated with a variant; departures from the null could be driven by just one associated trait. A formal test of pleiotropy should assume a null hypothesis that one or fewer traits are associated with a genetic variant. We recently developed statistical methods to analyze pleiotropy for quantitative traits having a multivariate normal distribution. We now extend this approach to traits that can be modeled by generalized linear models, such as analysis of binary, ordinal, or quantitative traits, or a mixture of these types of traits. Based on methods from estimating equations, we developed a new test for pleiotropy. We then extended the testing framework to a sequential approach to test the null hypothesis that k+1 traits are associated, given that the null of k associated traits was rejected. This provides a testing framework to determine the number of traits associated with a genetic variant, as well as which traits, while accounting for correlations among the traits. By simulations, we illustrate the Type-I error rate and power of our new methods, describe how they are influenced by sample size, the number of traits, and the trait correlations, and apply the new methods to a genome-wide association study of multivariate traits measuring symptoms of major depression. Our new approach provides a quantitative assessment of pleiotropy, enhancing current analytic practice. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Multivariate linear regression analysis to identify general factors for quantitative predictions of implant stability quotient values.

    Directory of Open Access Journals (Sweden)

    Hairong Huang

    Full Text Available This study identified potential general influencing factors for a mathematical prediction of implant stability quotient (ISQ) values in clinical practice. We collected the ISQ values of 557 implants from 2 different brands (SICace and Osstem) placed by 2 surgeons in 336 patients. Surgeon 1 placed 329 SICace implants, and surgeon 2 placed 113 SICace implants and 115 Osstem implants. ISQ measurements were taken at T1 (immediately after implant placement) and T2 (before dental restoration). A multivariate linear regression model was used to analyze the influence of the following 11 candidate factors for stability prediction: sex, age, maxillary/mandibular location, bone type, immediate/delayed implantation, bone grafting, insertion torque, I-stage or II-stage healing pattern, implant diameter, implant length and T1-T2 time interval. The need for bone grafting as a predictor significantly influenced ISQ values in all three groups at T1 (weight coefficients ranging from -4 to -5). In contrast, implant diameter consistently influenced the ISQ values in all three groups at T2 (weight coefficients ranging from 3.4 to 4.2). Other factors, such as sex, age, I/II-stage implantation and bone type, did not significantly influence ISQ values at T2, and implant length did not significantly influence ISQ values at T1 or T2. These findings provide a rational basis for mathematical models to quantitatively predict the ISQ values of implants in clinical practice.
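
    The multivariate linear model above amounts to ordinary least squares on the candidate factors. A sketch on synthetic implants (effect sizes mimic the signs reported in the abstract; all numbers are invented):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)
        n = 300
        bone_graft = rng.integers(0, 2, n)       # 1 = bone grafting needed
        diameter = rng.uniform(3.5, 5.0, n)      # implant diameter (mm)
        torque = rng.uniform(15, 45, n)          # insertion torque (N*cm)

        # synthetic T1 ISQ: grafting lowers stability by ~4-5 units (cf. abstract)
        isq_t1 = 70 - 4.5 * bone_graft + 0.2 * torque + rng.normal(0, 3, n)

        X = np.c_[bone_graft, diameter, torque]
        fit = LinearRegression().fit(X, isq_t1)
        print(dict(zip(["bone_graft", "diameter", "torque"], fit.coef_.round(2))))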

  7. A generalized model for homogenized reflectors

    International Nuclear Information System (INIS)

    Pogosbekyan, Leonid; Kim, Yeong Il; Kim, Young Jin; Joo, Hyung Kook

    1996-01-01

    A new concept of equivalent homogenization is proposed. The concept employs a new set of homogenized parameters: homogenized cross sections (XS) and an interface matrix (IM), which relates partial currents at the cell interfaces. The idea of the interface matrix generalizes the idea of discontinuity factors (DFs), proposed and developed by K. Koebke and K. Smith. The method of K. Smith can be simulated within the framework of the new method, while the new method approximates the heterogeneous cell better in the case of steep flux gradients at the cell interfaces. The attractive features of the new concept are: improved accuracy, simplicity of incorporation in existing codes, and numerical expenses equal to those of K. Smith's approach. The new concept is useful for: (a) explicit reflector/baffle simulation; (b) control blades simulation; (c) mixed UO2/MOX core simulation. The offered model has been incorporated in a finite difference code and in the nodal code PANBOX. The numerical results show good accuracy of core calculations and insensitivity of the homogenized parameters with respect to in-core conditions.

  8. Climatology of the HOPE-G global ocean general circulation model - Sea ice general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Legutke, S. [Deutsches Klimarechenzentrum (DKRZ), Hamburg (Germany); Maier-Reimer, E. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany)

    1999-12-01

    The HOPE-G global ocean general circulation model (OGCM) climatology, obtained in a long-term forced integration, is described. HOPE-G is a primitive-equation z-level ocean model which contains a dynamic-thermodynamic sea-ice model. It is formulated on a 2.8° grid with increased resolution in low latitudes in order to better resolve equatorial dynamics. The vertical resolution is 20 layers. The purpose of the integration was both to investigate the model's ability to reproduce the observed general circulation of the world ocean and to obtain an initial state for coupled atmosphere - ocean - sea-ice climate simulations. The model was driven with daily mean data of a 15-year integration of the atmosphere general circulation model ECHAM4, the atmospheric component in later coupled runs. Thereby, a maximum of the flux variability that is expected to appear in coupled simulations is included already in the ocean spin-up experiment described here. The model was run for more than 2000 years until a quasi-steady state was achieved. It reproduces the major current systems and the main features of the so-called conveyor belt circulation. The observed distribution of water masses is reproduced reasonably well, although with a saline bias in the intermediate water masses and a warm bias in the deep and bottom water of the Atlantic and Indian Oceans. The model underestimates the meridional transport of heat in the Atlantic Ocean. The simulated heat transport in the other basins, though, is in good agreement with observations. (orig.)

  9. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  10. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees was entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow up] provided for examination of the predictive capacity concerning different multifactor models. Results. The data gathered showed that different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). Cariogram is the model which identified the majority of examinees as medium risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases – the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  11. Characterizing and predicting rates of delirium across general hospital settings.

    Science.gov (United States)

    McCoy, Thomas H; Hart, Kamber L; Perlis, Roy H

    2017-05-01

    To better understand variation in reported rates of delirium, this study characterized delirium occurrence rates by department of service and primary admitting diagnosis. Nine consecutive years (2005-2013) of general hospital admissions (N=831,348) were identified across two academic medical centers using electronic health records. The primary admitting diagnosis and the treating clinical department were used to calculate occurrence rates of a previously published delirium definition composed of billing codes and natural language processing of discharge summaries. Delirium rates varied significantly across admitting diagnosis groups (χ²(10) = 12,786, p < 0.001), with some services experiencing rates as low as 0.08% (86/109,764) and neurological admissions the greatest (2,851/25,450; 11.2%). Although the rate of delirium varied across the two hospitals, the relative rates within departments were strongly correlated (r = 0.96, p < 0.001). Overall, the rate of delirium varies significantly across admitting diagnosis and hospital department. Both admitting diagnosis and department of care are even stronger predictors of risk than age; as such, simple risk stratification may offer avenues for targeted prevention and treatment efforts. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Link Prediction via Sparse Gaussian Graphical Model

    Directory of Open Access Journals (Sweden)

    Liangliang Zhang

    2016-01-01

    Full Text Available Link prediction is an important task in complex network analysis. Traditional link prediction methods are limited by network topology and the lack of node property information, which makes predicting links challenging. In this study, we address link prediction using a sparse Gaussian graphical model and demonstrate its theoretical and practical effectiveness. In theory, link prediction is executed by estimating the inverse covariance matrix of samples to overcome information limits. The proposed method was evaluated with four small and four large real-world datasets. The experimental results show that the area under the curve (AUC) value obtained by the proposed method improved by an average of 3% on the small and 12.5% on the large datasets, compared to 13 mainstream similarity methods. The method outperforms the baseline method, and its prediction accuracy is superior to mainstream methods even when using only 80% of the training set. The method also provides significantly higher AUC values when using only 60% of the training set on the Dolphin and Taro datasets. Furthermore, the error rate of the proposed method demonstrates superior performance on all datasets compared to mainstream methods.
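
    To make the mechanics concrete, here is a minimal sketch (not the authors' implementation; the toy data, the regularization strength alpha, and all names are assumptions) that estimates a sparse precision matrix with scikit-learn's GraphicalLasso and ranks candidate links by the magnitude of the implied partial correlations:

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        # Toy node-activity matrix: rows = samples, columns = network nodes.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))
        X[:, 3] += 0.8 * X[:, 7]  # plant a dependency between nodes 3 and 7

        # Estimate a sparse inverse covariance (precision) matrix.
        P = GraphicalLasso(alpha=0.1).fit(X).precision_

        # Partial correlation between nodes i and j: -P_ij / sqrt(P_ii * P_jj).
        d = np.sqrt(np.diag(P))
        partial_corr = -P / np.outer(d, d)
        np.fill_diagonal(partial_corr, 0.0)

        # Rank node pairs by |partial correlation| as link scores; the planted
        # pair (3, 7) should surface near the top of the ranking.
        pairs = [(i, j) for i in range(10) for j in range(i + 1, 10)]
        print(sorted(pairs, key=lambda p: -abs(partial_corr[p]))[:3])

    In a real link-prediction setting, the samples would be observed node attributes or activity snapshots rather than synthetic draws, and the ranking would be restricted to node pairs not already joined by an edge.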

  13. General prognostic scores in outcome prediction for cancer patients admitted to the intensive care unit.

    Science.gov (United States)

    Kopterides, Petros; Liberopoulos, Panayiotis; Ilias, Ioannis; Anthi, Anastasia; Pragkastis, Dimitrios; Tsangaris, Iraklis; Tsaknis, Georgios; Armaganidis, Apostolos; Dimopoulou, Ioanna

    2011-01-01

    Intensivists and nursing staff are often reluctant to admit patients with cancer to the intensive care unit even though these patients' survival rate has improved since the 1980s. To identify factors associated with mortality in cancer patients admitted to the intensive care unit and to assess and compare the effectiveness of 3 general prognostic models: the Acute Physiology and Chronic Health Evaluation (APACHE) II, the Simplified Acute Physiology Score (SAPS II), and the Sequential Organ Failure Assessment (SOFA). A prospective observational cohort study was performed in 2 general intensive care units. Discrimination was assessed by using area under the receiver operating characteristic curves, and calibration was evaluated by using Hosmer-Lemeshow goodness-of-fit tests. A total of 126 patients were included during a 3-year period. The observed mortality was 46.8%. All 3 general models showed excellent discrimination (area under the curve >0.8) and good calibration (P = .17, .14, and .22 for APACHE II, SAPS II, and SOFA, respectively). However, discrimination was significantly better with APACHE II scores than with SOFA scores (P = .02). Multivariate analyses indicated that independent of the 3 severity-of-illness scores, unfavorable risk factors for mortality included a patient's preadmission performance status, source of admission (internal medicine vs surgery department), and the presence of septic shock, infection, or anemia. Combining SOFA and SAPS II scores with these variables created prognostic models with improved calibration and discrimination. The general prognostic models seem fairly accurate in the prediction of mortality in critically ill cancer patients in the intensive care unit.
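
    The two properties assessed here, discrimination and calibration, are straightforward to compute on any scored cohort. The following sketch (simulated risks and outcomes, not the study's data; the decile-based Hosmer-Lemeshow statistic is hand-rolled because neither scikit-learn nor SciPy provides it directly) illustrates both:

        import numpy as np
        from scipy.stats import chi2
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        p = rng.uniform(0.05, 0.9, size=500)  # hypothetical predicted mortality risks
        y = rng.binomial(1, p)                # simulated outcomes consistent with p

        # Discrimination: area under the ROC curve (>0.8 was "excellent" above).
        print("AUC:", roc_auc_score(y, p))

        # Calibration: Hosmer-Lemeshow goodness of fit over g = 10 risk deciles.
        g = 10
        H = 0.0
        for chunk in np.array_split(np.argsort(p), g):
            obs, exp, n = y[chunk].sum(), p[chunk].sum(), len(chunk)
            H += (obs - exp) ** 2 / (exp * (1 - exp / n))
        print("H-L p-value:", chi2.sf(H, df=g - 2))  # large p => no evidence of miscalibration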

  14. Assessment of Specific Characteristics of Abnormal General Movements: Does It Enhance the Prediction of Cerebral Palsy?

    Science.gov (United States)

    Hamer, Elisa G.; Bos, Arend F.; Hadders-Algra, Mijna

    2011-01-01

    Aim: Abnormal general movements at around 3 months corrected age indicate a high risk of cerebral palsy (CP). We aimed to determine whether specific movement characteristics can improve the predictive power of definitely abnormal general movements. Method: Video recordings of 46 infants with definitely abnormal general movements at 9 to 13 weeks…

  15. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters has so far mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This has resulted in considerable effort regarding prototype design, construction and testing, and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These models open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  16. Genetic models of homosexuality: generating testable predictions

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  17. Planetary wave prediction: Benefits of tropical data and global models

    Science.gov (United States)

    Somerville, R. C. J.

    1985-01-01

    Skillful numerical predictions of midlatitude atmospheric planetary waves generally require both tropical data for the initial conditions and a global domain for the forecast model. The lack of either adequate tropical observations or a global domain typically leads to a significant degradation of forecast skill in middle latitudes within the first one to three days of the forecast period. These effects were first discovered by numerical experimentation. They were subsequently explained theoretically, and their importance for practical forecasting was confirmed in a series of prediction experiments using FGGE data.

  18. [Treatment of cloud radiative effects in general circulation models

    International Nuclear Information System (INIS)

    Wang, W.C.

    1993-01-01

    This is a renewal proposal for an ongoing project of the Department of Energy (DOE)/Atmospheric Radiation Measurement (ARM) Program. The objective of the ARM Program is to improve the treatment of radiation-cloud interactions in GCMs so that reliable predictions of the timing and magnitude of greenhouse gas-induced global warming and regional responses can be made. The ARM Program supports two research areas: (I) the modeling and analysis of data related to the parameterization of clouds and radiation in general circulation models (GCMs); and (II) the development of advanced instrumentation, both for mapping the three-dimensional structure of the atmosphere and for high-accuracy/precision radiometric observations. The present project conducts research in area (I) and focuses on the GCM treatment of cloud life cycle, optical properties, and vertical overlapping. The project has two tasks: (1) Development and Refinement of GCM Radiation-Cloud Treatment Using ARM Data; and (2) Validation of GCM Radiation-Cloud Treatment.

  19. A Chemical Containment Model for the General Purpose Work Station

    Science.gov (United States)

    Flippen, Alexis A.; Schmidt, Gregory K.

    1994-01-01

    Contamination control is a critical safety requirement imposed on experiments flying on board the Spacelab. The General Purpose Work Station, a Spacelab support facility used for life sciences space flight experiments, is designed to remove volatile compounds from its internal airpath and thereby minimize contamination of the Spacelab. This is accomplished through the use of a large, multi-stage filter known as the Trace Contaminant Control System. Many experiments planned for the Spacelab require the use of toxic, volatile fixatives in order to preserve specimens prior to postflight analysis. The NASA-Ames Research Center SLS-2 payload, in particular, necessitated the use of several toxic, volatile compounds in order to accomplish the many inflight experiment objectives of this mission. A model was developed based on earlier theories and calculations which provides conservative predictions of the resultant concentrations of these compounds given various spill scenarios. This paper describes the development and application of this model.

  20. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th-order autoregressive (AR) model and statistical regression analysis. It was determined that an AR-derived parameter, the mean average magnitude of the AR poles, correlated significantly with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of the AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-spaceflight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned in ways that enhance astronaut performance and safety. Potential ground-based medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing industry safety guidelines for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing the risk of injury.

  1. PVT characterization and viscosity modeling and prediction of crude oils

    DEFF Research Database (Denmark)

    Cisneros, Eduardo Salvador P.; Dalberg, Anders; Stenby, Erling Halfdan

    2004-01-01

    In previous works, the general one-parameter friction theory (f-theory) models have been applied to the accurate viscosity modeling of reservoir fluids. As a base, the f-theory approach requires a compositional characterization procedure for the application of an equation of state (EOS), in most cases a cubic EOS. Here, a characterization method based on an accurate description of the fluid mass distribution is presented; the procedure accurately matches the fluid saturation pressure. Additionally, a Peneloux volume translation scheme, capable of accurately reproducing the fluid density above and below the saturation pressure, is applied, so that the combined approach can deliver accurate viscosity predictions. The modeling approach presented in this work can deliver accurate viscosity and density modeling and prediction results over wide ranges of reservoir conditions, including the compositional changes induced by recovery processes such as gas injection.

  2. Computational modeling of oligonucleotide positional densities for human promoter prediction.

    Science.gov (United States)

    Narang, Vipin; Sung, Wing-Kin; Mittal, Ankush

    2005-01-01

    The gene promoter region controls the transcriptional initiation of a gene, which is the most important step in gene regulation. In silico detection of the promoter region in genomic sequences has a number of applications in gene discovery and in understanding gene expression regulation. However, computational prediction of eukaryotic pol-II promoters has remained a difficult task. This paper introduces a novel statistical technique for detecting promoter regions in long genomic sequences. A number of existing techniques analyze the occurrence frequencies of oligonucleotides in promoter sequences as compared to other genomic regions. In contrast, the present work studies the positional densities of oligonucleotides in promoter sequences. The analysis does not require any non-promoter sequence dataset or any model of the background oligonucleotide content of the genome. The statistical model learnt from a dataset of promoter sequences automatically recognizes a number of transcription factor binding sites, simultaneously with their occurrence positions relative to the transcription start site. Based on this model, a continuous naïve Bayes classifier is developed for the detection of human promoters and transcription start sites in genomic sequences. The present study extends the scope of statistical models in general promoter modeling and prediction. Promoter sequence features learnt by the model correlate well with known biological facts. Results of human transcription start site prediction compare favorably with existing second-generation promoter prediction tools.
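
    The positional-density idea can be sketched as follows (a toy reconstruction, not the authors' code; the window length, smoothing bandwidth, alignment convention, and training sequences are all assumptions): hexamer positions relative to the transcription start site (TSS) are counted in known promoters, smoothed into positional densities, and a candidate window is scored by the summed log density of its hexamers.

        import numpy as np
        from collections import defaultdict

        K, WIN, BW = 6, 250, 10  # hexamers, window length, kernel bandwidth

        def positional_counts(promoters):
            # Promoters are assumed aligned so that each window ends at the TSS.
            counts = defaultdict(lambda: np.zeros(WIN))
            for seq in promoters:
                for i in range(min(len(seq), WIN) - K + 1):
                    counts[seq[i:i + K]][i] += 1
            return counts

        def smooth(track):
            # Gaussian-kernel smoothing of raw positional counts into a density.
            pos = np.arange(WIN)
            kern = np.exp(-0.5 * ((pos[:, None] - pos[None, :]) / BW) ** 2)
            dens = kern @ track + 1e-3  # small pseudocount
            return dens / dens.sum()

        def score(window, densities):
            # Log-likelihood of the window under the positional-density model.
            s = 0.0
            for i in range(len(window) - K + 1):
                track = densities.get(window[i:i + K])
                if track is not None:
                    s += np.log(track[i])
            return s

        # Hypothetical usage: train on placeholder promoters, then slide over a
        # sequence with a promoter-like region planted at position 800.
        train = ["ACGTTATAAAAGGCCG" * 16]
        densities = {k: smooth(v) for k, v in positional_counts(train).items()}
        genome = "ACGT" * 200 + "ACGTTATAAAAGGCCG" * 16 + "ACGT" * 200
        best = max(range(len(genome) - WIN),
                   key=lambda j: score(genome[j:j + WIN], densities))
        print("best candidate window starts at", best)  # near the planted region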

  3. Tourism Operator Sustainability Predictive Model in Marine Park

    OpenAIRE

    Mohamad, Zaleha; Ramli, Nurhafizah; Muslim, Aidy Mohamed Shawal M.; Hii, Yii Siang

    2017-01-01

    Sustainable tourism is the concept of visiting a place as a tourist while trying to make only a positive impact on the environment, society, and economy. Tourism can involve primary transportation to the general location, local transportation, accommodation, entertainment, recreation, nourishment, and shopping. In this context, this research studies tourism operators in recreational settings: it analyzes a sustainability predictive model for tourism operators in a marine park. The research…

  4. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to interpret…

  5. Relative sensitivity analysis of the predictive properties of sloppy models.

    Science.gov (United States)

    Myasnikova, Ekaterina; Spirov, Alexander

    2018-01-25

    Common among the model parameters characterizing complex biological systems are those that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. The sloppiness can be mathematically expressed through saturating response functions (Hill, sigmoid), thereby embodying the biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used to predict the system behavior at an altered input (e.g., knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to ambiguity in the parameter estimates. We introduce a method for evaluating predictive power under parameter estimation uncertainty: Relative Sensitivity Analysis. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select, among the existing solutions, those providing the correct system behavior at any reasonable input. In general, the method allows one to uncover the sources of incorrect predictions and suggests a way to overcome the estimation uncertainties.
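
    The flavor of the analysis can be conveyed with a toy saturating response (a sketch of relative, i.e. logarithmic, sensitivities under assumed parameter values, not the authors' gene-circuit code): two parameter sets that fit equally well at the training input can differ sharply in how their predictions respond to the parameters at an altered input.

        import numpy as np

        def response(u, theta):
            # Toy sigmoid gene-regulation response, saturating in the input u.
            a, k, h = theta  # maximum rate, steepness, threshold
            return a / (1.0 + np.exp(-k * (u - h)))

        def relative_sensitivity(u, theta, eps=1e-4):
            # S_j = (dy/dtheta_j) * (theta_j / y), via central differences.
            y = response(u, theta)
            S = []
            for j in range(len(theta)):
                tp, tm = np.array(theta, float), np.array(theta, float)
                tp[j] *= 1 + eps
                tm[j] *= 1 - eps
                dy = (response(u, tp) - response(u, tm)) / (2 * eps * theta[j])
                S.append(dy * theta[j] / y)
            return np.array(S)

        # Two hypothetical fits; the second is steep ("sloppy" in k and h).
        robust, sloppy = (2.0, 1.5, 0.5), (2.0, 12.0, 0.9)
        for th in (robust, sloppy):
            print(th, relative_sensitivity(2.0, th))  # evaluated at an altered input

    The steep fit is saturated at u = 2, so its relative sensitivities to k and h are near zero there: data would barely constrain those parameters, which is precisely the ambiguity that degrades predictions at altered inputs.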

  6. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  7. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From Altman's 1968 paper to the recent papers of the 1990s, progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e., using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control actions in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  8. General role of the amino and methylsulfamoyl groups in selective cyclooxygenase(COX)-1 inhibition by 1,4-diaryl-1,2,3-triazoles and validation of a predictive pharmacometric PLS model.

    Science.gov (United States)

    Perrone, Maria Grazia; Vitale, Paola; Panella, Andrea; Fortuna, Cosimo G; Scilimati, Antonio

    2015-04-13

    A novel set of 1,4-diaryl-1,2,3-triazoles was designed as a tool to study the effect of both the heteroaromatic triazole core ring and a variety of chemical groups with different electronic features, sizes and shapes on the catalytic activity of the two COX isoenzymes. The new triazoles were synthesized in fair to good yields and then evaluated for their inhibitory activity towards the COX-catalyzed conversion of arachidonic acid. Their COX selectivity was also measured. A predictive pharmacometric VolSurf+ PLS model, experimentally confirmed by the percentage (%) of COX inhibition at a concentration of 50 μM and by the IC50 values of the tested compounds, was built using a number of isoxazoles of known COX inhibitory activity as a training set. It was found that two compounds {4-(5-methyl-4-phenyl-1H-1,2,3-triazol-1-yl)benzenamine (18) and 4-[1-(4-methoxyphenyl)-5-methyl-1H-1,2,3-triazole-4-yl]benzenamine (19)} bearing an amino group (NH2) are potent and selective COX-1 inhibitors (IC50 = 15 and 3 μM, respectively), and that the presence of a methylsulfamoyl group (SO2CH3) does not by itself make a compound a coxib. In fact, 4-(4-methoxyphenyl)-5-methyl-1-[4-(methylsulfonyl)phenyl]-1H-1,2,3-triazole (23) has COX-1 IC50 = 23 μM and was found inactive towards COX-2. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  9. Exploratory Studies in Generalized Predictive Control for Active Aeroelastic Control of Tiltrotor Aircraft

    Science.gov (United States)

    Kvaternik, Raymond G.; Juang, Jer-Nan; Bennett, Richard L.

    2000-01-01

    The Aeroelasticity Branch at NASA Langley Research Center has a long and substantive history of tiltrotor aeroelastic research. That research has included a broad range of experimental investigations in the Langley Transonic Dynamics Tunnel (TDT) using a variety of scale models and the development of essential analyses. Since 1994, the tiltrotor research program has been using a 1/5-scale, semispan aeroelastic model of the V-22 designed and built by Bell Helicopter Textron Inc. (BHTI) in 1981. That model has been refurbished to form a tiltrotor research testbed called the Wing and Rotor Aeroelastic Test System (WRATS) for use in the TDT. In collaboration with BHTI, studies under the current tiltrotor research program are focused on aeroelastic technology areas having the potential for enhancing the commercial and military viability of tiltrotor aircraft. Among the areas being addressed, considerable emphasis is being directed to the evaluation of modern adaptive multi-input multi-output (MIMO) control techniques for active stability augmentation and vibration control of tiltrotor aircraft. As part of this investigation, a predictive control technique known as Generalized Predictive Control (GPC) is being studied to assess its potential for actively controlling the swashplate of tiltrotor aircraft to enhance aeroelastic stability in both helicopter and airplane modes of flight. This paper summarizes the exploratory numerical and experimental studies that were conducted as part of that investigation.

  10. Effect of misreported family history on Mendelian mutation prediction models.

    Science.gov (United States)

    Katki, Hormuzd A

    2006-06-01

    People with familial history of disease often consult with genetic counselors about their chance of carrying mutations that increase disease risk. To aid them, genetic counselors use Mendelian models that predict whether the person carries deleterious mutations based on their reported family history. Such models rely on accurate reporting of each member's diagnosis and age of diagnosis, but this information may be inaccurate. Commonly encountered errors in family history can significantly distort predictions, and thus can alter the clinical management of people undergoing counseling, screening, or genetic testing. We derive general results about the distortion in the carrier probability estimate caused by misreported diagnoses in relatives. We show that the Bayes factor that channels all family history information has a convenient and intuitive interpretation. We focus on the ratio of the carrier odds given correct diagnosis versus given misreported diagnosis to measure the impact of errors. We derive the general form of this ratio and approximate it in realistic cases. Misreported age of diagnosis usually causes less distortion than misreported diagnosis. This is the first systematic quantitative assessment of the effect of misreported family history on mutation prediction. We apply the results to the BRCAPRO model, which predicts the risk of carrying a mutation in the breast and ovarian cancer genes BRCA1 and BRCA2.
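
    A small numeric sketch of the mechanics (hypothetical prior and Bayes factors, not values from BRCAPRO): the posterior carrier odds are the prior odds multiplied by the Bayes factor of the reported family history, so the distortion measure discussed above reduces to a ratio of Bayes factors.

        def posterior_carrier_prob(prior, bayes_factor):
            # Posterior odds = prior odds * Bayes factor of the family history.
            odds = prior / (1 - prior) * bayes_factor
            return odds / (1 + odds)

        prior = 0.01          # population carrier frequency (hypothetical)
        bf_correct = 40.0     # BF for the correctly reported history (hypothetical)
        bf_misreported = 8.0  # BF after one relative's diagnosis is misreported

        p_ok = posterior_carrier_prob(prior, bf_correct)
        p_err = posterior_carrier_prob(prior, bf_misreported)
        print(p_ok, p_err)  # ~0.29 vs ~0.07: a clinically meaningful distortion

        # Ratio of carrier odds (correct vs misreported) = ratio of Bayes factors.
        print((p_ok / (1 - p_ok)) / (p_err / (1 - p_err)))  # = 40/8 = 5.0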

  11. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  12. Mathematical modeling to predict residential solid waste generation.

    Science.gov (United States)

    Benítez, Sara Ojeda; Lozano-Olvera, Gabriela; Morelos, Raúl Adalberto; Vega, Carolina Armijo de

    2008-01-01

    One of the challenges faced by waste management authorities is determining the amount of waste generated by households in order to establish waste management systems, as well as trying to charge rates compatible with the principle applied worldwide, and to design a fair payment system for households according to the amount of residential solid waste (RSW) they generate. The goal of this research work was to establish mathematical models that correlate the generation of RSW per capita to the following variables: education, income per household, and number of residents. This work was based on data from a study on the generation, quantification and composition of residential waste in a Mexican city, carried out in three stages. In order to define prediction models, five variables were identified and included in the model. For each waste sampling stage a different mathematical model was developed, in order to find the model that showed the best linear relation for predicting residential solid waste generation. Later on, models exploring combinations of the included variables were established, and those showing a higher R² were selected. The tests applied were for normality, multicollinearity and heteroskedasticity. Another model, formulated with four variables, was generated, and the Durbin-Watson test was applied to it. Finally, a general mathematical model is proposed to predict residential waste generation, which accounts for 51% of the total.
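
    The workflow described, regressing per-capita generation on socioeconomic variables and then checking normality, multicollinearity, heteroskedasticity and the Durbin-Watson statistic, can be sketched with statsmodels (simulated stand-in data; the variable names and coefficients are assumptions, and Breusch-Pagan stands in as one common heteroskedasticity test):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from scipy.stats import shapiro
        from statsmodels.stats.stattools import durbin_watson
        from statsmodels.stats.diagnostic import het_breuschpagan
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(2)
        n = 120
        df = pd.DataFrame({
            "education": rng.integers(3, 18, n),  # years of schooling
            "income": rng.normal(300, 80, n),     # household income
            "residents": rng.integers(1, 8, n),   # people per household
        })
        df["rsw_pc"] = (0.4 + 0.002 * df["income"] - 0.03 * df["residents"]
                        + rng.normal(0, 0.1, n))  # kg/capita/day (simulated)

        X = sm.add_constant(df[["education", "income", "residents"]])
        fit = sm.OLS(df["rsw_pc"], X).fit()
        print("R^2:", fit.rsquared)

        # Diagnostics named in the abstract:
        print("normality p:", shapiro(fit.resid).pvalue)
        print("Durbin-Watson:", durbin_watson(fit.resid))  # ~2 = no autocorrelation
        print("Breusch-Pagan p:", het_breuschpagan(fit.resid, X)[1])
        print("VIFs:", [variance_inflation_factor(X.values, i)
                        for i in range(1, X.shape[1])])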

  13. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for the development of alternative methods for screening and prediction, due to the large number of chemicals of potential concern and the tremendous cost (in time, money, and animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex cellular processes that are only partially understood. Advances in technologies and the generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the database and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include Vitotox™, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-throughput…

  14. The ECHAM3 atmospheric general circulation model

    International Nuclear Information System (INIS)

    1993-09-01

    The ECHAM model has been developed from the ECMWF model (cycle 31, November 1988). It contains several changes, mostly in the parameterization, in order to adjust the model for climate simulations. The technical details of the ECHAM operational model are described. (orig./KW)

  15. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers, and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned were bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis.

  16. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  17. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (α) and layer thickness (L) on the dimensional performance of FDM parts, using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole α range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior in a wide range of deposition angles.

  19. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  20. Predictive modelling of evidence informed teaching

    OpenAIRE

    Zhang, Dell; Brown, C.

    2017-01-01

    In this paper, we analyse the questionnaire survey data collected from 79 English primary schools about the situation of evidence informed teaching, where the evidence could come from research journals or conferences. Specifically, we build a predictive model to see what external factors could help to close the gap between teachers' belief and behaviour in evidence informed teaching, which is the first of its kind to our knowledge. The major challenge, from the data mining perspective, is the…

  1. A Predictive Model for Cognitive Radio

    Science.gov (United States)

    2006-09-14

    Vadde et al. have applied response surface methodology to produce a model for prediction of the response in a given situation. Cited work includes: K. K. Vadde and V. R. Syrotiuk, "Factor interaction on service delivery in mobile ad hoc networks"; and K. K. Vadde, M.-V. R. Syrotiuk, and D. C. Montgomery (2004).

  2. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolutions from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties in rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough…

  3. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  4. Urban background noise mapping: the general model

    NARCIS (Netherlands)

    Wei, W.; Botteldooren, D.; Renterghem, T. van; Hornikx, M.; Forssen, J.; Salomons, E.; Ogren, M.

    2014-01-01

    Surveys show that inhabitants of dwellings exposed to high noise levels benefit from having access to a quiet side. However, current practice in noise prediction often underestimates the noise levels at a shielded façade. Multiple reflections between façades in street canyons and inner yards are…

  5. Why are predictions of general relativity theory for gravitational effects non-unique?

    International Nuclear Information System (INIS)

    Loskutov, Yu.M.

    1990-01-01

    The reasons for the non-uniqueness of the predictions of general relativity theory (GRT) for gravitational effects are analyzed in detail. In the author's opinion, the absence of a mechanism for comparing curved and plane metrics is the reason for this non-uniqueness.

  6. General problems of modeling for accelerators

    International Nuclear Information System (INIS)

    Luccio, A.

    1991-01-01

    In this presentation the author discusses only problems of modeling for circular accelerators, basing the examples on the AGS Booster Synchrotron presently being commissioned at BNL. A model is a platonic representation of an accelerator. With algorithms, implemented through computer codes, the model is brought to life. At the start of a new accelerator project, the model and the real machine take shape somewhat apart; they get closer and closer as the project goes on. Ideally, the modeler is only satisfied when the model and the machine cannot be distinguished. Accelerator modeling for real-time control has specific problems: if one wants fast responses, algorithms may be implemented in hardware or by parallel computation, perhaps by neural networks. Algorithms and modeling are not only for accelerator control. They also serve accelerator parameter measurement; hardware problem debugging, perhaps with some help from artificial intelligence; and operator training, much like a flight simulator.

  7. generalized constitutive model for stabilized quick clay

    African Journals Online (AJOL)

    Pancras Mugishagwe Bujulu and Gustav Grimstad. ABSTRACT: An experimentally-based two yield surface constitutive model for cemented quick clay has been … the Clay Model, the Koiter Rule and two Mapping Rules. … models where a mobilization formulation is used; this is independent of q.

  8. Towards a General Model of Temporal Discounting

    Science.gov (United States)

    van den Bos, Wouter; McClure, Samuel M.

    2013-01-01

    Psychological models of temporal discounting have now successfully displaced classical economic theory due to the simple fact that many common behavior patterns, such as impulsivity, were unexplainable with classic models. However, the now dominant hyperbolic model of discounting is itself becoming increasingly strained. Numerous factors have…

  9. Development of a generalized integral jet model

    DEFF Research Database (Denmark)

    Duijm, Nijs Jan; Kessler, A.; Markert, Frank

    2017-01-01

    … a second model is needed to describe the rapid combustion of the flammable part of the plume (flash fire), and a third model has to be applied to the remaining jet fire. The objective of this paper is to describe the first steps of the development of an integral-type model describing the transient development …

  10. Generalized Predictive Control of Dynamic Systems with Rigid-Body Modes

    Science.gov (United States)

    Kvaternik, Raymond G.

    2013-01-01

    Numerical simulations to assess the effectiveness of Generalized Predictive Control (GPC) for active control of dynamic systems having rigid-body modes are presented. GPC is a linear, time-invariant, multi-input/multi-output predictive control method that uses an ARX model to characterize the system and to design the controller. Although the method can accommodate both embedded (implicit) and explicit feedforward paths for incorporation of disturbance effects, only the case of embedded feedforward in which the disturbances are assumed to be unknown is considered here. Results from numerical simulations using mathematical models of both a free-free three-degree-of-freedom mass-spring-dashpot system and the XV-15 tiltrotor research aircraft are presented. In regulation mode operation, which calls for zero system response in the presence of disturbances, the simulations showed reductions of nearly 100%. In tracking mode operations, where the system is commanded to follow a specified path, the GPC controllers produced the desired responses, even in the presence of disturbances.
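
    A compact single-input, single-output sketch of the receding-horizon machinery (illustrative only: it assumes a hypothetical ARX plant, uses a plain least-squares control law in u rather than the CARIMA/Delta-u formulation of full GPC, and runs in regulation mode, driving the output to zero after a disturbance, as in the simulations above):

        import numpy as np

        # Hypothetical SISO ARX plant: y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1]
        a1, a2, b1 = 1.4, -0.45, 0.5
        Np, lam = 12, 0.1  # prediction horizon and control-effort weight

        def impulse_response(n):
            y, u = np.zeros(n + 2), np.zeros(n + 2)
            u[0] = 1.0
            for t in range(1, n + 2):
                y[t] = a1 * y[t-1] + a2 * y[t-2] + b1 * u[t-1]
            return y[1:n + 1]

        # Dynamic (forced-response) matrix G built from the impulse response.
        h = impulse_response(Np)
        G = np.array([[h[i - j] if i >= j else 0.0 for j in range(Np)]
                      for i in range(Np)])

        def free_response(y1, y0):
            # Predicted outputs over the horizon with future inputs held at zero.
            f, ya, yb = [], y1, y0
            for _ in range(Np):
                ya, yb = a1 * ya + a2 * yb, ya
                f.append(ya)
            return np.array(f)

        # Minimizing ||f + G u||^2 + lam*||u||^2 gives u = -K f.
        K = np.linalg.solve(G.T @ G + lam * np.eye(Np), G.T)

        y = [0.0, 1.0]  # unit disturbance on the output at t = 1
        for t in range(1, 30):
            u0 = (-K @ free_response(y[t], y[t-1]))[0]  # apply first input only
            y.append(a1 * y[t] + a2 * y[t-1] + b1 * u0)
        print(np.round(y[-5:], 4))  # the regulated output settles near zero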

  11. An international model to predict recurrent cardiovascular disease.

    Science.gov (United States)

    Wilson, Peter W F; D'Agostino, Ralph; Bhatt, Deepak L; Eagle, Kim; Pencina, Michael J; Smith, Sidney C; Alberts, Mark J; Dallongeville, Jean; Goto, Shinya; Hirsch, Alan T; Liau, Chiau-Suong; Ohman, E Magnus; Röther, Joachim; Reid, Christopher; Mas, Jean-Louis; Steg, Ph Gabriel

    2012-07-01

    Prediction models for cardiovascular events and cardiovascular death in patients with established cardiovascular disease are not generally available. Participants from the prospective REduction of Atherothrombosis for Continued Health (REACH) Registry provided a global outpatient population with known cardiovascular disease at entry. Cardiovascular prediction models were estimated from the 2-year follow-up data of 49,689 participants from around the world. A developmental prediction model was estimated from 33,419 randomly selected participants (2394 cardiovascular events with 1029 cardiovascular deaths) from the pool of 49,689. The number of vascular beds with clinical disease, diabetes, smoking, low body mass index, history of atrial fibrillation, cardiac failure, and history of cardiovascular event(s) <1 year before baseline examination increased risk of a subsequent cardiovascular event. Statin (hazard ratio 0.75; 95% confidence interval, 0.69-0.82) and acetylsalicylic acid therapy (hazard ratio 0.90; 95% confidence interval, 0.83-0.99) also were significantly associated with reduced risk of cardiovascular events. The prediction model was validated in the remaining 16,270 REACH subjects (1172 cardiovascular events, 494 cardiovascular deaths). Risk of cardiovascular death was similarly estimated with the same set of risk factors. Simple algorithms were developed for prediction of overall cardiovascular events and for cardiovascular death. This study establishes and validates a risk model to predict secondary cardiovascular events and cardiovascular death in outpatients with established atherothrombotic disease. Traditional risk factors, burden of disease, lack of treatment, and geographic location all are related to an increased risk of subsequent cardiovascular morbidity and cardiovascular mortality. Copyright © 2012 Elsevier Inc. All rights reserved.
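
    Models of this kind are typically Cox proportional-hazards regressions. The sketch below (simulated data with hypothetical effect sizes chosen to point in the same directions as the abstract, e.g. exp(-0.29) ≈ 0.75 echoing the statin hazard ratio; the lifelines package is assumed) shows the estimation step:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(3)
        n = 1000
        df = pd.DataFrame({
            "vascular_beds": rng.integers(1, 4, n),  # diseased vascular beds
            "diabetes": rng.integers(0, 2, n),
            "smoking": rng.integers(0, 2, n),
            "statin": rng.integers(0, 2, n),
        })
        # Simulated 2-year follow-up: disease burden raises risk, statin lowers it.
        hazard = 0.05 * np.exp(0.5 * df.vascular_beds + 0.4 * df.diabetes
                               + 0.3 * df.smoking - 0.29 * df.statin)
        t = rng.exponential(1 / hazard)
        df["time"] = np.minimum(t, 2.0)         # administrative censoring at 2 years
        df["event"] = (t <= 2.0).astype(int)

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="event")
        cph.print_summary()  # hazard ratios = exp(coef), as quoted in the abstract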

  12. General movements in early infancy predict neuromotor development at 9 to 12 years of age

    NARCIS (Netherlands)

    Groen, SE; de Blecourt, ACE; Postema, K; Hadders-Algra, M

    2005-01-01

    Assessment of the quality of general movements (GMs) in early infancy is a powerful instrument to predict cerebral palsy (CP). The aim of the present study is to explore the value of GM assessment in predicting minor neurological dysfunction (MND) at 9 to 12 years of age. Two groups of infants were…

  13. General Theory versus ENA Theory: Comparing Their Predictive Accuracy and Scope.

    Science.gov (United States)

    Ellis, Lee; Hoskin, Anthony; Hartley, Richard; Walsh, Anthony; Widmayer, Alan; Ratnasingam, Malini

    2015-12-01

    General theory attributes criminal behavior primarily to low self-control, whereas evolutionary neuroandrogenic (ENA) theory envisions criminality as being a crude form of status-striving promoted by high brain exposure to androgens. General theory predicts that self-control will be negatively correlated with risk-taking, while ENA theory implies that these two variables should actually be positively correlated. According to ENA theory, traits such as pain tolerance and muscularity will be positively associated with risk-taking and criminality while general theory makes no predictions concerning these relationships. Data from Malaysia and the United States are used to test 10 hypotheses derived from one or both of these theories. As predicted by both theories, risk-taking was positively correlated with criminality in both countries. However, contrary to general theory and consistent with ENA theory, the correlation between self-control and risk-taking was positive in both countries. General theory's prediction of an inverse correlation between low self-control and criminality was largely supported by the U.S. data but only weakly supported by the Malaysian data. ENA theory's predictions of positive correlations between pain tolerance, muscularity, and offending were largely confirmed. For the 10 hypotheses tested, ENA theory surpassed general theory in predictive scope and accuracy. © The Author(s) 2014.

  14. Design of a generalized predictive controller for a biological wastewater treatment plant.

    Science.gov (United States)

    Sadeghassadi, M; Macnab, C J B; Westwick, D

    2016-01-01

    This paper presents a generalized predictive control (GPC) technique to regulate the activated sludge process found in a bioreactor used in wastewater treatment. The control strategy can track dissolved oxygen setpoint changes quickly, adapting to system uncertainties and disturbances. Tests are carried out on an Activated Sludge Model No. 1 benchmark of the activated sludge process. A T-filter added to the GPC framework results in an effective control strategy in the presence of coloured measurement noise. This work also suggests how a constraint on the measured variable can be added as a penalty term to the GPC framework, which leads to improved control of the dissolved oxygen concentration in the presence of dynamic input disturbances.

  15. Prediction of chronic critical illness in a general intensive care unit

    Directory of Open Access Journals (Sweden)

    Sérgio H. Loss

    2013-06-01

    Full Text Available OBJECTIVE: To assess the incidence, costs, and mortality associated with chronic critical illness (CCI), and to identify clinical predictors of CCI in a general intensive care unit. METHODS: This was a prospective observational cohort study. All patients receiving supportive treatment for over 20 days were considered chronically critically ill and eligible for the study. After applying the exclusion criteria, 453 patients were analyzed. RESULTS: There was an 11% incidence of CCI. Total length of hospital stay, costs, and mortality were significantly higher among patients with CCI. Mechanical ventilation, sepsis, Glasgow score < 15, inadequate calorie intake, and higher body mass index were independent predictors for CCI in the multivariate logistic regression model. CONCLUSIONS: CCI affects a distinctive population in intensive care units, with higher mortality, higher costs, and prolonged hospitalization. Factors identifiable at the time of admission or during the first week in the intensive care unit can be used to predict CCI.

  16. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency's Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g., solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, thereby impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning-based models for classifying chemicals in terms of their likely functional roles in products, based on structure, was developed. This effort required collection, curation, and harmonization of publicly-available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemical…

  17. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  18. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see a wider application.
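
    The modeling step can be sketched with scikit-learn's RandomForestClassifier (synthetic stand-in data; the predictors and effect sizes are assumptions, not the study's variables), reporting the same metrics the authors quote:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import confusion_matrix, roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        n = 198  # cohort size from the abstract
        X = np.column_stack([
            rng.integers(20, 95, n),  # age (hypothetical predictor)
            rng.integers(0, 2, n),    # prior antibiotic exposure (hypothetical)
            rng.integers(0, 2, n),    # proton-pump inhibitor use (hypothetical)
        ])
        logit = -2 + 0.03 * (X[:, 0] - 60) + 1.2 * X[:, 1] + 0.8 * X[:, 2]
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated recurrence labels

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(Xtr, ytr)
        prob = rf.predict_proba(Xte)[:, 1]

        tn, fp, fn, tp = confusion_matrix(yte, (prob > 0.5).astype(int)).ravel()
        print("sensitivity:", tp / (tp + fn))
        print("specificity:", tn / (tn + fp))
        print("AUC:", roc_auc_score(yte, prob))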

  19. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents an effort to apply neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. A parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
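
    A minimal back-propagation sketch of such a strength model (scikit-learn's MLPRegressor on simulated mix designs; the inputs echo the abstract, cement content, w/c ratio, MAS and slump, but the data and the synthetic strength formula are assumptions):

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(5)
        n = 300
        cement = rng.uniform(250, 500, n)  # kg/m^3
        wc = rng.uniform(0.35, 0.65, n)    # water/cement ratio
        mas = rng.choice([10, 20, 40], n)  # maximum aggregate size, mm
        slump = rng.uniform(25, 150, n)    # mm
        # Synthetic 28-day strength, dominated by w/c as the parametric study found.
        fc = (95 * np.exp(-1.8 * wc) + 0.01 * cement - 0.02 * slump
              + rng.normal(0, 2, n))

        X = np.column_stack([cement, wc, mas, slump])
        net = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(16, 8),
                                         max_iter=5000, random_state=0))
        net.fit(X, fc)  # trained by back-propagation (Adam optimizer by default)
        print("R^2:", net.score(X, fc))
        print("predicted strength (MPa):", net.predict([[350, 0.45, 20, 75]]))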

  20. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn in the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to deliver only software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we conclude by suggesting directions for further studies.

  1. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR) since they allow an estimation of the regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  2. PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION

    Directory of Open Access Journals (Sweden)

    Narciso Ysac Avila Serrano

    2009-06-01

    Full Text Available With the objective of characterizing the grain yield of five cowpea cultivars and finding linear regression models to predict it, a study was developed in La Paz, Baja California Sur, Mexico. A complete randomized blocks design was used. Simple and multivariate analyses of variance were carried out, using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P ≤ 0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥ 0.92. In these models, seed weight per plant, pods per cluster, pods per plant, clusters per plant and pod length showed significant correlations (P ≤ 0.05). In conclusion, the results showed that grain yield differs among cultivars and that, for its estimation, the prediction models give highly dependable coefficients of determination.

  3. Advective transport in heterogeneous aquifers: Are proxy models predictive?

    Science.gov (United States)

    Fiori, A.; Zarlenga, A.; Gotovac, H.; Jankovic, I.; Volpi, E.; Cvetkovic, V.; Dagan, G.

    2015-12-01

    We examine the prediction capability of two approximate models (Multi-Rate Mass Transfer (MRMT) and Continuous Time Random Walk (CTRW)) of non-Fickian transport, by comparison with accurate 2-D and 3-D numerical simulations. Both nonlocal-in-time approaches circumvent the need to solve the flow and transport equations by using proxy models for advection, providing the breakthrough curves (BTC) at control planes at any x, depending on a vector of five unknown parameters. Although underlain by different mechanisms, the two models have an identical structure in the Laplace transform domain and have the Markovian property of independent transitions. We show that the numerical BTCs also enjoy the Markovian property. Following the procedure recommended in the literature, from a practitioner's perspective, we first calibrate the parameter values by a best fit with the numerical BTC at a control plane at x1, close to the injection plane, and subsequently use them for prediction at further control planes for a few values of σY² ≤ 8. Due to their similar structure and Markovian property, the two methods perform equally well in matching the numerical BTC. The identified parameters are generally not unique, making their identification somewhat arbitrary. The inverse Gaussian model and the recently developed Multi-Indicator Model (MIM), which does not require any fitting as it relates the BTC to the permeability structure, are also discussed. The application of the proxy models for prediction requires carrying out transport field tests of large plumes for a long duration.
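
    The inverse Gaussian model mentioned above admits a compact illustration: a one-dimensional first-arrival-time density fitted to a sampled BTC (a sketch with synthetic data; the parameter values are not from the study):

        import numpy as np
        from scipy.optimize import curve_fit

        def ig_btc(t, mu, lam):
            """Inverse Gaussian first-arrival-time density, a classical BTC model."""
            return (np.sqrt(lam / (2 * np.pi * t**3))
                    * np.exp(-lam * (t - mu)**2 / (2 * mu**2 * t)))

        # Hypothetical BTC sampled at a control plane (dimensionless time).
        t = np.linspace(0.1, 10, 200)
        btc_obs = ig_btc(t, 2.0, 5.0) + np.random.default_rng(2).normal(0, 0.002, t.size)

        # Calibrate the two parameters by least squares, as one would at plane x1.
        (mu_hat, lam_hat), _ = curve_fit(ig_btc, t, btc_obs, p0=(1.0, 1.0),
                                         bounds=(1e-6, np.inf))
        print(f"fitted mean arrival time mu = {mu_hat:.2f}, shape lambda = {lam_hat:.2f}")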

  4. A general relativistic hydrostatic model for a galaxy

    International Nuclear Information System (INIS)

    Hojman, R.; Pena, L.; Zamorano, N.

    1991-08-01

    The existence of huge amounts of mass lying at the centers of some galaxies has been inferred from data gathered at different wavelengths. It seems reasonable, then, to incorporate general relativity in the study of these objects. A general relativistic hydrostatic model for a galaxy is studied. We assume that the galaxy is dominated by the dark mass except at the nucleus, where the luminous matter prevails. The model considers four different concentric spherically symmetric regions, properly matched and with a specific equation of state for each of them. It yields a slowly rising orbital velocity for a test particle moving in the background gravitational field of the dark matter region. In this sense we think of this model as representing a spiral galaxy. The dependence of mass on radius in cluster and field spiral galaxies published recently can be used to fix the size of the inner luminous core. A vanishing pressure at the edge of the galaxy and the assumption of hydrostatic equilibrium everywhere generate a jump in the density and the orbital velocity at the shell enclosing the galaxy. This is a prediction of the model. The ratios between the sizes of the core and the shells introduced here are proportional to their densities; in this sense the model is scale invariant. It can be used to reproduce a galaxy or the central region of a galaxy. We have also compared our results with those obtained with the Newtonian isothermal sphere. Luminosity is not included in our model as an extra variable in the determination of the orbital velocity. (author). 29 refs, 10 figs

  5. Regression Model to Predict Global Solar Irradiance in Malaysia

    Directory of Open Access Journals (Sweden)

    Hairuniza Ahmed Kutty

    2015-01-01

    Full Text Available A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed based on different available meteorological parameters, including temperature, cloud cover, rain precipitation, relative humidity, wind speed, pressure, and gust speed, by implementing regression analysis. This paper reports the details of the analysis of the effect of each prediction parameter to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and the coefficient of determination (R²) with other models available from literature studies. Seven models based on single parameters (PM1 to PM7) and five multiple-parameter models (PM7 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R² ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia lacks sufficient influence when included in multiple-parameter models, although it performs fairly well in single-parameter prediction models.
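
    The three goodness-of-fit statistics used above are simple to compute; a sketch with hypothetical monthly irradiance values:

        import numpy as np

        def rmse(obs, pred):
            return np.sqrt(np.mean((pred - obs) ** 2))

        def mbe(obs, pred):
            # Positive values indicate systematic over-prediction.
            return np.mean(pred - obs)

        def r2(obs, pred):
            ss_res = np.sum((obs - pred) ** 2)
            ss_tot = np.sum((obs - np.mean(obs)) ** 2)
            return 1 - ss_res / ss_tot

        # Hypothetical observed vs. predicted monthly irradiance (MJ/m2/day).
        obs = np.array([14.2, 15.1, 16.3, 17.0, 16.5, 15.8])
        pred = np.array([14.0, 15.4, 16.1, 17.3, 16.2, 15.9])
        print("RMSE:", rmse(obs, pred), "MBE:", mbe(obs, pred), "R2:", r2(obs, pred))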

  6. Predictive Models for Normal Fetal Cardiac Structures.

    Science.gov (United States)

    Krishnan, Anita; Pike, Jodi I; McCarter, Robert; Fulgium, Amanda L; Wilson, Emmanuel; Donofrio, Mary T; Sable, Craig A

    2016-12-01

    Clinicians rely on age- and size-specific measures of cardiac structures to diagnose cardiac disease. No universally accepted normative data exist for fetal cardiac structures, and most fetal cardiac centers do not use the same standards. The aim of this study was to derive predictive models for Z scores for 13 commonly evaluated fetal cardiac structures using a large heterogeneous population of fetuses without structural cardiac defects. The study used archived normal fetal echocardiograms in representative fetuses aged 12 to 39 weeks. Thirteen cardiac dimensions were remeasured by a blinded echocardiographer from digitally stored clips. Studies with inadequate imaging views were excluded. Regression models were developed to relate each dimension to estimated gestational age (EGA) by dates, biparietal diameter, femur length, and estimated fetal weight by the Hadlock formula. Dimension outcomes were transformed (e.g., using the logarithm or square root) as necessary to meet the normality assumption. Higher order terms, quadratic or cubic, were added as needed to improve model fit. Information criteria and adjusted R² values were used to guide final model selection. Each Z-score equation is based on measurements derived from 296 to 414 unique fetuses. EGA yielded the best predictive model for the majority of dimensions; adjusted R² values ranged from 0.72 to 0.893. However, each of the other highly correlated (r > 0.94) biometric parameters was an acceptable surrogate for EGA. In most cases, the best fitting model included squared and cubic terms to introduce curvilinearity. For each dimension, models based on EGA provided the best fit for determining normal measurements of fetal cardiac structures. Nevertheless, other biometric parameters, including femur length, biparietal diameter, and estimated fetal weight, provided results that were nearly as good. Comprehensive Z-score results are available on the basis of highly predictive models derived from gestational...
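
    A minimal sketch of the Z-score construction described above, assuming a cubic fit in EGA and a residual standard deviation estimated from synthetic reference data (the coefficients below are illustrative, not the published equations):

        import numpy as np

        # Hypothetical training data: EGA (weeks) and a cardiac dimension (mm).
        rng = np.random.default_rng(3)
        ega = rng.uniform(12, 39, 300)
        dim = 0.002 * ega**3 - 0.15 * ega**2 + 4.0 * ega - 20 + rng.normal(0, 1.0, 300)

        # Cubic regression on EGA, with curvilinearity from the higher-order terms.
        coeffs = np.polyfit(ega, dim, deg=3)
        resid_sd = np.std(dim - np.polyval(coeffs, ega), ddof=4)

        def z_score(measured_mm, ega_weeks):
            """Z = (measured - predicted) / residual SD of the reference model."""
            return (measured_mm - np.polyval(coeffs, ega_weeks)) / resid_sd

        print(z_score(25.0, 28))   # how unusual is 25 mm at 28 weeks?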

  7. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  8. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbeads during the sheet metal forming process. This model accounts for the material's anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  9. Generalized constitutive model for stabilized quick clay

    African Journals Online (AJOL)

    An experimentally-based two yield surface constitutive model for cemented quick clay has been developed at NTNU, Norway, to reproduce the mechanical behavior of the stabilized quick clay in the triaxial p'-q stress space. The model takes into account the actual mechanical properties of the stabilized material, such as ...

  10. Stratospheric General Circulation with Chemistry Model (SGCCM)

    Science.gov (United States)

    Rood, Richard B.; Douglass, Anne R.; Geller, Marvin A.; Kaye, Jack A.; Nielsen, J. Eric; Rosenfield, Joan E.; Stolarski, Richard S.

    1990-01-01

    In the past two years constituent transport and chemistry experiments have been performed using both simple single constituent models and more complex reservoir species models. Winds for these experiments have been taken from the data assimilation effort, Stratospheric Data Analysis System (STRATAN).

  11. Equilibrium in Generalized Cournot and Stackelberg Models

    NARCIS (Netherlands)

    Bulavsky, V.A.; Kalashnikov, V.V.

    1999-01-01

    A model of an oligopolistic market with a homogeneous product is examined. Each subject of the model uses a conjecture about the market response to variations of its production volume. The conjecture value depends upon both the current total volume of production in the market and the subject's...

  12. Generalized coupling in the Kuramoto model

    DEFF Research Database (Denmark)

    Filatrella, G.; Pedersen, Niels Falsig; Wiesenfeld, K.

    2007-01-01

    We propose a modification of the Kuramoto model to account for the effective change in the coupling constant among the oscillators, as suggested by some experiments on Josephson junctions, laser arrays, and mechanical systems, where the active elements are turned on one by one. The resulting model...... with the behavior of Josephson junctions coupled via a cavity.
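
    A sketch of the idea in mean-field form, where oscillators are switched on one by one and only the active set contributes to the coupling (the switch-on schedule and scaling below are illustrative assumptions, not the authors' exact modification):

        import numpy as np

        rng = np.random.default_rng(4)
        N, dt, steps = 50, 0.01, 20000
        omega = rng.normal(1.0, 0.1, N)           # natural frequencies
        theta = rng.uniform(0, 2 * np.pi, N)      # initial phases
        K = 1.5                                   # coupling constant

        for step in range(steps):
            # Active elements are turned on one by one (illustrative schedule).
            n_active = min(N, 1 + step // 400)
            th = theta[:n_active]
            # Mean-field form of the Kuramoto interaction over the active set.
            r = np.mean(np.exp(1j * th))
            theta[:n_active] = th + dt * (omega[:n_active]
                                          + K * np.abs(r) * np.sin(np.angle(r) - th))

        r_final = np.abs(np.mean(np.exp(1j * theta)))
        print("final order parameter r =", round(r_final, 3))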

  13. Hybrid Model for Early Onset Prediction of Driver Fatigue with Observable Cues

    Directory of Open Access Journals (Sweden)

    Mingheng Zhang

    2014-01-01

    Full Text Available This paper presents a hybrid model for early onset prediction of driver fatigue, which is a major cause of severe traffic accidents. The proposed method divides the prediction problem into three stages: an SVM-based model for predicting the early onset driver fatigue state, a GA-based model for optimizing the parameters of the SVM, and a PCA-based model for reducing the dimensionality of the complex feature datasets. The model and algorithm are illustrated with driving experiment data, and comparison results show that the hybrid method generally provides better performance for driver fatigue state prediction.
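
    The three-stage pipeline can be sketched with scikit-learn as follows; exhaustive grid search stands in for the paper's GA-based parameter optimization, and the features are synthetic stand-ins for the observable fatigue cues:

        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical observable cues (eyelid closure, steering entropy, ...).
        X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                                   random_state=5)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5)

        pipe = Pipeline([("scale", StandardScaler()),
                         ("pca", PCA(n_components=10)),   # dimensionality reduction
                         ("svm", SVC())])
        # Grid search here stands in for the paper's GA-based tuning of SVM parameters.
        grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10],
                                   "svm__gamma": ["scale", 0.01]})
        grid.fit(X_tr, y_tr)
        print("test accuracy:", grid.score(X_te, y_te))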

  14. Multiloop functional renormalization group for general models

    Science.gov (United States)

    Kugler, Fabian B.; von Delft, Jan

    2018-02-01

    We present multiloop flow equations in the functional renormalization group (fRG) framework for the four-point vertex and self-energy, formulated for a general fermionic many-body problem. This generalizes the previously introduced vertex flow [F. B. Kugler and J. von Delft, Phys. Rev. Lett. 120, 057403 (2018), 10.1103/PhysRevLett.120.057403] and provides the necessary corrections to the self-energy flow in order to complete the derivative of all diagrams involved in the truncated fRG flow. Due to its iterative one-loop structure, the multiloop flow is well suited for numerical algorithms, enabling improvement of many fRG computations. We demonstrate its equivalence to a solution of the (first-order) parquet equations in conjunction with the Schwinger-Dyson equation for the self-energy.

  15. Predicting the future completing models of observed complex systems

    CERN Document Server

    Abarbanel, Henry

    2013-01-01

    Predicting the Future: Completing Models of Observed Complex Systems provides a general framework for the discussion of model building and validation across a broad spectrum of disciplines. This is accomplished through the development of an exact path integral for use in transferring information from observations to a model of the observed system. Through many illustrative examples drawn from models in neuroscience, fluid dynamics, geosciences, and nonlinear electrical circuits, the concepts are exemplified in detail. Practical numerical methods for approximate evaluations of the path integral are explored, and their use in designing experiments and determining a model's consistency with observations is investigated. Using highly instructive examples, the problems of data assimilation and the means to treat them are clearly illustrated. This book will be useful for students and practitioners of physics, neuroscience, regulatory networks, meteorology and climate science, network dynamics, fluid dynamics, and o...

  16. MOTORCYCLE CRASH PREDICTION MODEL FOR NON-SIGNALIZED INTERSECTIONS

    Directory of Open Access Journals (Sweden)

    S. HARNEN

    2003-01-01

    Full Text Available This paper attempts to develop a prediction model for motorcycle crashes at non-signalized intersections on urban roads in Malaysia. The Generalized Linear Modeling approach was used to develop the model. The final model revealed that an increase in motorcycle and non-motorcycle flows entering an intersection is associated with an increase in motorcycle crashes. Non-motorcycle flow on the major road had the greatest effect on the probability of motorcycle crashes. Approach speed, lane width, number of lanes, shoulder width and land use were also found to be significant in explaining motorcycle crashes. The model should assist traffic engineers in deciding on appropriate intersection treatments specifically designed for non-exclusive motorcycle lane facilities.
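
    Crash-frequency models of this kind are commonly fitted as Poisson GLMs with a log link; a sketch with synthetic intersection data (the covariates and coefficients are illustrative, not the paper's fitted model):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Hypothetical intersection-level data.
        rng = np.random.default_rng(6)
        n = 120
        df = pd.DataFrame({
            "mc_flow": rng.uniform(500, 5000, n),     # motorcycle flow entering
            "nmc_flow": rng.uniform(2000, 20000, n),  # non-motorcycle flow, major road
            "speed": rng.uniform(30, 70, n),          # approach speed (km/h)
        })
        lam = np.exp(-9 + 0.6 * np.log(df.mc_flow) + 0.5 * np.log(df.nmc_flow)
                     + 0.01 * df.speed)
        df["crashes"] = rng.poisson(lam)

        # Poisson GLM with log link, the standard crash-frequency formulation.
        model = smf.glm("crashes ~ np.log(mc_flow) + np.log(nmc_flow) + speed",
                        data=df, family=sm.families.Poisson()).fit()
        print(model.summary())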

  17. Predictive Control, Competitive Model Business Planning, and Innovation ERP

    DEFF Research Database (Denmark)

    Nourani, Cyrus F.; Lauth, Codrina

    2015-01-01

    New optimality principles are put forth based on competitive model business planning. A generalized MinMax local optimum dynamic programming algorithm is presented and applied to business model computing, where predictive techniques can determine local optima. Based on a systems model, an enterprise...... is not viewed as the sum of its component elements, but the product of their interactions. The paper starts by introducing a systems approach to business modeling. A competitive business modeling technique, based on the author's planning techniques, is applied. Systemic decisions are based on common...... organizational goals, and as such business planning and resource assignments should strive to satisfy higher organizational goals. It is critical to understand how different decisions affect and influence one another. Here, a business planning example is presented where systems thinking techniques, using Causal...

  18. Generalized Degrees of Freedom and Adaptive Model Selection in Linear Mixed-Effects Models.

    Science.gov (United States)

    Zhang, Bo; Shen, Xiaotong; Mumford, Sunni L

    2012-03-01

    Linear mixed-effects models involve fixed effects, random effects and covariance structure, which require model selection to simplify a model and to enhance its interpretability and predictability. In this article, we develop, in the context of linear mixed-effects models, the generalized degrees of freedom and an adaptive model selection procedure defined by a data-driven model complexity penalty. Numerically, the procedure performs well against its competitors not only in selecting fixed effects but in selecting random effects and covariance structure as well. Theoretically, asymptotic optimality of the proposed methodology is established over a class of information criteria. The proposed methodology is applied to the BioCycle study, to determine predictors of hormone levels among premenopausal women and to assess variation in hormone levels both between and within women across the menstrual cycle.

  19. Generalized plasma skimming model for cells and drug carriers in the microvasculature.

    Science.gov (United States)

    Lee, Tae-Rin; Yoo, Sung Sic; Yang, Jiho

    2017-04-01

    In microvascular transport, where both blood and drug carriers are involved, plasma skimming plays a key role in changing the hematocrit level and drug carrier concentration in capillary beds after continuous vessel bifurcation in the microvasculature. While there have been numerous studies on modeling the plasma skimming of blood, previous work lacked consideration of its interaction with drug carriers. In this paper, a generalized plasma skimming model is suggested to predict the redistributions of both cells and drug carriers at each bifurcation. In order to examine its applicability, this new model was applied to a single bifurcation system to predict the redistribution of red blood cells and drug carriers. Furthermore, the model was tested at the microvascular network level under different plasma skimming conditions to predict the concentration of drug carriers. Based on these results, the applicability of this generalized plasma skimming model is fully discussed, and future work along with the model's limitations is summarized.
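
    As a toy illustration of plasma skimming at a single bifurcation, one can use an empirical power-law splitting rule with a phase-dependent exponent (this particular rule and its exponents are illustrative assumptions, not the generalized model of the paper):

        def skim_fraction(q_frac, m):
            """Fraction of a suspended phase entering the daughter branch that
            receives fractional blood flow q_frac. The exponent m tunes the
            skimming strength (m = 1 reduces to proportional splitting); this
            power-law form is an illustrative stand-in for the generalized model."""
            return q_frac**m / (q_frac**m + (1.0 - q_frac)**m)

        q = 0.3    # daughter branch takes 30% of the blood flow
        for label, m in [("red blood cells", 1.8), ("drug carriers", 1.2)]:
            f = skim_fraction(q, m)
            # Concentration changes because particles and plasma split unequally.
            print(f"{label}: {100*f:.1f}% of particles follow 30% of the flow")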

  20. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by the advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally reserved for desktop applications. Web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable, model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward management and analysis of large data sets. Web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  1. [Endometrial cancer: Predictive models and clinical impact].

    Science.gov (United States)

    Bendifallah, Sofiane; Ballester, Marcos; Daraï, Emile

    2017-12-01

    In France, in 2015, endometrial cancer (EC) was the most frequent gynecological cancer in terms of incidence and the fourth most common cancer among women. About 8151 new cases and nearly 2179 deaths were reported. Treatments (surgery, external radiotherapy, brachytherapy and chemotherapy) are currently delivered on the basis of an estimate of the recurrence risk, of lymph node metastasis or of survival probability. This risk is determined on the basis of prognostic factors (clinical, histological, imaging, biological) taken alone or grouped together in classification systems, which are currently insufficient to account for the evolutionary and prognostic heterogeneity of endometrial cancer. For endometrial cancer, the concept of mathematical modeling and its application to prediction have developed in recent years. These biomathematical tools have opened a new era of care oriented towards the promotion of targeted therapies and personalized treatments. Many predictive models have been published to estimate the risk of recurrence and lymph node metastasis, but only a tiny fraction of them are sufficiently relevant and of clinical utility. The avenues for optimization are multiple and varied, suggesting that these mathematical models may find a place in clinical practice in the near future. The development of high-throughput genomics is likely to offer a more detailed molecular characterization of the disease and its heterogeneity. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  2. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  3. A predictive model for indoor radon occurrences - A first approximation

    International Nuclear Information System (INIS)

    LeGrand, H.E.

    1987-01-01

    Knowledge of how radon gas is transmitted in the shallow ground environment and how it emanates into buildings is grossly incomplete. Admittedly, some excellent research studies have been conducted, and some general associations between certain aspects of the environment and radon occurrences in buildings are recognized. Yet a technique for precisely predicting indoor radon concentrations is not likely to be developed soon. As knowledge increases, successive approximations toward a final predictive model may be required. An early approximation of a predictive model for indoor radon is presented in this paper. It applies specifically to the crystalline rock region of the eastern United States, but it should have some application on a broader basis. The predictive model described focuses on understanding the wide-ranging permeability characteristics of the soil and rock fracture system. Radon is thought to accrete in confined subsurface air and to move underground to low-pressure places, such as houses niched in hillsides. The driving forces for the entrapped, radon-laden air are considered to be a rising water table and moisture infiltrating from the land surface.

  4. Systematic prediction error correction: a novel strategy for maintaining the predictive abilities of multivariate calibration models.

    Science.gov (United States)

    Chen, Zeng-Ping; Li, Li-Mei; Yu, Ru-Qin; Littlejohn, David; Nordon, Alison; Morris, Julian; Dann, Alison S; Jeffkins, Paul A; Richardson, Mark D; Stimpson, Sarah L

    2011-01-07

    The development of reliable multivariate calibration models for spectroscopic instruments in on-line/in-line monitoring of chemical and bio-chemical processes is generally difficult, time-consuming and costly. Therefore, it is preferable if calibration models can be used for an extended period, without the need to replace them. However, in many process applications, changes in the instrumental response (e.g. owing to a change of spectrometer) or variations in the measurement conditions (e.g. a change in temperature) can cause a multivariate calibration model to become invalid. In this contribution, a new method, systematic prediction error correction (SPEC), has been developed to maintain the predictive abilities of multivariate calibration models when e.g. the spectrometer or measurement conditions are altered. The performance of the method has been tested on two NIR data sets (one with changes in instrumental responses, the other with variations in experimental conditions) and the outcomes compared with those of some popular methods, i.e. global PLS, univariate slope and bias correction (SBC) and piecewise direct standardization (PDS). The results show that SPEC achieves satisfactory analyte predictions with significantly lower RMSEP values than global PLS and SBC for both data sets, even when only a few standardization samples are used. Furthermore, SPEC is simple to implement and requires less information than PDS, which offers advantages for applications with limited data.
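
    The abstract does not spell out the SPEC correction itself, but the slope and bias correction (SBC) baseline it is compared against is easy to sketch: fit a line between the old model's predictions and reference values on a few standardization samples, then apply it to later predictions (all values below are hypothetical):

        import numpy as np

        def fit_sbc(y_pred_std, y_ref_std):
            """Fit slope/bias so that corrected = slope * predicted + bias,
            using a few standardization samples measured under new conditions."""
            slope, bias = np.polyfit(y_pred_std, y_ref_std, deg=1)
            return slope, bias

        # Hypothetical: the old calibration over-predicts on a new spectrometer.
        y_ref_std = np.array([10.0, 20.0, 30.0, 40.0])   # reference analyte values
        y_pred_std = 1.08 * y_ref_std + 1.5              # old model's predictions
        slope, bias = fit_sbc(y_pred_std, y_ref_std)

        y_pred_new = np.array([15.7, 33.9])              # later, uncorrected predictions
        print("corrected:", slope * y_pred_new + bias)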

  5. Economic model predictive control theory, formulations and chemical process applications

    CERN Document Server

    Ellis, Matthew; Christofides, Panagiotis D

    2017-01-01

    This book presents general methods for the design of economic model predictive control (EMPC) systems for broad classes of nonlinear systems that address key theoretical and practical considerations, including recursive feasibility, closed-loop stability, closed-loop performance, and computational efficiency. Specifically, the book proposes: Lyapunov-based EMPC methods for nonlinear systems; two-tier EMPC architectures that are highly computationally efficient; and EMPC schemes that explicitly handle uncertainty, time-varying cost functions, time-delays and multiple-time-scale dynamics. The proposed methods employ a variety of tools ranging from nonlinear systems analysis, through Lyapunov-based control techniques, to nonlinear dynamic optimization. The applicability and performance of the proposed methods are demonstrated through a number of chemical process examples. The book presents state-of-the-art methods for the design of economic model predictive control systems for chemical processes. In addition to being...

  6. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    Science.gov (United States)

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  7. Description of the General Equilibrium Model of Ecosystem Services (GEMES)

    Science.gov (United States)

    Travis Warziniack; David Finnoff; Jenny Apriesnig

    2017-01-01

    This paper serves as documentation for the General Equilibrium Model of Ecosystem Services (GEMES). GEMES is a regional computable general equilibrium model that is composed of values derived from natural capital and ecosystem services. It models households, producing sectors, and governments, linked to one another through commodity and factor markets. GEMES was...

  8. Building a generalized distributed system model

    Science.gov (United States)

    Mukkamala, R.

    1993-01-01

    The key elements in the 1992-93 period of the project are the following: (1) extensive use of the simulator to implement and test concurrency control algorithms, an interactive user interface, and replica control algorithms; and (2) investigations into the applicability of data and process replication in real-time systems. In the 1993-94 period of the project, we intend to accomplish the following: (1) concentrate on efforts to investigate the effects of data and process replication on hard and soft real-time systems; in particular, we will concentrate on the impact of semantic-based consistency control schemes on a distributed real-time system in terms of improved reliability, improved availability, better resource utilization, and reduced missed task deadlines; and (2) use the prototype to verify the theoretically predicted performance of locking protocols, etc.

  9. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize the biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  10. Generalized versus non-generalized neural network model for multi-lead inflow forecasting at Aswan High Dam

    Directory of Open Access Journals (Sweden)

    A. El-Shafie

    2011-03-01

    Full Text Available Artificial neural networks (ANN) have been found efficient, particularly in problems where the characteristics of the processes are stochastic and difficult to describe using explicit mathematical models. However, time series prediction based on ANN algorithms is fundamentally difficult and faces problems. One of the major shortcomings is the search for the optimal input pattern in order to enhance the forecasting capabilities for the output. The second challenge is the over-fitting problem during the training procedure, which occurs when the ANN loses its ability to generalize. In this research, autocorrelation and cross-correlation analyses are suggested as a method for searching for the optimal input pattern. On the other hand, two generalized methods, namely the Regularized Neural Network (RNN) and Ensemble Neural Network (ENN) models, are developed to overcome the drawbacks of classical ANN models. Using a Generalized Neural Network (GNN) helped avoid over-fitting of the training data, which was observed as a limitation of classical ANN models. Real inflow data collected over the last 130 years at Lake Nasser were used to train, test and validate the proposed model. Results show that the proposed GNN model outperforms non-generalized neural network and conventional auto-regressive models and could provide accurate inflow forecasting.
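
    The ensemble idea can be sketched as follows: several differently initialized, regularized networks trained on lagged inflows, with their predictions averaged (the synthetic seasonal series and lag structure below are assumptions for illustration):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical monthly inflow series; in practice the lags would be
        # chosen from autocorrelation analysis, as the abstract suggests.
        rng = np.random.default_rng(7)
        t = np.arange(600)
        inflow = 10 + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)
        X = np.column_stack([inflow[i:-(4 - i)] for i in range(4)])   # 4 lagged inputs
        y = inflow[4:]

        # Ensemble of differently seeded networks; alpha adds L2 regularization
        # (the RNN idea) and averaging reduces over-fitting (the ENN idea).
        members = [MLPRegressor(hidden_layer_sizes=(8,), alpha=1e-2, max_iter=3000,
                                random_state=s).fit(X[:500], y[:500])
                   for s in range(5)]
        pred = np.mean([m.predict(X[500:]) for m in members], axis=0)
        print("test RMSE:", np.sqrt(np.mean((pred - y[500:]) ** 2)))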

  11. The Five-Factor Model: General Overview

    Directory of Open Access Journals (Sweden)

    A A Vorobyeva

    2011-12-01

    Full Text Available The article describes the five-factor model (FFM), giving an overview of its history, basic dimensions, and cross-cultural research conducted on the model, and highlights some practical studies based on the FFM, including studies on job performance, leader performance and daily social interactions. An overview of the recent five-factor theory is also provided. According to the theory, the five factors are encoded in human genes; therefore, it is almost impossible to change the basic factors themselves, but a person's behavior might be changed due to characteristic adaptations which do not alter personality dimensions, only a person's behavior.

  12. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudorange and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs, respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An over-determination of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.

  13. Predictive Models and Tools for Assessing Chemicals under the Toxic Substances Control Act (TSCA)

    Science.gov (United States)

    EPA has developed databases and predictive models to help evaluate the hazard, exposure, and risk of chemicals released to the environment and how workers, the general public, and the environment may be exposed to and affected by them.

  14. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

    It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model several environmental parameters are combined into a complex equation; in addition, a correlation between the ventilation rate and the Rn entry rate from the soil is assumed. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentration were made inside two rooms on the second floor of a building block. One of the rooms had a single-glazed window whereas the other had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model.
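
    The "simplest model" with two variables corresponds to a steady-state, well-mixed mass balance; a sketch (the numbers are illustrative) that also shows why the more tightly sealed, double-glazed room reads higher:

        def indoor_radon(entry_rate_bq_h, volume_m3, ach_per_h):
            """Steady-state Rn concentration (Bq/m3) from a well-mixed mass balance:
            C = S / (V * lambda_v), with S the entry rate (Bq/h), V the room volume
            and lambda_v the air exchange rate (1/h). Radioactive decay of Rn-222
            (~0.0076 1/h) is negligible next to typical ventilation rates."""
            return entry_rate_bq_h / (volume_m3 * ach_per_h)

        # Halving the air exchange (a tighter, double-glazed room) doubles C.
        print(indoor_radon(2000, 50, 1.0))   # single-glazed: 40 Bq/m3
        print(indoor_radon(2000, 50, 0.5))   # double-glazed: 80 Bq/m3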

  15. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    This paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time......). Five technical and economic aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality...... recovery on the track quality after the tamping operation; and (5) tamping machine operation factors. The model is applied to a Danish railway track between Odense and Fredericia, 57.2 km in length, for a time period of two to four years. The total cost can be reduced by up to 50...

  16. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  17. A NEW GENERAL 3DOF QUASI-STEADY AERODYNAMIC INSTABILITY MODEL

    DEFF Research Database (Denmark)

    Gjelstrup, Henrik; Larsen, Allan; Georgakis, Christos

    2008-01-01

    ...but can generally be applied for aerodynamic instability prediction for prismatic bluff bodies. The 3DOF, which make up the movement of the model, are the displacements in the XY-plane and the rotation around the bluff body's rotational axis. The proposed model incorporates inertia coupling between...

  18. Esperanto: A Unique Model for General Linguistics.

    Science.gov (United States)

    Dulichenko, Aleksandr D.

    1988-01-01

    Esperanto presents a unique model for linguistic research by allowing the study of language development from project to fully functioning language. Esperanto provides insight into the growth of polysemy and redundancy, as well as into language universals and the phenomenon of social control. (Author/CB)

  19. A generalized model for estimating the energy density of invertebrates

    Science.gov (United States)

    James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.

    2012-01-01

    Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and the proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r² = 0.96), and estimating ED from pDM offers substantial cost savings compared to traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
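
    A sketch of how such a pDM-based predictor is built and used (the calibration values below are made up for illustration; the published model should be consulted for real coefficients):

        import numpy as np
        from scipy.stats import linregress

        # Hypothetical calibration data: proportion dry-to-wet mass vs. measured
        # energy density (J/g wet mass) from bomb calorimetry.
        pdm = np.array([0.12, 0.18, 0.21, 0.25, 0.30, 0.35])
        ed = np.array([2100, 3500, 4100, 5000, 6200, 7300])

        fit = linregress(pdm, ed)
        print(f"ED ~ {fit.slope:.0f} * pDM + {fit.intercept:.0f}, "
              f"r^2 = {fit.rvalue**2:.3f}")

        # Predict ED for a new sample from an oven-dried mass ratio alone,
        # avoiding a calorimetry run.
        print("predicted ED at pDM = 0.27:", fit.slope * 0.27 + fit.intercept)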

  20. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model and an aircraft motion model. The final analysis was compared with d...

  1. Forced versus coupled dynamics in Earth system modelling and prediction

    Directory of Open Access Journals (Sweden)

    B. Knopf

    2005-01-01

    Full Text Available We compare coupled nonlinear climate models and their simplified forced counterparts with respect to predictability and phase space topology. Various types of uncertainty plague climate change simulation, which is, in turn, a crucial element of Earth System modelling. Since the currently preferred strategy for simulating the climate system, or the Earth System at large, is the coupling of sub-system modules (representing, e.g., atmosphere, oceans, global vegetation), this paper explicitly addresses the errors and indeterminacies generated by the coupling procedure. The focus is on a comparison of forced dynamics as opposed to fully, i.e. intrinsically, coupled dynamics. The former represents a particular type of simulation where the time behaviour of one complex system component is prescribed by data or some other external information source. Such a simplifying technique is often employed in Earth System models in order to save computing resources, in particular when massive model inter-comparisons need to be carried out. Our contribution to the debate is based on the investigation of two representative model examples, namely (i) a low-dimensional coupled atmosphere-ocean simulator, and (ii) a replica-like simulator embracing corresponding components. Whereas in general the forced version (ii) is able to mimic its fully coupled counterpart (i), we show in this paper that for a considerable fraction of parameter- and state-space, the two approaches qualitatively differ. Here we take up a phenomenon concerning the predictability of coupled versus forced models that was reported earlier in this journal: the observation that the time series of the forced version display artificial predictive skill. We present an explanation in terms of nonlinear dynamical theory. In particular we observe an intermittent version of artificial predictive skill, which we call on-off synchronization, and trace it back to the appearance of unstable periodic orbits. We also...

  2. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in the experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral...
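
    The modified standard model with two infected-cell populations can be sketched as a small ODE system (the parameter values below are generic viral-dynamics choices for illustration, not the paper's fits):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Target cells T, infected cells I1 (transitioning, not yet producing),
        # I2 (actively producing), free virus V. Illustrative parameters.
        beta, k, delta, p, c = 1e-7, 1.0, 0.5, 1e3, 10.0

        def rhs(t, y):
            T, I1, I2, V = y
            return [-beta * T * V,              # infection of target cells
                    beta * T * V - k * I1,      # eclipse-phase cells maturing
                    k * I1 - delta * I2,        # productively infected cells dying
                    p * I2 - c * V]             # virus production and clearance

        sol = solve_ivp(rhs, (0, 30), [1e6, 0, 0, 1e-2], dense_output=True)
        for ti in np.linspace(0, 30, 7):
            print(f"day {ti:4.1f}: V = {sol.sol(ti)[3]:.3g}")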

  3. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space...... model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...

  4. The DSM-5 dimensional trait model and five-factor models of general personality.

    Science.gov (United States)

    Gore, Whitney L; Widiger, Thomas A

    2013-08-01

    The current study empirically tests the relationship of the dimensional trait model proposed for the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) with five-factor models of general personality. The DSM-5 maladaptive trait dimensional model proposal included 25 traits organized within five broad domains (i.e., negative affectivity, detachment, antagonism, disinhibition, and psychoticism). Consistent with the authors of the proposal, it was predicted that negative affectivity would align with five-factor model (FFM) neuroticism, detachment with FFM introversion, antagonism with FFM antagonism, disinhibition with low FFM conscientiousness and, contrary to the proposal, psychoticism would align with FFM openness. Three measures of alternative five-factor models of general personality were administered to 445 undergraduates along with the Personality Inventory for DSM-5. The results provided support for the hypothesis that all five domains of the DSM-5 dimensional trait model are maladaptive variants of general personality structure, including the domain of psychoticism. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  5. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    ...respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides estimates similar to Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages...

  6. Development of independent generalized probabilistic models for regulatory activities

    International Nuclear Information System (INIS)

    Gashev, M.Kh.; Zinchenko, Yu.A.; Stefanishin, N.A.

    2012-01-01

    The paper discusses the development of probabilistic models to be used in regulatory activities. Results from the development of independent generalized PSA-1 models for purposes of SNRIU risk-informed regulation are presented

  7. Predicting homophobic behavior among heterosexual youth: domain general and sexual orientation-specific factors at the individual and contextual level.

    Science.gov (United States)

    Poteat, V Paul; DiGiovanni, Craig D; Scheer, Jillian R

    2013-03-01

    As a form of bias-based harassment, homophobic behavior remains prominent in schools. Yet little attention has been given to the factors that underlie it, aside from bullying and sexual prejudice. Thus, we examined multiple domain-general (empathy, perspective-taking, classroom respect norms) and sexual orientation-specific factors (sexual orientation identity importance, number of sexual minority friends, parents' sexual minority attitudes, media messages). We documented support for a model in which these sets of factors converged to predict homophobic behavior, mediated through bullying and prejudice, among 581 students in grades 9-12 (55% female). The structural equation model indicated that, with the exception of media messages, these additional factors predicted levels of prejudice and bullying, which in turn predicted the likelihood of students engaging in homophobic behavior. These findings highlight the importance of addressing multiple interrelated factors in efforts to reduce bullying, prejudice, and discrimination among youth.

  8. Generalized bottom-tau unification, neutrino oscillations and dark matter: Predictions from a lepton quarticity flavor approach

    Science.gov (United States)

    Centelles Chuliá, Salvador; Srivastava, Rahul; Valle, José W. F.

    2017-10-01

    We propose an A4 extension of the Standard Model with a Lepton Quarticity symmetry correlating dark matter stability with the Dirac nature of neutrinos. The flavor symmetry predicts (i) a generalized bottom-tau mass relation involving all families, (ii) small neutrino masses are induced a la seesaw, (iii) CP must be significantly violated in neutrino oscillations, (iv) the atmospheric angle θ23 lies in the second octant, and (v) only the normal neutrino mass ordering is realized.

  9. Generalized bottom-tau unification, neutrino oscillations and dark matter: Predictions from a lepton quarticity flavor approach

    Directory of Open Access Journals (Sweden)

    Salvador Centelles Chuliá

    2017-10-01

    Full Text Available We propose an A4 extension of the Standard Model with a Lepton Quarticity symmetry correlating dark matter stability with the Dirac nature of neutrinos. The flavor symmetry predicts (i) a generalized bottom-tau mass relation involving all families, (ii) small neutrino masses are induced a la seesaw, (iii) CP must be significantly violated in neutrino oscillations, (iv) the atmospheric angle θ23 lies in the second octant, and (v) only the normal neutrino mass ordering is realized.

  10. Modeling electrokinetics in ionic liquids: General

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chao [Physical and Computational Science Directorate, Pacific Northwest National Laboratory, Richland WA USA; Bao, Jie [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland WA USA; Pan, Wenxiao [Department of Mechanical Engineering, University of Wisconsin-Madison, Madison WI USA; Sun, Xin [Physical and Computational Science Directorate, Pacific Northwest National Laboratory, Richland WA USA

    2017-04-07

    Using direct numerical simulations, we provide a thorough study of the electrokinetics of ionic liquids. In particular, the modified Poisson-Nernst-Planck (MPNP) equations are solved to capture the crowding and overscreening effects that are characteristic of an ionic liquid. For modeling electrokinetic flows in an ionic liquid, the MPNP equations are coupled with the Navier-Stokes equations to study the coupling of ion transport, hydrodynamics, and electrostatic forces. Specifically, we consider ion transport between two parallel plates, charging dynamics in a 2D straight-walled pore, electro-osmotic flow in a nano-channel, electroconvective instability on a plane ion-selective surface, and electroconvective flow on a curved ion-selective surface. We discuss how the crowding and overscreening effects and their interplay affect the electrokinetic behaviors of ionic liquids in these application problems.
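
    For reference, a commonly used form of the MPNP equations (the Kilic-Bazant-Ajdari steric modification; the abstract does not specify the authors' exact variant) reads, in LaTeX:

        \partial_t c_\pm = \nabla \cdot D \left[ \nabla c_\pm
            \pm \frac{z e}{k_B T}\, c_\pm \nabla \phi
            + \frac{\nu\, c_\pm\, \nabla (c_+ + c_-)}{1 - \nu\, (c_+ + c_-)} \right],
        \qquad
        -\nabla \cdot (\varepsilon \nabla \phi) = z e\, (c_+ - c_-)

    The last flux term, with molecular volume ν, enforces a maximum ion packing density (crowding); overscreening is often captured by an additional higher-order electrostatic correction in the Poisson equation.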

  11. Predictive modeling: potential application in prevention services.

    Science.gov (United States)

    Wilson, Moira L; Tumen, Sarah; Ota, Rissa; Simmers, Anthony G

    2015-05-01

    In 2012, the New Zealand Government announced a proposal to introduce predictive risk models (PRMs) to help professionals identify and assess children at risk of abuse or neglect as part of a preventive early intervention strategy, subject to further feasibility study and trialing. The purpose of this study is to examine technical feasibility and predictive validity of the proposal, focusing on a PRM that would draw on population-wide linked administrative data to identify newborn children who are at high priority for intensive preventive services. Data analysis was conducted in 2013 based on data collected in 2000-2012. A PRM was developed using data for children born in 2010 and externally validated for children born in 2007, examining outcomes to age 5 years. Performance of the PRM in predicting administratively recorded substantiations of maltreatment was good compared to the performance of other tools reviewed in the literature, both overall, and for indigenous Māori children. Some, but not all, of the children who go on to have recorded substantiations of maltreatment could be identified early using PRMs. PRMs should be considered as a potential complement to, rather than a replacement for, professional judgment. Trials are needed to establish whether risks can be mitigated and PRMs can make a positive contribution to frontline practice, engagement in preventive services, and outcomes for children. Deciding whether to proceed to trial requires balancing a range of considerations, including ethical and privacy risks and the risk of compounding surveillance bias. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
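
    As a rough illustration of the study design described above (develop the model on one birth cohort, validate it externally on an earlier cohort), the sketch below uses a plain logistic regression; the file name, feature names, and outcome label are hypothetical stand-ins, not the actual New Zealand administrative variables.

        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Hypothetical layout: one row per child, linked administrative predictors
        data = pd.read_csv("linked_admin_records.csv")
        features = ["benefit_history_months", "maternal_age", "prior_agency_contact"]

        train = data[data.birth_year == 2010]          # development cohort
        valid = data[data.birth_year == 2007]          # external (temporal) validation

        prm = LogisticRegression(max_iter=1000)
        prm.fit(train[features], train["substantiation_by_age5"])

        auc = roc_auc_score(valid["substantiation_by_age5"],
                            prm.predict_proba(valid[features])[:, 1])
        print(f"External validation AUC: {auc:.3f}")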

  12. Development of a Predictive Model for Induction Success of Labour

    Directory of Open Access Journals (Sweden)

    Cristina Pruenza

    2018-03-01

    Full Text Available Induction of the labour process is an extraordinarily common procedure in some pregnancies. Obstetricians face the need to end a pregnancy, usually for medical reasons (maternal or fetal requirements) or, less frequently, for social ones (elective inductions for convenience). The success of the induction procedure is conditioned by a multitude of maternal and fetal variables that appear before or during pregnancy or the birth process, each with low predictive value. The failure of the induction process involves performing a caesarean section. This project arises from the clinical need to resolve a situation of uncertainty that occurs frequently in our clinical practice. Since the clinical variables are not adequately weighted, it is very useful to know a priori the probability of a successful induction, so that inductions with a high probability of failure can be dismissed, avoiding unnecessary procedures or postponing the end of pregnancy where possible. We developed a predictive model of induced labour success as a support tool in clinical decision making. Improving the predictability of a successful induction is one of the current challenges of obstetrics because of its negative impact. Identifying patients with high chances of failure will allow us to offer them better care, improving their health outcomes (adverse perinatal outcomes for mother and newborn), costs (medication, hospitalization, qualified staff), and patient-perceived quality. Therefore, a Clinical Decision Support System was developed to give support to obstetricians. In this article, we propose a robust method to explore and model a source of clinical information with the purpose of extracting all possible knowledge. Generally, in classification models it is difficult to know the contribution that each attribute provides to the model. We worked in this direction to offer transparency to models that may otherwise be considered black boxes. The positive results obtained from both the

  13. Decadal prediction skill using a high-resolution climate model

    Science.gov (United States)

    Monerie, Paul-Arthur; Coquart, Laure; Maisonnave, Éric; Moine, Marie-Pierre; Terray, Laurent; Valcke, Sophie

    2017-11-01

    The ability of a high-resolution coupled atmosphere-ocean general circulation model (with a horizontal resolution of a quarter of a degree in the ocean and of about 0.5° in the atmosphere) to predict the annual means of temperature, precipitation, sea-ice volume and extent is assessed based on initialized hindcasts over the 1993-2009 period. Significant skill in predicting sea surface temperatures is obtained, especially over the North Atlantic, the tropical Atlantic and the Indian Ocean. The sea ice extent and volume are also reasonably predicted in winter (March) and summer (September). The model skill is mainly due to the external forcing associated with well-mixed greenhouse gases. A decrease in the global warming rate associated with a negative phase of the Pacific Decadal Oscillation is simulated by the model over a suite of 10-year periods when initialized from starting dates between 1999 and 2003. The model's ability to predict regional change is investigated by focusing on the mid-1990s Atlantic Ocean subpolar gyre warming. The model simulates the North Atlantic warming associated with a meridional heat transport increase, a strengthening of the North Atlantic current and a deepening of the mixed layer over the Labrador Sea. The atmosphere plays a role in the warming through a modulation of the North Atlantic Oscillation: a negative sea level pressure anomaly, located south of the subpolar gyre, is associated with a wind speed decrease over the subpolar gyre. This leads to a reduced oceanic heat loss and favors a northward displacement of anomalously warm and salty subtropical water, which both concur to the subpolar gyre warming. We finally conclude that the subpolar gyre warming is mainly triggered by ocean dynamics with a possible contribution of atmospheric circulation favoring its persistence.

  14. Comparison of pause predictions of two sequence-dependent transcription models

    International Nuclear Information System (INIS)

    Bai, Lu; Wang, Michelle D

    2010-01-01

    Two recent theoretical models, Bai et al (2004, 2007) and Tadigotla et al (2006), formulated thermodynamic explanations of sequence-dependent transcription pausing by RNA polymerase (RNAP). The two models differ in some basic assumptions and therefore make different yet overlapping predictions for pause locations, and different predictions on pause kinetics and mechanisms. Here we present a comprehensive comparison of the two models. We show that while they have comparable predictive power of pause locations at low NTP concentrations, the Bai et al model is more accurate than Tadigotla et al at higher NTP concentrations. The pausing kinetics predicted by Bai et al is also consistent with time-course transcription reactions, while Tadigotla et al is unsuited for this type of kinetic prediction. More importantly, the two models in general predict different pausing mechanisms even for the same pausing sites, and the Bai et al model provides an explanation more consistent with recent single molecule observations

  15. Heuristic Modeling for TRMM Lifetime Predictions

    Science.gov (United States)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use by a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
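
    A minimal sketch of this look-up-table approach is given below, with invented table values and an invented per-maneuver fuel cost; the original implementation lived in a QuattroPro spreadsheet, so this Python version is only a structural analogue.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Stand-in for the spreadsheet table: maneuvers per month as a function of
        # ballistic coefficient (kg/m^2) and solar flux index F10.7 (values invented)
        bc_axis   = np.array([50.0, 100.0, 150.0])
        flux_axis = np.array([70.0, 150.0, 230.0])
        maneuvers = np.array([[2.0, 5.0, 9.0],
                              [1.0, 3.0, 6.0],
                              [0.5, 2.0, 4.0]])
        freq = RegularGridInterpolator((bc_axis, flux_axis), maneuvers)

        def mission_fuel(bc, monthly_flux, fuel_per_maneuver_kg=0.4):
            """Sum fuel use over a solar-flux forecast via the interpolated table."""
            return sum(freq((bc, f)).item() * fuel_per_maneuver_kg
                       for f in monthly_flux)

        flux_forecast = [120.0, 135.0, 160.0, 180.0]   # four months, illustrative
        print(f"Fuel used: {mission_fuel(100.0, flux_forecast):.2f} kg")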

  16. A Computational Model for Predicting Gas Breakdown

    Science.gov (United States)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regards to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.

  17. Anisotropic cosmological models and generalized scalar tensor theory

    Indian Academy of Sciences (India)

    Anisotropic cosmological models and generalized scalar tensor theory - Subenoy Chakraborty, Batul Chandra Santra. Keywords: anisotropic cosmological models; general scalar tensor theory; inflation. PACS Nos: 98.80.Hw; 04.50.+h; 98.80.Cq.

  18. Model-free adaptive sliding mode controller design for generalized ...

    Indian Academy of Sciences (India)

    L M WANG

    2017-08-16

    A novel model-free adaptive sliding mode strategy is proposed for a generalized projective synchronization (GPS). Based on neural network theory, a model-free adaptive sliding mode controller is designed to guarantee asymptotic stability of the generalized projective synchronization.

  19. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  20. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  1. Managing heteroscedasticity in general linear models.

    Science.gov (United States)

    Rosopa, Patrick J; Schaffer, Meline M; Schroeder, Amber N

    2013-09-01

    Heteroscedasticity refers to a phenomenon where data violate a statistical assumption. This assumption is known as homoscedasticity. When the homoscedasticity assumption is violated, this can lead to increased Type I error rates or decreased statistical power. Because this can adversely affect substantive conclusions, the failure to detect and manage heteroscedasticity could have serious implications for theory, research, and practice. In addition, heteroscedasticity is not uncommon in the behavioral and social sciences. Thus, in the current article, we synthesize extant literature in applied psychology, econometrics, quantitative psychology, and statistics, and we offer recommendations for researchers and practitioners regarding available procedures for detecting heteroscedasticity and mitigating its effects. In addition to discussing the strengths and weaknesses of various procedures and comparing them in terms of existing simulation results, we describe a 3-step data-analytic process for detecting and managing heteroscedasticity: (a) fitting a model based on theory and saving residuals, (b) the analysis of residuals, and (c) statistical inferences (e.g., hypothesis tests and confidence intervals) involving parameter estimates. We also demonstrate this data-analytic process using an illustrative example. Overall, detecting violations of the homoscedasticity assumption and mitigating its biasing effects can strengthen the validity of inferences from behavioral and social science data.
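
    The three-step process described above maps directly onto standard tooling; a minimal sketch with statsmodels (simulated data, illustrative only) might look like this:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.diagnostic import het_breuschpagan

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 10, 200)
        y = 2.0 + 0.5 * x + rng.normal(0, 0.3 * x)     # error variance grows with x

        X = sm.add_constant(x)
        fit = sm.OLS(y, X).fit()                       # (a) fit model, save residuals

        lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(fit.resid, X)   # (b)
        print(f"Breusch-Pagan p-value: {lm_pval:.4f}")

        robust = fit.get_robustcov_results(cov_type="HC3")   # (c) robust inference
        print(robust.summary())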

  2. Nonlinear signal processing using neural networks: Prediction and system modelling

    Energy Technology Data Exchange (ETDEWEB)

    Lapedes, A.; Farber, R.

    1987-06-01

    The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Wiener Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Secondly, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global, approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs.
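
    As a toy reconstruction of the prediction experiment, the sketch below trains a small feedforward network on a chaotic series; the logistic map stands in for the chaotic systems studied in the paper, and the network size is arbitrary.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        x = np.empty(1200)
        x[0] = 0.2
        for t in range(1199):
            x[t + 1] = 4.0 * x[t] * (1.0 - x[t])       # fully chaotic logistic map

        X, y = x[:-1].reshape(-1, 1), x[1:]
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        net.fit(X[:1000], y[:1000])                    # learn the underlying map globally

        err = np.abs(net.predict(X[1000:]) - y[1000:]).mean()
        print(f"Mean one-step prediction error: {err:.5f}")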

  3. Relativistic theory of gravitation and nonuniqueness of the predictions of general relativity theory

    International Nuclear Information System (INIS)

    Logunov, A.A.; Loskutov, Yu.M.

    1986-01-01

    It is shown that while the predictions of relativistic theory of gravitation (RTG) for the gravitational effects are unique and consistent with the experimental data available, the relevant predictions of general relativity theory are not unique. Therewith the above nonuniqueness manifests itself in some effects in the first order in the gravitational interaction constant in others in the second one. The absence in GRT of the energy-momentum and angular momentum conservation laws for the matter and gravitational field taken together and its inapplicability to give uniquely determined predictions for the gravitational phenomena compel to reject GRT as a physical theory

  4. A generalized model via random walks for information filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)

    2016-08-06

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can deduce the collaborative filtering and interdisciplinary physics approaches, and even their numerous extensions. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of the random walk in bipartite networks, and propose a possible strategy that uses the hybrid degree information for objects of different popularity to improve the precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with the hybrid degree information improves the precision of recommendation.

  5. A generalized model via random walks for information filtering

    International Nuclear Information System (INIS)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-01-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can deduce the collaborative filtering and interdisciplinary physics approaches, and even their numerous extensions. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of the random walk in bipartite networks, and propose a possible strategy that uses the hybrid degree information for objects of different popularity to improve the precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with the hybrid degree information improves the precision of recommendation.
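
    One concrete member of this model family is the well-known ProbS/HeatS hybrid diffusion; the sketch below implements it on a toy user-object bipartite matrix. Reading it as the authors' exact generalized model is an assumption.

        import numpy as np

        # Rows are users, columns are objects; a[u, o] = 1 if user u collected o.
        a = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]], dtype=float)

        k_user = a.sum(axis=1)                 # user degrees
        k_obj  = a.sum(axis=0)                 # object degrees

        def hybrid_scores(lam):
            """Hybrid random-walk scores: lam=1 gives ProbS, lam=0 gives HeatS."""
            W = (a.T / k_user) @ a             # walk through the user side
            W /= k_obj[:, None] ** (1 - lam) * k_obj[None, :] ** lam
            return a @ W.T                     # score every object for every user

        scores = hybrid_scores(lam=0.5)
        scores[a > 0] = -np.inf                # mask already-collected objects
        print(np.argmax(scores, axis=1))       # top recommendation per user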

  6. Nostradamus 2014 prediction, modeling and analysis of complex systems

    CERN Document Server

    Suganthan, Ponnuthurai; Chen, Guanrong; Snasel, Vaclav; Abraham, Ajith; Rössler, Otto

    2014-01-01

    The prediction of the behavior of complex systems and the analysis and modeling of their structure are vitally important problems in engineering, economics, and science generally. Examples of such systems can be seen in the world around us (including our bodies) and of course in almost every scientific discipline including such “exotic” domains as the earth’s atmosphere, turbulent fluids, economics (exchange rate and stock markets), population growth, physics (control of plasma), information flow in social networks and its dynamics, chemistry and complex networks. To understand such complex dynamics, which often exhibit strange behavior, and to use it in research or industrial applications, it is paramount to create its models. For this purpose there exists a rich spectrum of methods, from classical ones such as ARMA models or the Box-Jenkins method to modern ones like evolutionary computation, neural networks, fuzzy logic, geometry, deterministic chaos amongst others. This proceedings book is a collection of accepted ...

  7. Coal demand prediction based on a support vector machine model

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Cun-liang; Wu, Hai-shan; Gong, Dun-wei [China University of Mining & Technology, Xuzhou (China). School of Information and Electronic Engineering

    2007-01-15

    A forecasting model for the coal demand of China using support vector regression was constructed. With the selected embedding dimension, the output vectors and input vectors were constructed based on the coal demand of China from 1980 to 2002. After comparison with the linear and sigmoid kernels, a radial basis function (RBF) was adopted as the kernel function. By analyzing the relationship between the prediction error margin and the model parameters, the proper parameters were chosen. A support vector machine (SVM) model with multiple inputs and a single output was proposed. Comparing the predictor with an RBF neural network on test datasets shows that the SVM predictor has higher precision and greater generalization ability. In the end, the coal demand from 2003 to 2006 is accurately forecasted. 10 refs., 2 figs., 4 tabs.
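
    A minimal sketch of the embedding-plus-SVR setup described above follows; the demand values are invented placeholders rather than the actual 1980-2002 series, and the hyperparameters are illustrative.

        import numpy as np
        from sklearn.svm import SVR

        demand = np.array([6.1, 6.4, 6.6, 7.0, 7.5, 7.9, 8.4, 8.8,
                           9.5, 10.2, 10.9, 11.6, 12.4, 13.2])   # placeholder data

        m = 3                                            # embedding dimension
        X = np.array([demand[i:i + m] for i in range(len(demand) - m)])
        y = demand[m:]                                   # next-year demand

        svr = SVR(kernel="rbf", C=100.0, epsilon=0.01)   # RBF kernel, as adopted
        svr.fit(X, y)

        forecast = svr.predict(demand[-m:].reshape(1, -1))   # one step ahead
        print(f"Forecast: {forecast[0]:.2f}")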

  8. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...

  9. General Description of Fission Observables: GEF Model Code

    Science.gov (United States)

    Schmidt, K.-H.; Jurado, B.; Amouroux, C.; Schmitt, C.

    2016-01-01

    The GEF ("GEneral description of Fission observables") model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is

  10. An operational phenological model for numerical pollen prediction

    Science.gov (United States)

    Scheifinger, Helfried

    2010-05-01

    The general prevalence of seasonal allergic rhinitis is estimated to be about 15% in Europe, and still increasing. Pre-emptive measures require both the reliable assessment of the production and release of various pollen species and the forecasting of their atmospheric dispersion. For this purpose, numerical pollen prediction schemes are being developed by a number of European weather services in order to supplement and improve the qualitative pollen prediction systems with state-of-the-art instruments. Pollen emission is spatially and temporally highly variable throughout the vegetation period and is not directly observed, which precludes a straightforward application of dispersion models to simulate pollen transport. Even the beginning and end of flowering, which indicate the time period of potential pollen emission, are not (yet) available in real time. One way to create a proxy for the beginning, the course, and the end of the pollen emission is to simulate it as a function of real-time temperature observations. In this work the European phenological data set of the COST725 initiative forms the basis for modelling the beginning of flowering of 15 species, some of which emit allergenic pollen. In order to keep the problem as simple as possible for the sake of spatial interpolation, a 3-parameter temperature sum model was implemented in a real-time operational procedure, which calculates the spatial distribution of the entry dates for the current day and 24, 48 and 72 hours in advance. As a stand-alone phenological model, and combined with back trajectories, it is intended to support the qualitative pollen prediction scheme at the Austrian national weather service. Apart from that, it is planned to incorporate it into a numerical pollen dispersion model. More details, open questions and first results of the operational phenological model will be discussed and presented.
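
    A 3-parameter temperature-sum model of this kind is easy to state: from a start day t0, the daily forcing max(T - T_base, 0) is accumulated until a critical sum F_crit is reached. The sketch below uses invented parameter values purely for illustration.

        import numpy as np

        def flowering_day(daily_mean_temp, t0=32, T_base=5.0, F_crit=150.0):
            """Day of year when the accumulated temperature sum reaches F_crit."""
            forcing = np.maximum(np.asarray(daily_mean_temp[t0:]) - T_base, 0.0)
            total = np.cumsum(forcing)
            if total[-1] < F_crit:
                return None                     # threshold never reached this year
            return t0 + int(np.argmax(total >= F_crit)) + 1

        # Synthetic spring warming curve, one mean temperature per day of year
        temps = 2.0 + 18.0 * np.sin(np.linspace(0.0, np.pi, 180))
        print(flowering_day(temps))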

  11. General Friction Model Extended by the Effect of Strain Hardening

    DEFF Research Database (Denmark)

    Nielsen, Chris V.; Martins, Paulo A.F.; Bay, Niels

    2016-01-01

    An extension to the general friction model proposed by Wanheim and Bay [1] to include the effect of strain hardening is proposed. The friction model relates the friction stress to the fraction of real contact area by a friction factor under steady-state sliding. The original model for the real contact area as a function of the normalized contact pressure is based on slip-line analysis and hence on the assumption of rigid-ideally plastic material behavior. In the present work, a general finite element model is established to, firstly, reproduce the original model under the assumption of rigid-ideally plastic material, and secondly, to extend the solution by the influence of material strain hardening. This corresponds to adding a new variable and, therefore, a new axis to the general friction model. The resulting model is presented in a combined function suitable for e.g. finite element modeling...

  12. Numerical modeling capabilities to predict repository performance

    International Nuclear Information System (INIS)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within the industry and universities. The first listing of programs covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste; but several or most of them may be so used

  13. Integrating a human thermoregulatory model with a clothing model to predict core and skin temperatures.

    Science.gov (United States)

    Yang, Jie; Weng, Wenguo; Wang, Faming; Song, Guowen

    2017-05-01

    This paper aims to integrate a human thermoregulatory model with a clothing model to predict core and skin temperatures. The human thermoregulatory model, consisting of an active system and a passive system, was used to determine the thermoregulation and heat exchanges within the body. The clothing model simulated heat and moisture transfer from the human skin to the environment through the microenvironment and fabric. In this clothing model, the air gap between skin and clothing, as well as clothing properties such as thickness, thermal conductivity, density, porosity, and tortuosity were taken into consideration. The simulated core and mean skin temperatures were compared to the published experimental results of subject tests at three levels of ambient temperatures of 20 °C, 30 °C, and 40 °C. Although lower signal-to-noise-ratio was observed, the developed model demonstrated positive performance at predicting core temperatures with a maximum difference between the simulations and measurements of no more than 0.43 °C. Generally, the current model predicted the mean skin temperatures with reasonable accuracy. It could be applied to predict human physiological responses and assess thermal comfort and heat stress. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Predictive value of the official cancer alarm symptoms in general practice

    DEFF Research Database (Denmark)

    Krasnik Huggenberger, Ivan; Andersen, John Sahl

    2015-01-01

    Introduction: The objective of this study was to investigate the evidence for positive predictive value (PPV) of alarm symptoms and combinations of symptoms for colorectal cancer, breast cancer, prostate cancer and lung cancer in general practice. Methods: This study is based on a literature search...

  15. Comparison of H-infinity control and generalized predictive control for a laser scanner system

    DEFF Research Database (Denmark)

    Ordys, A.W.; Stoustrup, Jakob; Smillie, I.

    2000-01-01

    This paper describes tests performed on a laser scanner system to assess the feasibility of H-infinity control and generalized predictive control design techniques in achieving a required performance in a trajectory following problem. The two methods are compared with respect to achieved scan time...

  16. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.

  17. Use of nonlinear dose-effect models to predict consequences

    International Nuclear Information System (INIS)

    Seiler, F.A.; Alvarez, J.L.

    1996-01-01

    The linear dose-effect relationship was introduced as a model for the induction of cancer from exposure to nuclear radiation. Subsequently, it has been used by analogy to assess the risk of chemical carcinogens also. Recently, however, the model for radiation carcinogenesis has come increasingly under attack because its calculations contradict the epidemiological data, such as cancer in atomic bomb survivors. Even so, its proponents vigorously defend it, often using arguments that are not so much scientific as a mix of scientific, societal, and often political arguments. At least in part, the resilience of the linear model is due to two convenient properties that are exclusive to linearity: First, the risk of an event is determined solely by the event dose; second, the total risk of a population group depends only on the total population dose. In reality, the linear model has been conclusively falsified; i.e., it has been shown to make wrong predictions, and once this fact is generally realized, the scientific method calls for a new paradigm model. As all alternative models are by necessity nonlinear, all the convenient properties of the linear model are invalid, and calculational procedures have to be used that are appropriate for nonlinear models

  18. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions and thus, they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  19. Model Reduction of Switched Systems Based on Switching Generalized Gramians

    DEFF Research Database (Denmark)

    Shaker, Hamid Reza; Wisniewski, Rafal

    2012-01-01

    In this paper, a general method for model order reduction of discrete-time switched linear systems is presented. The proposed technique uses switching generalized gramians. It is shown that several classical reduction methods can be developed into the generalized gramian framework for the model reduction of linear systems and for the reduction of switched systems. Discrete-time balanced reduction within a specified frequency interval is taken as an example within this framework. To avoid numerical instability and to increase the numerical efficiency, a generalized gramian-based Petrov...

  20. Fractional-Order Generalized Predictive Control: Application for Low-Speed Control of Gasoline-Propelled Cars

    Directory of Open Access Journals (Sweden)

    M. Romero

    2013-01-01

    Full Text Available There is an increasing interest in using fractional calculus applied to control theory, generalizing classical control strategies such as the PID controller and developing new ones with the intention of taking advantage of characteristics supplied by this mathematical tool for the controller definition. In this work, the fractional generalization of the successful and widespread control strategy known as model predictive control is applied to drive a gasoline-propelled vehicle autonomously at low speeds. The vehicle is a Citroën C3 Pluriel that was modified to act over the throttle and brake pedals. Its highly nonlinear dynamics are an excellent test bed for applying the beneficial characteristics of the fractional predictive formulation to compensate for unmodeled dynamics and external disturbances.

  1. Model Predictive Control for an Industrial SAG Mill

    DEFF Research Database (Denmark)

    Ohan, Valeriu; Steinke, Florian; Metzger, Michael

    2012-01-01

    We discuss Model Predictive Control (MPC) based on ARX models and a simple lower order disturbance model. The advantage of this MPC formulation is that it has few tuning parameters and is based on an ARX prediction model that can readily be identified using standard technologies from system identic...

  2. Uncertainties in spatially aggregated predictions from a logistic regression model

    NARCIS (Netherlands)

    Horssen, P.W. van; Pebesma, E.J.; Schot, P.P.

    2002-01-01

    This paper presents a method to assess the uncertainty of an ecological spatial prediction model which is based on logistic regression models, using data from the interpolation of explanatory predictor variables. The spatial predictions are presented as approximate 95% prediction intervals. The

  3. Dealing with missing predictor values when applying clinical prediction models.

    NARCIS (Netherlands)

    Janssen, K.J.; Vergouwe, Y.; Donders, A.R.T.; Harrell Jr, F.E.; Chen, Q.; Grobbee, D.E.; Moons, K.G.

    2009-01-01

    BACKGROUND: Prediction models combine patient characteristics and test results to predict the presence of a disease or the occurrence of an event in the future. In the event that test results (predictor) are unavailable, a strategy is needed to help users applying a prediction model to deal with

  4. A General Polygon-based Deformable Model for Object Recognition

    DEFF Research Database (Denmark)

    Jensen, Rune Fisker; Carstensen, Jens Michael

    1999-01-01

    We propose a general scheme for object localization and recognition based on a deformable model. The model combines shape and image properties by warping an arbitrary prototype intensity template according to the deformation in shape. The shape deformations are constrained by a probabilistic distribution, which combined with a match of the warped intensity template and the image form the final criteria used for localization and recognition of a given object. The chosen representation gives the model an ability to model an almost arbitrary object. Besides the actual model, a full general scheme...

  5. Establishment of a new initial dose plan for vancomycin using the generalized linear mixed model.

    Science.gov (United States)

    Kourogi, Yasuyuki; Ogata, Kenji; Takamura, Norito; Tokunaga, Jin; Setoguchi, Nao; Kai, Mitsuhiro; Tanaka, Emi; Chiyotanda, Susumu

    2017-04-08

    When administering vancomycin hydrochloride (VCM), the initial dose is adjusted to ensure that the steady-state trough value (Css-trough) remains within the effective concentration range. However, the Css-trough (population mean method predicted value [PMMPV]) calculated using the population mean method (PMM) often deviates from the effective concentration range. In this study, we used the generalized linear mixed model (GLMM) for initial dose planning to create a model that accurately predicts Css-trough, and subsequently assessed its prediction accuracy. The study included 46 subjects whose trough values were measured after receiving VCM. We calculated the Css-trough (Bayesian estimate predicted value [BEPV]) from the Bayesian estimates of trough values. Using the patients' medical data, we created models that predict the BEPV and selected the model with the minimum information criterion (GLMM best model). We then calculated the Css-trough (GLMMPV) from the GLMM best model and compared the BEPV correlation with GLMMPV and with PMMPV. The GLMM best model was {[0.977 + (males: 0.029 or females: -0.081)] × PMMPV + 0.101 × BUN/adjusted SCr - 12.899 × SCr adjusted amount}. The coefficients of determination for BEPV/GLMMPV and BEPV/PMMPV were 0.623 and 0.513, respectively. We demonstrated that the GLMM best model was more accurate in predicting the Css-trough than the PMM.
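
    Since the abstract states the selected model explicitly, it can be written down as a function; the precise definitions of the adjusted SCr quantities are not given there, so the argument names below are assumptions, and the example values are illustrative only.

        def glmm_css_trough(pmmpv, bun, scr_adjusted, scr_adjustment, male=True):
            """GLMM best model as stated in the abstract:
            [0.977 + (0.029 if male else -0.081)] * PMMPV
              + 0.101 * BUN / adjusted SCr - 12.899 * SCr adjustment amount."""
            sex_term = 0.029 if male else -0.081
            return ((0.977 + sex_term) * pmmpv
                    + 0.101 * bun / scr_adjusted
                    - 12.899 * scr_adjustment)

        # Illustrative values only
        print(f"{glmm_css_trough(12.0, 15.0, 0.8, 0.2, male=False):.2f} ug/mL")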

  6. Generalized Linear Models with Applications in Engineering and the Sciences

    CERN Document Server

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  7. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By applying this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models, which occupy large computation resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  8. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences exhibit a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
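
    For orientation, the whitenization differential equation behind this family of models generalizes the classical GM(1,1) equation d x^{(1)}/dt + a x^{(1)} = b by adding a linear forcing term; a common statement of the NGM(1,1,k,c) form, which we take as an assumption since the abstract does not reproduce it, is

        \frac{d x^{(1)}(t)}{dt} + a\, x^{(1)}(t) = k\, t + c,

    where x^{(1)} is the first-order accumulated (1-AGO) sequence. The k t term is what lets the model track a nonhomogeneous index trend; setting k = 0 recovers GM(1,1).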

  9. Assessing effects of variation in global climate data sets on spatial predictions from climate envelope models

    Science.gov (United States)

    Romañach, Stephanie; Watling, James I.; Fletcher, Robert J.; Speroterra, Carolina; Bucklin, David N.; Brandt, Laura A.; Pearlstine, Leonard G.; Escribano, Yesenia; Mazzotti, Frank J.

    2014-01-01

    Climate change poses new challenges for natural resource managers. Predictive modeling of species–environment relationships using climate envelope models can enhance our understanding of climate change effects on biodiversity, assist in assessment of invasion risk by exotic organisms, and inform life-history understanding of individual species. While increasing interest has focused on the role of uncertainty in future conditions on model predictions, models also may be sensitive to the initial conditions on which they are trained. Although climate envelope models are usually trained using data on contemporary climate, we lack systematic comparisons of model performance and predictions across alternative climate data sets available for model training. Here, we seek to fill that gap by comparing variability in predictions between two contemporary climate data sets to variability in spatial predictions among three alternative projections of future climate. Overall, correlations between monthly temperature and precipitation variables were very high for both contemporary and future data. Model performance varied across algorithms, but not between two alternative contemporary climate data sets. Spatial predictions varied more among alternative general-circulation models describing future climate conditions than between contemporary climate data sets. However, we did find that climate envelope models with low Cohen's kappa scores made more discrepant spatial predictions between climate data sets for the contemporary period than did models with high Cohen's kappa scores. We suggest conservation planners evaluate multiple performance metrics and be aware of the importance of differences in initial conditions for spatial predictions from climate envelope models.

  10. Circuit-wide structural and functional measures predict ventromedial prefrontal cortex fear generalization: implications for generalized anxiety disorder.

    Science.gov (United States)

    Cha, Jiook; Greenberg, Tsafrir; Carlson, Joshua M; Dedora, Daniel J; Hajcak, Greg; Mujica-Parodi, Lilianne R

    2014-03-12

    The ventromedial prefrontal cortex (vmPFC) plays a critical role in a number of evaluative processes, including risk assessment. Impaired discrimination between threat and safety is considered a hallmark of clinical anxiety. Here, we investigated the circuit-wide structural and functional mechanisms underlying vmPFC threat-safety assessment in humans. We tested patients with generalized anxiety disorder (GAD; n = 32, female) and healthy controls (n = 25, age-matched female) on a task that assessed the generalization of conditioned threat during fMRI scanning. The task consisted of seven rectangles of graded widths presented on a screen; only the midsize one was paired with mild electric shock [conditioned stimulus (CS)], while the others, safety cues, systematically varied in width by ±20, 40, and 60% [generalization stimuli (GS)] compared with the CS. We derived an index reflecting vmPFC functioning from the BOLD reactivity on a continuum of threat (CS) to safety (GS least similar to CS); patients with GAD showed less discrimination between threat and safety cues, compared with healthy controls (Greenberg et al., 2013b). Using structural, functional (i.e., resting-state), and diffusion MRI, we measured vmPFC thickness, vmPFC functional connectivity, and vmPFC structural connectivity within the corticolimbic systems. The results demonstrate that all three factors predict individual variability of vmPFC threat assessment in an independent fashion. Moreover, these neural features are also linked to GAD, most likely via vmPFC fear generalization. Our results strongly suggest that vmPFC threat processing is closely associated with broader corticolimbic circuit anomalies, which may synergistically contribute to clinical anxiety.

  11. A guide to developing resource selection functions from telemetry data using generalized estimating equations and generalized linear mixed models

    Directory of Open Access Journals (Sweden)

    Nicola Koper

    2012-03-01

    Full Text Available Resource selection functions (RSF) are often developed using satellite (ARGOS) or Global Positioning System (GPS) telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMM) and generalized estimating equations (GEE) for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross validation to assess the fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
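
    The GEE side of this comparison is straightforward to sketch with statsmodels; the data frame, covariates, and grouping column below are hypothetical stand-ins for telemetry data.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.read_csv("telemetry_rsf.csv")    # used: 1/0, one row per GPS fix

        # Population-level RSF: fixes clustered within animals are handled by an
        # exchangeable working correlation plus empirical (robust) standard errors
        gee = smf.gee("used ~ forest_cover + slope", groups="animal_id", data=df,
                      family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Exchangeable())
        print(gee.fit().summary())               # GEE reports empirical SEs by default

        # A GLMM with a random intercept per animal would instead target
        # individual-level selection, per the distinction drawn above.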

  12. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  13. A Duality Result for the Generalized Erlang Risk Model

    Directory of Open Access Journals (Sweden)

    Lanpeng Ji

    2014-11-01

    Full Text Available In this article, we consider the generalized Erlang risk model and its dual model. By using a conditional measure-preserving correspondence between the two models, we derive an identity for two interesting conditional probabilities. Applications to the discounted joint density of the surplus prior to ruin and the deficit at ruin are also discussed.

  14. Critical Comments on the General Model of Instructional Communication

    Science.gov (United States)

    Walton, Justin D.

    2014-01-01

    This essay presents a critical commentary on McCroskey et al.'s (2004) general model of instructional communication. In particular, five points are examined which make explicit and problematize the meta-theoretical assumptions of the model. Comments call attention to the limitations of the model and argue for a broader approach to…

  15. Hierarchical Generalized Linear Models for the Analysis of Judge Ratings

    Science.gov (United States)

    Muckle, Timothy J.; Karabatsos, George

    2009-01-01

    It is known that the Rasch model is a special two-level hierarchical generalized linear model (HGLM). This article demonstrates that the many-faceted Rasch model (MFRM) is also a special case of the two-level HGLM, with a random intercept representing examinee ability on a test, and fixed effects for the test items, judges, and possibly other…

  16. Comparing National Water Model Inundation Predictions with Hydrodynamic Modeling

    Science.gov (United States)

    Egbert, R. J.; Shastry, A.; Aristizabal, F.; Luo, C.

    2017-12-01

    The National Water Model (NWM) simulates the hydrologic cycle and produces streamflow forecasts, runoff, and other variables for 2.7 million reaches along the National Hydrography Dataset for the continental United States. NWM applies Muskingum-Cunge channel routing which is based on the continuity equation. However, the momentum equation also needs to be considered to obtain better estimates of streamflow and stage in rivers especially for applications such as flood inundation mapping. Simulation Program for River NeTworks (SPRNT) is a fully dynamic model for large scale river networks that solves the full nonlinear Saint-Venant equations for 1D flow and stage height in river channel networks with non-uniform bathymetry. For the current work, the steady-state version of the SPRNT model was leveraged. An evaluation of SPRNT's and NWM's abilities to predict inundation was conducted for the record flood of Hurricane Matthew in October 2016 along the Neuse River in North Carolina. This event was known to have been influenced by backwater effects from the Hurricane's storm surge. Retrospective NWM discharge predictions were converted to stage using synthetic rating curves. The stages from both models were utilized to produce flood inundation maps using the Height Above Nearest Drainage (HAND) method which uses the local relative heights to provide a spatial representation of inundation depths. In order to validate the inundation produced by the models, Sentinel-1A synthetic aperture radar data in the VV and VH polarizations along with auxiliary data was used to produce a reference inundation map. A preliminary, binary comparison of the inundation maps to the reference, limited to the five HUC-12 areas of Goldsboro, NC, showed that the flood inundation accuracies for NWM and SPRNT were 74.68% and 78.37%, respectively. The differences for all the relevant test statistics including accuracy, true positive rate, true negative rate, and positive predictive value were found

  17. Generalized height-diameter models for Populus tremula L. stands

    African Journals Online (AJOL)

    USER

    2010-07-12

    Using permanent sample plot data, selected tree height and diameter functions were evaluated for their predictive abilities for Populus tremula stands in Turkey. Two sets of models were evaluated. The first set included five models for estimating height as a function of individual tree diameter; the second set.

  18. Thermospheric tides simulated by the national center for atmospheric research thermosphere-ionosphere general circulation model at equinox

    International Nuclear Information System (INIS)

    Fesen, C.G.; Roble, R.G.; Ridley, E.C.

    1993-01-01

    The authors use the National Center for Atmospheric Research (NCAR) thermosphere/ionosphere general circulation model (TIGCM) to model tides and dynamics in the thermosphere. This model incorporates the latest advances in the thermosphere general circulation model. Model results emphasized the 70 degree W longitude region to overlap a series of incoherent scatter radar installations. Both the data and the model are available in databases. The results of this theoretical modeling are compared with available data and with predictions of more empirical models. In general there is broad agreement in the comparisons

  19. A PROPOSAL FOR GENERALIZATION OF 3D MODELS

    Directory of Open Access Journals (Sweden)

    A. Uyar

    2017-11-01

    Full Text Available In recent years, 3D models have been created of many cities around the world. Most 3D city models have been introduced as purely graphic or geometric models, while the semantic and topographic aspects of the models have been neglected. In order to use 3D city models beyond a single task, generalization is necessary. CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models. Level of Detail (LoD), an important concept for 3D modelling, can be defined as the degree of abstraction at which real-world objects are represented. The paper first describes some requirements of 3D model generalization, then presents problems and approaches that have been developed in recent years, and concludes with a summary and an outlook on problems and future work.

  20. A Proposal for Generalization of 3D Models

    Science.gov (United States)

    Uyar, A.; Ulugtekin, N. N.

    2017-11-01

    In recent years, 3D models have been created of many cities around the world. Most 3D city models have been introduced as purely graphic or geometric models, while the semantic and topographic aspects of the models have been neglected. In order to use 3D city models beyond a single task, generalization is necessary. CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models. Level of Details (LoD), an important concept for 3D modelling, can be defined as the degree of abstraction at which real-world objects are represented. The paper first describes some requirements of 3D model generalization, then presents problems and approaches that have been developed in recent years, and concludes with a summary and an outlook on problems and future work.

  1. Morphometry Predicts Early GFR Change in Primary Proteinuric Glomerulopathies: A Longitudinal Cohort Study Using Generalized Estimating Equations.

    Directory of Open Access Journals (Sweden)

    Kevin V Lemley

    Full Text Available Most predictive models of kidney disease progression have not incorporated structural data. If structural variables have been used in models, they have generally been only semi-quantitative. We examined the predictive utility of quantitative structural parameters measured on the digital images of baseline kidney biopsies from the NEPTUNE study of primary proteinuric glomerulopathies. These variables were included in longitudinal statistical models predicting the change in estimated glomerular filtration rate (eGFR) over up to 55 months of follow-up. The participants were fifty-six pediatric and adult subjects from the NEPTUNE longitudinal cohort study who had measurements made on their digital biopsy images; 25% were African-American, 70% were male and 39% were children; 25 had focal segmental glomerular sclerosis, 19 had minimal change disease, and 12 had membranous nephropathy. We considered four different sets of candidate predictors, each including four quantitative structural variables (for example, mean glomerular tuft area, cortical density of patent glomeruli, and two of the principal components from the correlation matrix of six fractional cortical areas: interstitium, atrophic tubule, intact tubule, blood vessel, sclerotic glomerulus, and patent glomerulus) along with 13 potentially confounding demographic and clinical variables (such as race, age, diagnosis, baseline eGFR, quantitative proteinuria, and BMI). We used longitudinal linear models based on these 17 variables to predict the change in eGFR over up to 55 months. All 4 models had a leave-one-out cross-validated R² of about 62%. Several combinations of quantitative structural variables were significantly and strongly associated with changes in eGFR. The structural variables were generally stronger than any of the confounding variables, other than baseline eGFR. Our findings suggest that quantitative assessment of diagnostic renal biopsies may play a role in estimating the baseline
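
    Because the title names generalized estimating equations, a brief sketch of fitting such a longitudinal model may help; it uses statsmodels' GEE with an exchangeable working correlation on synthetic data, and every variable (tuft_area, baseline_egfr, the effect sizes) is a hypothetical stand-in rather than the NEPTUNE data or the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic longitudinal data: repeated eGFR measurements per subject.
rng = np.random.default_rng(1)
n_subjects, n_visits = 56, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_visits),
    "months": np.tile(np.linspace(0, 55, n_visits), n_subjects),
    "tuft_area": np.repeat(rng.normal(0, 1, n_subjects), n_visits),
    "baseline_egfr": np.repeat(rng.normal(90, 20, n_subjects), n_visits),
})
df["egfr"] = (df["baseline_egfr"] - 0.3 * df["months"]
              - 0.2 * df["months"] * df["tuft_area"]
              + rng.normal(0, 5, len(df)))

# An exchangeable working correlation accounts for within-subject
# dependence; the months:tuft_area interaction captures a structural
# effect on the rate of eGFR change.
model = sm.GEE.from_formula(
    "egfr ~ months * tuft_area + baseline_egfr",
    groups="subject", data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian())
print(model.fit().summary())
```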

  2. Significance of predictive models/risk calculators for HBV-related hepatocellular carcinoma

    Directory of Open Access Journals (Sweden)

    DONG Jing

    2015-06-01

    Full Text Available Hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) is a major public health problem in Southeast Asia. In recent years, researchers from Hong Kong and Taiwan have reported predictive models or risk calculators for HBV-associated HCC by studying its natural history, which, to some extent, predicts the possibility of HCC development. Generally, risk factors of each model involve age, sex, HBV DNA level, and liver cirrhosis. This article discusses the evolution and clinical significance of currently used predictive models for HBV-associated HCC and assesses the advantages and limits of risk calculators. The updated REACH-B model and the LSM-HCC model show better negative predictive values and have better performance in predicting the outcomes of patients with chronic hepatitis B (CHB). These models can be applied to stratified screening of HCC and, meanwhile, serve as an assessment tool for the management of CHB patients.

  3. Predictive models for moving contact line flows

    Science.gov (United States)

    Rame, Enrique; Garoff, Stephen

    2003-01-01

    Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of Newtonian fluid and no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact angle condition to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible 2-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low Capillary number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca ≪ 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows this parameter to be "transferable" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately this prediction of the theory cannot be tested on Earth.

  4. Hot Temperatures, Hostile Affect, Hostile Cognition, and Arousal: Tests of a General Model of Affective Aggression.

    Science.gov (United States)

    Anderson, Craig A.; And Others

    1995-01-01

    Used a general model of affective aggression to generate predictions concerning hot temperatures. Results indicated that hot temperatures produced increases in hostile affect, hostile cognition, and physiological arousal. Concluded that hostile affect, hostile cognitions, and excitation transfer processes may all increase the likelihood of biased…

  5. Developmental prediction model for early alcohol initiation in Dutch adolescents

    NARCIS (Netherlands)

    Geels, L.M.; Vink, J.M.; Beijsterveldt, C.E.M. van; Bartels, M.; Boomsma, D.I.

    2013-01-01

    Objective: Multiple factors predict early alcohol initiation in teenagers. Among these are genetic risk factors, childhood behavioral problems, life events, lifestyle, and family environment. We constructed a developmental prediction model for alcohol initiation below the Dutch legal drinking age

  6. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
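
    For readers unfamiliar with the conjunctive structure discussed above, the following sketch computes DINA response probabilities for a single examinee; the Q-matrix, skill profile and guess/slip values are invented for illustration, and the compensatory re-expression proven in the paper is not reproduced here.

```python
import numpy as np

def dina_prob_correct(alpha, q, guess, slip):
    """P(correct response) for each item under the DINA model.

    alpha : (K,) binary skill profile of one examinee
    q     : (J, K) binary Q-matrix (item j requires skill k if q[j, k] = 1)
    guess : (J,) guessing parameters
    slip  : (J,) slip parameters
    The latent response is 1 only if ALL required skills are mastered:
    the conjunctive 'noisy-AND' condition.
    """
    eta = np.all(alpha >= q, axis=1)
    return np.where(eta, 1.0 - slip, guess)

q = np.array([[1, 0], [0, 1], [1, 1]])   # 3 items, 2 skills
alpha = np.array([1, 0])                 # masters skill 1 only
guess = np.array([0.2, 0.2, 0.1])
slip = np.array([0.1, 0.1, 0.2])
print(dina_prob_correct(alpha, q, guess, slip))   # -> [0.9 0.2 0.1]
```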

  7. Graphical tools for model selection in generalized linear models.

    Science.gov (United States)

    Murray, K; Heritier, S; Müller, S

    2013-11-10

    Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are sparse. This article describes graphical methods that assist in the selection of models and the comparison of many different selection criteria. Specifically, we describe, for logistic regression, how to visualise measures of description loss and of model complexity to help resolve the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these plots can be invaluable for the model building process. We show with two case studies how the proposed tools are useful for learning more about the important variables in the data and how they can assist the understanding of the model building process. Copyright © 2013 John Wiley & Sons, Ltd.
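
    A minimal sketch of the bootstrap side of this idea is given below: refit a sparse logistic regression on many resamples and record how often each variable survives. The resulting inclusion proportions are the raw material of a variable inclusion plot; the data, penalty choice and resample count are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# Synthetic data: 3 informative variables among 8 candidates.
X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           n_redundant=0, random_state=0)

n_boot = 200
inclusion = np.zeros(X.shape[1])
for b in range(n_boot):
    Xb, yb = resample(X, y, random_state=b)
    # An L1-penalised fit acts as the selection step on each resample.
    fit = LogisticRegression(penalty="l1", solver="liblinear",
                             C=0.1).fit(Xb, yb)
    inclusion += (fit.coef_.ravel() != 0)

for j, p in enumerate(inclusion / n_boot):
    print(f"x{j}: selected in {p:.0%} of bootstrap resamples")
```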

  8. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available The article considers issues of traffic management using an intelligent "Car-Road" system (IVHS), which consists of interacting intelligent vehicles (IVs) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them. All vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for traffic on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the downtime of vehicles in traffic.
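
    The receding-horizon logic behind such a controller can be sketched in a few lines: predict traffic over a short horizon for every candidate sequence of speed limits, then post only the first limit of the best sequence. The one-segment density model, its constants and the cost (total predicted density as a proxy for time spent) are invented for illustration and are far simpler than an IVHS-scale model.

```python
import itertools
import numpy as np

# Toy single-segment traffic model: density rho (veh/km) responds to a
# fixed inflow demand and an outflow that grows with the posted speed
# limit v (km/h) but collapses as density approaches jam conditions.
def step(rho, v, demand, rho_crit=40.0, length=1.0, dt=0.05):
    outflow = v * rho * max(0.0, 1.0 - rho / (2.0 * rho_crit))  # veh/h
    return rho + dt * (demand - outflow) / length

def mpc_speed_limit(rho0, demand, horizon=3, limits=(60.0, 80.0, 100.0)):
    """Enumerate speed-limit sequences over the horizon, simulate the
    model forward, and return the first limit of the cheapest sequence."""
    best_cost, best_first = np.inf, limits[-1]
    for seq in itertools.product(limits, repeat=horizon):
        rho, cost = rho0, 0.0
        for v in seq:
            rho = step(rho, v, demand)
            cost += rho
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first  # receding horizon: apply only the first decision

print("posted limit:", mpc_speed_limit(rho0=55.0, demand=1500.0), "km/h")
```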

  9. General classical solutions in the noncommutative CP^(N-1) model

    International Nuclear Information System (INIS)

    Foda, O.; Jack, I.; Jones, D.R.T.

    2002-01-01

    We give an explicit construction of general classical solutions for the noncommutative CP^(N-1) model in two dimensions, showing that they correspond to integer values for the action and topological charge. We also give explicit solutions for the Dirac equation in the background of these general solutions and show that the index theorem is satisfied

  10. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  11. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  12. Trend modelling of wave parameters and application in onboard prediction of ship responses

    DEFF Research Database (Denmark)

    Montazeri, Najmeh; Nielsen, Ulrik Dam; Jensen, J. Juncher

    2015-01-01

    This paper presents a trend analysis for prediction of sea state parameters onboard ships during voyages. Given those parameters, a JONSWAP model and the transfer functions, predictions of wave-induced ship responses are then made. The procedure is tested with full-scale data of an in-service...... container ship. Comparison between predictions and the actual measurements implies a good agreement in general. This method can be an efficient way to improve decision support on board ships....

  13. Specific and General Human Capital in an Endogenous Growth Model

    OpenAIRE

    Evangelia Vourvachaki; Vahagn Jerbashian; Sergey Slobodyan

    2014-01-01

    In this article, we define specific (general) human capital in terms of the occupations whose use is spread in a limited (wide) set of industries. We analyze the growth impact of an economy's composition of specific and general human capital, in a model where education and research and development are costly and complementary activities. The model suggests that a declining share of specific human capital, as observed in the Czech Republic, can be associated with a lower rate of long-term grow...

  14. R(D(*)) in a general two Higgs doublet model

    Science.gov (United States)

    Iguro, Syuhei; Tobe, Kazuhiro

    2017-12-01

    Motivated by an anomaly in R(D^(*)) = BR(B̄ → D^(*)τ⁻ν̄)/BR(B̄ → D^(*)ℓ⁻ν̄) reported by BaBar, Belle and LHCb, we study R(D^(*)) in a general two Higgs doublet model (2HDM). Although it has been suggested that it is difficult for the 2HDM to explain the current world average for R(D^(*)), it would be important to clarify how large deviations from the standard model predictions for R(D^(*)) are possible in the 2HDM. We investigate possible corrections to R(D^(*)) in the 2HDM by taking into account various flavor physics constraints (such as B_c⁻ → τ⁻ν̄, b → sγ, b → sℓ⁺ℓ⁻, Δm_{Bd,s}, B_s → μ⁺μ⁻ and τ⁺τ⁻, and B⁻ → τ⁻ν̄), and find that it would be possible (impossible) to accommodate the 1σ region suggested by Belle's result when we adopt the constraint BR(B_c⁻ → τ⁻ν̄) ≤ 30% (BR(B_c⁻ → τ⁻ν̄) ≤ 10%). We also study productions and decays of heavy neutral and charged Higgs bosons at the Large Hadron Collider (LHC) experiment and discuss the constraints and implications at the LHC. We show that in addition to the well-studied production modes bg → tH⁻ and gg → H/A, exotic productions of heavy Higgs bosons such as cg → bH⁺, t + H/A and cb̄ → H⁺ would be significantly large, and the search for their exotic decay modes such as H/A → tc̄ + ct̄, μ±τ∓ and H⁺ → cb̄, as well as H/A → τ⁺τ⁻ and H⁺ → τ⁺ν, would be important to probe the interesting parameter regions for R(D^(*)).

  15. Pricing Participating Products under a Generalized Jump-Diffusion Model

    Directory of Open Access Journals (Sweden)

    Tak Kuen Siu

    2008-01-01

    Full Text Available We propose a model for valuing participating life insurance products under a generalized jump-diffusion model with a Markov-switching compensator. It also nests a number of important and popular models in finance, including the classes of jump-diffusion models and Markovian regime-switching models. The Esscher transform is employed to determine an equivalent martingale measure. Simulation experiments are conducted to illustrate the practical implementation of the model and to highlight some features that can be obtained from our model.

  16. Generalized continua as models for classical and advanced materials

    CERN Document Server

    Forest, Samuel

    2016-01-01

    This volume is devoted to a topical subject that is currently the focus of various research groups worldwide. It contains contributions describing material behavior on different scales, new existence and uniqueness theorems, and the formulation of constitutive equations for advanced materials. The main emphasis of the contributions is on the following items: modelling and simulation of natural and artificial materials with significant microstructure; generalized continua as a result of multi-scale models; multi-field actions on materials resulting in generalized material models; theories including higher gradients; and comparison with discrete modelling approaches.

  17. Extending the generalized Chaplygin gas model by using geometrothermodynamics

    Science.gov (United States)

    Aviles, Alejandro; Bastarrachea-Almodovar, Aztlán; Campuzano, Lorena; Quevedo, Hernando

    2012-09-01

    We use the formalism of geometrothermodynamics to derive fundamental thermodynamic equations that are used to construct general relativistic cosmological models. In particular, we show that the simplest possible fundamental equation, which corresponds in geometrothermodynamics to a system with no internal thermodynamic interaction, describes the different fluids of the standard model of cosmology. In addition, a particular fundamental equation with internal thermodynamic interaction is shown to generate a new cosmological model that correctly describes the dark sector of the Universe and contains as a special case the generalized Chaplygin gas model.

  18. A computational model that predicts behavioral sensitivity to intracortical microstimulation

    Science.gov (United States)

    Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.

    2017-02-01

    Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber’s law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.

  19. Modeling the frequency of opposing left-turn conflicts at signalized intersections using generalized linear regression models.

    Science.gov (United States)

    Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei

    2014-01-01

    The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of the different models was compared. The frequency of traffic conflicts follows a negative binomial distribution. The linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions. Accordingly, the effects of conflicting traffic volumes on conflict frequency vary across different traffic conditions. The occurrences of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
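
    As a sketch of the modelling approach the record describes, the snippet below fits a negative binomial GLM with a log link to synthetic conflict counts; the coefficients, volume ranges and dispersion value are invented, not estimates from the study's 20 intersections.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: conflict counts vs. left-turn and opposing volumes.
rng = np.random.default_rng(2)
n = 200
left_turn = rng.uniform(50, 400, n)     # veh/h
opposing = rng.uniform(200, 1200, n)    # veh/h
mu = np.exp(-6.0 + 0.004 * left_turn + 0.002 * opposing)
# Negative binomial counts with mean mu (p chosen so E[y] = mu).
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))

X = sm.add_constant(np.column_stack([left_turn, opposing]))
# Negative binomial GLM, as used for conflict frequency above.
res = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(res.summary())
```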

  20. Predicting Children's Reading and Mathematics Achievement from Early Quantitative Knowledge and Domain-General Cognitive Abilities

    Science.gov (United States)

    Chu, Felicia W.; vanMarle, Kristy; Geary, David C.

    2016-01-01

    One hundred children (44 boys) participated in a 3-year longitudinal study of the development of basic quantitative competencies and the relation between these competencies and later mathematics and reading achievement. The children's preliteracy knowledge, intelligence, executive functions, and parental educational background were also assessed. The quantitative tasks assessed a broad range of symbolic and nonsymbolic knowledge and were administered four times across 2 years of preschool. Mathematics achievement was assessed at the end of each of 2 years of preschool, and mathematics and word reading achievement were assessed at the end of kindergarten. Our goals were to determine how domain-general abilities contribute to growth in children's quantitative knowledge and to determine how domain-general and domain-specific abilities contribute to children's preschool mathematics achievement and kindergarten mathematics and reading achievement. We first identified four core quantitative competencies (e.g., knowledge of the cardinal value of number words) that predict later mathematics achievement. The domain-general abilities were then used to predict growth in these competencies across 2 years of preschool, and the combination of domain-general abilities, preliteracy skills, and core quantitative competencies were used to predict mathematics achievement across preschool and mathematics and word reading achievement at the end of kindergarten. Both intelligence and executive functions predicted growth in the four quantitative competencies, especially across the first year of preschool. A combination of domain-general and domain-specific competencies predicted preschoolers' mathematics achievement, with a trend for domain-specific skills to be more strongly related to achievement at the beginning of preschool than at the end of preschool. Preschool preliteracy skills, sensitivity to the relative quantities of collections of objects, and cardinal knowledge predicted

  1. Predictability in models of the atmospheric circulation

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error

  2. A neural network model for predicting postures during non-repetitive manual materials handling tasks.

    Science.gov (United States)

    Perez, Miguel A; Nussbaum, Maury A

    2008-10-01

    Posture prediction can be useful in facilitating the design and evaluation processes for manual materials handling tasks. This study evaluates the ability of artificial neural network models to predict initial and final lifting postures in 2-D and 3-D scenarios. Descriptors for the participant and condition of interest were input to the models; outputs consisted of posture-defining joint angles. Models were trained with subsets of an existing posture database before predictions were generated. The trained models' predictions were then evaluated using the remaining data, which included conditions not presented during training. Prediction errors were consistent across these data subsets, suggesting the models generalised well to novel conditions. The models generally predicted whole-body postures with per-joint errors in the 5°-20° range, though some errors were larger, particularly for 3-D conditions. These models provided reasonably accurate predictions, even outperforming some computational approaches previously proposed for similar purposes. Suggestions for future refinement of such models are presented. The models in this investigation provide a means to predict initial and final postures in commonly occurring manual materials handling tasks. In addition, the model structures provide information about potential lifting strategies that may be used by individuals with particular anthropometry or strength characteristics.
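
    The basic structure of such a model, descriptors in and joint angles out, can be sketched with a small multi-output network; the anthropometric inputs, toy joint-angle relationships and network size below are invented stand-ins, not the study's posture database or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: inputs describe the person and the task.
rng = np.random.default_rng(3)
n = 500
X = np.column_stack([
    rng.uniform(1.5, 1.95, n),   # stature (m)
    rng.uniform(50, 110, n),     # body mass (kg)
    rng.uniform(0.0, 0.8, n),    # load height (m)
    rng.uniform(0, 20, n),       # load mass (kg)
])
# Three toy joint angles (deg) with noise: trunk, knee, elbow flexion.
Y = np.column_stack([
    90 - 80 * X[:, 2] + rng.normal(0, 5, n),
    100 - 90 * X[:, 2] + rng.normal(0, 5, n),
    30 + 1.5 * X[:, 3] + rng.normal(0, 5, n),
])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                   random_state=0).fit(X_tr, Y_tr)
errors = np.abs(net.predict(X_te) - Y_te)
print("mean absolute joint-angle error (deg):",
      errors.mean(axis=0).round(1))
```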

  3. A Semi-Tychonic Model in General Relativity

    Science.gov (United States)

    Murphy, George L.

    1998-10-01

    In the sixteenth century Tycho Brahe proposed a geocentric model of the solar system kinematically equivalent to the heliocentric Copernican model. There has been disagreement even among prominent relativists over whether or not relativity validates use of a geocentric model. Tycho's desire for a non-rotating earth cannot be satisfied, but we demonstrate here dynamical equivalence between a Copernican and a "semi-Tychonic" model by using an appropriate accelerated reference frame in general relativity. (The idea of absolute space in Newtonian mechanics makes use of Einstein's theory desirable even in the Newtonian approximation.) Optical questions are easily dealt with. Our treatment provides a satisfactory answer for the important historical question concerning geocentric and heliocentric models, and is also of pedagogic value. In addition, it gives insights into the real generality of general relativity, the nature of the relativistic equations of motion, and the analogy between coordinate and gauge transformations.

  4. Predictive power of task orientation, general self-efficacy and self-determined motivation on fun and boredom

    Directory of Open Access Journals (Sweden)

    Lorena Ruiz-González

    2015-12-01

    Full Text Available The aim of this study was to test the predictive power of dispositional orientations, general self-efficacy and self-determined motivation on fun and boredom in physical education classes, with a sample of 459 adolescents between 13 and 18 years of age, with a mean age of 15 years (SD = 0.88). The adolescents responded to four Likert scales: Perceptions of Success Questionnaire, General Self-Efficacy Scale, Sport Motivation Scale and Intrinsic Satisfaction Questionnaire in Sport. The structural regression model showed that task orientation and general self-efficacy positively predicted self-determined motivation, and this in turn predicted more fun and less boredom in physical education classes. Consequently, promoting a task-oriented educational environment in which learners perceive their progress and feel more competent will allow them to approach tasks with intrinsic motivation and therefore to have more fun. Pedagogical implications for less boredom and more fun in physical education classes are discussed.

  5. SIMULATION AND PREDICTION OF THE PROCESS BASED ON THE GENERAL LOGISTIC MAPPING

    Directory of Open Access Journals (Sweden)

    V. V. Skalozub

    2013-11-01

    Full Text Available Purpose. The aim of the research is to build a model of the generalized logistic mapping and to assess the possibilities of its use for forming mathematical descriptions, as well as operational forecasts, of the parameters of complex dynamic processes described by time series. Methodology. The research results are obtained on the basis of mathematical modeling and simulation of nonlinear systems using the tools of chaotic dynamics. Findings. A model of the generalized logistic mapping, which is used to interpret the characteristics of dynamic processes, was proposed. We consider some examples of representations of processes based on the enhanced logistic mapping, varying the values of the model parameters. Procedures were proposed for modeling and interpreting data on the investigated processes, represented by time series, as well as for operational forecasting of parameters using the generalized model of the logistic mapping. Originality. The paper proposes an improved mathematical model, the generalized logistic mapping, designed for the study of nonlinear discrete dynamic processes. Practical value. The research carried out using the generalized logistic mapping on railway transport processes, in particular for the assessment of traffic volume parameters, indicates its great potential for practical application in solving problems of analysis, modeling and forecasting of complex nonlinear discrete dynamical processes. The proposed model can be used under conditions of uncertainty, irregularity and manifestations of chaotic behavior in technical, economic and other processes, including railway ones.
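
    As a concrete illustration of the underlying idea, the sketch below iterates the classical logistic map, recovers its parameter from an observed series by least squares, and issues a one-step operational forecast; the article's generalized mapping carries additional parameters that are not reproduced here.

```python
import numpy as np

def logistic_map(r, x0, n):
    """Iterate x_{k+1} = r * x_k * (1 - x_k)."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = r * x[k] * (1 - x[k])
    return x

# 'Observed' series generated from an unknown parameter r_true = 3.7.
series = logistic_map(r=3.7, x0=0.42, n=200)

# Least-squares estimate of r from consecutive pairs:
# x_{k+1} = r * u_k with u_k = x_k * (1 - x_k).
u = series[:-1] * (1 - series[:-1])
r_hat = np.dot(u, series[1:]) / np.dot(u, u)
print("estimated r:", round(r_hat, 4))

# One-step-ahead forecast from the last observed value.
x_next = r_hat * series[-1] * (1 - series[-1])
print("one-step forecast:", round(x_next, 4))
```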

  6. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  7. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  8. Prediction Activities at NASA's Global Modeling and Assimilation Office

    Science.gov (United States)

    Schubert, Siegfried

    2010-01-01

    The Global Modeling and Assimilation Office (GMAO) is a core NASA resource for the development and use of satellite observations through the integrating tools of models and assimilation systems. Global ocean, atmosphere and land surface models are developed as components of assimilation and forecast systems that are used for addressing the weather and climate research questions identified in NASA's science mission. In fact, the GMAO is actively engaged in addressing one of NASA's science mission's key questions concerning how well transient climate variations can be understood and predicted. At weather time scales the GMAO is developing ultra-high resolution global climate models capable of resolving high impact weather systems such as hurricanes. The ability to resolve the detailed characteristics of weather systems within a global framework greatly facilitates addressing fundamental questions concerning the link between weather and climate variability. At sub-seasonal time scales, the GMAO is engaged in research and development to improve the use of land information (especially soil moisture), and in the improved representation and initialization of various sub-seasonal atmospheric variability (such as the MJO) that evolves on time scales longer than weather and involves exchanges with both the land and ocean. The GMAO has a long history of development for advancing the seasonal-to-interannual (S-I) prediction problem using an older version of the coupled atmosphere-ocean general circulation model (AOGCM). This includes the development of an Ensemble Kalman Filter (EnKF) to facilitate the multivariate assimilation of ocean surface altimetry, and an EnKF developed for the highly inhomogeneous nature of the errors in land surface models, as well as the multivariate assimilation needed to take advantage of surface soil moisture and snow observations. The importance of decadal variability, especially that associated with long-term droughts is well recognized by the

  9. General and religious coping predict drinking outcomes for alcohol dependent adults in treatment.

    Science.gov (United States)

    Martin, Rosemarie A; Ellingsen, Victor J; Tzilos, Golfo K; Rohsenow, Damaris J

    2015-04-01

    Religiosity is associated with improved treatment outcomes among adults with alcohol dependence; however, it is unknown whether religious coping predicts drinking outcomes above and beyond the effects of coping in general, and whether gender differences exist. We assessed 116 alcohol-dependent adults (53% women; mean age = 37, SD = 8.6) for use of religious coping, general coping, and alcohol use within 2 weeks of entering outpatient treatment, and again 6 months after treatment. Religious coping at 6 months predicted fewer heavy alcohol use days and fewer drinks per day. This relationship was no longer significant after controlling for general coping at 6 months. The relationship between the use of religious coping strategies and drinking outcomes is not independent of general coping. Coping skills training that includes religious coping skills, as one of several coping methods, may be useful for a subset of adults early in recovery. This novel, prospective study assessed the relationship between religious coping strategies, general coping, and treatment outcomes for alcohol-dependent adults in treatment with results suggesting that the use of religious coping as one of several coping methods may be useful for a subset of adults early in recovery. © American Academy of Addiction Psychiatry.

  10. A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments

    Science.gov (United States)

    Gokhale, Sharad; Khare, Mukesh

    Several deterministic-based air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, the statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by the extremes and by the sustained average concentrations of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is one of the techniques that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at one of the traffic intersections, Income Tax Office (ITO), in Delhi, where the traffic is heterogeneous in nature and the meteorology is 'tropical'. The model combines the general finite line source model (GFLSM) as its deterministic component, and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, i.e. the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d = 0.91) in the 10-95 percentile range. A regulatory compliance procedure is also developed to estimate the probability that hourly CO concentrations exceed the National Ambient Air Quality Standards (NAAQS) of India. It consists of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two
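
    The statistical half of such a hybrid can be sketched with scipy's log-logistic distribution (called 'fisk' there): fit it to a sample of hourly CO concentrations and read off percentiles and exceedance probabilities. The synthetic sample and the 4 ppm threshold below are illustrative assumptions, not GFLSM output or the NAAQS value.

```python
import numpy as np
from scipy import stats

# Synthetic heavy-tailed sample standing in for hourly CO (ppm).
rng = np.random.default_rng(4)
co = stats.fisk.rvs(c=3.0, scale=2.5, size=2000, random_state=rng)

# Fit the log-logistic distribution, the statistical component of the
# hybrid model; loc is pinned at 0 since concentrations are positive.
c, loc, scale = stats.fisk.fit(co, floc=0)

# Regulatory-compliance style output: exceedance probability for an
# illustrative threshold.
threshold = 4.0
print("P(CO > threshold) =",
      round(stats.fisk.sf(threshold, c, loc, scale), 3))

# Percentile check across the range used in the model evaluation above.
for q in (0.05, 0.5, 0.95, 0.99):
    print(f"{q:.0%} percentile:",
          round(stats.fisk.ppf(q, c, loc, scale), 2))
```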

  11. To predict the niche, model colonization and extinction

    Science.gov (United States)

    Yackulic, Charles B.; Nichols, James D.; Reid, Janice; Der, Ricky

    2015-01-01

    Ecologists frequently try to predict the future geographic distributions of species. Most studies assume that the current distribution of a species reflects its environmental requirements (i.e., the species' niche). However, the current distributions of many species are unlikely to be at equilibrium with the current distribution of environmental conditions, both because of ongoing invasions and because the distribution of suitable environmental conditions is always changing. This mismatch between the equilibrium assumptions inherent in many analyses and the disequilibrium conditions in the real world leads to inaccurate predictions of species' geographic distributions and suggests the need for theory and analytical tools that avoid equilibrium assumptions. Here, we develop a general theory of environmental associations during periods of transient dynamics. We show that time-invariant relationships between environmental conditions and rates of local colonization and extinction can produce substantial temporal variation in occupancy–environment relationships. We then estimate occupancy–environment relationships during three avian invasions. Changes in occupancy–environment relationships over time differ among species but are predicted by dynamic occupancy models. Since estimates of the occupancy–environment relationships themselves are frequently poor predictors of future occupancy patterns, research should increasingly focus on characterizing how rates of local colonization and extinction vary with environmental conditions.
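
    A toy simulation of this argument, under assumed colonization and extinction functions (the logistic forms and rate constants are invented), shows how time-invariant vital-rate relationships still produce occupancy-environment relationships that shift during the transient:

```python
import numpy as np

# Colonization rises with the environmental variable E; extinction falls.
def gamma(E):
    return 0.4 / (1.0 + np.exp(-3.0 * E))

def eps(E):
    return 0.3 / (1.0 + np.exp(3.0 * E))

rng = np.random.default_rng(5)
E = rng.normal(0, 1, 500)        # environments of 500 sites
psi = np.full(500, 0.05)         # low occupancy early in an invasion

# Occupancy updates follow psi' = psi*(1 - eps) + (1 - psi)*gamma.
# The rates never change, yet the occupancy-environment correlation
# drifts as the system relaxes toward equilibrium.
for t in range(30):
    psi = psi * (1.0 - eps(E)) + (1.0 - psi) * gamma(E)
    if t in (0, 4, 29):
        print(f"t={t + 1:2d}: corr(psi, E) = "
              f"{np.corrcoef(psi, E)[0, 1]:.3f}")
```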

  12. Development of a General Form CO2 and Brine Flux Input Model

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.
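
    The workflow, train a cheap surrogate on a few expensive runs and then Monte Carlo over the surrogate, can be sketched as below; the analytic 'expensive model', its parameter range and the polynomial degree are hypothetical stand-ins for the wellbore simulation and the actual ROM form.

```python
import numpy as np

# Stand-in for a computationally expensive process model: leakage flux
# as a nonlinear function of wellbore permeability (log10 of m^2).
def expensive_model(log_perm):
    return (1e-3 * np.exp(1.8 * (log_perm + 12.0))
            / (1.0 + np.exp(log_perm + 10.5)))

# Build the reduced-order model from a handful of expensive runs.
design = np.linspace(-14.0, -9.0, 15)    # training design points
runs = expensive_model(design)           # 15 'expensive' evaluations
rom = np.polynomial.Polynomial.fit(design, np.log(runs), deg=4)

# Monte Carlo over parameter uncertainty is now cheap: 100k ROM calls.
rng = np.random.default_rng(6)
samples = rng.uniform(-14.0, -9.0, 100_000)
flux = np.exp(rom(samples))
print("mean flux:", flux.mean(),
      " 95th percentile:", np.quantile(flux, 0.95))
```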

  13. Generalized entropy formalism and a new holographic dark energy model

    Science.gov (United States)

    Sayahian Jahromi, A.; Moosavi, S. A.; Moradpour, H.; Morais Graça, J. P.; Lobo, I. P.; Salako, I. G.; Jawad, A.

    2018-05-01

    Recently, the Rényi and Tsallis generalized entropies have extensively been used in order to study various cosmological and gravitational setups. Here, using a special type of generalized entropy, a generalization of both the Rényi and Tsallis entropy, together with the holographic principle, we build a new model for holographic dark energy. Thereafter, considering a flat FRW universe filled by a pressureless component and the newly obtained dark energy model, the evolution of the cosmos is investigated, showing satisfactory results and behavior. In our model, the Hubble horizon plays the role of the IR cutoff, and there is no mutual interaction between the cosmos components. Our results indicate that the generalized entropy formalism may open a new window to become more familiar with the nature of spacetime and its properties.

  14. A generalized model via random walks for information filtering

    Science.gov (United States)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-08-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and even their numerous extensions. Furthermore, we analyze the generalized model with single and hybrid degree information in the random walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to achieve promising recommendation precision.
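
    A minimal sketch of the two-step bipartite random walk these models build on (often called mass diffusion or ProbS) follows; the tiny adjacency matrix is invented, and the degree-hybridisation strategy proposed in the paper is not implemented here.

```python
import numpy as np

# User-object bipartite adjacency: a[u, o] = 1 if user u collected object o.
a = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 0, 1]], dtype=float)

k_user = a.sum(axis=1)   # user degrees
k_obj = a.sum(axis=0)    # object degrees

def probs_scores(a, user):
    """Two-step random walk: resources start on the target user's
    objects, spread to users, then return to objects."""
    resource = a[user].copy()
    to_users = a @ (resource / k_obj)     # objects -> users
    scores = a.T @ (to_users / k_user)    # users -> objects
    scores[a[user] == 1] = 0              # do not re-recommend owned items
    return scores

print("recommendation scores for user 0:", probs_scores(a, 0).round(3))
```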

  15. Incorporating shape constraints in generalized additive modelling of the height-diameter relationship for Norway spruce

    Directory of Open Access Journals (Sweden)

    Natalya Pya

    2016-02-01

    Full Text Available Background: Measurements of tree heights and diameters are essential in forest assessment and modelling. Tree heights are used for estimating timber volume, site index and other important variables related to forest growth and yield, succession and carbon budget models. However, the diameter at breast height (dbh) can be obtained more accurately, and at lower cost, than total tree height. Hence, generalized height-diameter (h-d) models that predict tree height from dbh, age and other covariates are needed. For a more flexible but biologically plausible estimation of covariate effects we use shape constrained generalized additive models as an extension of existing h-d model approaches. We use causal site parameters such as index of aridity to enhance the generality and causality of the models and to enable predictions under projected changeable climatic conditions. Methods: We develop unconstrained generalized additive models (GAM) and shape constrained generalized additive models (SCAM) for investigating the possible effects of tree-specific parameters such as tree age and relative diameter at breast height, and site-specific parameters such as index of aridity and sum of daily mean temperature during the vegetation period, on the h-d relationship of forests in Lower Saxony, Germany. Results: Some of the derived effects, e.g. the effects of age, index of aridity and sum of daily mean temperature, show significantly non-linear patterns. The need for using SCAM results from the fact that some of the model effects show partially implausible patterns, especially at the boundaries of the data ranges. The derived model predicts monotonically increasing levels of tree height with increasing age and temperature sum and with decreasing aridity and social rank of a tree within a stand. The definition of constraints leads only to a marginal or minor decline in model statistics like AIC. An observed structured spatial trend in tree height is modelled via 2-dimensional surface
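
    Shape-constrained estimation in this spirit can be illustrated with isotonic regression, which forces predicted height never to decrease with dbh; it is a simplified stand-in for SCAM (which additionally provides smoothness, multiple covariates and spatial terms), and the synthetic spruce-like data are invented.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Synthetic data: height increases with dbh, with noise.
rng = np.random.default_rng(7)
dbh = rng.uniform(5, 60, 300)                          # cm
height = 1.3 + 28 * (1 - np.exp(-0.06 * dbh)) \
         + rng.normal(0, 1.5, 300)                     # m

# Isotonic regression enforces the monotone shape constraint directly:
# predicted height can never decrease with increasing dbh.
iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
iso.fit(dbh, height)

for d in (10, 20, 40):
    print(f"dbh {d} cm -> predicted height {iso.predict([d])[0]:.1f} m")
```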

  16. The assisted prediction modelling frame with hybridisation and ensemble for business risk forecasting and an implementation

    Science.gov (United States)

    Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie

    2015-08-01

    The business failure of numerous companies results in financial crises. The high social costs associated with such crises have led people to search for effective tools for business risk prediction, among which the support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with support vector machines. However, the existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among different modelling means. We reviewed research on forecasting business risk with support vector machines, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machines, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with the support vector machine as the base predictive model, four specific predictive models were produced, namely, a pure support vector machine, a hybrid support vector machine involving principal components analysis, a support vector machine ensemble involving random sampling and group decision, and an ensemble of hybrid support vector machines using group decision to integrate various hybrid support vector machines built on variables produced by principal components analysis and samples from random sampling. The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines produced dominating performance over the pure support vector machine and the support vector machine ensemble.
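
    The four model types can be mimicked with scikit-learn in a few lines; the sketch below assumes a recent scikit-learn (BaggingClassifier's estimator argument), uses synthetic data in place of real financial ratios, and approximates 'group decision' by majority voting.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic business-risk data: 30 ratios, binary failure label.
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)

# Hybrid model: PCA feature extraction feeding an SVM.
hybrid = make_pipeline(StandardScaler(), PCA(n_components=8), SVC())

# Ensemble of hybrids: random sampling of cases, majority-vote decision.
ensemble = BaggingClassifier(estimator=hybrid, n_estimators=25,
                             max_samples=0.8, random_state=0)

for name, model in [("pure SVM", make_pipeline(StandardScaler(), SVC())),
                    ("hybrid PCA+SVM", hybrid),
                    ("ensemble of hybrids", ensemble)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy {acc:.3f}")
```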

  17. Regression models for predicting anthropometric measurements of ...

    African Journals Online (AJOL)

    measure anthropometric dimensions to predict difficult-to-measure dimensions required for ergonomic design of school furniture. A total of 143 students aged between 16 and 18 years from eight public secondary schools in Ogbomoso, Nigeria ...

  18. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    direction (σx) had a maximum value of 375 MPa (tensile) and a minimum value of ... These results show that the residual stresses obtained by prediction from the finite element method are in fair agreement with the experimental results.

  19. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful......). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...... visualization to improve our understanding of the different attained performances, effectively compiling all the conducted experiments in a meaningful way. We complete our study with an entropy-based analysis that highlights the uncertainty handling properties provided by the GP, crucial for prediction tasks...
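
    A small sketch of the probabilistic and entropy-based angle is given below using scikit-learn's Gaussian process classifier; the synthetic data, kernel choice and entropy ranking are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

# Synthetic stand-in for bankruptcy data: ratios plus a failure label.
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The GP classifier yields posterior class probabilities, which is the
# probabilistic interpretation highlighted in the record above.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                random_state=0).fit(X_tr, y_tr)
p = np.clip(gpc.predict_proba(X_te)[:, 1], 1e-12, 1 - 1e-12)
print("accuracy:", gpc.score(X_te, y_te))

# Predictive entropy flags the most uncertain firms, in the spirit of
# the entropy-based analysis mentioned above.
entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
print("most uncertain test cases:", np.argsort(entropy)[-5:])
```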

  20. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  1. On the general ontological foundations of conceptual modeling

    NARCIS (Netherlands)

    Guizzardi, G.; Herre, Heinrich; Wagner, Gerd; Spaccapietra, Stefano; March, Salvatore T.; Kambayashi, Yahiko

    2002-01-01

    As pointed out in the pioneering work of [WSW99,EW01], an upper level ontology allows one to evaluate the ontological correctness of a conceptual model and to develop guidelines for how the constructs of a conceptual modeling language should be used. In this paper we adopt the General Ontological Language

  2. General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data

    Energy Technology Data Exchange (ETDEWEB)

    Bagwell, L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bennett, P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-02-21

    This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).

  3. A MIXTURE LIKELIHOOD APPROACH FOR GENERALIZED LINEAR-MODELS

    NARCIS (Netherlands)

    WEDEL, M; DESARBO, WS

    1995-01-01

    A mixture model approach is developed that simultaneously estimates the posterior membership probabilities of observations to a number of unobservable groups or latent classes, and the parameters of a generalized linear model which relates the observations, distributed according to some member of

  4. Response of an ocean general circulation model to wind and ...

    Indian Academy of Sciences (India)

    The stretched-coordinate ocean general circulation model has been designed to study the observed variability due to wind and thermodynamic forcings. The model domain extends from 60°N to 60°S and cyclically continuous in the longitudinal direction. The horizontal resolution is 5° × 5° and 9 discrete vertical levels.

  5. Bianchi type IX string cosmological model in general relativity

    Indian Academy of Sciences (India)

    We have investigated Bianchi type IX string cosmological models in general relativity. To get a determinate solution, we have assumed the condition ρ = λ, i.e., the rest energy density for a cloud of strings is equal to the string tension density. The various physical and geometrical aspects of the models are also discussed.

  6. Stability analysis for a general age-dependent vaccination model

    International Nuclear Information System (INIS)

    El Doma, M.

    1995-05-01

    An SIR epidemic model with a general age-dependent vaccination strategy is investigated when the fertility, mortality and removal rates depend on age. We give threshold criteria for the existence of equilibria and perform a stability analysis. Furthermore, a critical vaccination coverage that is sufficient to eradicate the disease is determined. (author). 12 refs

  7. Bianchi type IX string cosmological model in general relativity

    Indian Academy of Sciences (India)

    We have investigated Bianchi type IX string cosmological models in general relativity. To get a determinate solution, we have assumed the condition ρ = λ, i.e., the rest energy density for a cloud of strings is equal to the string tension density. The various physical and geometrical aspects of the models are also discussed.

  8. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…

  9. Double generalized linear compound poisson models to insurance claims data

    DEFF Research Database (Denmark)

    Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo

    2017-01-01

    This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed by a degenerate distribution...... in a finite sample framework. The simulation studies are also used to validate the fitting algorithms and check the computational implementation. Furthermore, we investigate the impact of an unsuitable choice for the response variable distribution on both mean and dispersion parameter estimates. We provide R...... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurances....

  10. Significance of predictive models/risk calculators for HBV-related hepatocellular carcinoma

    OpenAIRE

    DONG Jing

    2015-01-01

    Hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) is a major public health problem in Southeast Asia. In recent years, researchers from Hong Kong and Taiwan have reported predictive models or risk calculators for HBV-associated HCC by studying its natural history, which, to some extent, predicts the possibility of HCC development. Generally, risk factors of each model involve age, sex, HBV DNA level, and liver cirrhosis. This article discusses the evolution and clinical significa...

  11. Physically-Derived Dynamical Cores in Atmospheric General Circulation Models

    Science.gov (United States)

    Rood, Richard B.; Lin, Shian-Kiann

    1999-01-01

    The algorithm chosen to represent the advection in atmospheric models is often used as the primary attribute to classify the model. Meteorological models are generally classified as spectral or grid point, with the term grid point implying discretization using finite differences. These traditional approaches have a number of shortcomings that render them non-physical. That is, they provide approximate solutions to the conservation equations that do not obey the fundamental laws of physics. The most commonly discussed shortcomings are overshoots and undershoots which manifest themselves most overtly in the constituent continuity equation. For this reason many climate models have special algorithms to model water vapor advection. This talk focuses on the development of an atmospheric general circulation model which uses a consistent physically-based advection algorithm in all aspects of the model formulation. The shallow-water model of Lin and Rood (QJRMS, 1997) is generalized to three dimensions and combined with the physics parameterizations of NCAR's Community Climate Model. The scientific motivation for the development is to increase the integrity of the underlying fluid dynamics so that the physics terms can be more effectively isolated, examined, and improved. The expected benefits of the new model are discussed and results from the initial integrations will be presented.

  12. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are often faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models used in past research. Three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN), and Markov chain (MC), are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model appears to be a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. The paper suggests that a promising direction for future performance prediction models is to combine the strengths of the different models to obtain better accuracy.
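
    As a concrete illustration of the MC approach, a faulting condition can be discretized into rating states and propagated with a transition probability matrix. The matrix below is a placeholder, not calibrated to the paper's survey data.

        # Sketch: Markov-chain pavement deterioration forecast.
        import numpy as np

        P = np.array([[0.85, 0.15, 0.00, 0.00],   # each row: P(next state | state)
                      [0.00, 0.80, 0.20, 0.00],
                      [0.00, 0.00, 0.75, 0.25],
                      [0.00, 0.00, 0.00, 1.00]])  # worst state is absorbing

        state = np.array([1.0, 0.0, 0.0, 0.0])    # new pavement: best condition
        for year in range(1, 11):
            state = state @ P                     # distribution after each year
            print(year, np.round(state, 3))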

  13. Flexible Approaches to Computing Mediated Effects in Generalized Linear Models: Generalized Estimating Equations and Bootstrapping

    Science.gov (United States)

    Schluchter, Mark D.

    2008-01-01

    In behavioral research, interest is often in examining the degree to which the effect of an independent variable X on an outcome Y is mediated by an intermediary or mediator variable M. This article illustrates how generalized estimating equations (GEE) modeling can be used to estimate the indirect or mediated effect, defined as the amount by…
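
    The indirect effect in question is the product a*b of the X-to-M slope and the M-to-Y slope (adjusting for X), and bootstrapping gives its confidence interval. A minimal sketch with ordinary least squares on simulated data follows; the article itself uses GEE to accommodate clustered or repeated measures.

        # Sketch: bootstrap CI for the indirect (mediated) effect a*b.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 300
        x = rng.normal(size=n)
        m = 0.5 * x + rng.normal(size=n)            # mediator model: M ~ X
        y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # outcome model: Y ~ M + X

        def indirect(x, m, y):
            a = np.polyfit(x, m, 1)[0]                    # slope of M on X
            X = np.column_stack([np.ones_like(x), m, x])
            b = np.linalg.lstsq(X, y, rcond=None)[0][1]   # slope of Y on M given X
            return a * b

        boot = []
        for _ in range(2000):
            idx = rng.integers(0, n, n)                   # resample with replacement
            boot.append(indirect(x[idx], m[idx], y[idx]))
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"indirect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")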

  14. QCD Sum Rules and Models for Generalized Parton Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Anatoly Radyushkin

    2004-10-01

    I use QCD sum rule ideas to construct models for generalized parton distributions. To this end, the perturbative parts of QCD sum rules for the pion and nucleon electromagnetic form factors are interpreted in terms of GPDs, and two models are discussed. One of them takes the double Borel transform at an adjusted value of the Borel parameter as a model for nonforward parton densities; the other is based on the local duality relation. Possible ways of improving these Ansaetze are briefly discussed.

  15. Model predictive control of room temperature with disturbance compensation

    Science.gov (United States)

    Kurilla, Jozef; Hubinský, Peter

    2017-08-01

    This paper deals with temperature control of a multivariable office-building system. The system is simplified to several single-input single-output systems by decoupling their mutual linkages; each is controlled separately by a regulator based on generalized model predictive control. The main part of the paper focuses on the accuracy of the office temperature with respect to the occupancy profile and the effect of disturbances. Shifting the desired temperature and changing the weighting coefficients are used to achieve the desired regulation accuracy. The final regulation structure combines the advantages of distributed computing power with the ability to use network communication between individual controllers to handle constraints. The advantage of decoupled MPC controllers over conventional PID regulators is demonstrated in a simulation study.
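
    One step of such a decoupled controller can be sketched as a small quadratic program: a first-order zone model, a disturbance forecast fed forward for compensation, and input constraints. The model constants, horizon, and limits below are illustrative, and cvxpy stands in for whatever solver the paper used.

        # Sketch: one MPC step for a single decoupled temperature zone.
        import cvxpy as cp
        import numpy as np

        a, b = 0.9, 0.05          # zone model: T[k+1] = a*T[k] + b*u[k] + d[k]
        N = 12                    # prediction horizon (steps)
        T0, T_ref = 19.0, 22.0    # current temperature and setpoint [degC]
        d = 0.1 * np.ones(N)      # disturbance forecast, fed forward

        u = cp.Variable(N)        # heating power
        T = cp.Variable(N + 1)
        cost = cp.sum_squares(T[1:] - T_ref) + 0.01 * cp.sum_squares(u)
        cons = [T[0] == T0, u >= 0, u <= 10]
        cons += [T[k + 1] == a * T[k] + b * u[k] + d[k] for k in range(N)]
        cp.Problem(cp.Minimize(cost), cons).solve()
        print("first control move:", u.value[0])  # apply u[0], then re-solve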

  16. Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models

    Directory of Open Access Journals (Sweden)

    Shelton Peiris

    2017-12-01

    This paper considers a flexible class of time series models generated by Gegenbauer polynomials, incorporating long memory in the stochastic volatility (SV) components, in order to develop the General Long Memory SV (GLMSV) model. We examine the corresponding statistical properties of this model, discuss spectral likelihood estimation and investigate the finite-sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange-rate return series; the out-of-sample forecast results support the use of the GLMSV model in certain financial applications.
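
    The Gegenbauer machinery behind such models amounts to the long-memory filter (1 - 2uB + B^2)^(-d), whose weights follow the standard three-term recurrence for Gegenbauer polynomials. A sketch with illustrative d and Gegenbauer frequency follows.

        # Sketch: Gegenbauer long-memory filter weights via the standard recurrence.
        import numpy as np

        def gegenbauer_weights(d, u, n):
            """Coefficients of (1 - 2uB + B^2)^(-d); u = cos(Gegenbauer frequency)."""
            c = np.empty(n)
            c[0] = 1.0
            c[1] = 2.0 * d * u
            for j in range(2, n):
                c[j] = (2.0 * u * (j + d - 1.0) * c[j - 1]
                        - (j + 2.0 * d - 2.0) * c[j - 2]) / j
            return c

        psi = gegenbauer_weights(d=0.3, u=np.cos(np.pi / 3), n=200)
        # Long-memory SV component: log-volatility h_t = sum_j psi_j * eta_{t-j}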

  17. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches and the development and validation process of such models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was conducted. The paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From this review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, prediction models developed with an artificial neural network approach were more accurate than those developed with a statistical approach. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
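
    The statistical-versus-ANN comparison the review describes can be reproduced in miniature: fit a logistic regression and a small neural network on the same data and compare held-out discrimination. Everything below (data, architecture, metric) is a generic placeholder, not drawn from the nineteen reviewed studies.

        # Sketch: statistical vs. ANN risk prediction, scored by held-out AUC.
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

        for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                            ("ANN", MLPClassifier(hidden_layer_sizes=(16,),
                                                  max_iter=2000, random_state=0))]:
            model.fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
            print(f"{name}: AUC = {auc:.3f}")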

  18. Does generalized anxiety disorder predict coronary heart disease risk factors independently of major depressive disorder?

    Science.gov (United States)

    Barger, Steven D; Sydeman, Sumner J

    2005-09-01

    Anxiety symptoms are associated with elevated coronary heart disease (CHD) risk, but it is not known whether such associations extend to anxiety disorders or if they are independent of depression. We sought to determine if generalized anxiety disorder is associated with elevated CHD risk, and whether this association is independent of or interacts with major depressive disorder. Generalized anxiety and major depressive disorders were assessed in a cross-sectional survey of a representative sample of U.S. adults aged 25-74 (N=3032). Coronary heart disease risk was determined by self-reported smoking status, body mass index, and recent medication use for hypertension, hypercholesterolemia, and diabetes. Generalized anxiety disorder independently predicted increased CHD risk (F(1,3018)=5.14; b=0.39; 95% confidence interval (0.05-0.72)) and tended to denote the greatest risk in the absence of major depressive disorder. The cross-sectional design cannot determine the causal direction of the association. Generalized anxiety disorder appears to be associated with elevated CHD risk in the general population. It may denote excess CHD risk relative to major depressive disorder, and clinicians should consider CHD risk when treating generalized anxiety disorder.

  19. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

    Accurate long-term prediction of time series data (TSD) is a useful research challenge in diverse fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Traditional linear models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models such as ANNs maintain prediction accuracy at the expense of the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed, together with a quantitative reasoning analysis justifying its accuracy. The proposed model incorporates moving-average (MA) filter based pre-processing and a partitioning and interpolation (PI) technique. Several existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step-ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preservation of the data trend.
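
    A rough Python analogue of the proposed pipeline: MA-filter pre-processing, an ARIMA fit to the smoothed series, and a GARCH(1,1) fit to the ARIMA residuals. The partitioning-interpolation (PI) step is omitted, and a simulated random walk stands in for the NSE stock data.

        # Sketch: MA-filtered ARIMA for the level, GARCH(1,1) for the residuals.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA
        from arch import arch_model

        rng = np.random.default_rng(0)
        price = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)))

        smooth = price.rolling(window=5).mean().dropna().to_numpy()  # MA filter
        arima = ARIMA(smooth, order=(2, 1, 1)).fit()
        garch = arch_model(arima.resid, vol="GARCH", p=1, q=1).fit(disp="off")

        mean_fc = arima.forecast(steps=10)             # multi-step level forecast
        vol_fc = garch.forecast(horizon=10).variance   # multi-step variance forecast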

  20. Evaluation of the US Army fallout prediction model

    International Nuclear Information System (INIS)

    Pernick, A.; Levanon, I.

    1987-01-01

    The US Army fallout prediction method was evaluated against an advanced fallout prediction model, SIMFIC (Simplified Fallout Interpretive Code). The danger-zone areas of the US Army method were found to be significantly greater (by up to a factor of 8) than the areas of the corresponding radiation hazard as predicted by SIMFIC. Nonetheless, because the US Army method predicts danger-zone lengths that are commonly shorter than the corresponding hot-line distances of SIMFIC, the method is not reliably conservative.