WorldWideScience

Sample records for model predictions generally

  1. Predicting the Probability of Lightning Occurrence with Generalized Additive Models

    Science.gov (United States)

    Fabsic, Peter; Mayr, Georg; Simon, Thorsten; Zeileis, Achim

    2017-04-01

    This study investigates the predictability of lightning in complex terrain. The main objective is to estimate the probability of lightning occurrence in the Alpine region during summertime afternoons (12-18 UTC) at a spatial resolution of 64 × 64 km2. Lightning observations are obtained from the ALDIS lightning detection network. The probability of lightning occurrence is estimated using generalized additive models (GAM). GAMs provide a flexible modelling framework to estimate the relationship between covariates and the observations. The covariates, besides spatial and temporal effects, include numerous meteorological fields from the ECMWF ensemble system. The optimal model is chosen based on a forward selection procedure with out-of-sample mean squared error as a performance criterion. Our investigation shows that convective precipitation and mid-layer stability are the most influential meteorological predictors. Both exhibit intuitive, non-linear trends: higher values of convective precipitation indicate higher probability of lightning, and large values of the mid-layer stability measure imply low lightning potential. The performance of the model was evaluated against a climatology model containing both spatial and temporal effects. Taking the climatology model as a reference forecast, our model attains a Brier Skill Score of approximately 46%. The model's performance can be further enhanced by incorporating the information about lightning activity from the previous time step, which yields a Brier Skill Score of 48%. These scores show that the method is able to extract valuable information from the ensemble to produce reliable spatial forecasts of the lightning potential in the Alps.
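The Brier Skill Score used above measures a probabilistic forecast against a climatological reference. A minimal sketch of the computation, with made-up forecast probabilities rather than the study's data:

```python
# Hedged sketch: Brier Skill Score of a probability forecast versus a
# climatological reference. All numbers below are illustrative only.
import numpy as np

def brier_score(p, y):
    """Mean squared difference between forecast probabilities and outcomes."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return float(np.mean((p - y) ** 2))

def brier_skill_score(p_model, p_ref, y):
    """BSS = 1 - BS_model / BS_reference; 1 is perfect, 0 matches the reference."""
    return 1.0 - brier_score(p_model, y) / brier_score(p_ref, y)

y = np.array([1, 0, 0, 1, 1, 0, 0, 0])               # lightning observed (1) or not (0)
p_clim = np.full(8, y.mean())                         # climatological base rate forecast
p_gam = np.array([.8, .1, .2, .7, .9, .2, .1, .3])    # hypothetical GAM forecasts

bss = brier_skill_score(p_gam, p_clim, y)
```

A BSS of 0.46, as reported above, means the model's Brier score is 46% lower than the climatology's.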

  2. Multi-year predictability in a coupled general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Power, Scott; Colman, Rob [Bureau of Meteorology Research Centre, Melbourne, VIC (Australia)

    2006-02-01

    Multi-year to decadal variability in a 100-year integration of a BMRC coupled atmosphere-ocean general circulation model (CGCM) is examined. The fractional contribution made by the decadal component generally increases with depth and latitude away from surface waters in the equatorial Indo-Pacific Ocean. The relative importance of decadal variability is enhanced in off-equatorial "wings" in the subtropical eastern Pacific. The model and observations exhibit "ENSO-like" decadal patterns. Analytic results are derived, which show that the patterns can, in theory, occur in the absence of any predictability beyond ENSO time-scales. In practice, however, modification to this stochastic view is needed to account for robust differences between ENSO-like decadal patterns and their interannual counterparts. An analysis of variability in the CGCM, a wind-forced shallow water model, and a simple mixed layer model together with existing and new theoretical results are used to improve upon this stochastic paradigm and to provide a new theory for the origin of decadal ENSO-like patterns like the Interdecadal Pacific Oscillation and Pacific Decadal Oscillation. In this theory, ENSO-driven wind-stress variability forces internal equatorially-trapped Kelvin waves that propagate towards the eastern boundary. Kelvin waves can excite reflected internal westward propagating equatorially-trapped Rossby waves (RWs) and coastally-trapped waves (CTWs). CTWs have no impact on the off-equatorial sub-surface ocean outside the coastal wave guide, whereas the RWs do. If the frequency of the incident wave is too high, then only CTWs are excited. At lower frequencies, both CTWs and RWs can be excited. The lower the frequency, the greater the fraction of energy transmitted to RWs. This lowers the characteristic frequency of variability off the equator relative to its equatorial counterpart. Both the eastern boundary interactions and the accumulation of

  3. Prediction of cloud droplet number in a general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-04-01

    We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.

  4. MJO prediction skill, predictability, and teleconnection impacts in the Beijing Climate Center Atmospheric General Circulation Model

    Science.gov (United States)

    Wu, Jie; Ren, Hong-Li; Zuo, Jinqing; Zhao, Chongbo; Chen, Lijuan; Li, Qiaoping

    2016-09-01

    This study evaluates performance of Madden-Julian oscillation (MJO) prediction in the Beijing Climate Center Atmospheric General Circulation Model (BCC_AGCM2.2). By using the real-time multivariate MJO (RMM) indices, it is shown that the MJO prediction skill of BCC_AGCM2.2 extends to about 16-17 days before the bivariate anomaly correlation coefficient drops to 0.5 and the root-mean-square error increases to the level of the climatological prediction. The prediction skill showed a seasonal dependence, with the highest skill occurring in boreal autumn, and a phase dependence with higher skill for predictions initiated from phases 2-4. The results of the MJO predictability analysis showed that the upper bounds of the prediction skill can be extended to 26 days by using a single-member estimate, and to 42 days by using the ensemble-mean estimate, which also exhibited an initial amplitude and phase dependence. The observed relationship between the MJO and the North Atlantic Oscillation was accurately reproduced by BCC_AGCM2.2 for most initial phases of the MJO, accompanied by the Rossby wave trains in the Northern Hemisphere extratropics driven by MJO convective forcing. Overall, BCC_AGCM2.2 displayed a significant ability to predict the MJO and its teleconnections without interacting with the ocean, providing a useful tool for extracting the predictability sources of subseasonal prediction.
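The bivariate anomaly correlation coefficient used above to score (RMM1, RMM2) forecasts can be sketched as follows; the index series here are an idealized synthetic MJO cycle, not BCC_AGCM2.2 output:

```python
# Hedged sketch of the bivariate anomaly correlation coefficient (ACC)
# for two-component MJO indices. Series below are synthetic.
import numpy as np

def bivariate_acc(obs1, obs2, fc1, fc2):
    """Correlation between observed and forecast (RMM1, RMM2) pairs."""
    num = np.sum(obs1 * fc1 + obs2 * fc2)
    den = np.sqrt(np.sum(obs1**2 + obs2**2) * np.sum(fc1**2 + fc2**2))
    return float(num / den)

t = np.linspace(0, 4 * np.pi, 200)
rmm1, rmm2 = np.cos(t), np.sin(t)                 # idealized observed MJO cycle
acc_perfect = bivariate_acc(rmm1, rmm2, rmm1, rmm2)
acc_lagged = bivariate_acc(rmm1, rmm2, np.cos(t - 0.5), np.sin(t - 0.5))
```

A phase lag lowers the ACC even when the forecast amplitude is perfect, which is why skill is commonly declared lost when the ACC drops below 0.5.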

  5. Explained variation and predictive accuracy in general parametric statistical models: the role of model misspecification

    DEFF Research Database (Denmark)

    Rosthøj, Susanne; Keiding, Niels

    2004-01-01

    When studying a regression model, measures of explained variation are used to assess the degree to which the covariates determine the outcome of interest. Measures of predictive accuracy are used to assess the accuracy of the predictions based on the covariates and the regression model. We give a detailed and general introduction to the two measures and the estimation procedures. The framework we set up allows for a study of the effect of misspecification on the quantities estimated. We also introduce a generalization to survival analysis.

  6. Predicting infectivity of Arbuscular Mycorrhizal fungi from soil variables using Generalized Additive Models and Generalized Linear Models

    Directory of Open Access Journals (Sweden)

    IRNANDA AIKO FIFI DJUUNA

    2010-07-01

    Djuuna IAF, Abbott LK, Van Niel K (2010) Predicting infectivity of Arbuscular Mycorrhizal fungi from soil variables using Generalized Additive Models and Generalized Linear Models. Biodiversitas 11: 145-150. The objective of this study was to predict the infectivity of arbuscular mycorrhizal (AM) fungi from field soil based on soil properties and land use history, using generalized additive models (GAMs) and generalized linear models (GLMs). A total of 291 soil samples from a farm in Western Australia near Wickepin were collected and used in this study. Nine soil properties, including elevation, pH, EC, total C, total N, P, K, microbial biomass carbon, and soil texture, together with the land use history of the farm, were used as independent variables, while the percentage of root length colonized (%RLC) was used as the dependent variable. GAMs parameterized for the percent of root length colonized suggested skewed quadratic responses to soil pH and microbial biomass carbon; cubic responses to elevation and soil K; and linear responses to soil P, EC and total C. The strength of the relationship between percent root length colonized by AM fungi and environmental variables showed that only elevation, total C and microbial biomass carbon had strong relationships. In general, the GAM and GLM models confirmed the strong relationship between infectivity of AM fungi (assessed in a glasshouse bioassay for soil collected in summer prior to the first rain of the season) and soil properties.
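The GLM side of the comparison above can be sketched without a smoothing library. Here is a binomial GLM (logit link) fitted by iteratively reweighted least squares on synthetic soil-pH data; the covariate, coefficients and sample size are all illustrative, not the study's:

```python
# Hedged sketch: logistic GLM via iteratively reweighted least squares (IRLS),
# relating a binary colonization outcome to a soil covariate. Synthetic data.
import numpy as np

def fit_logit_glm(X, y, iters=25):
    """IRLS for a binomial GLM with logit link; X includes an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))       # inverse logit link
        w = mu * (1.0 - mu)                   # binomial variance weights
        z = eta + (y - mu) / w                # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

rng = np.random.default_rng(0)
soil_ph = rng.uniform(4.5, 8.5, 400)                      # hypothetical covariate
X = np.column_stack([np.ones(400), soil_ph])
p_true = 1.0 / (1.0 + np.exp(-(-6.0 + 1.0 * soil_ph)))    # assumed true relationship
y = rng.binomial(1, p_true).astype(float)
beta_hat = fit_logit_glm(X, y)
```

A GAM replaces the linear term in pH with a smooth function, which is how the skewed quadratic and cubic responses above are captured.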

  7. Generalized model for predicting methane conversion to syngas in ...

    African Journals Online (AJOL)

    University School of Chemical Technology, Guru Gobind Singh Indraprastha University ... Linear regression analysis was performed and a generalized equation ... ceramic membrane reactors since it achieves 90% saturation in 25 hours while ...

  8. LINEAR LAYER AND GENERALIZED REGRESSION COMPUTATIONAL INTELLIGENCE MODELS FOR PREDICTING SHELF LIFE OF PROCESSED CHEESE

    Directory of Open Access Journals (Sweden)

    S. Goyal

    2012-03-01

    This paper highlights the significance of computational intelligence models for predicting the shelf life of processed cheese stored at 7-8 °C. Linear Layer and Generalized Regression models were developed with soluble nitrogen, pH, standard plate count, yeast & mould count, and spores as input parameters, and sensory score as the output parameter. Mean Square Error, Root Mean Square Error, Coefficient of Determination and the Nash-Sutcliffe Coefficient were used to compare the prediction ability of the models. The study revealed that Generalized Regression computational intelligence models are quite effective in predicting the shelf life of processed cheese stored at 7-8 °C.
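The comparison metrics named above are simple to state in code; a minimal sketch with made-up shelf-life values, not the study's measurements:

```python
# Hedged sketch of the model-comparison metrics: MSE, RMSE and the
# Nash-Sutcliffe efficiency. Observed/predicted values are illustrative.
import numpy as np

def mse(obs, pred):
    return float(np.mean((np.asarray(obs, float) - np.asarray(pred, float)) ** 2))

def rmse(obs, pred):
    return mse(obs, pred) ** 0.5

def nash_sutcliffe(obs, pred):
    """1 - SSE / variance of the observations; 1 indicates a perfect model."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

shelf_life_obs = np.array([32.0, 30.0, 28.0, 25.0, 22.0])    # days (hypothetical)
shelf_life_pred = np.array([31.0, 29.5, 28.5, 24.0, 23.0])
nse = nash_sutcliffe(shelf_life_obs, shelf_life_pred)
```

Unlike RMSE, the Nash-Sutcliffe coefficient is scale-free: a value near 1 means the model explains most of the variance around the mean.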

  9. Predicting Category Intuitiveness with the Rational Model, the Simplicity Model, and the Generalized Context Model

    Science.gov (United States)

    Pothos, Emmanuel M.; Bailey, Todd M.

    2009-01-01

    Naive observers typically perceive some groupings for a set of stimuli as more intuitive than others. The problem of predicting category intuitiveness has been historically considered the remit of models of unsupervised categorization. In contrast, this article develops a measure of category intuitiveness from one of the most widely supported…

  10. Bayesian prediction of spatial count data using generalized linear mixed models

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

    2002-01-01

    Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, we demonstrate that so-called Langevin-Hastings updates are useful for efficient simulation of the posterior distributions, and we discuss computational issues concerning prediction.
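Langevin-Hastings updates use the gradient of the log posterior to steer proposals, then correct with a Metropolis accept/reject step. A minimal sketch targeting a standard normal rather than the paper's posterior:

```python
# Hedged sketch of a Langevin-Hastings (MALA) update. The target here is a
# standard normal chosen for simplicity, not the paper's spatial posterior.
import numpy as np

def grad_log_target(x):
    return -x                                  # gradient of log N(0, 1) density

def mala_step(x, h, rng):
    """One Metropolis-adjusted Langevin proposal and accept/reject step."""
    prop = x + 0.5 * h * grad_log_target(x) + np.sqrt(h) * rng.standard_normal()

    def log_q(b, a):                           # log density of proposing b from a
        mean = a + 0.5 * h * grad_log_target(a)
        return -((b - mean) ** 2) / (2.0 * h)

    log_alpha = (-prop**2 / 2) - (-x**2 / 2) + log_q(x, prop) - log_q(prop, x)
    return prop if np.log(rng.uniform()) < log_alpha else x

rng = np.random.default_rng(1)
x, samples = 3.0, []
for _ in range(5000):
    x = mala_step(x, 0.5, rng)
    samples.append(x)
samples = np.array(samples[500:])              # discard burn-in
```

Because the proposal drifts toward high-density regions, MALA typically mixes faster than a random-walk Metropolis sampler for smooth posteriors.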

  11. Generalized Dynamic Factor Model + GARCH Exploiting Multivariate Information for Univariate Prediction

    OpenAIRE

    Alessi, Lucia; Barigozzi, Matteo; Capasso, Marco

    2006-01-01

    We propose a new model for multivariate forecasting which combines the Generalized Dynamic Factor Model (GDFM) and the GARCH model. The GDFM, applied to a huge number of series, captures the multivariate information and disentangles the common and the idiosyncratic part of each series of returns. In this financial analysis, both these components are modeled as a GARCH. We compare GDFM+GARCH and standard GARCH performance on samples up to 475 series, predicting both levels and volatility of ret...
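The GARCH component above models time-varying volatility through the recursion sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2. A minimal simulation sketch with illustrative parameters, not the paper's estimates:

```python
# Hedged sketch: simulating a GARCH(1,1) return series. Parameters are
# illustrative; the unconditional variance is omega / (1 - alpha - beta).
import numpy as np

def simulate_garch11(n, omega, alpha, beta, rng):
    """r_t = sigma_t * z_t, with sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2."""
    r = np.zeros(n)
    var = omega / (1.0 - alpha - beta)         # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(var) * rng.standard_normal()
        var = omega + alpha * r[t] ** 2 + beta * var
    return r

rng = np.random.default_rng(42)
returns = simulate_garch11(20000, omega=0.1, alpha=0.1, beta=0.8, rng=rng)
```

With alpha + beta = 0.9 the series shows the volatility clustering that makes GARCH a natural model for both the common and idiosyncratic return components.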

  12. Prediction models for cardiovascular disease risk in the general population : Systematic review

    NARCIS (Netherlands)

    Damen, Johanna A A G; Hooft, Lotty; Schuit, Ewoud; Debray, Thomas P A; Collins, Gary S.; Tzoulaki, Ioanna; Lassale, Camille M.; Siontis, George C M; Chiocchia, Virginia; Roberts, Corran; Schlüssel, Michael Maia; Gerry, Stephen; Black, James A.; Heus, Pauline; Van Der Schouw, Yvonne T.; Peelen, Linda M.; Moons, Karel G M

    2016-01-01

    OBJECTIVE: To provide an overview of prediction models for risk of cardiovascular disease (CVD) in the general population. DESIGN: Systematic review. DATA SOURCES: Medline and Embase until June 2013. ELIGIBILITY CRITERIA FOR STUDY SELECTION: Studies describing the development or external validation

  13. Predicting evolution with generalized models of divergent selection: a case study with poeciliid fish.

    Science.gov (United States)

    Langerhans, R Brian

    2010-12-01

    Over the past century and a half since the process of natural selection was first described, one enduring question has captivated many: "how predictable is evolution?" Because natural selection comprises deterministic components, the course of evolution may exhibit some level of predictability across organismal groups. Here, I provide an early appraisal of the utility of one particular approach to understanding the predictability of evolution: generalized models of divergent selection (GMDS). The GMDS approach is meant to provide a unifying framework for the science of evolutionary prediction, offering a means of better understanding the causes and consequences of phenotypic and genetic evolution. I describe and test a GMDS centered on the evolution of body shape, size of the gonopodium (sperm-transfer organ), steady-swimming abilities, fast-start swimming performance, and reproductive isolation between populations in Gambusia fishes (Family Poeciliidae). The GMDS produced some accurate evolutionary predictions in Gambusia, identifying variation in intensity of predation by piscivorous fish as a major factor driving repeatable and predictable phenotypic divergence, and apparently playing a key role in promoting ecological speciation. Moreover, the model's applicability seems quite general, as patterns of differentiation in body shape between predator regimes in many disparate fishes match the model's predictions. The fact that such a simple model could yield accurate evolutionary predictions in distantly related fishes inhabiting different geographic regions and types of habitat, and experiencing different predator species, suggests that the model pinpointed a causal factor underlying major, shared patterns of diversification. The GMDS approach appears to represent a promising method of addressing the predictability of evolution and identifying environmental factors responsible for driving major patterns of replicated evolution.

  14. Existing general population models inaccurately predict lung cancer risk in patients referred for surgical evaluation.

    Science.gov (United States)

    Isbell, James M; Deppen, Stephen; Putnam, Joe B; Nesbitt, Jonathan C; Lambright, Eric S; Dawes, Aaron; Massion, Pierre P; Speroff, Theodore; Jones, David R; Grogan, Eric L

    2011-01-01

    Patients undergoing resections for suspicious pulmonary lesions have a 9% to 55% benign rate. Validated prediction models exist to estimate the probability of malignancy in a general population and current practice guidelines recommend their use. We evaluated these models in a surgical population to determine the accuracy of existing models to predict benign or malignant disease. We conducted a retrospective review of our thoracic surgery quality improvement database (2005 to 2008) to identify patients who underwent resection of a pulmonary lesion. Patients were stratified into subgroups based on age, smoking status, and fluorodeoxyglucose positron emission tomography (PET) results. The probability of malignancy was calculated for each patient using the Mayo and solitary pulmonary nodules prediction models. Receiver operating characteristic and calibration curves were used to measure model performance. A total of 189 patients met selection criteria; 73% were malignant. Patients with preoperative PET scans were divided into four subgroups based on age, smoking history, and nodule PET avidity. Older smokers with PET-avid lesions had a 90% malignancy rate. Patients with PET-nonavid lesions, PET-avid lesions with age less than 50 years, or never smokers of any age had a 62% malignancy rate. The area under the receiver operating characteristic curve for the Mayo and solitary pulmonary nodules models was 0.79 and 0.80, respectively; however, the models were poorly calibrated (p<0.001). Despite improvements in diagnostic and imaging techniques, current general population models do not accurately predict lung cancer among patients referred for surgical evaluation. Prediction models with greater accuracy are needed to identify patients with benign disease to reduce nontherapeutic resections.
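The area under the ROC curve reported above equals the probability that a randomly chosen malignant case receives a higher predicted probability than a randomly chosen benign one. A minimal sketch of that rank formulation, with toy probabilities rather than the study's cohort:

```python
# Hedged sketch: ROC AUC via the Mann-Whitney pairwise-comparison form.
# Predicted probabilities and outcomes below are illustrative only.
import numpy as np

def roc_auc(scores, labels):
    """Probability a random positive case scores above a random negative one."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # count pairwise wins; ties count half
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))

probs = np.array([0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2])  # hypothetical model output
truth = np.array([1,   1,   0,    1,   0,    0,   1,   0])     # 1 = malignant
auc = roc_auc(probs, truth)
```

Note that a model can rank well (high AUC) yet still be poorly calibrated, which is exactly the combination the study above reports.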

  15. A GENERALIZED CANOPY MODEL AND ITS APPLICATION TO THE PREDICTION OF URBAN WIND CLIMATE

    Science.gov (United States)

    Enoki, Kota; Ishihara, Takeshi

    In this study, a generalized canopy model is proposed by combining a fluid force model, which accounts for the drag forces caused by buildings and trees, with a turbulence model that overcomes the inapplicability of Green's turbulence model at high packing densities. This model can predict the flow field for arbitrary porosities, in contrast to conventional models. Procedures for calculating the parameters of the proposed model from land use and digital map data are also described, and a fluid force model accounting for drag forces caused by obstacles existing in the same grid is proposed for flow field simulation in urban areas. The proposed canopy model is verified by wind tunnel tests and onsite measurement. The predicted flow fields around various obstacles with different porosities, such as a tree, a city area and a single building, show good agreement with the measurements. Finally, the wind speed at a meteorological station located in Tokyo city is simulated and the prediction error in the annual mean value is reduced from 23.9% with a meso-scale meteorological model to -1.9% by applying the proposed model.

  16. Use of Ocean Remote Sensing Data to Enhance Predictions with a Coupled General Circulation Model

    Science.gov (United States)

    Rienecker, Michele M.

    1999-01-01

    Surface height, sea surface temperature and surface wind observations from satellites have given a detailed time sequence of the initiation and evolution of the 1997/98 El Nino. The data have been complementary to the subsurface TAO moored data in their spatial resolution and extent. The impact of satellite observations on seasonal prediction in the tropical Pacific using a coupled ocean-atmosphere general circulation model will be presented.

  17. Generalized Pareto Distribution Model and Its Application to Hydrocarbon Resource Structure Prediction of the Huanghua Depression

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The generalized Pareto distribution model is a kind of hydrocarbon pool size probability statistical method for resource assessment. By introducing a time variable (the resource conversion rate) and a geological variable (the resource density), such a model can describe not only different types of basins, but also any exploration samples at different phases of exploration, up to the parent population. It is a dynamic distribution model with profound geological significance and wide applicability. Its basic principle and the process of resource assessment are described in this paper. The petroleum accumulation system is an appropriate assessment unit for such a method. The hydrocarbon resource structure of the Huanghua Depression in the Bohai Bay Basin was predicted by using this model. The prediction results accord with the knowledge of exploration in the Huanghua Depression, and point out the remaining resources potential and structure of different petroleum accumulation systems, which are of great significance for guiding future exploration in the Huanghua Depression.
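The generalized Pareto distribution itself is easy to sample by inverting its CDF, F(x) = 1 - (1 + shape*x/scale)^(-1/shape). A minimal sketch with illustrative parameters, not calibrated to the Huanghua Depression:

```python
# Hedged sketch: inverse-CDF sampling from a generalized Pareto distribution,
# the pool-size law underlying the assessment method. Parameters are made up.
import numpy as np

def gpd_sample(n, shape, scale, rng):
    """Draw x = scale/shape * ((1-u)^(-shape) - 1) for u ~ Uniform(0,1), shape != 0."""
    u = rng.uniform(size=n)
    return scale / shape * ((1.0 - u) ** (-shape) - 1.0)

rng = np.random.default_rng(7)
pool_sizes = gpd_sample(200000, shape=0.25, scale=1.0, rng=rng)
theoretical_mean = 1.0 / (1.0 - 0.25)      # scale / (1 - shape), valid for shape < 1
```

The positive shape parameter gives the heavy right tail expected of pool-size data: a few large accumulations dominate total resources.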

  18. Constrained generalized predictive control of battery charging process based on a coupled thermoelectric model

    Science.gov (United States)

    Liu, Kailong; Li, Kang; Zhang, Cheng

    2017-04-01

    Battery temperature is a primary factor affecting battery performance, and suitable battery temperature control, in particular internal temperature control, can not only guarantee battery safety but also improve its efficiency. This is, however, challenging, as current controller designs for battery charging have no mechanisms to incorporate such information. This paper proposes a novel battery charging control strategy which applies constrained generalized predictive control (GPC) to charge a LiFePO4 battery based on a newly developed coupled thermoelectric model. The control target primarily aims to maintain the battery cell's internal temperature within a desirable range while delivering fast charging. To achieve this, the coupled thermoelectric model is first introduced to capture the battery behaviours, in particular the SOC and internal temperature, which are not directly measurable in practice. Then a controlled auto-regressive integrated moving average (CARIMA) model, whose parameters are identified by the recursive least squares (RLS) algorithm, is developed as an online self-tuning predictive model for a GPC controller, and the constrained generalized predictive controller is developed to control the charging current. Experimental results confirm the effectiveness of the proposed control strategy. Further, the best region of heat dissipation rate and proper internal temperature set-points are also investigated and analysed.
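The recursive least squares step used above to identify the predictive model's parameters online can be sketched on a toy first-order ARX system; the system coefficients and forgetting factor are illustrative, not the paper's:

```python
# Hedged sketch: recursive least squares (RLS) with a forgetting factor,
# identifying y_t = a*y_{t-1} + b*u_{t-1} + noise online. Synthetic system.
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One RLS step; returns updated parameter estimate theta and covariance P."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    theta = theta + k * (y - phi @ theta)    # correct by the prediction error
    P = (P - np.outer(k, Pphi)) / lam        # covariance update with forgetting
    return theta, P

rng = np.random.default_rng(3)
a_true, b_true = 0.7, 1.5                    # assumed plant: y_t = a*y_{t-1} + b*u_{t-1}
theta, P = np.zeros(2), np.eye(2) * 100.0
y_prev, u_prev = 0.0, 0.0
for _ in range(2000):
    u = rng.standard_normal()                # excitation input (charging current analog)
    y = a_true * y_prev + b_true * u_prev + 0.05 * rng.standard_normal()
    theta, P = rls_update(theta, P, np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, u
```

The forgetting factor below 1 lets the estimate track slow parameter drift, which is what makes the GPC predictive model self-tuning as the battery state changes.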

  19. Generalized nonlinear models applied to the prediction of basal area and volume of Eucalyptus sp

    Directory of Open Access Journals (Sweden)

    Samuel de Pádua Chaves e Carvalho

    2011-12-01

    This paper proposes the use of generalized nonlinear models for the prediction of basal area growth and total volume yield of the hybrid Eucalyptus urocamaldulensis in a stand located in a central region of the state of Minas Gerais. The methodology works with the data in their original form, without requiring transformation of variables, and generates highly accurate models. Fit quality was evaluated using the Bayesian and Akaike information criteria, the maximum likelihood test, the standard error of the estimate, and residual graphics. The fitted models performed well, giving highly accurate and parsimonious estimates of the proposed variables, with errors reduced to 12% for basal area and 4% for volume prediction.
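The information criteria used above to compare candidate models can be computed from a Gaussian log-likelihood. A minimal sketch with two hypothetical volume models on synthetic stand data, not the study's dataset:

```python
# Hedged sketch: AIC and BIC from a Gaussian fit, comparing two candidate
# volume models. The growth relationship and data are illustrative.
import numpy as np

def aic_bic(sse, n, k):
    """Gaussian AIC/BIC from the residual sum of squares, n points, k parameters."""
    loglik = -0.5 * n * (np.log(2 * np.pi * sse / n) + 1.0)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

rng = np.random.default_rng(4)
age = rng.uniform(2.0, 7.0, 80)                            # stand age (years)
vol = 50.0 * age ** 1.2 + 5.0 * rng.standard_normal(80)    # assumed volume response

results = {}
for name, feat in [("A", age), ("B", age ** 1.2)]:         # B matches the true form
    Xd = np.column_stack([np.ones(80), feat])
    beta, *_ = np.linalg.lstsq(Xd, vol, rcond=None)
    sse = float(np.sum((vol - Xd @ beta) ** 2))
    results[name] = aic_bic(sse, 80, 3)                    # k = 2 coefficients + sigma

aic_a, bic_a = results["A"]
aic_b, bic_b = results["B"]
```

Lower AIC or BIC indicates the preferred model; here the nonlinear-in-age specification wins because it matches the data-generating form.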

  20. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    Science.gov (United States)

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

    A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) in Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as the dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best performance of modelled results against the measured data was achieved by the model restricted to air temperatures above 25°C, compared with the model considering all ranges of air temperature and with the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and methodology could be adopted for other cities to predict PM10 concentrations when measurements from air quality monitoring stations or other acquisition means are not available.
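The model form above, a Poisson GLM with log link, can be fitted by iteratively reweighted least squares. A minimal sketch on a single synthetic covariate; the covariate name, coefficients and data are illustrative, not the Barreiro measurements:

```python
# Hedged sketch: Poisson GLM with log link fitted by IRLS. One synthetic
# standardized covariate stands in for the full pollutant/weather set.
import numpy as np

def fit_poisson_glm(X, y, iters=30):
    """IRLS for a Poisson GLM with log link; X includes an intercept column."""
    # warm start from a least-squares fit to log(y + 0.5) for stability
    beta = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)[0]
    for _ in range(iters):
        mu = np.exp(X @ beta)                 # inverse log link
        z = X @ beta + (y - mu) / mu          # working response
        W = mu                                # Poisson IRLS weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(2)
no2 = rng.uniform(0.0, 2.0, 500)              # hypothetical standardized NO2 covariate
X = np.column_stack([np.ones(500), no2])
y = rng.poisson(np.exp(0.5 + 0.8 * no2)).astype(float)
beta_hat = fit_poisson_glm(X, y)
```

With the log link, each coefficient is a multiplicative effect: a unit increase in the covariate scales the expected PM10 count by exp(beta).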

  1. Practical prediction model for the risk of 2-year mortality of individuals in the general population.

    Science.gov (United States)

    Goldfarb-Rumyantzev, Alexander; Gautam, Shiva; Brown, Robert S

    2016-04-01

    This study proposed to validate a prediction model and risk-stratification tool of 2-year mortality rates of individuals in the general population suitable for office practice use. A risk indicator (R) derived from data in the literature was based on only 6 variables: to calculate R for an individual, starting with 0, for each year of age above 60, add 0.14; for a male, add 0.9; for diabetes mellitus, add 0.7; for albuminuria > 30 mg/g of creatinine, add 0.7; for stage ≥ 3 chronic kidney disease (CKD), add 0.9; for cardiovascular disease (CVD), add 1.4; or for both CKD and CVD, add 1.7. We developed a univariate logistic regression model predicting 2-year individual mortality rates. The National Health and Nutrition Examination Survey (NHANES) data set (1999-2004 with deaths through 2006) was used as the target for validation. These 12,515 subjects had a mean age of 48.9 ± 18.1 years, 48% males, 9.5% diabetes, 11.7% albuminuria, 6.8% CVD, 5.4% CKD, and 2.8% both CKD and CVD. Using the risk indicator R alone to predict mortality demonstrated good performance with area under the receiver operating characteristic (ROC) curve of 0.84. Dividing subjects into low-risk (R=0-1.0), low intermediate risk (R > 1.0-3.0), high intermediate risk (R > 3.0-5.0) or high-risk (R > 5.0) categories predicted 2-year mortality rates of 0.52%, 1.44%, 5.19% and 15.24%, respectively, by the prediction model compared with actual mortality rates of 0.29%, 2.48%, 5.13% and 13.40%, respectively. We have validated a model of risk stratification using easily identified clinical characteristics to predict 2-year mortality rates of individuals in the general population. The model demonstrated performance adequate for its potential use for clinical practice and research decisions.
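The additive risk indicator R described above can be transcribed directly into code; the point values come from the abstract, while the example patient is hypothetical:

```python
# Risk indicator R for 2-year mortality, transcribed from the abstract's
# scoring rules. The example patient below is hypothetical.
def risk_indicator(age, male, diabetes, albuminuria, ckd, cvd):
    """Additive risk score R; higher R means a higher-risk category."""
    r = 0.0
    r += 0.14 * max(age - 60, 0)     # 0.14 per year of age above 60
    if male:
        r += 0.9
    if diabetes:
        r += 0.7
    if albuminuria:                  # albuminuria > 30 mg/g of creatinine
        r += 0.7
    if ckd and cvd:                  # both stage >= 3 CKD and CVD
        r += 1.7
    elif ckd:
        r += 0.9
    elif cvd:
        r += 1.4
    return r

def risk_category(r):
    if r <= 1.0:
        return "low"
    if r <= 3.0:
        return "low intermediate"
    if r <= 5.0:
        return "high intermediate"
    return "high"

r_example = risk_indicator(age=70, male=True, diabetes=False,
                           albuminuria=False, ckd=True, cvd=False)
```

For this hypothetical 70-year-old man with CKD, R = 1.4 + 0.9 + 0.9 = 3.2, which falls in the high intermediate category (predicted 2-year mortality of about 5% in the study's validation).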

  2. A residual life prediction model based on the generalized σ-N curved surface

    Directory of Open Access Journals (Sweden)

    Zongwen AN

    2016-06-01

    In order to investigate the change rule of the residual life of a structure under random repeated load, firstly, starting from the statistical meaning of random repeated load, the joint probability density function of the maximum stress and minimum stress is derived based on the characteristics of order statistics (the maximum and minimum order statistics); then, based on the equation of the generalized σ-N curved surface and considering the influence of the number of load cycles on fatigue life, a relationship among minimum stress, maximum stress and residual life, that is, the σmin(n)-σmax(n)-Nr(n) curved surface model, is established; finally, the validity of the proposed model is demonstrated by a practical case. The result shows that the proposed model can reflect the influence of maximum stress and minimum stress on the residual life of a structure under random repeated load, which can provide a theoretical basis for life prediction and reliability assessment of structures.

  3. General Model to Predict Power Flow Transmitted into Laminated Beam Bases in Flexible Isolation Systems

    Institute of Scientific and Technical Information of China (English)

    NIU Junchuan; GE Peiqi; HOU Cuirong; LIM C W; SONG Kongjie

    2009-01-01

    For estimating vibration transmission accurately and performing vibration control efficiently in isolation systems, a novel general model is presented to predict the power flow transmitted into complicated flexible bases of laminated beams. In the model, the laminated beam bases are simulated by first-order shear deformation laminated plate theory, which is relatively simple and economical but accurate in predicting the vibration solutions of flexible isolation systems with laminated beam bases, in comparison with classical laminated beam theories and higher-order theories. On the basis of the presented model, the substructure technique and variational principle are employed to obtain the governing equation of the isolation system and the power flow solution. Then, the vibration characteristics of flexible isolation systems with laminated bases are investigated. Several numerical examples are given to show the validity and efficiency of the presented model. It is concluded that the presented model is an extension of the classical one and can obtain more accurate power flow solutions.

  4. Improving the predictive accuracy of hurricane power outage forecasts using generalized additive models.

    Science.gov (United States)

    Han, Seung-Ryong; Guikema, Seth D; Quiring, Steven M

    2009-10-01

    Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.

  5. Predicting stem borer density in maize using RapidEye data and generalized linear models

    Science.gov (United States)

    Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

    2017-05-01

    Average maize yield in eastern Africa is 2.03 t ha-1, compared to a global average of 6.06 t ha-1, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated at between 12% and 21% of the total production. The objective of the present study was to explore the possibility of using RapidEye spectral data to assess stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) test site on the 9th of December 2014 and the 27th of January 2015, and for Machakos (eastern Kenya) a RapidEye image was acquired on the 3rd of January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess model performance using a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season in December and early January in the two study sites, respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly when all 30 SVIs (non-nested) and when only the significant (nested) SVIs were used. The models developed could improve decision making regarding the control of maize stem borers within integrated pest management (IPM) interventions.
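The leave-one-out validation statistics used above, RMSE and RPD (the standard deviation of the observations divided by the RMSE), can be sketched with a simple stand-in model; the vegetation index, larva-density relationship and noise level are all made up:

```python
# Hedged sketch: leave-one-out cross-validation with RMSE and RPD. A plain
# linear model on one synthetic vegetation index stands in for the GLMs.
import numpy as np

def loocv_linear(x, y):
    """Leave-one-out predictions from a simple linear model y ~ a + b*x."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        b, a = np.polyfit(x[mask], y[mask], 1)      # slope, intercept
        preds[i] = a + b * x[i]
    return preds

rng = np.random.default_rng(5)
vi = rng.uniform(0.2, 0.8, 60)                                # a vegetation index
larvae = 2.0 + 4.0 * vi + 0.3 * rng.standard_normal(60)       # larva density per field
pred = loocv_linear(vi, larvae)
rmse = float(np.sqrt(np.mean((larvae - pred) ** 2)))
rpd = float(np.std(larvae, ddof=1) / rmse)
```

An RPD well above 1 means the model's cross-validated error is small relative to the natural spread of the observations, which is how the study judges its December/January predictions usable.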

  6. Spatially explicit models, generalized reproduction numbers and the prediction of patterns of waterborne disease

    Science.gov (United States)

    Rinaldo, A.; Gatto, M.; Mari, L.; Casagrandi, R.; Righetto, L.; Bertuzzo, E.; Rodriguez-Iturbe, I.

    2012-12-01

Metacommunity and individual-based theoretical models are studied in the context of the spreading of waterborne disease infections along the ecological corridors defined by river basins and networks of human mobility. The overarching claim is that mathematical models can indeed provide predictive insight into the course of an ongoing epidemic, potentially aiding real-time emergency management in allocating health care resources and anticipating the impact of alternative interventions. To support the claim, we examine the ex-post reliability of published predictions of the 2010-2011 Haiti cholera outbreak from four independent modeling studies that appeared almost simultaneously during the unfolding epidemic. For each modeled epidemic trajectory, we assess how well the predictions reproduced the observed spatial and temporal features of the outbreak to date. We consider the impact of different approaches to modeling the spatial spread of V. cholerae, the mechanics of cholera transmission, and the dynamics of susceptible and infected individuals within different local human communities. A generalized model for Haitian epidemic cholera and the related uncertainty is thus constructed and applied to the year-long dataset of reported cases now available. Specific emphasis is dedicated to models of human mobility, a fundamental infection mechanism. Lessons learned and open issues are discussed and placed in perspective, supporting the conclusion that, despite differences in methods that can be tested through model-guided field validation, mathematical modeling of large-scale outbreaks emerges as an essential component of future cholera epidemic control.
Although explicit spatial modeling is made routinely possible by widespread data mapping of hydrology, transportation infrastructure, population distribution, and sanitation, the precise condition under which a waterborne disease epidemic can start in a spatially explicit setting is

  7. Evaluating Parameterizations in General Circulation Models: Climate Simulation Meets Weather Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J

    2004-05-06

    To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands that the GCM parameterizations of unresolved processes, in particular, should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations then can be tested in the same framework. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the U.S. Department of Energy is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM.

  8. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

Full Text Available BACKGROUND: Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement as to which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address the effect that the choice of learning technique, feature set or cross-validation approach has on finding and discriminating among predictive models. PRINCIPAL FINDINGS: Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. CONCLUSIONS: The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features

  9. An atmospheric general circulation model for Pluto with predictions for New Horizons temperature profiles

    Science.gov (United States)

    Zalucha, Angela M.

    2016-06-01

Results are presented from a 3D Pluto general circulation model (GCM) that includes conductive heating and cooling, non-local thermodynamic equilibrium (non-LTE) heating by methane at 2.3 and 3.3 μm, non-LTE cooling by methane at 7.6 μm, and LTE CO rotational line cooling. The GCM also includes a treatment of the subsurface temperature and surface-atmosphere mass exchange. An initially 1 m thick layer of surface nitrogen frost was assumed, such that it was large enough to act as a large heat sink (compared with the solar heating term) but small enough that the water ice subsurface properties were also significant. Structure was found in all three directions of the 3D wind field (with a maximum magnitude of the order of 10 m s-1 in the horizontal directions and 10-5 microbar s-1 in the vertical direction). Prograde jets were found at several altitudes. The direction of flow over the poles was found to vary with altitude. Broad regions of up-welling and down-welling were also found. Predictions of vertical temperature profiles are provided for the Alice and Radio Science Experiment instruments on New Horizons, while predictions of light curves are provided for ground-based stellar occultation observations. With this model, methane concentrations of 0.2 per cent and 1.0 per cent and surface pressures of 8 and 24 microbar are distinguishable. For ground-based stellar occultations, a detectable difference exists between light curves with the different methane concentrations, but not for different initial global mean surface pressures.

  10. Simplified modeling and generalized predictive position control of an ultrasonic motor.

    Science.gov (United States)

    Bigdeli, Nooshin; Haeri, Mohammad

    2005-04-01

Ultrasonic motors (USMs) possess heavily nonlinear and load-dependent characteristics, such as dead-zone and saturation reverse effects, which vary with driving conditions. In this paper, the behavior of an ultrasonic motor is modeled using a Hammerstein model structure and experimental measurements. Model predictive controllers are also designed to obtain precise USM position control. Simulation results indicate improved performance of the motor for both set point tracking and disturbance rejection.
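A Hammerstein model chains a static input nonlinearity into linear dynamics. The sketch below uses a dead-zone followed by saturation as the nonlinearity and a first-order linear filter; all parameter values are illustrative, not the identified motor model from the paper.

```python
import numpy as np

def dead_zone_saturation(u, dz=0.2, sat=1.0):
    """Static input nonlinearity: a dead-zone of half-width dz
    followed by saturation at +/-sat (illustrative values only)."""
    v = np.where(np.abs(u) <= dz, 0.0, u - np.sign(u) * dz)
    return np.clip(v, -sat, sat)

def hammerstein_step(y_prev, u, a=0.8, b=0.2):
    """Linear first-order dynamics y[k] = a*y[k-1] + b*v[k],
    driven by the nonlinearly transformed input v[k]."""
    return a * y_prev + b * dead_zone_saturation(u)

# simulate the response to a constant input u = 1.0
y, trajectory = 0.0, []
for _ in range(50):
    y = hammerstein_step(y, u=1.0)
    trajectory.append(float(y))
# steady state: v = 1.0 - 0.2 = 0.8, so y -> b*v/(1 - a) = 0.8
```

Because the nonlinearity is static, identification can fit the linear block and the nonlinearity separately, which is what makes the Hammerstein structure attractive for predictive control of USMs.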

  11. A Generalized Process Model of Human Action Selection and Error and its Application to Error Prediction

    Science.gov (United States)

    2014-07-01

Macmillan & Creelman, 2005). This is quite a high degree of discriminability, and it means that when the decision model predicts a probability of...ROC analysis. Pattern Recognition Letters, 27(8), 861-874. Macmillan, N. A., & Creelman, C. D. (2005). Detection

  12. A General, Synthetic Model for Predicting Biodiversity Gradients from Environmental Geometry.

    Science.gov (United States)

    Gross, Kevin; Snyder-Beattie, Andrew

    2016-10-01

    Latitudinal and elevational biodiversity gradients fascinate ecologists, and have inspired dozens of explanations. The geometry of the abiotic environment is sometimes thought to contribute to these gradients, yet evaluations of geometric explanations are limited by a fragmented understanding of the diversity patterns they predict. This article presents a mathematical model that synthesizes multiple pathways by which environmental geometry can drive diversity gradients. The model characterizes species ranges by their environmental niches and limits on range sizes and places those ranges onto the simplified geometries of a sphere or cone. The model predicts nuanced and realistic species-richness gradients, including latitudinal diversity gradients with tropical plateaus and mid-latitude inflection points and elevational diversity gradients with low-elevation diversity maxima. The model also illustrates the importance of a mid-environment effect that augments species richness at locations with intermediate environments. Model predictions match multiple empirical biodiversity gradients, depend on ecological traits in a testable fashion, and formally synthesize elements of several geometric models. Together, these results suggest that previous assessments of geometric hypotheses should be reconsidered and that environmental geometry may play a deeper role in driving biodiversity gradients than is currently appreciated.
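The geometric mid-environment mechanism described above can be illustrated in one dimension: random species ranges placed on a bounded gradient overlap most near the middle, producing a mid-domain richness peak. This is a deliberately simplified, hypothetical sketch, far cruder than the authors' sphere and cone geometries.

```python
import numpy as np

rng = np.random.default_rng(2)

def richness_profile(n_species=5000, n_bins=11):
    """Place random species ranges on a bounded [0, 1] gradient and
    count how many ranges overlap each bin centre."""
    mid = rng.uniform(0.0, 1.0, n_species)     # range midpoints
    half = rng.uniform(0.0, 0.5, n_species)    # range half-widths
    lo = np.clip(mid - half, 0.0, 1.0)
    hi = np.clip(mid + half, 0.0, 1.0)
    centres = (np.arange(n_bins) + 0.5) / n_bins
    return np.array([np.sum((lo <= c) & (c <= hi)) for c in centres])

richness = richness_profile()
# richness peaks near the domain centre and falls toward both edges
```

The model in the article couples this geometric packing effect with environmental niches, which is what produces the plateaus and inflection points it predicts.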

  13. Permeability prediction of organic shale with generalized lattice Boltzmann model considering surface diffusion effect

    CERN Document Server

    Wang, Junjian; Kang, Qinjun; Rahman, Sheik S

    2016-01-01

    Gas flow in shale is associated with both organic matter (OM) and inorganic matter (IOM) which contain nanopores ranging in size from a few to hundreds of nanometers. In addition to the noncontinuum effect which leads to an apparent permeability of gas higher than the intrinsic permeability, the surface diffusion of adsorbed gas in organic pores also can influence the apparent permeability through its own transport mechanism. In this study, a generalized lattice Boltzmann model (GLBM) is employed for gas flow through the reconstructed shale matrix consisting of OM and IOM. The Expectation-Maximization (EM) algorithm is used to assign the pore size distribution to each component, and the dusty gas model (DGM) and generalized Maxwell-Stefan model (GMS) are adopted to calculate the apparent permeability accounting for multiple transport mechanisms including viscous flow, Knudsen diffusion and surface diffusion. Effects of pore radius and pressure on permeability of both IOM and OM as well as effects of Langmuir ...

  14. A more general Force Balance Model to predict Bubble Departure and Lift-off Diameters in flow boiling

    Science.gov (United States)

    Kommajosyula, Ravikishore; Mazzocco, Thomas; Ambrosini, Walter; Baglietto, Emilio

    2016-11-01

Accurate prediction of bubble departure and lift-off diameters is key for the development of closures in two-phase Eulerian CFD simulations of flow boiling, owing to their sensitivity in the heat flux partitioning approach. Several models, ranging from simple correlations to complex force balance models, have been proposed in the literature; however, they rely on data-fitting for specific databases and have been shown to be inapplicable for general flow applications. The aim of this study is to extend the approach by proposing a more consistent and general formulation that accounts for the relevant forces acting on the bubble at the points of departure and lift-off. Among the key features of the model, the bubble inclination angle is treated as an unknown to be inferred along with the departure diameter, and the relative velocity of the bubble sliding on the surface is modeled to determine the lift-off diameter. A novel expression is developed for the bubble growth force in terms of flow quantities, based on extensive data analysis. The model has been validated using six different experimental databases with varying flow conditions and three fluids. Results show high accuracy of predictions over a broad range, outperforming existing models in terms of both accuracy and generality. CASL - The Consortium for Advanced Simulation of LWRs.

  15. A mechanistic model for predicting flow-assisted and general corrosion of carbon steel in reactor primary coolants

    Energy Technology Data Exchange (ETDEWEB)

    Lister, D. [University of New Brunswick, Fredericton, NB (Canada). Dept. of Chemical Engineering; Lang, L.C. [Atomic Energy of Canada Ltd., Chalk River Lab., ON (Canada)

    2002-07-01

Flow-assisted corrosion (FAC) of carbon steel in high-temperature lithiated water can be described with a model that invokes dissolution of the protective oxide film and erosion of oxide particles that are loosened as a result. General corrosion under coolant conditions where oxide is not dissolved is described as well. In the model, the electrochemistry of magnetite dissolution and precipitation and the effect of particle size on solubility shift the dependence of the diffusion processes (and therefore the corrosion rate) on film thickness away from a reciprocal relationship. Particle erosion under dissolving conditions is treated stochastically and depends upon the fluid shear stress at the surface. The corrosion rate dependence on coolant flow under FAC conditions then becomes somewhat less than that arising purely from fluid shear (proportional to the velocity squared). Under non-dissolving conditions, particle erosion occurs infrequently and general corrosion is almost unaffected by flow. For application to a CANDU primary circuit and its feeders, the model was benchmarked against the outlet feeder S08 removed from the Point Lepreau reactor, which furnished one value of film thickness and one of corrosion rate for a computed average coolant velocity. Several constants and parameters in the model had to be assumed or optimised, since values for them were not available. These uncertainties are no doubt responsible for the rather high values of potential that evolved as steps in the computation. The model predicts film thickness development and corrosion rate for the whole range of coolant velocities in outlet feeders very well. In particular, the detailed modelling of FAC in the complex geometry of one outlet feeder (F11) is in good agreement with measurements. When the particle erosion computations are inserted in the balance equations for the circuit, realistic values of crud level are obtained. 
The model also predicts low corrosion rates and thick oxide films for inlet

  16. Comparison of mid-Pliocene climate predictions produced by the HadAM3 and GCMAM3 General Circulation Models

    Science.gov (United States)

    Haywood, A.M.; Chandler, M.A.; Valdes, P.J.; Salzmann, U.; Lunt, D.J.; Dowsett, H.J.

    2009-01-01

The mid-Pliocene warm period (ca. 3 to 3.3 million years ago) has become an important interval of time for palaeoclimate modelling exercises, with a large number of studies published during the last decade. However, there has been no attempt to assess the degree of model dependency of the results obtained. Here we present an initial comparison of mid-Pliocene climatologies produced by the Goddard Institute for Space Studies and Hadley Centre for Climate Prediction and Research atmosphere-only General Circulation Models (GCMAM3 and HadAM3). Whilst both models are consistent in the simulation of broad-scale differences in mid-Pliocene surface air temperature and total precipitation rates, significant variation is noted on regional and local scales. There are also significant differences in the model predictions of total cloud cover. A terrestrial data/model comparison, facilitated by the BIOME 4 model and a new data set of Piacenzian Stage land cover [Salzmann, U., Haywood, A.M., Lunt, D.J., Valdes, P.J., Hill, D.J., (2008). A new global biome reconstruction and data model comparison for the Middle Pliocene. Global Ecology and Biogeography 17, 432-447, doi:10.1111/j.1466-8238.2007.00381.x] and combined with the use of Kappa statistics, indicates that HadAM3-based biome predictions provide a closer fit to proxy data in the mid to high-latitudes. However, GCMAM3-based biomes in the tropics provide the closest fit to proxy data. © 2008 Elsevier B.V.

  17. Improving groundwater predictions utilizing seasonal precipitation forecasts from general circulation models forced with sea surface temperature forecasts

    Science.gov (United States)

    Almanaseer, Naser; Sankarasubramanian, A.; Bales, Jerad

    2014-01-01

Recent studies have found a significant association between climatic variability and basin hydroclimatology, particularly groundwater levels, over the southeast United States. The research reported in this paper evaluates the potential in developing 6-month-ahead groundwater-level forecasts based on the precipitation forecasts from the ECHAM 4.5 General Circulation Model forced with sea surface temperature forecasts. Ten groundwater wells and nine streamgauges from the USGS Groundwater Climate Response Network and Hydro-Climatic Data Network were selected to represent groundwater and surface water flows, respectively, having minimal anthropogenic influences within the Flint River Basin in Georgia, United States. The writers employ two low-dimensional models [principal component regression (PCR) and canonical correlation analysis (CCA)] for predicting groundwater and streamflow at both seasonal and monthly timescales. Three modeling schemes are considered at the beginning of January to predict winter (January, February, and March) and spring (April, May, and June) streamflow and groundwater for the selected sites within the Flint River Basin. The first scheme (model 1) is a null model and is developed using PCR for every streamflow and groundwater site using the previous 3-month observations (October, November, and December) available at that particular site as predictors. Modeling schemes 2 and 3 are developed using PCR and CCA, respectively, to evaluate the role of precipitation forecasts in improving monthly and seasonal groundwater predictions. Modeling scheme 3, which employs a CCA approach, is developed for each site by considering observed groundwater levels from nearby sites as predictands. The performance of these three schemes is evaluated using two metrics (correlation coefficient and relative RMS error) by developing groundwater-level forecasts based on leave-five-out cross-validation. Results from the research reported in this paper show that using
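The PCR schemes above project the predictors onto their leading principal components and regress on the scores. A minimal numpy sketch of that procedure, with synthetic collinear predictors standing in for the precipitation forecasts (all names and values are illustrative):

```python
import numpy as np

def pcr_fit_predict(X, y, X_new, n_components=2):
    """Principal component regression: project centred predictors onto
    their leading principal axes, then regress y on the scores."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                    # principal axes
    T = np.column_stack([np.ones(len(y)), Xc @ V])
    coef, *_ = np.linalg.lstsq(T, y, rcond=None)
    T_new = np.column_stack([np.ones(len(X_new)), (X_new - mu) @ V])
    return T_new @ coef

rng = np.random.default_rng(3)
n = 200
factors = rng.normal(size=(n, 2))              # two latent drivers
loadings = np.array([[1.0, 0.5, -0.3, 0.8, 0.2, -0.6],
                     [0.3, -0.7, 0.9, 0.1, -0.4, 0.5]])
X = factors @ loadings + 0.05 * rng.normal(size=(n, 6))  # collinear predictors
y = factors[:, 0] - 0.5 * factors[:, 1] + 0.05 * rng.normal(size=n)
y_hat = pcr_fit_predict(X[:150], y[:150], X[150:])       # holdout predictions
```

Truncating to a few components is what makes PCR stable when the predictor fields (here, gridded forecasts) are strongly intercorrelated.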

  18. Stable isotopes of fossil teeth corroborate key general circulation model predictions for the Last Glacial Maximum in North America

    Science.gov (United States)

    Kohn, Matthew J.; McKay, Moriah

    2010-11-01

    Oxygen isotope data provide a key test of general circulation models (GCMs) for the Last Glacial Maximum (LGM) in North America, which have otherwise proved difficult to validate. High δ18O pedogenic carbonates in central Wyoming have been interpreted to indicate increased summer precipitation sourced from the Gulf of Mexico. Here we show that tooth enamel δ18O of large mammals, which is strongly correlated with local water and precipitation δ18O, is lower during the LGM in Wyoming, not higher. Similar data from Texas, California, Florida and Arizona indicate higher δ18O values than in the Holocene, which is also predicted by GCMs. Tooth enamel data closely validate some recent models of atmospheric circulation and precipitation δ18O, including an increase in the proportion of winter precipitation for central North America, and summer precipitation in the southern US, but suggest aridity can bias pedogenic carbonate δ18O values significantly.

  19. General, Unified, Multiscale Modeling to Predict the Sensitivity of Energetic Materials

    Science.gov (United States)

    2011-10-05

FOX-7. Corrigendum. Acta Crystallographica Section B 64(4): 519. Miller, M. S. (1995): Three-phase combustion modelling: frozen ozone, a prototype...of 1308.9 cm-1 compares to the previously reported values of 1229 cm-1, obtained by neutron diffraction; 1221 cm-1 from FTIR; and 1219 cm-1

  20. Generalisation of Levine's prediction for the distribution of freezing temperatures of droplets: a general singular model for ice nucleation

    Directory of Open Access Journals (Sweden)

    R. P. Sear

    2013-04-01

Full Text Available Models without an explicit time dependence, called singular models, are widely used for fitting the distribution of temperatures at which water droplets freeze. In 1950 Levine developed the original singular model. His key assumption was that each droplet contained many nucleation sites, and that freezing occurred due to the nucleation site with the highest freezing temperature. The fact that freezing occurs due to the maximum value out of a large number of nucleation temperatures means that we can apply the results of what is called extreme-value statistics. This is the statistics of the extreme, i.e., maximum or minimum, value of a large number of random variables. Here we use the results of extreme-value statistics to show that we can generalise Levine's model to produce the most general singular model possible. We show that when a singular model is a good approximation, the distribution of freezing temperatures should always be given by what is called the generalised extreme-value distribution. In addition, we also show that the distribution of freezing temperatures for droplets of one size can be used to make predictions for the scaling of the median nucleation temperature with droplet size, and vice versa.
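The max-over-sites argument is easy to check numerically: if each droplet freezes at the highest of its sites' nucleation temperatures, droplets with more sites freeze warmer, with the median shifting only slowly (logarithmically for light-tailed site distributions) in the number of sites. The Gaussian site distribution below is purely illustrative, not a fitted physical distribution.

```python
import numpy as np

rng = np.random.default_rng(4)

def droplet_freezing_temps(n_droplets, sites_per_droplet):
    """Each droplet freezes at the highest nucleation temperature
    among its sites; site temperatures (deg C) are Gaussian here
    purely for illustration."""
    site_T = rng.normal(-20.0, 2.0, size=(n_droplets, sites_per_droplet))
    return site_T.max(axis=1)

small = droplet_freezing_temps(1000, 50)      # few sites per droplet
large = droplet_freezing_temps(1000, 5000)    # many sites per droplet
# the median freezing temperature rises with the number of sites,
# but only slowly, as expected from extreme-value statistics
```

Fitting the resulting maxima with the generalised extreme-value distribution is exactly the step the article formalizes.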

  1. Generalisation of Levine's prediction for the distribution of freezing temperatures of droplets: a general singular model for ice nucleation

    Directory of Open Access Journals (Sweden)

    R. P. Sear

    2013-07-01

    Full Text Available Models without an explicit time dependence, called singular models, are widely used for fitting the distribution of temperatures at which water droplets freeze. In 1950 Levine developed the original singular model. His key assumption was that each droplet contained many nucleation sites, and that freezing occurred due to the nucleation site with the highest freezing temperature. The fact that freezing occurs due to the maximum value out of a large number of nucleation temperatures, means that we can apply the results of what is called extreme-value statistics. This is the statistics of the extreme, i.e. maximum or minimum, value of a large number of random variables. Here we use the results of extreme-value statistics to show that we can generalise Levine's model to produce the most general singular model possible. We show that when a singular model is a good approximation, the distribution of freezing temperatures should always be given by what is called the generalised extreme-value distribution. In addition, we also show that the distribution of freezing temperatures for droplets of one size, can be used to make predictions for the scaling of the median nucleation temperature with droplet size, and vice versa.

  2. Predicting mastitis in dairy cows using neural networks and generalized additive models: a comparison

    DEFF Research Database (Denmark)

    Anantharama Ankinakatte, Smitha; Norberg, Elise; Løvendahl, Peter;

    2013-01-01

    The aim of this paper is to develop and compare methods for early detection of oncoming mastitis with automated recorded data. The data were collected at the Danish Cattle Research Center (Tjele, Denmark). As indicators of mastitis, electrical conductivity (EC), somatic cell scores (SCS), lactate...... dehydrogenase (LDH), and milk yield are considered. Each indicator is decomposed into a long-term, smoothed component, and a short-term, residual component, in order to distinguish long-term trends from short-term departures from these trends. We also study whether it is useful to derive a latent variable...... set. Their performance is evaluated on the validation data set in terms of sensitivity and specificity. Overall, the performance of NNs and GAMs is similar, with neither method appearing to be decisively superior. NNs appear to be marginally better for high specificities. NNs model results in better...
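The sensitivity/specificity evaluation above reduces to a confusion matrix at a chosen alert threshold. A minimal sketch with made-up mastitis labels and model scores:

```python
import numpy as np

def sensitivity_specificity(y_true, y_score, threshold=0.5):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), for binary
    labels and continuous model scores (toy data below)."""
    y_pred = y_score >= threshold
    tp = np.sum(y_pred & (y_true == 1))
    fn = np.sum(~y_pred & (y_true == 1))
    tn = np.sum(~y_pred & (y_true == 0))
    fp = np.sum(y_pred & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])          # 1 = mastitis case
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.6, 0.1, 0.2, 0.7])
sens, spec = sensitivity_specificity(y_true, y_score)
# sens = 0.75 (3 of 4 cases flagged), spec = 0.75 (3 of 4 healthy kept)
```

Sweeping the threshold traces out the trade-off on which the NN-versus-GAM comparison in the paper is based.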

  3. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro

    DEFF Research Database (Denmark)

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael;

    2013-01-01

    , antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone...

  4. Generalized Poisson sigma models

    CERN Document Server

    Batalin, I; Batalin, Igor; Marnelius, Robert

    2001-01-01

    A general master action in terms of superfields is given which generates generalized Poisson sigma models by means of a natural ghost number prescription. The simplest representation is the sigma model considered by Cattaneo and Felder. For Dirac brackets considerably more general models are generated.

  5. Prediction of vertical PM2.5 concentrations alongside an elevated expressway by using the neural network hybrid model and generalized additive model

    Science.gov (United States)

    Gao, Ya; Wang, Zhanyong; Lu, Qing-Chang; Liu, Chao; Peng, Zhong-Ren; Yu, Yue

    2016-10-01

A study of the vertical variation of PM2.5 concentrations was carried out in this paper. Field measurements were conducted at eight different floor heights outside a building alongside a typical elevated expressway in downtown Shanghai, China. Results show that PM2.5 concentration decreases significantly with height from the 3rd to the 7th floor and from the 8th to the 15th floor, but increases suddenly from the 7th to the 8th floor, which is at the same height as the elevated expressway. A non-parametric test indicates that the PM2.5 concentration data below the 7th floor and above the 8th floor are statistically different at the 5% significance level. To investigate the relationships between PM2.5 concentration and influencing factors, a Pearson correlation analysis was performed; the results indicate that both traffic and meteorological factors have crucial impacts on the variation of PM2.5 concentration, but there is a rather large difference in correlation coefficients below the 7th floor and above the 8th floor. Furthermore, a back-propagation neural network based on principal component analysis (PCA-BPNN), as well as a generalized additive model (GAM), was applied to predict the vertical PM2.5 concentration and examined against the field measurement dataset. Experimental results indicated that both models can produce accurate predictions, while the PCA-BPNN model provides more reliable and accurate predictions as it reduces complexity and eliminates data co-linearity. These findings reveal the vertical distribution of PM2.5 concentration and the potential of the proposed model to predict vertical trends of air pollution in similar situations.

  6. ADAPTIVE GENERALIZED PREDICTIVE CONTROL OF SWITCHED SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    WANG Yi-jing; WANG Long

    2005-01-01

The problem of adaptive generalized predictive control based on output prediction errors is studied for a class of switched systems. The switching law is determined by the output prediction errors of a finite number of subsystems. For both the single-subsystem and multiple-subsystem cases, it is proved that the given direct algorithm of generalized predictive control guarantees the global convergence of the system. This algorithm overcomes the inherent drawbacks of slow convergence and large transient errors in conventional adaptive control.

  7. Generalisation of Levine's prediction for the distribution of freezing temperatures of droplets: A general singular model for ice nucleation

    CERN Document Server

    Sear, Richard P

    2013-01-01

Models without an explicit time dependence, called singular models, are widely used for fitting the distribution of temperatures at which water droplets freeze. In 1950 Levine developed the original singular model. His key assumption was that each droplet contained many nucleation sites, and that freezing occurred due to the nucleation site with the highest freezing temperature. The fact that freezing occurs due to the maximum value out of a large number of nucleation temperatures means that we can apply the results of what is called extreme-value statistics. This is the statistics of the extreme, i.e., maximum or minimum, value of a large number of random variables. Here we use the results of extreme-value statistics to show that we can generalise Levine's model to produce the most general singular model possible. We show that when a singular model is a good approximation, the distribution of freezing temperatures should always be given by what is called the generalised extreme-value distribution. In addition...

  8. The 2009–2010 arctic stratospheric winter – general evolution, mountain waves and predictability of an operational weather forecast model

    Directory of Open Access Journals (Sweden)

    A. Dörnbrack

    2011-12-01

    Full Text Available The relatively warm 2009–2010 Arctic winter was an exceptional one as the North Atlantic Oscillation index attained persistent extreme negative values. Here, selected aspects of the Arctic stratosphere during this winter inspired by the analysis of the international field experiment RECONCILE are presented. First of all, and as a kind of reference, the evolution of the polar vortex in its different phases is documented. Special emphasis is put on explaining the formation of the exceptionally cold vortex in mid winter after a sequence of stratospheric disturbances which were caused by upward propagating planetary waves. A major sudden stratospheric warming (SSW occurring near the end of January 2010 concluded the anomalous cold vortex period. Wave ice polar stratospheric clouds were frequently observed by spaceborne remote-sensing instruments over the Arctic during the cold period in January 2010. Here, one such case observed over Greenland is analysed in more detail and an attempt is made to correlate flow information of an operational numerical weather prediction model to the magnitude of the mountain-wave induced temperature fluctuations. Finally, it is shown that the forecasts of the ECMWF ensemble prediction system for the onset of the major SSW were very skilful and the ensemble spread was very small. However, the ensemble spread increased dramatically after the major SSW, displaying the strong non-linearity and internal variability involved in the SSW event.

  9. General predictive control using the delta operator

    DEFF Research Database (Denmark)

    Jensen, Morten Rostgaard; Poulsen, Niels Kjølstad; Ravn, Ole

    1993-01-01

    This paper deals with two discrete-time operators, the conventional forward shift-operator and the δ-operator. Both operators are treated in view of constructing suitable solutions to the Diophantine equation for the purpose of prediction. A general step-recursive scheme is presented. Finally...... a general predictive control (GPC) is formulated and applied adaptively to a continuous-time plant...

  10. Differential Prediction Generalization in College Admissions Testing

    Science.gov (United States)

    Aguinis, Herman; Culpepper, Steven A.; Pierce, Charles A.

    2016-01-01

    We introduce the concept of "differential prediction generalization" in the context of college admissions testing. Specifically, we assess the extent to which predicted first-year college grade point average (GPA) based on high-school grade point average (HSGPA) and SAT scores depends on a student's ethnicity and gender and whether this…

  12. Generalized simplicial chiral models

    CERN Document Server

    Alimohammadi, M

    2000-01-01

    Using the auxiliary field representation of the simplicial chiral models on a (d-1)-dimensional simplex, we generalize the simplicial chiral models by replacing the term Tr$(AA^{\dagger})$ in the Lagrangian of these models by an arbitrary class function of $AA^{\dagger}$, $V(AA^{\dagger})$. This is the same method that has been used in defining the generalized two-dimensional Yang-Mills theories (gYM_2) from ordinary YM_2. We call these models the "generalized simplicial chiral models". With the help of the results of the one-link integral over a U(N) matrix, we compute the large-N saddle-point equations for the eigenvalue density function $\rho(z)$ in the weak ($\beta > \beta_c$) and strong ($\beta < \beta_c$) regions. In d=2, where the model relates to the gYM_2 theory, we solve the saddle-point equations and find $\rho(z)$ in both regions, and calculate the explicit value of the critical point $\beta_c$ for $V(B)=\mathrm{Tr}\,B^n$ ($B=AA^{\dagger}$). For $V(B)=\mathrm{Tr}\,B^2$, $\mathrm{Tr}\,B^3$ and $\mathrm{Tr}\,B^4$, we study the critical behaviour of the model at d=2, and by calculating t...

  13. Link prediction via generalized coupled tensor factorisation

    DEFF Research Database (Denmark)

    Ermiş, Beyza; Evrim, Acar Ataman; Taylan Cemgil, A.

    2012-01-01

    This study deals with the missing link prediction problem: the problem of predicting the existence of missing connections between entities of interest. We address link prediction using coupled analysis of relational datasets represented as heterogeneous data, i.e., datasets in the form of matrices...... different loss functions. Numerical experiments demonstrate that joint analysis of data from multiple sources via coupled factorisation improves the link prediction performance and the selection of right loss function and tensor model is crucial for accurately predicting missing links....

  14. Fast Algorithm of Multivariable Generalized Predictive Control

    Institute of Scientific and Technical Information of China (English)

    Jin, Yuanyu; Pang, Zhonghua; Cui, Hong

    2005-01-01

    To avoid the main shortcoming of traditional generalized predictive control (GPC) algorithms, namely their large computational load, a fast algorithm for multivariable generalized predictive control is presented in which only the current control actions are computed exactly on line, while the future control actions are computed approximately off line. The algorithm is simple and can be used for arbitrary-dimension input, arbitrary-dimension output (ADIADO) linear systems. Because it does not need to solve the Diophantine equation and reduces the dimension of the inverse matrix, it greatly decreases the computational burden. Finally, simulation results show that the presented algorithm is effective and practicable.

  15. Analysis of nodalization effects on the prediction error of generalized finite element method used for dynamic modeling of hot water storage tank

    Directory of Open Access Journals (Sweden)

    Wołowicz Marcin

    2015-09-01

    Full Text Available The paper presents a dynamic model of a hot water storage tank, together with a literature review. An analysis of the effects of nodalization on the prediction error of the generalized finite element method (GFEM) is provided. The model takes into account eleven parameters, such as flue gas volumetric flow rate to the spiral, inlet water temperature and outlet water flow rate. The boiler is also described by sizing parameters, nozzle parameters and heat loss, including ambient temperature. The model has been validated on existing data, and adequate laboratory experiments were performed. A comparison between 1-, 5-, 10- and 50-zone boiler models is presented; the comparison between experiment and simulation for the different zone numbers is shown in plots, and the reasons for the differences between experiment and simulation are explained.

  16. A general unified non-equilibrium model for predicting saturated and subcooled critical two-phase flow rates through short and long tubes

    Energy Technology Data Exchange (ETDEWEB)

    Fraser, D.W.H. [Univ. of British Columbia (Canada); Abdelmessih, A.H. [Univ. of Toronto, Ontario (Canada)

    1995-09-01

    A general unified model is developed to predict one-component critical two-phase pipe flow. Modelling of the two-phase flow is accomplished by describing the evolution of the flow between the location of flashing inception and the exit (critical) plane. The model approximates the nonequilibrium phase change process via thermodynamic equilibrium paths. Included are the relative effects of varying the location of flashing inception, pipe geometry, fluid properties and length-to-diameter ratio. The model predicts that a range of critical mass fluxes exists, bounded by a maximum and minimum value for a given thermodynamic state. This range is more pronounced at lower subcooled stagnation states and can be attributed to the variation in the location of flashing inception. The model is based on the results of an experimental study of the critical two-phase flow of saturated and subcooled water through long tubes. In that study, the location of flashing inception was accurately controlled and adjusted through the use of a new device. The data obtained revealed that for fixed stagnation conditions, the maximum critical mass flux occurred with flashing inception located near the pipe exit, while minimum critical mass fluxes occurred with the flashing front located further upstream. Available data since 1970 for both short and long tubes over a wide range of conditions are compared with the model predictions. This includes test section L/D ratios from 25 to 300 and covers a temperature and pressure range of 110 to 280 °C and 0.16 to 6.9 MPa, respectively. The predicted maximum and minimum critical mass fluxes show excellent agreement with the range observed in the experimental data.

  17. General Composite Higgs Models

    CERN Document Server

    Marzocca, David; Shu, Jing

    2012-01-01

    We construct a general class of pseudo-Goldstone composite Higgs models, within the minimal $SO(5)/SO(4)$ coset structure, that are not necessarily of moose-type. We characterize the main properties these models should have in order to give rise to a Higgs mass at around 125 GeV. We assume the existence of relatively light and weakly coupled spin 1 and 1/2 resonances. In absence of a symmetry principle, we introduce the Minimal Higgs Potential (MHP) hypothesis: the Higgs potential is assumed to be one-loop dominated by the SM fields and the above resonances, with a contribution that is made calculable by imposing suitable generalizations of the first and second Weinberg sum rules. We show that a 125 GeV Higgs requires light, often sub-TeV, fermion resonances. Their presence can also be important for the model to successfully pass the electroweak precision tests. Interestingly enough, the latter can be passed also by models with a heavy Higgs around 320 GeV. The composite Higgs models of the moose-type conside...

  18. Generalized Nonlinear Yule Models

    Science.gov (United States)

    Lansky, Petr; Polito, Federico; Sacerdote, Laura

    2016-10-01

    With the aim of considering models related to random graphs growth exhibiting persistent memory, we propose a fractional nonlinear modification of the classical Yule model often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth rates. Among the main results we derive the explicit distribution of the number of in-links of a webpage chosen uniformly at random recognizing the contribution to the asymptotics and the finite time correction. The mean value of the latter distribution is also calculated explicitly in the most general case. Furthermore, in order to show the usefulness of our results, we particularize them in the case of specific birth rates giving rise to a saturating behaviour, a property that is often observed in nature. The further specialization to the non-fractional case allows us to extend the Yule model accounting for a nonlinear growth.

  20. Using factor analysis scales of generalized amino acid information for prediction and characteristic analysis of β-turns in proteins based on a support vector machine model

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    This paper offers a new combined approach to predict and characterize β-turns in proteins. The approach includes two key steps, i.e., how to represent the features of β-turns and how to develop a predictor. The first step is to use factor analysis scales of generalized amino acid information (FASGAI), involving hydrophobicity, alpha and turn propensities, bulky properties, compositional characteristics, local flexibility and electronic properties, to represent the features of β-turns in proteins. The second step is to construct a support vector machine (SVM) predictor of β-turns based on 426 training proteins by a sevenfold cross-validation test. The SVM predictor then predicted β-turns on 547 and 823 proteins, separately, in an external validation test. Our results are compared with the previously best known β-turn prediction methods and are shown to give comparable performance. Most significantly, the SVM model provides some information related to β-turn residues in proteins. The results demonstrate that the present combined approach may be used in the prediction of protein structures.
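
    The sevenfold cross-validation protocol mentioned in this record is a standard evaluation device. A minimal sketch of the fold construction follows; the function name and the random shuffling scheme are my own illustration, not taken from the paper.

```python
import random

def k_fold_indices(n, k=7, seed=0):
    """Partition n sample indices into k disjoint folds; each fold serves once
    as the validation set while the remaining k-1 folds form the training set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    splits = []
    for fold in folds:
        held_out = set(fold)
        train = [j for j in idx if j not in held_out]
        splits.append((train, fold))
    return splits

# 426 training proteins, as in the record; seven train/validation splits
splits = k_fold_indices(426, k=7)
```

    Each of the seven splits trains on roughly 365 proteins and validates on the remaining 61, so every protein is used for validation exactly once.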

  1. Link prediction via generalized coupled tensor factorisation

    DEFF Research Database (Denmark)

    Ermiş, Beyza; Evrim, Acar Ataman; Taylan Cemgil, A.

    2012-01-01

    and higher-order tensors. We propose to use an approach based on probabilistic interpretation of tensor factorisation models, i.e., Generalised Coupled Tensor Factorisation, which can simultaneously fit a large class of tensor models to higher-order tensors/matrices with com- mon latent factors using...... different loss functions. Numerical experiments demonstrate that joint analysis of data from multiple sources via coupled factorisation improves the link prediction performance and the selection of right loss function and tensor model is crucial for accurately predicting missing links....

  2. Introduction to general and generalized linear models

    CERN Document Server

    Madsen, Henrik

    2010-01-01

    Introduction: Examples of types of data; Motivating examples; A first view on the models. The Likelihood Principle: Introduction; Point estimation theory; The likelihood function; The score function; The information matrix; Alternative parameterizations of the likelihood; The maximum likelihood estimate (MLE); Distribution of the ML estimator; Generalized loss-function and deviance; Quadratic approximation of the log-likelihood; Likelihood ratio tests; Successive testing in hypothesis chains; Dealing with nuisance parameters. General Linear Models: Introduction; The multivariate normal distribution; General linear mod...

  3. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
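
    An interim simulator of the kind described, producing uncorrelated hourly speed samples that reproduce a target distribution, can be sketched by inverse-transform sampling. The Weibull distribution is a common choice for wind-speed modelling; the shape and scale values below are illustrative assumptions, not the Goldstone statistics.

```python
import math
import random

def sample_wind_speed(rng, shape=2.0, scale=6.0):
    """Draw one wind speed (m/s) by inverting the Weibull CDF
    F(v) = 1 - exp(-(v/scale)**shape); shape = 2 is the Rayleigh case."""
    u = rng.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

# Uncorrelated hourly samples, in the spirit of the report's interim model
rng = random.Random(42)
hourly = [sample_wind_speed(rng) for _ in range(24)]
```

    A stochastic model like the one the record also mentions would additionally impose hour-to-hour correlation, e.g. by filtering these draws through an autoregressive process.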

  4. A general role for medial prefrontal cortex in event prediction

    Directory of Open Access Journals (Sweden)

    William H Alexander

    2014-07-01

    Full Text Available A recent computational neural model of medial prefrontal cortex (mPFC), namely the PRO model (Alexander & Brown, 2011), suggests that mPFC learns to predict the outcomes of actions. The model accounted for a wide range of data on the mPFC. Nevertheless, numerous recent findings suggest that mPFC may signal predictions and prediction errors even when the predicted outcomes are not contingent on prior actions. Here we show that the existing PRO model can learn to predict outcomes in a general sense, and not only when the outcomes are contingent on actions. A series of simulations shows how this generalized PRO model can account for an even broader range of findings in the mPFC, including human ERP, fMRI, and macaque single-unit data. The results suggest that the mPFC learns to predict salient events in general, and provide a theoretical framework that links mPFC function to model-based reinforcement learning, Bayesian learning, and theories of cognitive control.

  5. Generalization error bounds for stationary autoregressive models

    CERN Document Server

    McDonald, Daniel J; Schervish, Mark

    2011-01-01

    We derive generalization error bounds for stationary univariate autoregressive (AR) models. We show that the stationarity assumption alone lets us treat the estimation of AR models as a regularized kernel regression without the need to further regularize the model arbitrarily. We thereby bound the Rademacher complexity of AR models and apply existing Rademacher complexity results to characterize the predictive risk of AR models. We demonstrate our methods by predicting interest rate movements.
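
    A toy version of the AR setting in this record: fit an AR(1) coefficient by least squares and use it for one-step prediction. The synthetic series and all parameter values are my own illustration, not the paper's interest-rate data.

```python
import random

def fit_ar1(series):
    """Least-squares estimate of phi in the AR(1) model x[t] = phi * x[t-1] + eps[t]."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

def predict_next(series, phi):
    """One-step-ahead AR(1) prediction."""
    return phi * series[-1]

# Synthetic stationary AR(1) sample with true coefficient phi = 0.8
rng = random.Random(1)
x = [0.0]
for _ in range(999):
    x.append(0.8 * x[-1] + rng.gauss(0.0, 1.0))
phi_hat = fit_ar1(x)
```

    With 1000 samples the estimate lands close to the true value 0.8; the paper's contribution is bounding the predictive risk of exactly this kind of estimator under stationarity alone.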

  6. Using the Job-Demands-Resources model to predict turnover in the information technology workforce – General effects and gender

    Directory of Open Access Journals (Sweden)

    Peter Hoonakker

    2014-01-01

    Full Text Available High employee turnover has always been a major issue for Information Technology (IT. In particular, turnover of women is very high. In this study, we used the Job Demand/Resources (JD-R model to examine the relationship between job demands and job resources, stress/burnout and job satisfaction/commitment, and turnover intention and tested the model for gender differences. Data were collected in five IT companies. A sample of 624 respondents (return rate: 56%; 54% males; mean age: 39.7 years was available for statistical analyses. Results of our study show that relationships between job demands and turnover intention are mediated by emotional exhaustion (burnout and relationships between job resources and turnover intention are mediated by job satisfaction. We found noticeable gender differences in these relationships, which can explain differences in turnover intention between male and female employees. The results of our study have consequences for organizational retention strategies to keep men and women in the IT work force.

  7. Neural Generalized Predictive Control of a non-linear Process

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1998-01-01

    The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller based on a neural network model which has promising stability...... detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system....

  8. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also into the clinical practice with the introduction of several nomograms dealing with the main fields of onco-urology.

  9. Generalized Predictive Control for Non-Stationary Systems

    DEFF Research Database (Denmark)

    Palsson, Olafur Petur; Madsen, Henrik; Søgaard, Henning Tangen

    1994-01-01

    This paper shows how the generalized predictive control (GPC) can be extended to non-stationary (time-varying) systems. If the time-variation is slow, then the classical GPC can be used in context with an adaptive estimation procedure of a time-invariant ARIMAX model. However, in this paper prior...... knowledge concerning the nature of the parameter variations is assumed available. The GPC is based on the assumption that the prediction of the system output can be expressed as a linear combination of present and future controls. Since the Diophantine equation cannot be used due to the time......-variation of the parameters, the optimal prediction is found as the general conditional expectation of the system output. The underlying model is of an ARMAX-type instead of an ARIMAX-type as in the original version of the GPC (Clarke, D. W., C. Mohtadi and P. S. Tuffs (1987). Automatica, 23, 137-148) and almost all later...

  10. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop con...

  11. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems.We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  13. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  14. Generalizing over Lexicons to Predict Consonant Mastery.

    Science.gov (United States)

    Beckman, Mary E; Edwards, Jan

    2010-10-01

    When they first begin to talk, children show characteristic consonant errors, which are often described in terms that recall Neogrammarian sound change. For example, a Japanese child's production of the word kimono might be transcribed with an initial postalveolar affricate, as in typical velar-softening sound changes. Broad-stroke reviews of errors list striking commonalities across children acquiring different languages, whereas quantitative studies reveal enormous variability across children, some of which seems related to differences in consonant frequencies across different lexicons. This paper asks whether the appearance of commonalities across children acquiring different languages might be reconciled with the observed variability by referring to the ways in which sound change might affect frequencies in the lexicon. Correlational analyses were used to assess relationships between consonant accuracy in a database of recordings of toddlers acquiring Cantonese, English, Greek, or Japanese and two measures of consonant frequency: one specific to the lexicon being acquired, the other an average frequency calculated for the other three languages. Results showed generally positive trends, although the strength of the trends differed across measures and across languages. Many outliers in plots depicting the relationships suggested historical contingencies that have conspired to make for unexpected paths, much as in biological evolution. "The history of life is not necessarily progressive; it is certainly not predictable. The earth's creatures have evolved through a series of contingent and fortuitous events." (Gould, 1989).

  15. The impacts of data constraints on the predictive performance of a general process-based crop model (PeakN-crop v1.0)

    Science.gov (United States)

    Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.

    2017-04-01

    Improving international food security under a changing climate and increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0), which we parametrise using a Bayesian model-fitting algorithm against three different data sources: space-based vegetation indices, eddy covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but the data-constrained model greatly improves the model fit and reduces prediction uncertainty. We investigate the extent to which each dataset contributes to the model performance and show that while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that there are still significant knowledge gaps in terms of available data for model parametrisation, but our study can help indicate the necessary data collection to improve our predictions of crop yields and crop responses to environmental changes.

  16. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...

  17. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  18. SIMULATION STUDY OF GENERALIZED PREDICTIVE CONTROL FOR TURBINE POWER

    Institute of Scientific and Technical Information of China (English)

    Shi Xiaoping; Li Dongmei

    2004-01-01

    A GPC (generalized predictive control) law is developed to control the power of a turbine, after transforming the nonlinear mathematical model of the power regulation system into a CARIMA (controlled auto-regressive integrated moving average) form. The effect of the new control law is compared with that of a traditional PID (proportional, integral and differential) control law by numerical simulation. The simulation results verify the effectiveness, correctness and advantage of the new control scheme.

  19. (Studies of ocean predictability at decade to century time scales using a global ocean general circulation model in a parallel computing environment). [Large Scale Geostrophic Model

    Energy Technology Data Exchange (ETDEWEB)

    1992-03-10

    The first phase of the proposed work is largely completed on schedule. Scientists at the San Diego Supercomputer Center (SDSC) succeeded in putting a version of the Hamburg isopycnal coordinate ocean model (OPYC) onto the INTEL parallel computer. Due to the slow run speeds of the OPYC on the parallel machine, another ocean model is being used during the first part of phase 2. The model chosen is the Large Scale Geostrophic (LSG) model from the Max Planck Institute.

  20. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m...

  1. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  2. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls that underwent extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR and alternating decision trees (ADT prognostic models that were assessed for their usefulness in identification of patients at risk to develop melanoma. Validation of the LR model was done by Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724- 9.366 for those that sometimes used sunbeds, solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage, hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair, the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931, the number of dysplastic naevi (from 1 to 10 dysplastic naevi OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi OR was 6.487; 95%; CI 1.993-21.119, Fitzpatricks phototype and the presence of congenital naevi. 
Red hair, phototype I and large congenital naevi were
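As an illustration of how an LR-based melanoma risk score of this kind can be assembled, the sketch below converts the odds ratios quoted above into logistic-regression coefficients and combines them into a probability. The zero intercept and the additive use of single-category ORs are simplifying assumptions for illustration, not values from the paper.

```python
import math

# Odds ratios reported in the abstract (per risk-factor category)
odds_ratios = {
    "sunbeds_sometimes": 4.018,
    "severe_solar_damage": 8.274,
    "light_brown_blond_hair": 3.222,
    "over_100_common_naevi": 3.570,
    "1_to_10_dysplastic_naevi": 2.672,
    "over_10_dysplastic_naevi": 6.487,
}

# Logistic-regression coefficients are the natural logs of the odds ratios
coefficients = {k: math.log(v) for k, v in odds_ratios.items()}

def risk_score(factors, intercept=0.0):
    """Linear predictor: intercept plus the coefficients of the present factors."""
    return intercept + sum(coefficients[f] for f in factors)

def risk_probability(factors, intercept=0.0):
    """Map the linear predictor to a probability via the logistic function."""
    return 1.0 / (1.0 + math.exp(-risk_score(factors, intercept)))

p = risk_probability(["sunbeds_sometimes", "over_100_common_naevi"])
```

With a real intercept fitted to case-control prevalence, the same construction yields the melanoma risk score (MRS) idea described in the abstract.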

  3. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea and compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
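A minimal sketch of the core idea, assuming a zeroth-order local model: delay-coordinate embedding reconstructs the phase space, and the forecast is the average of the values that followed the k nearest dynamical neighbors of the current state. The embedding parameters and the sine-wave test signal are illustrative choices, not taken from the paper.

```python
import numpy as np

def embed(series, dim, tau):
    """Delay-coordinate embedding of a scalar time series."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

def local_predict(series, dim=3, tau=10, k=5):
    """Average the successors of the k nearest dynamical neighbors of the
    most recent reconstructed state (zeroth-order local model)."""
    X = embed(series, dim, tau)
    targets = series[(dim - 1) * tau + 1 :]        # value following each state
    states, query = X[: len(targets)], X[-1]
    dists = np.linalg.norm(states - query, axis=1)
    return targets[np.argsort(dists)[:k]].mean()

# Toy signal: a sampled sine wave with exactly 200 samples per period
t = np.linspace(0.0, 20.0 * np.pi, 2001)
pred = local_predict(np.sin(t))
```

For this periodic toy signal the nearest neighbors are exact recurrences from earlier periods, so the one-step forecast essentially reproduces the next sample.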

  4. Modified Newtonian dynamics as a prediction of general relativity

    CERN Document Server

    Rahman, S

    2006-01-01

    We treat the physical vacuum as a featureless relativistic continuum in motion, and explore its consequences. Proceeding in a step-by-step manner, we are able to show that the equations of classical electrodynamics follow from the motion of a space-filling fluid of neutral spinors which we identify with neutrinos. The model predicts that antimatter has negative mass, and that neutrinos are matter-antimatter dipoles. Together these suffice to explain the presence of modified Newtonian dynamics as a gravitational polarisation effect. The existence of antigravity could resolve other major outstanding issues in cosmology, including the rate of expansion of the universe and its flatness, the origin of gamma ray bursts, and the smallness of the cosmological constant. If our model is correct then all of these observations are non-trivial predictions of Einstein's general theory of relativity.

  5. Implementation of routine ash predictions using a general purpose atmospheric dispersion model (HYSPLIT) adapted for calculating ash thickness on the ground.

    Science.gov (United States)

    Hurst, Tony; Davis, Cory; Deligne, Natalia

    2016-04-01

GNS Science currently produces twice-daily forecasts of the likely ash deposition if any of the active or recently active volcanoes in New Zealand were to erupt, with a number of alternative possible eruptions for each volcano. These use our ASHFALL program for calculating ash thickness, which uses 1-D wind profiles at the location of each volcano derived from Numerical Weather Prediction (NWP) model output supplied by MetService. HYSPLIT is a hybrid Lagrangian dispersion model, developed by NOAA/ARL, which is used by MetService in its role as a Volcanic Ash Advisory Centre to model airborne volcanic ash, with meteorological data provided by external and in-house NWP models. A by-product of the HYSPLIT volcanic ash dispersion simulations is the deposition rate at the ground surface. Comparison of HYSPLIT with ASHFALL showed that alterations to the standard fall velocity model were required to deal with ash particles larger than about 50 microns, which make up the bulk of ash deposits near a volcano. It also required the ash injected into the dispersion model to have a concentration profile based on a typical umbrella-shaped eruption column, rather than uniform across all levels. The different parameters used in HYSPLIT also caused us to revisit which combinations of eruption size and column height were appropriate to model as a likely eruption. We are now running HYSPLIT to produce alternative ash forecasts. It is apparent that there are many times at which the 3-D wind model used in HYSPLIT gives a substantially different ash deposition pattern to the 1-D wind model of ASHFALL, and the use of HYSPLIT will give more accurate predictions. ASHFALL is likely still to be used for probabilistic hazard forecasting, in which very large numbers of runs are required, as HYSPLIT takes much more computer time.
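The fall-velocity issue for particles above about 50 microns can be illustrated with Stokes' law, which underlies simple settling models and breaks down as the particle Reynolds number approaches unity. The densities and viscosity below are typical textbook values, not the parameters used in HYSPLIT or ASHFALL.

```python
# Stokes settling-velocity sketch; constants are typical values, not from the paper.
G = 9.81            # gravity, m/s^2
RHO_ASH = 2500.0    # particle density, kg/m^3 (assumed)
RHO_AIR = 1.2       # air density at sea level, kg/m^3
MU_AIR = 1.8e-5     # dynamic viscosity of air, Pa*s

def stokes_velocity(d):
    """Terminal fall velocity (m/s) of a sphere of diameter d (m) in Stokes flow."""
    return G * d ** 2 * (RHO_ASH - RHO_AIR) / (18.0 * MU_AIR)

def reynolds(d):
    """Particle Reynolds number; Stokes' law is only valid for Re << 1."""
    return RHO_AIR * stokes_velocity(d) * d / MU_AIR

v50 = stokes_velocity(50e-6)   # ~0.19 m/s for a 50-micron particle
```

At 50 microns the Reynolds number is already of order one, and well above one for coarser ash, which is why a plain Stokes fall-velocity model must be altered for the near-vent particles the abstract discusses.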

  6. Numerical weather prediction model tuning via ensemble prediction system

    Science.gov (United States)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of a tuning exercise with a top-end global NWP model are presented.
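A toy sketch of the EPPES idea under strong simplifying assumptions (a scalar parameter, a trivial "forecast model" that returns the parameter itself, and a Gaussian proposal): each ensemble member runs with its own parameter draw, members are weighted by a likelihood against the verifying observation, and the proposal is re-fitted from the weighted sample. None of the numbers below come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def eppes_step(mean, var, n_ens, simulate, obs, obs_noise):
    """One EPPES-style cycle: sample parameter values from the proposal,
    run one 'forecast' per ensemble member, weight members by a Gaussian
    likelihood against the verifying observation, and re-fit the proposal."""
    theta = rng.normal(mean, np.sqrt(var), size=n_ens)
    errors = np.array([simulate(t) for t in theta]) - obs
    w = np.exp(-0.5 * (errors / obs_noise) ** 2)
    w /= w.sum()
    new_mean = np.sum(w * theta)
    new_var = np.sum(w * (theta - new_mean) ** 2)
    return new_mean, max(new_var, 1e-3)   # variance floor keeps the proposal alive

truth = 2.5                 # "correct" parameter value, unknown to the algorithm
mean, var = 0.0, 4.0        # initial proposal distribution
for _ in range(20):
    obs = truth + rng.normal(0.0, 0.1)               # verifying observation
    mean, var = eppes_step(mean, var, 50, lambda t: t, obs, 1.0)
```

Over the cycles the proposal mean drifts toward the true parameter and its variance contracts, which is the qualitative behaviour reported for the Lorenz-95 tests.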

  7. Experimental Investigations of Generalized Predictive Control for Tiltrotor Stability Augmentation

    Science.gov (United States)

    Nixon, Mark W.; Langston, Chester W.; Singleton, Jeffrey D.; Piatak, David J.; Kvaternik, Raymond G.; Bennett, Richard L.; Brown, Ross K.

    2001-01-01

A team of researchers from the Army Research Laboratory, NASA Langley Research Center (LaRC), and Bell Helicopter-Textron, Inc. have completed hover-cell and wind-tunnel testing of a 1/5-size aeroelastically-scaled tiltrotor model using a new active control system for stability augmentation. The active system is based on a generalized predictive control (GPC) algorithm originally developed at NASA LaRC in 1997 for unknown disturbance rejection. Results of these investigations show that GPC combined with an active swashplate can significantly augment the damping and stability of tiltrotors in both hover and high-speed flight.

  8. Predicting student success in General Chemistry

    Science.gov (United States)

    Figueroa, Daphne Elizabeth

The goal of this research was to determine the predictors of student success in college-level General Chemistry. The potential predictors were categorized as cognitive, non-cognitive, affective, or demographic factors. A broader goal of the study was to provide a reference for academic personnel to better judge the prerequisite skills, knowledge and attitudes that students should attain before enrolling in General Chemistry. Therefore, the study is relevant to chemical educators who are attempting to matriculate candidates for the scientific workforce and to chemical education researchers who are interested in student success, student retention and curricular reform. The major hypotheses were that several factors from each category would emerge as significant predictors and that these would differ for students enrolled at three different post-secondary institutions: a community college, a private university and a public university. These hypotheses were tested using multiple regression techniques to analyze grade, student survey and post-test data collected from General Chemistry students at the three institutions. Overall, twelve factors (six demographic, three cognitive and three affective) emerged as strong, significant predictors of student success. In addition, there were marked differences in which factors emerged based on the type of institution and on how student success was defined. Thus, the major hypotheses of the study were supported. Overall, this study has significant implications for educational policy, theory, and practice. With regard to policy, there is a need for institutions and departments that offer General Chemistry to provide support for a diverse population of students. And, at the community college level in particular, there is a need for better academic advising and more institutional support for underprepared students. In the classroom, the professor plays a critical role in influencing students' academic self-concept, which in turn

  9. Data mining the NCI60 to predict generalized cytotoxicity.

    Science.gov (United States)

    Lee, Adam C; Shedden, Kerby; Rosania, Gustavo R; Crippen, Gordon M

    2008-07-01

Elimination of cytotoxic compounds in the early and later stages of drug discovery can help reduce the costs of research and development. Through the application of principal components analysis (PCA), we were able to data mine and prove that approximately 89% of the total log GI50 variance is due to the nonspecific cytotoxic nature of substances. Furthermore, PCA led to the identification of groups of structurally unrelated substances showing very specific toxicity profiles, such as a set of 45 substances toxic only to the Leukemia_SR cancer cell line. In an effort to predict nonspecific cytotoxicity on the basis of the mean log GI50, we created a decision tree using MACCS keys that can correctly classify over 83% of the substances as cytotoxic/noncytotoxic in silico, on the basis of the cutoff of mean log GI50 = -5.0. Finally, we have established a linear model using least-squares in which nine of the 59 available NCI60 cancer cell lines can be used to predict the mean log GI50. The model has R^2 = 0.99 and a root-mean-square deviation between the observed and calculated mean log GI50 (RMSE) of 0.09. Our predictive models can be applied to flag generally cytotoxic molecules in virtual and real chemical libraries, thus saving time and effort.
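The finding that one nonspecific component dominates the log GI50 variance can be mimicked on synthetic data: build a substances-by-cell-lines matrix dominated by a shared per-substance potency component, and check that the first principal component captures most of the variance. The matrix dimensions, noise levels, and the -5.0 cutoff echo the abstract only loosely; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 substances x 60 cell lines of synthetic log GI50 values, dominated
# by a single shared "general cytotoxicity" component per substance
n_sub, n_lines = 200, 60
general = rng.normal(-5.0, 1.0, size=(n_sub, 1))        # per-substance potency
specific = rng.normal(0.0, 0.3, size=(n_sub, n_lines))  # line-specific residue
X = general + specific

# Fraction of variance captured by the first principal component (via SVD)
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
explained = s[0] ** 2 / np.sum(s ** 2)

# Crude cytotoxicity call from the mean log GI50, cutoff -5.0 as in the paper
cytotoxic = X.mean(axis=1) < -5.0
```

With these noise levels the first component explains roughly 90% of the variance, qualitatively matching the ~89% nonspecific-cytotoxicity share reported for the NCI60 data.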

  10. Generalized predictive control in the delta-domain

    DEFF Research Database (Denmark)

    Lauritsen, Morten Bach; Jensen, Morten Rostgaard; Poulsen, Niels Kjølstad

    1995-01-01

This paper describes new approaches to generalized predictive control formulated in the delta (δ) domain. A new δ-domain version of the continuous-time emulator-based predictor is presented. It produces the optimal estimate in the deterministic case whenever the predictor order is chosen greater than or equal to the number of future predicted samples; however, a "good" estimate is usually obtained over a much longer range of samples. This is particularly advantageous at fast sampling rates, where a "conventional" predictor is bound to become very computationally demanding. Two controllers are considered: one having a well-defined limit as the sampling period tends to zero, the other being a close approximation to the conventional discrete-time GPC. Both algorithms are discrete in nature and well-suited for adaptive control. The fact that δ-domain models are used does not introduce…
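The numerical advantage of the δ operator at fast sampling can be seen in one line: δ = (q - 1)/Ts maps the sampled mode exp(a·t) to a pole that tends to the continuous-time pole a as Ts → 0, whereas the shift-domain pole exp(a·Ts) crowds toward 1. A minimal sketch (the value of a is an arbitrary illustrative choice):

```python
import math

a = -2.0  # continuous-time pole of the mode exp(a*t); illustrative value

def delta_pole(Ts):
    """Pole of the sampled mode in the delta domain: (q - 1)/Ts applied to exp(a*Ts)."""
    return (math.exp(a * Ts) - 1.0) / Ts

coarse = delta_pole(0.1)    # slow sampling: noticeably offset from a
fine = delta_pole(0.001)    # fast sampling: essentially the continuous pole
```

This convergence to the continuous-time parameterization is what gives δ-domain predictors and controllers their well-defined limit as the sampling period tends to zero.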

  11. Stability analysis of embedded nonlinear predictor neural generalized predictive controller

    Directory of Open Access Journals (Sweden)

    Hesham F. Abdel Ghaffar

    2014-03-01

Full Text Available Nonlinear Predictor-Neural Generalized Predictive Control (NGPC) is one of the most advanced control techniques used with severely nonlinear processes. In this paper, a hybrid solution combining NGPC and the Internal Model Principle (IMP) is implemented to stabilize nonlinear, non-minimum phase, variable dead time processes under high disturbance values over a wide range of operation. Also, the superiority of NGPC over linear predictive controllers, like GPC, is demonstrated for severely nonlinear processes over a wide range of operation. The conditions necessary to stabilize NGPC are derived using Lyapunov stability analysis for nonlinear processes. The NGPC stability conditions and the improvement in disturbance suppression are verified both in simulation using Duffing's nonlinear equation and in real time using a continuous stirred tank reactor. To our knowledge, this paper offers the first hardware-embedded neural GPC, which has been used to verify the NGPC-IMP improvement in real time.

  12. SCIPUFF - a generalized hazard dispersion model

    Energy Technology Data Exchange (ETDEWEB)

    Sykes, R.I.; Henn, D.S.; Parker, S.F.; Gabruk, R.S. [Titan Research and Technology, Princeton, NJ (United States)

    1996-12-31

One of the more popular techniques for efficiently representing the dispersion process is the Gaussian puff model, which uses a collection of Lagrangian puffs with Gaussian concentration profiles. SCIPUFF (Second-order Closure Integrated Puff) is an advanced Gaussian puff model which uses second-order turbulence closure techniques to relate the dispersion rates to measurable turbulent velocity statistics, providing a wide range of applicability. In addition, the closure model provides a prediction of the statistical variance in the concentration field which can be used to estimate the uncertainty in the dispersion prediction resulting from the inherent uncertainty in the wind field. SCIPUFF has been greatly extended from a power plant plume model to describe more general source characteristics, material properties, and longer range dispersion. In addition, a Graphical User Interface has been developed to provide interactive problem definition and output display. This presentation describes the major features of the model, and presents several example calculations.
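The building block of such a model is the Gaussian puff itself. A minimal sketch of the concentration field of one puff follows; the mass, centre, and spreads are arbitrary illustrative numbers, and SCIPUFF's closure-derived spread growth and concentration-variance prediction are not modelled here.

```python
import numpy as np

def puff_concentration(x, y, z, center, sigma, Q):
    """Concentration (mass/volume) of a single Gaussian puff of mass Q
    centred at `center` with spreads sigma = (sx, sy, sz)."""
    sx, sy, sz = sigma
    cx, cy, cz = center
    norm = Q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
    arg = ((x - cx) ** 2 / sx ** 2
           + (y - cy) ** 2 / sy ** 2
           + (z - cz) ** 2 / sz ** 2)
    return norm * np.exp(-0.5 * arg)

# Peak concentration of a 1000-unit puff with 50 m horizontal, 20 m vertical spread
c0 = puff_concentration(0.0, 0.0, 0.0, (0.0, 0.0, 0.0), (50.0, 50.0, 20.0), 1000.0)
```

A full puff model superposes many such puffs advected by the wind, with the sigmas grown over time by the turbulence closure.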

  13. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...

  14. Childhood asthma prediction models: a systematic review.

    Science.gov (United States)

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.
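The trade-off described above, with some models better at predicting asthma development (sensitivity) and others better at ruling it out (specificity), can be made concrete with confusion-matrix counts; the numbers below are invented for illustration and do not come from the review.

```python
def sensitivity(tp, fn):
    """Proportion of children who developed asthma that the model flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of children who did not develop asthma that the model cleared."""
    return tn / (tn + fp)

# Hypothetical counts for two prediction models on the same 150-child cohort
rule_in = {"tp": 40, "fn": 10, "tn": 60, "fp": 40}   # better at predicting asthma
rule_out = {"tp": 25, "fn": 25, "tn": 95, "fp": 5}   # better at ruling it out
```

Neither hypothetical model dominates in both measures at once, mirroring the review's finding that no single model stood out in both aspects simultaneously.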

  15. Generalized Mathematical Model for Hot Rolling Process of Plate

    Institute of Scientific and Technical Information of China (English)

    Zhenshan CUI; Bingye XU

    2003-01-01

    A generalized mathematical model is developed to predict the changes of temperature, rolling pressure, strain,strain rate, and austenite grain size for plate hot rolling and cooling processes. The model is established mainly by incorporating analytical an

  16. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
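A finite-time Lyapunov exponent of the kind used in this study can be estimated by tracking the growth of a small perturbation over a fixed lead time. The sketch below uses the Lorenz-63 system as a stand-in for the geophysical models in the paper; the integration scheme, lead time, and perturbation size are illustrative choices.

```python
import numpy as np

def lorenz_rhs(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(v, dt=0.01):
    """Classical fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(v)
    k2 = lorenz_rhs(v + 0.5 * dt * k1)
    k3 = lorenz_rhs(v + 0.5 * dt * k2)
    k4 = lorenz_rhs(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def ftle(v0, lead_time=5.0, dt=0.01, eps=1e-8):
    """Finite-time Lyapunov exponent: log perturbation growth over lead_time."""
    a = np.array(v0, dtype=float)
    b = a + np.array([eps, 0.0, 0.0])
    for _ in range(int(lead_time / dt)):
        a, b = rk4_step(a, dt), rk4_step(b, dt)
    return np.log(np.linalg.norm(b - a) / eps) / lead_time

# Spin up onto the attractor, then measure the FTLE of that initial condition
state = np.array([1.0, 1.0, 1.0])
for _ in range(2000):
    state = rk4_step(state)
exponent = ftle(state)
```

Comparing such exponents between initial conditions that do and do not lead to extreme observable values is the basic computation behind the paper's predictability comparison.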

  17. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...

  18. Fermions as generalized Ising models

    Science.gov (United States)

    Wetterich, C.

    2017-04-01

    We establish a general map between Grassmann functionals for fermions and probability or weight distributions for Ising spins. The equivalence between the two formulations is based on identical transfer matrices and expectation values of products of observables. The map preserves locality properties and can be realized for arbitrary dimensions. We present a simple example where a quantum field theory for free massless Dirac fermions in two-dimensional Minkowski space is represented by an asymmetric Ising model on a euclidean square lattice.

  19. CPN Models in General Coordinates

    CERN Document Server

    Barnes, K J

    2002-01-01

An analysis of CPN models is given in terms of general coordinates or arbitrary interpolating fields. Only closed expressions made from simple functions are involved. Special attention is given to CP2 and CP4. In the first of these the retrieval of stereographic coordinates reveals the hermitian form of the metric. A similar analysis for the latter case allows comparison with the Fubini-Study metric.

  20. Self-Tuning of Design Variables for Generalized Predictive Control

    Science.gov (United States)

    Lin, Chaung; Juang, Jer-Nan

    2000-01-01

Three techniques are introduced to determine the order and control weighting for the design of a generalized predictive controller. These techniques are based on the application of fuzzy logic, genetic algorithms, and simulated annealing to conduct an optimal search on specific performance indexes or objective functions. Fuzzy logic is found to be feasible for real-time and on-line implementation due to its smooth and quick convergence. On the other hand, genetic algorithms and simulated annealing are applicable for initial estimation of the model order and control weighting, and final fine-tuning within a small region of the solution space. Several numerical simulations for a multiple-input and multiple-output system are given to illustrate the techniques developed in this paper.
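For intuition about the control weighting being tuned, here is a deliberately minimal one-step predictive controller for a first-order plant: the control minimises predicted tracking error plus a λ-weighted control penalty. This is a far smaller problem than the multi-input GPC in the paper; the plant parameters and λ are arbitrary illustrative values.

```python
# Minimal one-step predictive controller for the plant y[k+1] = a*y[k] + b*u[k];
# lam is the control weighting of the kind tuned in the paper (values illustrative).
a, b = 0.9, 0.5
lam = 0.1
r = 1.0  # setpoint

def gpc_control(y):
    """Minimise (a*y + b*u - r)^2 + lam*u^2 over u (closed-form solution)."""
    return b * (r - a * y) / (b * b + lam)

y = 0.0
for _ in range(50):
    y = a * y + b * gpc_control(y)   # closed-loop simulation
```

Larger λ penalises control effort and increases the steady-state offset, which for this plant settles at y = b²r / (b² + λ(1 - a)); λ → 0 recovers exact one-step tracking. Searching over such weightings (and the model order) is what the fuzzy, genetic, and annealing techniques automate.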

  1. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

…modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as estimates of population-average confidence scores. The latter can be used to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...
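Rival prediction models with similar discrimination can be compared through the Brier score mentioned above. A minimal sketch of the score itself (the forecast probabilities and outcomes below are invented for illustration):

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

outcomes = [1, 0, 1, 1, 0]               # observed events
model_a = [0.9, 0.2, 0.8, 0.7, 0.1]      # sharp, well-calibrated forecasts
model_b = [0.6, 0.4, 0.6, 0.6, 0.5]      # hedged forecasts

better = brier_score(model_a, outcomes) < brier_score(model_b, outcomes)
```

The confidence scores of the paper go further by repeating such evaluations over bootstrap training sets and decomposing the expected Brier score, but the comparison above is the underlying primitive.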

  2. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated… are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions...

  3. Predicting Gang Fight Participation in a General Youth Sample via the HEW Youth Development Model's Community Program Impact Scales, Age, and Sex.

    Science.gov (United States)

    Truckenmiller, James L.

    The accurate prediction of violence has been in the spotlight of critical concern in recent years. To investigate the relative predictive power of peer pressure, youth perceived negative labeling, youth perceived access to educational and occupational roles, social alienation, self-esteem, sex, and age with regard to gang fight participation…

  4. Modelling, controlling, predicting blackouts

    CERN Document Server

    Wang, Chengwei; Baptista, Murilo S

    2016-01-01

The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, playing havoc with human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...

  5. Melanoma Risk Prediction Models

    Science.gov (United States)

Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Fermions as generalized Ising models

    Directory of Open Access Journals (Sweden)

    C. Wetterich

    2017-04-01

    Full Text Available We establish a general map between Grassmann functionals for fermions and probability or weight distributions for Ising spins. The equivalence between the two formulations is based on identical transfer matrices and expectation values of products of observables. The map preserves locality properties and can be realized for arbitrary dimensions. We present a simple example where a quantum field theory for free massless Dirac fermions in two-dimensional Minkowski space is represented by an asymmetric Ising model on a euclidean square lattice.

  7. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence… are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations...

  8. Hybrid modeling and prediction of dynamical systems

    Science.gov (United States)

    Lloyd, Alun L.; Flores, Kevin B.

    2017-01-01

    Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642

  9. Studies of Ocean Predictability at Decade to Century Time Scales Using a Global Ocean General Circulation Model in a Parallel Computing Environment

    Energy Technology Data Exchange (ETDEWEB)

    Barnett, T.P.

    1998-11-30

The objectives of this report are to determine the structure of oceanic natural variability at time scales of decades to centuries; characterize the physical mechanisms responsible for the variability; determine the relative importance of heat, fresh water, and momentum fluxes on the variability; and determine the predictability of the variability on these time scales. (B204)

  10. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...

  11. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement of prediction methods has not been significant: traditional statistical prediction methods suffer from low precision and poor interpretability, so they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, combining theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that produce large volumes of cargo, and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, this study established a logistics requirements potential model from the aspect of spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  12. [Studies of ocean predictability at decade to century time scales using a global ocean general circulation model in a parallel competing environment]. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    1992-03-10

The first phase of the proposed work is largely completed on schedule. Scientists at the San Diego Supercomputer Center (SDSC) succeeded in putting a version of the Hamburg isopycnal coordinate ocean model (OPYC) onto the INTEL parallel computer. Due to the slow run speeds of the OPYC on the parallel machine, another ocean model is being used during the first part of phase 2. The model chosen is the Large Scale Geostrophic (LSG) model from the Max Planck Institute.

  13. Metacognition Beliefs and General Health in Predicting Alexithymia in Students.

    Science.gov (United States)

    Babaei, Samaneh; Ranjbar Varandi, Shahryar; Hatami, Zohre; Gharechahi, Maryam

    2015-06-12

The present study was conducted to investigate the role of metacognition beliefs and general health in alexithymia in Iranian students. This descriptive and correlational study included 200 high-school students, selected randomly from schools in two cities (Sari and Dargaz), Iran. The Metacognitive Strategies Questionnaire (MCQ-30), the General Health Questionnaire (GHQ) and the Farsi version of the Toronto Alexithymia Scale (TAS-20) were used for gathering the data. The data were analyzed using Pearson's correlation and regression. The findings indicated significant positive relationships between alexithymia and all subscales of general health. The highest correlation was between alexithymia and the anxiety subscale (r = 0.36, P < …) … metacognitive strategies. The highest significant negative relationship was seen between alexithymia and the subscale of risk uncontrollability (r = -0.359, P < …). Metacognition beliefs predicted about 8% of the variance of alexithymia (β = -0.028, P < …). Metacognition beliefs and general health had an important role in predicting alexithymia in students.

  14. Generalized model of island biodiversity

    Science.gov (United States)

    Kessler, David A.; Shnerb, Nadav M.

    2015-04-01

    The dynamics of a local community of competing species with weak immigration from a static regional pool is studied. Implementing the generalized competitive Lotka-Volterra model with demographic noise, a rich dynamics with four qualitatively distinct phases is unfolded. When the overall interspecies competition is weak, the island species recapitulate the mainland species. For higher values of the competition parameter, the system still admits an equilibrium community, but now some of the mainland species are absent on the island. Further increase in competition leads to an intermittent "disordered" phase, where the dynamics is controlled by invadable combinations of species and the turnover rate is governed by the migration. Finally, the strong competition phase is glasslike, dominated by uninvadable states and noise-induced transitions. Our model contains, as a special case, the celebrated neutral island theories of Wilson-MacArthur and Hubbell. Moreover, we show that slight deviations from perfect neutrality may lead to each of the phases, as the Hubbell point appears to be quadracritical.
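The generalized competitive Lotka-Volterra dynamics described above can be sketched in a few lines. The following is a deterministic Euler-integration toy; the paper's model includes demographic noise, for which the `sigma` term here is only a crude Gaussian stand-in, and all parameter values are illustrative:

```python
import random

def lv_step(n, c=0.3, m=0.001, dt=0.01, sigma=0.0, rng=random):
    """One Euler step of a symmetric competitive Lotka-Volterra community
    with weak immigration m from a static regional pool.  sigma > 0 adds a
    crude sqrt(n)-scaled Gaussian term as a stand-in for demographic noise."""
    total = sum(n)
    out = []
    for ni in n:
        growth = ni * (1.0 - ni - c * (total - ni)) + m
        noise = sigma * (ni ** 0.5) * rng.gauss(0.0, 1.0) if sigma > 0 else 0.0
        out.append(max(ni + dt * growth + (dt ** 0.5) * noise, 0.0))
    return out

def simulate_community(S=10, steps=20000, **kwargs):
    """Iterate the community from a uniform initial condition."""
    n = [0.1] * S
    for _ in range(steps):
        n = lv_step(n, **kwargs)
    return n
```

With immigration and noise switched off, a symmetric community settles at n* = 1/(1 + c(S-1)), which serves as a quick sanity check on the integration.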

  15. Generalized model of brushless dc generator

    Science.gov (United States)

    Vadher, V. V.; Kettleborough, J. Gordon; Smith, I. R.; Gerges, Wahid R.

    1989-07-01

    A generalized model is described for a brushless dc machine consisting of a multiphase synchronous machine with a full-wave bridge rectifier connected to its output terminals. The state-variable equations for the machine are suitable for numerical integration on a digital computer, and are assembled in a form which permits investigations to be made on the effects of different numbers of armature phase windings and different winding connections. The model has been used in both steady-state and transient studies on a number of generating units, with the detailed information which is provided being beneficial to design engineers. Comparisons presented between predicted and measured results illustrate the validity of the model and the mathematical techniques adopted, and confirm that accurate information on the performance of a brushless generator may be obtained prior to manufacture.

  16. Predictive models of forest dynamics.

    Science.gov (United States)

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  17. Transferability of regional permafrost disturbance susceptibility modelling using generalized linear and generalized additive models

    Science.gov (United States)

    Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.

    2016-07-01

    To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally well, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas.
Terrain attributes associated with the initiation of disturbances were
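The workflow in this record, fitting a binomial GLM to disturbed/undisturbed points and scoring model skill by AUROC, can be illustrated with a small pure-Python sketch. Synthetic data and gradient-descent fitting stand in for the authors' actual model fitting; a GAM would additionally pass each predictor through a smooth term:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Logistic regression (a binomial GLM with logit link) fitted by
    plain stochastic gradient descent; X is a list of feature vectors,
    y a list of 0/1 disturbance labels."""
    w = [0.0] * (len(X[0]) + 1)                  # intercept + one weight per predictor
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                           # gradient of the log-loss wrt z
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict_prob(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank identity."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

Transferability testing then amounts to fitting `w` on one region's points and computing `auroc` on another region's, exactly the GLM half of the comparison the abstract describes.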

  18. Neural Generalized Predictive Control of a non-linear Process

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1998-01-01

    qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which in the present case has to be done numerically. Therefore, we develop the numerical algorithms necessary in substantial...

  19. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the choice of a particular model extremely complex. This research aims to compare the predictive capacity of five conditional heteroskedasticity models, considering eight different statistical probability distributions, using the Model Confidence Set procedure. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Average in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models are highly homogeneous in their predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
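As a concrete example of the ARCH family compared in the paper, the GARCH(1,1) conditional-variance recursion can be written in a few lines. The parameter values below are typical illustrative choices, not estimates from the Bovespa or Dow Jones series; in practice they are estimated by maximum likelihood under the various distributions the paper considers:

```python
def garch_variance(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """Conditional-variance recursion of a GARCH(1,1) model, the workhorse
    of the ARCH family: sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1].
    Started at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

A shock in the return immediately raises the next conditional variance, which then decays geometrically back toward the unconditional level; out-of-sample variance forecasts from several such recursions are what the Model Confidence Set procedure compares.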

  20. Webinar of paper 2013, Which method predicts recidivism best? A comparison of statistical, machine learning and data mining predictive models

    NARCIS (Netherlands)

    Tollenaar, N.; Van der Heijden, P.G.M.|info:eu-repo/dai/nl/073087998

    2013-01-01

    Using criminal population criminal conviction history information, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining

  1. Webinar of paper 2013, Which method predicts recidivism best? A comparison of statistical, machine learning and data mining predictive models

    NARCIS (Netherlands)

    Tollenaar, N.; Van der Heijden, P.G.M.

    2013-01-01

    Using criminal population criminal conviction history information, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining

  2. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  3. Generalized ESO and Predictive Control Based Robust Autopilot Design

    Directory of Open Access Journals (Sweden)

    Bhavnesh Panchal

    2016-01-01

    Full Text Available A novel continuous-time predictive control and generalized extended state observer (GESO) based acceleration-tracking pitch autopilot design is proposed for a tail-controlled, skid-to-turn tactical missile. As the missile dynamics are significantly uncertain, with mismatched uncertainty, a GESO is employed to estimate the state and uncertainty in an integrated manner. The estimates are used to supply the state required by, and to robustify, the output-tracking predictive controller designed for the nominal system. Closed-loop stability for the controller-observer structure is established. An important feature of the proposed design is that it does not require any specific information about the uncertainty. Also, the predictive control design yields the feedback control gain and the disturbance compensation gain simultaneously. The effectiveness of the GESO in estimating the states and uncertainties, and in robustifying the predictive controller in the presence of parametric uncertainties, external disturbances, unmodeled dynamics, and measurement noise, is illustrated by simulation.

  4. Modelling language evolution: Examples and predictions.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  5. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  6. Decentralized adaptive generalized predictive control for structural vibration

    Institute of Scientific and Technical Information of China (English)

    LU Minyue; GU Zhongquan

    2005-01-01

    A decentralized generalized predictive control (GPC) algorithm is developed for strongly coupled multi-input multi-output systems with parallel computation. The algorithm is applied to adaptive control of structural vibration. The key steps in this algorithm are to group the actuators and the sensors and then to pair these groups into subsystems. It is important that the on-line identification and the control law design can be a parallel process for all these subsystems. It avoids the high computation cost in ordinary predictive control, and is of great advantage especially for large-scale systems.

  7. Generalized complex geometry, generalized branes and the Hitchin sigma model

    Energy Technology Data Exchange (ETDEWEB)

    Zucchini, Roberto E-mail: zucchinir@bo.infn.it

    2005-03-01

    Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non trivially to a novel cohomology associated with the branes as generalized complex submanifolds. (author)

  8. Generalized complex geometry, generalized branes and the Hitchin sigma model

    CERN Document Server

    Zucchini, R

    2005-01-01

    Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a field theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin--Vilkovisky cohomology contains an extra sector that is related non trivially to a novel cohomology associated with the branes as generalized complex submanifolds.

  9. More on generalized Heisenberg ferromagnet models

    CERN Document Server

    Oh, P; Oh, Phillial; Park, Q Han

    1996-01-01

    We generalize the integrable Heisenberg ferromagnet model in accordance with each Hermitian symmetric space and address various new aspects of the generalized model. Using the first-order formalism of generalized spins which are defined on the coadjoint orbits of arbitrary groups, we construct a Lagrangian of the generalized model from which we obtain the Hamiltonian structure explicitly in the case of the CP(N-1) orbit. The gauge equivalence between the generalized Heisenberg ferromagnet and the nonlinear Schrödinger models is given. Using the equivalence, we find infinitely many conserved integrals of both models.

  10. Gravitational redshift of galaxies in clusters as predicted by general relativity.

    Science.gov (United States)

    Wojtak, Radosław; Hansen, Steen H; Hjorth, Jens

    2011-09-28

    The theoretical framework of cosmology is mainly defined by gravity, of which general relativity is the current model. Recent tests of general relativity within the Lambda Cold Dark Matter (ΛCDM) model have found a concordance between predictions and the observations of the growth rate and clustering of the cosmic web. General relativity has not hitherto been tested on cosmological scales independently of the assumptions of the ΛCDM model. Here we report an observation of the gravitational redshift of light coming from galaxies in clusters at the 99 per cent confidence level, based on archival data. Our measurement agrees with the predictions of general relativity and its modification created to explain cosmic acceleration without the need for dark energy (the f(R) theory), but is inconsistent with alternative models designed to avoid the presence of dark matter. © 2011 Macmillan Publishers Limited. All rights reserved

  11. Product model structure for generalized optimal design

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The framework of the generalized optimization product model with the core of network- and tree-hierarchical structure is advanced to improve the characteristics of the generalized optimal design. Based on the proposed node-repetition technique, a network-hierarchical structure is united with the tree-hierarchical structure to facilitate the modeling of serialization and combination products. The criteria for product decomposition are investigated. Seven tree nodes are defined for the construction of a general product model, and their modeling properties are studied in detail. The developed product modeling system is applied and examined successfully in the modeling practice of the generalized optimal design for a hydraulic excavator.

  12. Non-overshoot fast generalized predictive control design for furnace pressure based on T-S model

    Institute of Scientific and Technical Information of China (English)

    雷世昌

    2014-01-01

    Glass furnace pressure is an important controlled variable in the glass production process, directly affecting the furnace's energy consumption, service life and product yield, so optimizing furnace pressure control has significant economic value. Because it is influenced by many factors, furnace pressure exhibits strongly nonlinear behaviour, and the performance of existing control methods leaves much room for improvement. This paper designs a novel nonlinear non-overshoot fast generalized predictive control (NFGPC) method. First, a combined offline-plus-online fuzzy identification strategy is used to obtain a T-S model of the pressure; piecewise linearization of this model at each sampling point then yields a time-varying CARIMA model. On this basis, a new GPC law combining reference softening with a new rolling-optimization objective function is designed; it suppresses overshoot well and obtains the control output without matrix inversion, reducing the computational load. Simulation results show that the T-S model is a good modelling approach for the nonlinear furnace pressure system and that, compared with PID, linear generalized predictive control (LGPC), fuzzy generalized predictive control (FGPC) and fast fuzzy generalized predictive control (FFGPC), NFGPC provides better control performance.

  13. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  14. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. A Simple General Model of Evolutionary Dynamics

    Science.gov (United States)

    Thurner, Stefan

    Evolution is a process in which some variations that emerge within a population (of, e.g., biological species or industrial goods) get selected, survive, and proliferate, whereas others vanish. Survival probability, proliferation, or production rates are associated with the "fitness" of a particular variation. We argue that the notion of fitness is an a posteriori concept in the sense that one can assign higher fitness to species or goods that survive but one can generally not derive or predict fitness per se. Whereas proliferation rates can be measured, fitness landscapes, that is, the inter-dependence of proliferation rates, cannot. For this reason we think that in a physical theory of evolution such notions should be avoided. Here we review a recent quantitative formulation of evolutionary dynamics that provides a framework for the co-evolution of species and their fitness landscapes (Thurner et al., 2010, Physica A 389, 747; Thurner et al., 2010, New J. Phys. 12, 075029; Klimek et al., 2010, Phys. Rev. E 82, 011901). The corresponding model leads to a generic evolutionary dynamics characterized by phases of relative stability in terms of diversity, followed by phases of massive restructuring. These dynamical modes can be interpreted as punctuated equilibria in biology, or Schumpeterian business cycles (Schumpeter, 1939, Business Cycles, McGraw-Hill, London) in economics. We show that phase transitions that separate phases of high and low diversity can be approximated surprisingly well by mean-field methods. We demonstrate that the mathematical framework is suited to understand systemic properties of evolutionary systems, such as their proneness to collapse, or their potential for diversification. The framework suggests that evolutionary processes are naturally linked to self-organized criticality and to properties of production matrices, such as their eigenvalue spectra.
Even though the model is phrased in general terms it is also practical in the sense

  16. Aspects of generalized Calogero model

    CERN Document Server

    Meljanac, S; Milekovic, M

    2004-01-01

    A multispecies model of Calogero type in $D\geq 1$ dimensions is constructed. The model includes harmonic, two-body and three-body interactions. Using the underlying conformal SU(1,1) algebra, we find the exact eigenenergies corresponding to a class of exact global collective states. Analysing the corresponding Fock space, we detect the universal critical point at which the model exhibits singular behaviour.

  17. Generalized exponential function and discrete growth models

    Science.gov (United States)

    Souto Martinez, Alexandre; Silva González, Rodrigo; Lauri Espíndola, Aquino

    2009-07-01

    Here we show that a particular one-parameter generalization of the exponential function is suitable to unify most of the popular one-species discrete population dynamic models into a simple formula. A physical interpretation is given to this newly introduced parameter in the context of the continuous Richards model, and the interpretation remains valid in the discrete case. From the discretization of the continuous Richards model (a generalization of the Gompertz and Verhulst models), one obtains a generalized logistic map, whose properties we briefly study. Next, we generalize the (scramble competition) θ-Ricker discrete model and analytically calculate the fixed points as well as their stabilities. In contrast to previous generalizations, from the generalized θ-Ricker model one is able to retrieve either scramble or contest models.
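A sketch of the idea, assuming the common one-parameter form exp_q(x) = (1 + qx)^(1/q), which recovers exp(x) as q → 0 (the paper's sign and symbol conventions may differ):

```python
import math

def gexp(x, q):
    """One-parameter generalized exponential (1 + q*x)**(1/q);
    as q -> 0 it recovers exp(x).  Clipped to 0 when 1 + q*x <= 0."""
    if q == 0.0:
        return math.exp(x)
    base = 1.0 + q * x
    return base ** (1.0 / q) if base > 0.0 else 0.0

def theta_ricker(x, r=1.5, K=1.0, theta=1.0, q=0.0):
    """Generalized theta-Ricker update x * exp_q(r * (1 - (x/K)**theta));
    q = 0 gives the classical theta-Ricker map with fixed point x* = K."""
    return x * gexp(r * (1.0 - (x / K) ** theta), q)
```

With q = 0 this reduces to the classical θ-Ricker map, whose nontrivial fixed point x* = K is stable for 0 < rθ < 2 (the derivative there is 1 - rθ); varying q interpolates between the different classical growth models.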

  18. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    Monte Carlo simulation can be defined as a representation of real life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field...

  19. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  20. Inhomogeneous generalization of some Bianchi models

    Science.gov (United States)

    Carmeli, M.; Charach, Ch.

    1980-02-01

    Vacuum Bianchi models which can be transformed to the Einstein-Rosen metric are considered. The models are used in order to construct new inhomogeneous universes, which are generalizations of Bianchi cosmologies of types III, V and VIh. Recent generalizations of these Bianchi models, considered by Wainwright et al., are also discussed.

  1. Theoretical Models of Generalized Quasispecies.

    Science.gov (United States)

    Wagner, Nathaniel; Atsmon-Raz, Yoav; Ashkenasy, Gonen

    2016-01-01

    Theoretical modeling of quasispecies has progressed in several directions. In this chapter, we review the works of Emmanuel Tannenbaum, who, together with Eugene Shakhnovich at Harvard University and later with colleagues and students at Ben-Gurion University in Beersheva, implemented one of the more useful approaches, by progressively setting up various formulations for the quasispecies model and solving them analytically. Our review will focus on these papers that have explored new models, assumed the relevant mathematical approximations, and proceeded to analytically solve for the steady-state solutions and run stochastic simulations. When applicable, these models were related to real-life problems and situations, including changing environments, presence of chemical mutagens, evolution of cancer and tumor cells, mutations in Escherichia coli, stem cells, chromosomal instability (CIN), propagation of antibiotic drug resistance, dynamics of bacteria with plasmids, DNA proofreading mechanisms, and more.

  2. Self-tuning Generalized Predictive Control applied to terrain following flight

    Science.gov (United States)

    Hess, R. A.; Jung, Y. C.

    1989-01-01

    Generalized Predictive Control (GPC) describes an algorithm for the control of dynamic systems in which a control input is generated which minimizes a quadratic cost function consisting of a weighted sum of errors between desired and predicted future system output and future predicted control increments. The output predictions are obtained from an internal model of the plant dynamics. Self-tuning GPC refers to an implementation of the GPC algorithm in which the parameters of the internal model(s) are estimated on-line and the predictive control law tuned to the parameters so identified. The self-tuning GPC algorithm is applied to a problem of rotorcraft longitudinal/vertical terrain-following flight. The ability of the algorithm to tune to the initial vehicle parameters and to successfully adapt to a stability augmentation failure is demonstrated. Flight path performance is compared to a conventional, classically designed flight path control system.
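The core of GPC, minimizing predicted tracking error plus a penalty on control increments, reduces to a closed form in the simplest case. Below is a toy single-increment version for a known first-order plant; the self-tuning layer described in the abstract, which would estimate a and b on-line by recursive least squares, and the full multi-move horizon are both omitted, and a ≠ 1 is assumed:

```python
def gpc_step(y, u_prev, a, b, setpoint, horizon=10, lam=0.01):
    """One-move GPC for the first-order plant y[t+1] = a*y[t] + b*u[t]
    (a != 1 assumed).  A single control increment du, held over the whole
    horizon, minimizes sum_j (w - yhat_j)**2 + lam*du**2 in closed form."""
    num = den = 0.0
    for j in range(1, horizon + 1):
        step_j = b * (1.0 - a ** j) / (1.0 - a)   # step response after j samples
        free_j = (a ** j) * y + step_j * u_prev   # free response (du = 0)
        num += step_j * (setpoint - free_j)
        den += step_j * step_j
    return u_prev + num / (den + lam)

def closed_loop(a=0.9, b=0.1, setpoint=1.0, steps=200):
    """Run the controller against its own (perfectly known) model."""
    y = u = 0.0
    for _ in range(steps):
        u = gpc_step(y, u, a, b, setpoint)
        y = a * y + b * u
    return y
```

Because the increment is driven to zero only when the predicted outputs match the setpoint, the loop settles exactly at the commanded value; the self-tuning variant would simply replace the fixed a, b with running estimates.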

  3. Identification of Highly Deformed Even-Even Nuclides in the Neutron- and Proton-Rich Regions of the Nuclear Chart from the B(E2) and E2 Predictions in the Generalized Differential Equation Model

    CERN Document Server

    Nayak, R C

    2015-01-01

    We identify here the possible occurrence of large deformations in the neutron- and proton-rich regions of the nuclear chart from extensive predictions of the values of the reduced quadrupole transition probability B(E2) for the transition from the ground state to the first 2+ state and the corresponding excitation energy E2 of even-even nuclei in the recently developed Generalized Differential Equation model exclusively meant for these physical quantities. This is made possible by our analysis of the predicted values of these two physical quantities and the corresponding deformation parameters derived from them, such as the quadrupole deformation beta-2, the ratio of beta-2 to the Weisskopf single-particle beta-2, and the intrinsic electric quadrupole moment, calculated for a large number of both known and hitherto unknown even-even isotopes of Oxygen to Fermium (Z=8 to 100). Our critical analysis of the resulting data convincingly supports the possible existence of large collectivity for the nuclides 30,32 Ne...

  4. General Pressurization Model in Simscape

    Science.gov (United States)

    Servin, Mario; Garcia, Vicky

    2010-01-01

    System integration is an essential part of the engineering design process. The Ares I Upper Stage (US) is a complex system made up of thousands of components assembled into subsystems, including a J-2X engine, liquid hydrogen (LH2) and liquid oxygen (LO2) tanks, avionics, thrust vector control, motors, etc. System integration is the task of connecting all of the subsystems together into one large system. To ensure that all the components will "fit together", as well as safety and quality, integration analysis is required. Integration analysis verifies that, as an integrated system, the system will behave as designed. Models that represent the actual subsystems are built for more comprehensive analysis. Matlab is an instrument widely used by engineers to construct mathematical models of systems. Simulink, one of the tools offered by Matlab, provides a multi-domain graphical environment to simulate and design time-varying systems. Simulink is a powerful tool for analyzing the dynamic behavior of systems over time. Furthermore, Simscape, a tool provided by Simulink, allows users to model physical (such as mechanical, thermal and hydraulic) systems using physical networks. Using Simscape, a model representing an inflow of gas to a pressurized tank was created, in which the temperature and pressure of the tank are measured over time to show the behavior of the gas. By further incorporating Simscape into model building, the full potential of this software can be discovered, and it can hopefully become a more widely utilized tool.

  5. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.
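The extended Bayesian information criterion that the BSR posterior mode approximately maximizes trades goodness of fit against a penalty growing with both the sample size n and the number of candidate predictors p. A minimal sketch for a Gaussian linear model without intercept, on simulated data; the formula EBIC = n log(RSS/n) + k log n + 2 gamma k log p is the standard form, and the data and names are illustrative:

```python
import numpy as np

def ebic(y, X, subset, gamma=0.5):
    """EBIC of an OLS fit (no intercept) on the given predictor columns."""
    n, p = X.shape
    k = len(subset)
    if k == 0:
        rss = np.sum(y ** 2)
    else:
        Xs = X[:, subset]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
    # Gaussian log-likelihood up to constants is -n/2 * log(rss/n)
    return n * np.log(rss / n) + k * np.log(n) + 2 * gamma * k * np.log(p)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = 2.0 * X[:, 3] + rng.normal(scale=0.5, size=100)
scores = {"null": ebic(y, X, []),
          "true": ebic(y, X, [3]),
          "extra": ebic(y, X, [3, 7])}
```

The subset containing the truly active predictor should attain a much lower score than the empty model, which is what makes EBIC minimization a usable model-selection target.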

  6. Cosmological models in general relativity

    Indian Academy of Sciences (India)

    B B Paul

    2003-12-01

    LRS Bianchi type-I space-time filled with perfect fluid is considered here with the deceleration parameter as variable. The metric potentials A and B are functions of x as well as t. Assuming B'/B = f(x), where prime denotes differentiation with respect to x, it was found that A = B'/f and B = g(x)S(t), where g = g(x) and S is the scale factor, which is a function of t only. The value of Hubble's constant H0 was found to be less than half for the non-flat model and equal to 1.3 for a flat model.

  7. Effective and Robust Generalized Predictive Speed Control of Induction Motor

    Directory of Open Access Journals (Sweden)

    Patxi Alkorta

    2013-01-01

    Full Text Available This paper presents and validates a new proposal for effective speed vector control of induction motors based on a linear Generalized Predictive Control (GPC) law. The presented GPC-PI cascade configuration simplifies the design with regard to the GPC-GPC cascade configuration, maintaining the advantages of the predictive control algorithm. The robust stability of the closed loop system is demonstrated by the pole-placement method for several typical cases of uncertainties in induction motors. The controller has been tested using several simulations and experiments and has been compared with Proportional Integral Derivative (PID) and Sliding Mode (SM) control schemes, obtaining outstanding results in speed tracking even in the presence of parameter uncertainties, unknown load disturbance, and measurement noise in the loop signals, suggesting its use in industrial applications.

  8. Pathway-Based Genomics Prediction using Generalized Elastic Net.

    Science.gov (United States)

    Sokolov, Artem; Carlin, Daniel E; Paull, Evan O; Baertsch, Robert; Stuart, Joshua M

    2016-03-01

    We present a novel regularization scheme called The Generalized Elastic Net (GELnet) that incorporates gene pathway information into feature selection. The proposed formulation is applicable to a wide variety of problems in which the interpretation of predictive features using known molecular interactions is desired. The method naturally steers solutions toward sets of mechanistically interlinked genes. Using experiments on synthetic data, we demonstrate that pathway-guided results maintain, and often improve, the accuracy of predictors even in cases where the full gene network is unknown. We apply the method to predict the drug response of breast cancer cell lines. GELnet is able to reveal genetic determinants of sensitivity and resistance for several compounds. In particular, for an EGFR/HER2 inhibitor, it finds a possible trans-differentiation resistance mechanism missed by the corresponding pathway agnostic approach.
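One way the pathway steering described above can be realized is through a quadratic network penalty. This sketch keeps only that ingredient: a graph-Laplacian penalty over a toy "pathway" that encourages linked genes to receive similar weights. It omits GELnet's L1 term and its actual solver, and all matrices and values are illustrative assumptions:

```python
import numpy as np

def laplacian(adj):
    """Unnormalized graph Laplacian of an adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def network_ridge(X, y, L, lam):
    """Solve min_w ||y - X w||^2 + lam * w' L w in closed form."""
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)

# Toy "pathway": genes 0 and 1 are linked, gene 2 is isolated.
adj = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
L = laplacian(adj)
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=50)
w = network_ridge(X, y, L, lam=5.0)  # linked genes get similar weights
```

Here w' L w equals lam * (w0 - w1)^2, so the penalty pulls the two connected coefficients together without shrinking either toward zero, the mechanistic-interlinking effect the abstract describes.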

  9. Pathway-Based Genomics Prediction using Generalized Elastic Net.

    Directory of Open Access Journals (Sweden)

    Artem Sokolov

    2016-03-01

    Full Text Available We present a novel regularization scheme called The Generalized Elastic Net (GELnet that incorporates gene pathway information into feature selection. The proposed formulation is applicable to a wide variety of problems in which the interpretation of predictive features using known molecular interactions is desired. The method naturally steers solutions toward sets of mechanistically interlinked genes. Using experiments on synthetic data, we demonstrate that pathway-guided results maintain, and often improve, the accuracy of predictors even in cases where the full gene network is unknown. We apply the method to predict the drug response of breast cancer cell lines. GELnet is able to reveal genetic determinants of sensitivity and resistance for several compounds. In particular, for an EGFR/HER2 inhibitor, it finds a possible trans-differentiation resistance mechanism missed by the corresponding pathway agnostic approach.

  10. Pathway-Based Genomics Prediction using Generalized Elastic Net

    Science.gov (United States)

    Sokolov, Artem; Carlin, Daniel E.; Paull, Evan O.; Baertsch, Robert; Stuart, Joshua M.

    2016-01-01

    We present a novel regularization scheme called The Generalized Elastic Net (GELnet) that incorporates gene pathway information into feature selection. The proposed formulation is applicable to a wide variety of problems in which the interpretation of predictive features using known molecular interactions is desired. The method naturally steers solutions toward sets of mechanistically interlinked genes. Using experiments on synthetic data, we demonstrate that pathway-guided results maintain, and often improve, the accuracy of predictors even in cases where the full gene network is unknown. We apply the method to predict the drug response of breast cancer cell lines. GELnet is able to reveal genetic determinants of sensitivity and resistance for several compounds. In particular, for an EGFR/HER2 inhibitor, it finds a possible trans-differentiation resistance mechanism missed by the corresponding pathway agnostic approach. PMID:26960204

  11. ENSO Prediction using Vector Autoregressive Models

    Science.gov (United States)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al, 2012 BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are VAR(1) models, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e., 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content, as well as seasonality. The use of multiple levels allows the model advancing one month at a time to perform at least as well for a 6 month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993 J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150 year cross validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
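A VAR(L) model of the kind described can be fit by ordinary least squares on stacked lagged states. A self-contained sketch on synthetic two-dimensional data (the SST application itself is not reproduced; all values are illustrative):

```python
import numpy as np

def fit_var(X, L):
    """X: (T, d) time series. Returns B (d, d*L) such that
    x_t ~ B @ [x_{t-1}; x_{t-2}; ...; x_{t-L}], fit by least squares."""
    T, d = X.shape
    # Column block k holds lag k+1: row i corresponds to time L+i.
    Z = np.hstack([X[L - k - 1:T - k - 1] for k in range(L)])
    Y = X[L:]
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return B.T

# Simulate a stable VAR(1) and recover its coefficient matrix.
rng = np.random.default_rng(2)
A = np.array([[0.6, 0.2], [-0.1, 0.5]])
X = np.zeros((500, 2))
for t in range(1, 500):
    X[t] = A @ X[t - 1] + rng.normal(scale=0.1, size=2)
B = fit_var(X, L=1)
```

With L > 1 the same routine returns the lag blocks side by side, which is the multilevel structure the abstract exploits.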

  12. Neuro-fuzzy generalized predictive control of boiler steam temperature

    Institute of Scientific and Technical Information of China (English)

    Xiangjie LIU; Jizhen LIU; Ping GUAN

    2007-01-01

    Power plants are nonlinear and uncertain complex systems. Reliable control of superheated steam temperature is necessary to ensure high efficiency and high load-following capability in the operation of a modern power plant. A nonlinear generalized predictive controller based on a neuro-fuzzy network (NFGPC) is proposed in this paper. The proposed nonlinear controller is applied to control the superheated steam temperature of a 200 MW power plant. Experiments on the plant and simulations of the plant show much better performance than the traditional controller.

  13. Evaluating Predictive Performance of Value-at-Risk Models Based on Generalized Spectrum and MCS Tests

    Institute of Scientific and Technical Information of China (English)

    张玉鹏; 洪永淼

    2015-01-01

    Conditional VaR models' correct specification test is equivalent to testing whether the de-meaned "hit sequence" follows a martingale difference sequence (m.d.s.); however, the commonly used backtesting techniques test only some properties of the sequence. Using a generalized spectral test, which directly tests the m.d.s. property of the de-meaned hit sequence, we evaluate the out-of-sample predictive performance of various parametric, nonparametric and semi-parametric VaR models, a total of 22 models estimated with a rolling-window predictive scheme, for China's stock markets, including the Shanghai Composite Index, the Hang Seng Index and the Taiwan Weighted Index. Because the correct specification test of conditional VaR models cannot reflect the tail risk information exceeding a specific VaR level, in order to avoid the occurrence of extreme losses and to increase the robustness of the results, we simultaneously adopt the model confidence set (MCS) test. The results show that the usual backtesting methods often lead to incorrect conclusions; at the 1% and 5% confidence levels, the GARCH family of models with t-distributed errors exhibits better out-of-sample predictive performance during the financial crisis than historical simulation, extreme value theory models, CAViaR models and CARE models; and the price-limit system has an important influence on selecting the VaR model with the best predictive performance.
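The de-meaned hit sequence at the centre of the specification test can be computed directly: hit_t = 1{r_t < -VaR_t} - alpha. A minimal sketch with simulated returns and a fixed standard-normal VaR forecast (all values are illustrative placeholders, not market data):

```python
import numpy as np

alpha = 0.05
rng = np.random.default_rng(5)
returns = rng.normal(0.0, 1.0, 1000)
# 5% VaR of a standard normal is about 1.645 (reported as a positive number).
var_forecast = np.full(1000, 1.645)

# De-meaned hit sequence: under a correctly specified VaR model this
# should average ~0 and be serially uncorrelated (an m.d.s.).
hits = (returns < -var_forecast).astype(float) - alpha
coverage = hits.mean() + alpha  # empirical violation rate
```

Standard backtests examine only the mean (unconditional coverage) or first-order autocorrelation of `hits`; the generalized spectral test in the abstract checks the full m.d.s. property at all lags.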

  14. General expression for linear and nonlinear time series models

    Institute of Scientific and Technical Information of China (English)

    Ren HUANG; Feiyun XU; Ruwen CHEN

    2009-01-01

    The typical time series models such as ARMA, AR, and MA are founded on the normality and stationarity of a system and are expressed by a linear difference equation; therefore, they are strictly limited to linear systems. However, practical systems often contain nonlinear factors, so it is difficult to fit real systems with the above models. This paper proposes a general expression for linear and nonlinear auto-regressive time series models (GNAR). With the gradient optimization method and a modified AIC information criterion integrated with the prediction error, parameter estimation and order determination are achieved. Model simulation and experiments show that the GNAR model can accurately approximate the dynamic characteristics of most nonlinear models applied in academia and engineering. The modeling and prediction accuracy of the GNAR model is superior to the classical time series models. The proposed GNAR model is flexible and effective.

  15. Global Convergence of Adaptive Generalized Predictive Controller Based on Least Squares Algorithm

    Institute of Scientific and Technical Information of China (English)

    张兴会; 陈增强; 袁著祉

    2003-01-01

    Some papers on stochastic adaptive control schemes have established convergence of algorithms using least-squares parameter estimation. With the popular application of GPC, global convergence has become a key problem in automatic control theory. However, global convergence of GPC has not yet been established for algorithms computing a least-squares iteration. A generalized model of adaptive generalized predictive control is presented. Its global convergence is also given, on the basis of estimating the parameters of GPC by a least-squares algorithm.
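The least-squares estimation that underlies such self-tuning schemes is typically implemented recursively, updating the parameter estimate at every sample. A standalone sketch of recursive least squares identifying a first-order plant y(t) = a*y(t-1) + b*u(t-1) + noise; the plant and values are illustrative, not from the paper:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step for the model y = phi' theta.
    Returns the updated estimate and covariance (lam = forgetting factor)."""
    K = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + K * (y - phi @ theta)     # correct by prediction error
    P = (P - np.outer(K, phi @ P)) / lam      # covariance update
    return theta, P

rng = np.random.default_rng(3)
a_true, b_true = 0.7, 0.3
theta = np.zeros(2)            # estimates of [a, b]
P = 1000.0 * np.eye(2)         # large initial covariance = low confidence
y_prev, u_prev = 0.0, 0.0
for t in range(200):
    u = rng.normal()                                    # excitation input
    y = a_true * y_prev + b_true * u_prev + 0.01 * rng.normal()
    phi = np.array([y_prev, u_prev])
    theta, P = rls_update(theta, P, phi, y)
    y_prev, u_prev = y, u
```

In a self-tuning GPC loop, `theta` would be handed to the predictive control law at each step; the convergence question in the abstract concerns exactly this coupled estimation/control iteration.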

  16. Generalized Kripke models for epistemic logic

    NARCIS (Netherlands)

    Voorbraak, F.

    In this paper a generalization of Kripke models is proposed for systemizing the study of the many different epistemic notions that appear in the literature. The generalized Kripke models explicitly represent an agent's epistemic states to which the epistemic notions refer. Two central

  17. Generalized Kripke models for epistemic logic

    NARCIS (Netherlands)

    Voorbraak, F.

    2008-01-01

    In this paper a generalization of Kripke models is proposed for systemizing the study of the many different epistemic notions that appear in the literature. The generalized Kripke models explicitly represent an agent's epistemic states to which the epistemic notions refer. Two central epistemic noti

  18. General Geographical Economics Model with Congestion

    NARCIS (Netherlands)

    J.G.M. van Marrewijk (Charles)

    2005-01-01

    We derive and discuss a general, but simple geographical economics model with congestion, allowing us to explain the economic viability of small and large locations. The model generalizes some previous work and lends itself to analyzing the impact of public policy in terms of

  19. PREDICT : model for prediction of survival in localized prostate cancer

    NARCIS (Netherlands)

    Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco

    2016-01-01

    Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed.Methods: From 1989 to 2008, 3383 patients were treated with I

  20. An application of generalized predictive control to rotorcraft terrain-following flight

    Science.gov (United States)

    Hess, Ronald A.; Jung, Yoon C.

    1989-01-01

    Generalized predictive control (GPC) describes an algorithm for the control of dynamic systems in which a control input is generated that minimizes a quadratic cost function consisting of a weighted sum of errors between desired and predicted future system output and future predicted control increments. The output predictions are obtained from an internal model of the plant dynamics. The GPC algorithm is first applied to a simplified rotorcraft terrain-following problem, and GPC performance is compared to that of a conventional compensatory automatic system in terms of flight-path following, control activity, and control law implementation. Next, more realistic vehicle dynamics are utilized, and the GPC algorithm is applied to simultaneous terrain following and velocity control in the presence of atmospheric disturbances and errors in the internal model of the vehicle. The online computational and sensing requirements for implementing the GPC algorithm are minimal. Its use for manual control models appears promising.

  1. Attitude scale and general health questionnaire subscales predict depression?

    Science.gov (United States)

    Ebrahimi, Amrollah; Afshar, Hamid; Doost, Hamid Taher Neshat; Mousavi, Seyed Ghafur; Moolavi, Hoseyn

    2012-01-01

    According to Beck's theory, dysfunctional attitude has a central role in the emergence of depression. The aim of this study was to determine the contributions of dysfunctional attitude and general health indices to depression. In this case-control study, two groups of subjects participated. The first group consisted of 65 patients with major depression and dysthymic disorder, who were recruited from Noor and Navab Safavi Psychiatry Clinics in Isfahan. The control group consisted of 65 non-patient individuals who accompanied the patients or were their relatives and were matched with them on age, sex and education. Both groups completed the 26-item Dysfunctional Attitude Scale (DAS-26) and the 28-item General Health Questionnaire (GHQ-28). Logistic regression and correlation methods were applied for statistical analysis. Logistic regression analysis showed that with an increase of one level in categorized DAS-26 scores and of one score in the physical symptoms, anxiety, social dysfunction and depression subscales of the GHQ-28, the risk of depression increases by 6.8, 1.6, 1.9, 3.7 and 4.78 times, respectively. The capability of dysfunctional attitude and general health subscales to predict depression supports Beck's cognitive diathesis-stress theory of depression, in which dysfunctional attitude may be a predisposing risk factor for depression.

  2. A Generalized Rough Set Modeling Method for Welding Process

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Modeling is essential, significant and difficult for the quality and shaping control of the arc welding process. A generalized rough set based modeling method was put forward, and a dynamic predictive model for pulsed gas tungsten arc welding (GTAW) was obtained by this modeling method. The results show that this modeling method can acquire welding knowledge well and satisfies real-life applications. In addition, comparisons with a classic rough set model and a back-propagation neural network model are also satisfactory.

  3. Predictive Modeling of Cardiac Ischemia

    Science.gov (United States)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  4. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  5. Freeway Crash Prediction Based on Generalized Linear Regression Models

    Institute of Scientific and Technical Information of China (English)

    王迎; 周燕

    2015-01-01

    In order to analyze the impact of traffic state parameters on freeway crashes, a freeway crash prediction model was proposed based on generalized linear regression models (GLM). Using traffic flow data and crash data from the I-8E freeway in the United States, a negative binomial (NB) prediction model and a Poisson prediction model based on real-time traffic flow conditions were developed, and an elasticity coefficient calculation method was proposed to identify the most prominent crash-inducing parameters. The results show that freeway crashes can be well fitted by the GLM, and the prediction accuracy of the NB model is higher than that of the Poisson model. Traffic volume, occupancy, truck proportion and speed standard deviation significantly affect freeway crashes and are positively related to them. Traffic volume is the most prominent factor inducing freeway crashes: an increase of 1% in traffic volume leads to an increase of 5.8% in freeway crashes.
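In a log-link count GLM of this kind, the elasticity of crash frequency with respect to a level covariate equals the coefficient times the covariate's mean. A sketch fitting a Poisson model by iteratively reweighted least squares (IRLS) on simulated data; the covariate and coefficients are illustrative and do not come from the study:

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Fit log(E[y]) = X @ beta by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                               # Poisson IRLS weights
        z = X @ beta + (y - mu) / mu         # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

rng = np.random.default_rng(4)
n = 2000
volume = rng.uniform(0.5, 2.0, n)            # e.g. scaled traffic volume
X = np.column_stack([np.ones(n), volume])
y = rng.poisson(np.exp(0.2 + 0.8 * volume))  # simulated crash counts
beta = poisson_irls(X, y)
# Elasticity at the mean: % change in expected crashes per 1% change in volume.
elasticity = beta[1] * volume.mean()
```

The negative binomial model in the abstract adds an overdispersion parameter but shares the same log link, so the same elasticity formula applies.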

  6. Application of generalized predictive control in networked control system

    Institute of Scientific and Technical Information of China (English)

    YANG Can; ZHU Sha-nan; KONG Wan-zeng; LU Li-ming

    2006-01-01

    A new framework for networked control systems based on Generalized Predictive Control (GPC) is proposed in this paper. Clock-driven sensors, an event-driven controller, and clock-driven actuators are required in this framework. A queuing strategy is proposed to overcome the network-induced delay. Without redesigning, the proposed framework enables an existing GPC controller to be used in a network environment. It also does not require clock synchronization and is only slightly affected by bad network conditions such as packet loss. Various experiments are designed over a real network to test the proposed approach, verifying that it can stabilize the Networked Control System (NCS) and is robust.

  7. Improved Robustness of Generalized Predictive Control for Uncertain Systems

    Science.gov (United States)

    Khelifa, Khelifi Otmane; Noureddine, Bali; Lazhari, Nezli

    2015-01-01

    An off-line methodology has been developed to improve the robustness of an initial generalized predictive control (GPC) through convex optimization of the Youla parameter. However, this method is restricted to systems affected only by unstructured uncertainties. This paper proposes an extension of the method to systems subject to both unstructured and polytopic uncertainties. The basic idea consists in adding supplementary constraints to the optimization problem that validate the Lipatov stability condition at each vertex of the polytope. These polytopic uncertainties impose a non-convex quadratically constrained quadratic programming (QCQP) problem. Based on semidefinite programming (SDP), this problem is relaxed and solved. Therefore, the robustification provides stability robustness towards unstructured uncertainties for the nominal system, while guaranteeing stability properties over a specified polytopic domain of uncertainties. Finally, we present a numerical example to demonstrate the proposed method.

  8. A general consumer-resource population model

    Science.gov (United States)

    Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.

    2015-01-01

    Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.
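A minimal predator-prey instance of such a consumer-resource model, using the saturating (Holling type II) functional response that the abstract derives as universal; the parameter values below are arbitrary illustrations, not taken from the paper:

```python
# Logistic resource R consumed by consumer C with a saturating intake rate:
#   dR/dt = r R (1 - R/K) - a R C / (1 + a h R)
#   dC/dt = e a R C / (1 + a h R) - m C
# integrated with a simple Euler scheme.

def step(R, C, dt, r=1.0, K=2.0, a=1.5, h=0.5, e=0.5, m=0.3):
    intake = a * R * C / (1.0 + a * h * R)   # saturating functional response
    dR = r * R * (1.0 - R / K) - intake
    dC = e * intake - m * C
    return R + dt * dR, C + dt * dC

R, C = 1.0, 0.5
for _ in range(20000):                       # integrate to t = 200
    R, C = step(R, C, dt=0.01)
```

Saturation (the 1 + a h R denominator) caps the per-consumer intake at 1/h as resources become abundant, which is the "universal saturating functional response" the general model yields.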

  9. Anisotropic Generalized Ghost Pilgrim Dark Energy Model in General Relativity

    Science.gov (United States)

    Santhi, M. Vijaya; Rao, V. U. M.; Aditya, Y.

    2017-02-01

    A spatially homogeneous and anisotropic locally rotationally symmetric (LRS) Bianchi type-I Universe filled with matter and generalized ghost pilgrim dark energy (GGPDE) has been studied in the general theory of relativity. To obtain a determinate solution of the field equations we have used a scalar expansion proportional to the shear scalar, which leads to a relation between the metric potentials. Some well-known cosmological parameters (the equation of state (EoS) parameter ω_Λ, the deceleration parameter q and the squared speed of sound v_s^2) and planes (the ω_Λ - ω̇_Λ plane and the statefinder plane) are constructed for the obtained model. The discussion and significance of these parameters is given in terms of the pilgrim dark energy parameter β and cosmic time t.

  10. A general circulation model (GCM) parameterization of Pinatubo aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Lacis, A.A.; Carlson, B.E.; Mishchenko, M.I. [NASA Goddard Institute for Space Studies, New York, NY (United States)

    1996-04-01

    The June 1991 volcanic eruption of Mt. Pinatubo is the largest and best documented global climate forcing experiment in recorded history. The time development and geographical dispersion of the aerosol has been closely monitored and sampled. Based on preliminary estimates of the Pinatubo aerosol loading, general circulation model predictions of the impact on global climate have been made.

  11. A generalized sinusoidal model and its applications

    Institute of Scientific and Technical Information of China (English)

    KU Shao-ping; LI Ning

    2009-01-01

    A physical model of the sinusoidal function was established. The classical spring-oscillator system is generalized so that the force is directly proportional to a power function of the distance. The differential equation of the generalized model was given. Simulations were conducted with different power values. The results show that the solution of the generalized equation is a periodic function. The expressions for the amplitude and the period (frequency) of the generalized equation were derived by the physical method. All the simulation results coincide with the calculated results of the derived expressions. A special function was also deduced and proven to be convergent in the theoretical analysis. The limit value of the special function was also derived. The generalized model can be used to solve a type of differential equation and to generate periodic waveforms.
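The generalized oscillator can be simulated directly to check the periodicity claim. A sketch integrating x'' = -k sign(x)|x|^p with a symplectic (velocity-Verlet) scheme; the constants are arbitrary illustrations:

```python
import numpy as np

def simulate(p=3.0, k=1.0, x0=1.0, dt=1e-3, steps=20000):
    """Velocity-Verlet integration of x'' = -k * sign(x) * |x|**p,
    starting from rest at amplitude x0. Returns the trajectory."""
    x, v = x0, 0.0
    xs = []
    for _ in range(steps):
        a = -k * np.sign(x) * abs(x) ** p
        v += 0.5 * dt * a
        x += dt * v
        a = -k * np.sign(x) * abs(x) ** p
        v += 0.5 * dt * a
        xs.append(x)
    return np.array(xs)

xs = simulate()  # p = 3: quartic potential well, periodic motion
```

Because the scheme is symplectic, energy is nearly conserved and the motion oscillates between +x0 and -x0, consistent with the periodic solutions the abstract reports; p = 1 recovers the ordinary sinusoidal oscillator.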

  12. Predictive modeling by the cerebellum improves proprioception.

    Science.gov (United States)

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  13. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2014-11-05

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for bioinformatics research. Although existing methodologies have increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell lines, class imbalance within the learning sets and ad hoc rules for selecting enhancer candidates for supervised learning are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancer properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data, where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from the VISTA database. DEEP-VISTA, when tested on an independent test set, achieved a GM of 80.1% and accuracy of 89.64%. The DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.
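The geometric mean (GM) of specificity and sensitivity reported above is easy to compute from a confusion matrix, and it is a common choice for class-imbalanced problems like enhancer prediction because it cannot be inflated by predicting the majority class. A small sketch (the labels are illustrative):

```python
import numpy as np

def gm(y_true, y_pred):
    """Geometric mean of sensitivity and specificity for binary labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return np.sqrt(sensitivity * specificity)

score = gm([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])  # sens = spec = 2/3
```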

  14. EOP MIT General Circulation Model (MITgcm)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data contains a regional implementation of the Massachusetts Institute of Technology general circulation model (MITgcm) at a 1-km spatial resolution for the...

  15. Generalized Reduced Order Model Generation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  16. Generalized Kahler Geometry from supersymmetric sigma models

    CERN Document Server

    Bredthauer, A; Persson, J; Zabzine, M; Bredthauer, Andreas; Lindstrom, Ulf; Persson, Jonas; Zabzine, Maxim

    2006-01-01

    We give a physical derivation of generalized Kahler geometry. Starting from a supersymmetric nonlinear sigma model, we rederive and explain the results of Gualtieri regarding the equivalence between generalized Kahler geometry and the bi-hermitean geometry of Gates-Hull-Rocek. When cast in the language of supersymmetric sigma models, this relation maps precisely to that between the Lagrangian and the Hamiltonian formalisms. We also discuss topological twist in this context.

  17. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders

    2004-01-01

    METHODOLOGY THE OMNI-PRESENCE OF LATENT VARIABLES Introduction 'True' variable measured with error Hypothetical constructs Unobserved heterogeneity Missing values and counterfactuals Latent responses Generating flexible distributions Combining information Summary MODELING DIFFERENT RESPONSE PROCESSES Introduction Generalized linear models Extensions of generalized linear models Latent response formulation Modeling durations or survival Summary and further reading CLASSICAL LATENT VARIABLE MODELS Introduction Multilevel regression models Factor models and item respons

  18. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...

  19. Empirical generalization assessment of neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1995-01-01

    This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest using the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model relative to competing models. Since all models are trained on the same data, a key issue is to take this dependency into account. The optimal split of the data set of size N into a cross-validation set of size Nγ and a training set of size N(1-γ) is discussed. Asymptotically (large data sets), γopt→1
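    The cross-validation-with-resampling scheme can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the toy "model" (a sample mean) and squared-error loss are stand-ins for a neural network and its loss:

    ```python
    import random

    def generalization_error_distribution(data, fit, loss, gamma=0.25,
                                          resamples=100, seed=0):
        """Repeatedly split N observations into a training set of size
        N(1 - gamma) and a cross-validation set of size N*gamma, collecting
        the validation losses as an empirical generalization distribution."""
        rng = random.Random(seed)
        n_val = max(1, int(round(len(data) * gamma)))
        losses = []
        for _ in range(resamples):
            shuffled = data[:]
            rng.shuffle(shuffled)
            val, train = shuffled[:n_val], shuffled[n_val:]
            model = fit(train)
            losses.append(sum(loss(model, x) for x in val) / len(val))
        return losses

    # toy "model": the sample mean, scored with squared error
    data = [1.0, 1.2, 0.8, 1.1, 0.9, 1.05, 0.95, 1.15]
    dist = generalization_error_distribution(
        data,
        fit=lambda train: sum(train) / len(train),
        loss=lambda m, x: (x - m) ** 2,
    )
    print(len(dist))  # 100 resampled validation-loss estimates
    ```

    The resulting distribution, rather than a single point estimate, is what allows two models trained on the same data to be compared while acknowledging the shared-data dependency.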

  20. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  1. All the generalized Georgi-Machacek models

    CERN Document Server

    Logan, Heather E

    2015-01-01

    The Georgi-Machacek model adds two SU(2)_L-triplet scalars to the Standard Model in such a way as to preserve custodial SU(2) symmetry. We study the generalizations of the Georgi-Machacek model to SU(2)_L representations larger than triplets. Perturbative unitarity considerations limit the possibilities to models containing only SU(2)_L quartets, quintets, or sextets. These models are phenomenologically interesting because they allow the couplings of the 125 GeV Higgs boson to WW and ZZ to be larger than their values in the Standard Model. We write down the most general custodial SU(2)-preserving scalar potentials for these models and outline their phenomenology. We find that experimental and theoretical constraints on the fermiophobic custodial-fiveplet states present in each of the models lead to absolute upper bounds on the 125 GeV Higgs boson coupling strength to WW and ZZ.

  2. Improved Generalized Predictive Control Algorithm with Offline and Online Identification and Its Application to Fixed Bed Reactor

    Institute of Scientific and Technical Information of China (English)

    余世明; 王海清

    2003-01-01

    An improved generalized predictive control algorithm is presented in this paper by incorporating offline identification into online identification. Unlike existing generalized predictive control algorithms, the proposed approach divides the parameters of a predictive model into time-invariant and time-varying ones, which are treated by offline and online identification algorithms, respectively. Therefore, both the reliability and accuracy of the predictive model are improved. Two simulation examples of control of a fixed bed reactor show that this new algorithm is not only reliable and stable in the presence of uncertainties and abnormal disturbances, but also adaptable to slowly time-varying processes.
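    The split described above — time-invariant parameters fixed by offline identification, time-varying ones tracked online — can be illustrated with a scalar recursive least squares (RLS) tracker. This is a generic sketch, not the paper's algorithm; the model y = a·u1 + b·u2, the value of the offline-identified parameter b, and all signals are hypothetical:

    ```python
    def scalar_rls(y_seq, u_seq, lam=0.98, p0=100.0):
        """Scalar recursive least squares: track a slowly varying gain a
        in y = a*u, with forgetting factor lam."""
        theta, P, history = 0.0, p0, []
        for y, u in zip(y_seq, u_seq):
            k = P * u / (lam + u * P * u)      # gain
            theta += k * (y - u * theta)       # update estimate
            P = (P - k * u * P) / lam          # update covariance
            history.append(theta)
        return history

    # offline-identified, time-invariant part: b = 2.0 (assumed known)
    b = 2.0
    u1 = [1.0, 0.9, 1.1, 1.0, 0.95, 1.05, 1.0, 1.0]
    u2 = [0.5] * 8
    a_true = 3.0
    y = [a_true * x1 + b * x2 for x1, x2 in zip(u1, u2)]
    # online identification acts on the residual after removing the offline part
    est = scalar_rls([yi - b * x2 for yi, x2 in zip(y, u2)], u1)
    print(round(est[-1], 3))
    ```

    Because only the time-varying parameter is estimated online, the recursion has fewer degrees of freedom to misattribute disturbances, which is the intuition behind the reliability gain claimed in the abstract.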

  3. Predictive Model Assessment for Count Data

    Science.gov (United States)

    2007-09-05

    critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany, 1998–2002. We consider a recent suggestion by Baker and...

  4. Predicting glycated hemoglobin levels in the non-diabetic general population

    DEFF Research Database (Denmark)

    Rauh, Simone P; Heymans, Martijn W; Koopman, Anitra D M

    2017-01-01

    AIMS/HYPOTHESIS: To develop a prediction model that can predict HbA1c levels after six years in the non-diabetic general population, including previously used readily available predictors. METHODS: Data from 5,762 initially non-diabetic subjects from three population-based cohorts (Hoorn Study, Inter99, KORA S4/F4) were combined to predict HbA1c levels at six-year follow-up. Using backward selection, age, BMI, waist circumference, use of anti-hypertensive medication, current smoking and parental history of diabetes remained in sex-specific linear regression models. To minimize overfitting of coefficients, we performed internal validation using bootstrapping techniques. Explained variance, discrimination and calibration were assessed using R2, classification tables (comparing highest/lowest 50% HbA1c levels) and calibration graphs. The model was externally validated in 2,765 non-diabetic subjects...
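    Bootstrap internal validation of the kind mentioned above can be sketched with an optimism correction: the model is refit on each bootstrap resample and its apparent performance is compared with its performance on the original data. This is a minimal illustration only — univariate OLS and R² stand in for the paper's sex-specific multivariable models, and the data are synthetic:

    ```python
    import random

    def fit_ols(xs, ys):
        """Closed-form ordinary least squares for y = a + b*x."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        return my - b * mx, b

    def r2(xs, ys, a, b):
        """Explained variance of the fitted line on (xs, ys)."""
        my = sum(ys) / len(ys)
        ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
        ss_tot = sum((y - my) ** 2 for y in ys)
        return 1.0 - ss_res / ss_tot

    def bootstrap_optimism(xs, ys, n_boot=200, seed=1):
        """Average optimism: apparent R2 on each bootstrap resample minus
        the resampled model's R2 evaluated on the original data."""
        rng = random.Random(seed)
        n, opt = len(xs), 0.0
        for _ in range(n_boot):
            idx = [rng.randrange(n) for _ in range(n)]
            bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
            a, b = fit_ols(bx, by)
            opt += r2(bx, by, a, b) - r2(xs, ys, a, b)
        return opt / n_boot

    # synthetic data: y = 1 + 2x + noise
    rng = random.Random(0)
    xs = [0.5 * i for i in range(20)]
    ys = [1.0 + 2.0 * x + rng.gauss(0.0, 1.0) for x in xs]
    a, b = fit_ols(xs, ys)
    apparent = r2(xs, ys, a, b)
    opt = bootstrap_optimism(xs, ys)
    print(round(apparent - opt, 3))  # optimism-corrected R2
    ```

    Subtracting the average optimism from the apparent R² gives an internally validated estimate, which is the usual motivation for bootstrapping before external validation.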

  5. Simple implementation of general dark energy models

    Energy Technology Data Exchange (ETDEWEB)

    Bloomfield, Jolyon K. [MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave #37241, Cambridge, MA, 02139 (United States); Pearson, Jonathan A., E-mail: jolyon@mit.edu, E-mail: jonathan.pearson@durham.ac.uk [Centre for Particle Theory, Department of Mathematical Sciences, Durham University, South Road, Durham, DH1 3LE (United Kingdom)

    2014-03-01

    We present a formalism for the numerical implementation of general theories of dark energy, combining the computational simplicity of the equation of state for perturbations approach with the generality of the effective field theory approach. An effective fluid description is employed, based on a general action describing single-scalar field models. The formalism is developed from first principles, and constructed keeping the goal of a simple implementation into CAMB in mind. Benefits of this approach include its straightforward implementation, the generality of the underlying theory, the fact that the evolved variables are physical quantities, and that model-independent phenomenological descriptions may be straightforwardly investigated. We hope this formulation will provide a powerful tool for the comparison of theoretical models of dark energy with observational data.

  6. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  7. Dark Radiation predictions from general Large Volume Scenarios

    CERN Document Server

    Hebecker, Arthur; Rompineve, Fabrizio; Witkowski, Lukas T

    2014-01-01

    Recent observations constrain the amount of Dark Radiation ($\\Delta N_{\\rm eff}$) and may even hint towards a non-zero value of $\\Delta N_{\\rm eff}$. It is by now well-known that this puts stringent constraints on the sequestered Large Volume Scenario (LVS), i.e. on LVS realisations with the Standard Model at a singularity. We go beyond this setting by considering LVS models where SM fields are realised on 7-branes in the geometric regime. As we argue, this naturally goes together with high-scale supersymmetry. The abundance of Dark Radiation is determined by the competition between the decay of the lightest modulus to axions, to the SM Higgs and to gauge fields. The latter decay channel avoids the most stringent constraints of the sequestered setting. Nevertheless, a rather robust prediction for a substantial amount of Dark Radiation can be made. This applies both to cases where the SM 4-cycles are stabilised by D-terms and are small "by accident" as well as to fibred models with the small cycles stabilised ...

  8. Micro Data and General Equilibrium Models

    DEFF Research Database (Denmark)

    Browning, Martin; Hansen, Lars Peter; Heckman, James J.

    1999-01-01

    Dynamic general equilibrium models are required to evaluate policies applied at the national level. To use these models to make quantitative forecasts requires knowledge of an extensive array of parameter values for the economy at large. This essay describes the parameters required for different ...

  9. Resonance asymptotics in the generalized Winter model

    CERN Document Server

    Exner, P; Exner, Pavel; Fraas, Martin

    2006-01-01

    We consider a modification of the Winter model describing a quantum particle in presence of a spherical barrier given by a fixed generalized point interaction. It is shown that the three classes of such interactions correspond to three different types of asymptotic behaviour of resonances of the model at high energies.

  10. Predictions of models for environmental radiological assessment

    Energy Technology Data Exchange (ETDEWEB)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)

    2011-07-01

    In the field of environmental impact assessment, models are used for estimating source terms, environmental dispersion and transfer of radionuclides, exposure pathways, radiation doses and the risk to human beings. Although it is recognized that site-specific local data are important to improve the quality of dose assessment results, in practice obtaining such data can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for {sup 137}Cs and {sup 60}Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

  11. A general model for bidirectional associative memories.

    Science.gov (United States)

    Shi, H; Zhao, Y; Zhuang, X

    1998-01-01

    This paper proposes a general model for bidirectional associative memories that associate patterns between the X-space and the Y-space. The general model does not require the usual assumption that the interconnection weight from a neuron in the X-space to a neuron in the Y-space is the same as the one from the Y-space to the X-space. We start by defining a supporting function to measure how well a state supports another state in a general bidirectional associative memory (GBAM). We then use the supporting function to formulate the associative recalling process as a dynamic system, explore its stability and asymptotic stability conditions, and develop an algorithm for learning the asymptotic stability conditions using the Rosenblatt perceptron rule. The effectiveness of the proposed model for recognition of noisy patterns and the performance of the model in terms of storage capacity, attraction, and spurious memories are demonstrated by some outstanding experimental results.
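    Recall in a bidirectional associative memory of the kind described can be sketched as an iteration between the two spaces. This is a generic illustration, not the paper's GBAM learning algorithm: Hebbian-style weights for a single stored pair are used, and here V happens to be the transpose of W, although the general model explicitly does not require that:

    ```python
    def sign_vec(v):
        """Threshold each component to +1/-1 (bipolar activation)."""
        return [1 if s >= 0 else -1 for s in v]

    def matvec(M, v):
        return [sum(m_ij * x for m_ij, x in zip(row, v)) for row in M]

    def bam_recall(W, V, x, steps=3):
        """Bidirectional recall: W maps X-space to Y-space, V maps back.
        In the general model V need not be the transpose of W."""
        y = sign_vec(matvec(W, x))
        for _ in range(steps):
            x = sign_vec(matvec(V, y))
            y = sign_vec(matvec(W, x))
        return x, y

    # one stored pair and Hebbian-style weights (illustrative special case)
    x_p = [1, -1, 1, -1]
    y_p = [1, 1, -1]
    W = [[yi * xj for xj in x_p] for yi in y_p]   # 3x4: X -> Y
    V = [[xj * yi for yi in y_p] for xj in x_p]   # 4x3: Y -> X
    x_noisy = [1, 1, 1, -1]                       # one bit flipped
    x_rec, y_rec = bam_recall(W, V, x_noisy)
    print(x_rec == x_p and y_rec == y_p)  # -> True
    ```

    The iteration converges when each state fully "supports" the other, which is the intuition the paper formalizes with its supporting function and stability analysis.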

  12. Nonlinear chaotic model for predicting storm surges

    NARCIS (Netherlands)

    Siek, M.; Solomatine, D.P.

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.

  13. Recent and past musical activity predicts cognitive aging variability: direct comparison with general lifestyle activities.

    Science.gov (United States)

    Hanna-Pladdy, Brenda; Gajewski, Byron

    2012-01-01

    Studies evaluating the impact of modifiable lifestyle factors on cognition offer potential insights into sources of cognitive aging variability. Recently, we reported an association between the extent of musical instrumental practice throughout the life span (greater than 10 years) and preserved cognitive functioning in advanced age. These findings raise the question of whether there are training-induced brain changes in musicians that can transfer to non-musical cognitive abilities and compensate for age-related cognitive declines. However, because of the relationship between engagement in general lifestyle activities and preserved cognition, it remains unclear whether these findings are driven specifically by musical training or by the types of individuals likely to engage in more activities in general. The current study controlled for general activity level in evaluating cognition between musicians and non-musicians. Also, the timing of engagement (age of acquisition, past versus recent) was assessed in predictive models of successful cognitive aging. Seventy age- and education-matched older musicians (>10 years) and non-musicians (ages 59-80) were evaluated on neuropsychological tests and general lifestyle activities. Musicians scored higher on tests of phonemic fluency, verbal working memory, verbal immediate recall, visuospatial judgment, and motor dexterity, but did not differ in other general leisure activities. Partition analyses were conducted on significant cognitive measures to determine aspects of musical training predictive of enhanced cognition. The first partition analysis revealed that education best predicted visuospatial functions in musicians, followed by recent musical engagement, which offset low education. In the second partition analysis, early age of musical acquisition predicted memory in musicians, while analyses for other measures were not predictive. Recent and past musical activity, but not general lifestyle activities, predicted variability

  14. The Cosmology of Generalized Modified Gravity Models

    CERN Document Server

    Carroll, S M; Duvvuri, V; Easson, D A; Trodden, M; Turner, M S; Carroll, Sean M.; Felice, Antonio De; Duvvuri, Vikram; Easson, Damien A.; Trodden, Mark; Turner, Michael S.

    2005-01-01

    We consider general curvature-invariant modifications of the Einstein-Hilbert action that become important only in regions of extremely low space-time curvature. We investigate the far future evolution of the universe in such models, examining the possibilities for cosmic acceleration and other ultimate destinies. The models generically possess de Sitter space as an unstable solution and exhibit an interesting set of attractor solutions which, in some cases, provide alternatives to dark energy models.

  15. Cosmology of generalized modified gravity models

    Science.gov (United States)

    Carroll, Sean M.; de Felice, Antonio; Duvvuri, Vikram; Easson, Damien A.; Trodden, Mark; Turner, Michael S.

    2005-03-01

    We consider general curvature-invariant modifications of the Einstein-Hilbert action that become important only in regions of extremely low space-time curvature. We investigate the far future evolution of the Universe in such models, examining the possibilities for cosmic acceleration and other ultimate destinies. The models generically possess de Sitter space as an unstable solution and exhibit an interesting set of attractor solutions which, in some cases, provide alternatives to dark energy models.

  16. The generic model of General Relativity

    Energy Technology Data Exchange (ETDEWEB)

    Tsamparlis, Michael, E-mail: mtsampa@phys.uoa.g [Department of Physics, Section Astrophysics Astronomy Mechanics, University of Athens, University of Athens, Zografos 15783, Athens (Greece)

    2009-10-01

    We develop a generic spacetime model in General Relativity from which all existing model results are produced under specific assumptions, depending on the case. We classify each type of possible assumption, especially the role of observers and that of symmetries, and discuss their role in the development of a model. We apply the results in a step by step approach to the case of a Bianchi I spacetime and a string fluid.

  17. Interacting holographic generalized cosmic Chaplygin gas model

    Science.gov (United States)

    Naji, Jalil

    2014-03-01

    In this paper we consider a correspondence between the holographic dark energy density and the interacting generalized cosmic Chaplygin gas energy density in a flat FRW universe. Then, we reconstruct the potential of the scalar field which describes the generalized cosmic Chaplygin cosmology. In a special case we obtain a time-dependent energy density and study the cosmological parameters. We find the stability condition of this model, which depends on the cosmic parameter.

  18. How to Establish Clinical Prediction Models

    Directory of Open Access Journals (Sweden)

    Yong-ho Lee

    2016-03-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models, with comparable examples from real practice. After model development and rigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.

  19. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest... computational resources. The identification method is suitable for predictive control.

  20. Prediction of depression in European general practice attendees: the PREDICT study

    Directory of Open Access Journals (Sweden)

    Geerlings Mirjam I

    2006-01-01

    Background: Prevention of depression must address multiple risk factors. Estimating overall risk across a range of putative risk factors is fundamental to the prevention of depression. However, we lack reliable and valid methods of risk estimation. This protocol paper introduces PREDICT, an international research study to address this risk estimation. Methods/design: This is a prospective study in which consecutive general practice attendees in six European countries are recruited and followed up after six and 12 months. Prevalence of depression is assessed at baseline and at each follow-up point. Consecutive attendees between April 2003 and September 2004 who were aged 18 to 75 were asked to take part. The possibility of a depressive episode was assessed using the Depression Section of the Composite International Diagnostic Interview. A selection of presumed risk factors was based on our previous work and a systematic review of the literature. It was necessary to evaluate the test-retest reliability of a number of risk factor questions that were developed specifically, or adapted, for the PREDICT study. In a separate reliability study conducted between January and November 2003, consecutive general practice attendees in the six participating European countries completed the risk factor items on two occasions, two weeks apart. The overall response rate at entry to the study was 69%. We exceeded our expected recruitment rate, achieving a total of 10,048 people in all. Reliability coefficients were generally good to excellent. Discussion: Response rates at follow-up in all countries were uniformly high, which suggests that prediction will be based on an almost full cohort. The results of our reliability analysis are encouraging and suggest that data collected during the course of PREDICT will have a satisfactory level of stability. The development of a multi-factor risk score for depression will lay the foundation for future research on risk reduction.

  1. Multivariate Generalized Linear Mixed Models Using R

    CERN Document Server

    Berridge, Damon M

    2011-01-01

    To provide researchers with the ability to analyze large and complex data sets using robust models, this book presents a unified framework for a broad class of models that can be applied using a dedicated R package (Sabre). The first five chapters cover the analysis of multilevel models using univariate generalized linear mixed models (GLMMs). The next few chapters extend to multivariate GLMMs and the last chapters address more specialized topics, such as parallel computing for large-scale analyses. Each chapter includes many real-world examples implemented using Sabre as well as exercises and

  2. Universality in generalized models of inflation

    CERN Document Server

    Binétruy, Pierre; Pieroni, Mauro

    2016-01-01

    We show that the cosmological evolution of a scalar field with non standard kinetic term can be described in terms of a Renormalization Group Equation. In this framework inflation corresponds to the slow evolution in a neighborhood of a fixed point and universality classes for inflationary models can be naturally introduced. Using some examples we show the application of the formalism. The predicted values for the speed of sound $c_s$ and for the amount of non-Gaussianities produced in these models are discussed. In particular, we show that it is possible to introduce models with $c_s^2 \

  3. Two criteria for evaluating risk prediction models.

    Science.gov (United States)

    Pfeiffer, R M; Gail, M H

    2011-09-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed PCF (q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow-up, PNF (p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF (q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF (p) assess the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of those two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
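    The two criteria can be computed directly from a set of predicted risks and observed case indicators; the sketch below follows the definitions in the abstract (the risk and case vectors are invented for illustration):

    ```python
    def pcf(risks, cases, q):
        """Proportion of cases followed: fraction of all cases found in the
        top-q fraction of the population ranked by predicted risk."""
        order = sorted(range(len(risks)), key=lambda i: -risks[i])
        n_top = int(round(q * len(risks)))
        return sum(cases[i] for i in order[:n_top]) / sum(cases)

    def pnf(risks, cases, p):
        """Proportion needed to follow: smallest fraction of the population
        (ranked by risk) that must be followed to cover a fraction p of cases."""
        order = sorted(range(len(risks)), key=lambda i: -risks[i])
        total, covered = sum(cases), 0
        for k, i in enumerate(order, start=1):
            covered += cases[i]
            if covered >= p * total:
                return k / len(risks)
        return 1.0

    # hypothetical cohort of 10 people: predicted risks and case status
    risks = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
    cases = [1,   1,   0,   1,   0,   0,   1,   0,   0,   0]
    print(pcf(risks, cases, 0.2))   # top 20% of people hold 2 of 4 cases -> 0.5
    print(pnf(risks, cases, 0.75))  # top 4 people cover 3 of 4 cases -> 0.4
    ```

    Ranking by risk and accumulating cases is exactly the construction of the Lorenz curve mentioned in the abstract: PCF reads the curve at a population quantile, while PNF inverts it at a case quantile.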

  4. Modelling anisotropic fluid spheres in general relativity

    CERN Document Server

    Boonserm, Petarpa; Visser, Matt

    2015-01-01

    We argue that an arbitrary general relativistic anisotropic fluid sphere, (spherically symmetric but with transverse pressure not equal to radial pressure), can nevertheless be successfully modelled by suitable linear combinations of quite ordinary classical matter: an isotropic perfect fluid, a classical electromagnetic field, and a classical (minimally coupled) scalar field. While the most general decomposition is not unique, a preferred minimal decomposition can be constructed that is unique. We show how the classical energy conditions for the anisotropic fluid sphere can be related to energy conditions for the isotropic perfect fluid, electromagnetic field, and scalar field components of the model. Furthermore we show how this decomposition relates to the distribution of electric charge density and scalar charge density throughout the model that is used to mimic the anisotropic fluid sphere. Consequently, we can build physically reasonable matter models for almost any spherically symmetric spacetime.

  5. Generalized multicritical one-matrix models

    CERN Document Server

    Ambjorn, J; Makeenko, Y

    2016-01-01

    We show that there exists a simple generalization of Kazakov's multicritical one-matrix model, which interpolates between the various multicritical points of the model. The associated multicritical potential takes the form of a power series with a heavy tail, leading to a cut of the potential and its derivative at the real axis, and reduces to a polynomial at Kazakov's multicritical points. From the combinatorial point of view the generalized model allows polygons of arbitrary large degrees (or vertices of arbitrary large degree, when considering the dual graphs), and it is the weight assigned to these large order polygons which brings about the interpolation between the multicritical points in the one-matrix model.

  6. Generalized multicritical one-matrix models

    Science.gov (United States)

    Ambjørn, J.; Budd, T.; Makeenko, Y.

    2016-12-01

    We show that there exists a simple generalization of Kazakov's multicritical one-matrix model, which interpolates between the various multicritical points of the model. The associated multicritical potential takes the form of a power series with a heavy tail, leading to a cut of the potential and its derivative at the real axis, and reduces to a polynomial at Kazakov's multicritical points. From the combinatorial point of view the generalized model allows polygons of arbitrary large degrees (or vertices of arbitrary large degree, when considering the dual graphs), and it is the weight assigned to these large order polygons which brings about the interpolation between the multicritical points in the one-matrix model.

  7. On generalized Pólya urn models

    CERN Document Server

    Chen, May-Ru

    2011-01-01

    We study an urn model introduced in the paper of Chen and Wei, where at each discrete time step m balls are drawn at random from an urn containing balls of the colors white and black. Balls are added to the urn according to the inspected colors, generalizing the well known Pólya-Eggenberger urn model, the case m = 1. We provide exact expressions for the expectation and the variance of the number of white balls after n draws, and determine the structure of higher moments. Moreover, we discuss extensions to more than two colors. Furthermore, we introduce and discuss a new urn model where the sampling of the m balls is carried out in a step-by-step fashion, and also introduce a generalized Friedman's urn model.
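    A Monte Carlo sketch of such an urn is below. The specific reinforcement rule (add c balls of each drawn ball's color) and all the numbers are illustrative assumptions, not the paper's exact scheme:

    ```python
    import random

    def simulate_urn(w, b, m, c, n_steps, rng):
        """One run: at each step draw m balls without replacement, then add
        c extra balls of each drawn ball's color. Returns final white count."""
        for _ in range(n_steps):
            white_drawn = sum(rng.sample([1] * w + [0] * b, m))  # 1 = white
            w += c * white_drawn
            b += c * (m - white_drawn)
        return w

    # start with 5 white / 5 black; draw m=2 per step for 10 steps
    rng = random.Random(42)
    runs = [simulate_urn(5, 5, m=2, c=1, n_steps=10, rng=rng)
            for _ in range(2000)]
    print(sum(runs) / len(runs))  # Monte Carlo estimate of E[#white]
    ```

    With a symmetric start the expected white count stays at half the total (here 15 of 30 balls after 10 steps), so the simulation mainly illustrates the spread around the expectation that the paper's exact variance formulas describe.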

  8. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p

  9. General Equilibrium Models: Improving the Microeconomics Classroom

    Science.gov (United States)

    Nicholson, Walter; Westhoff, Frank

    2009-01-01

    General equilibrium models now play important roles in many fields of economics including tax policy, environmental regulation, international trade, and economic development. The intermediate microeconomics classroom has not kept pace with these trends, however. Microeconomics textbooks primarily focus on the insights that can be drawn from the…

  10. Models Predicting Success of Infertility Treatment: A Systematic Review

    Science.gov (United States)

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive and time consuming and is sometimes simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to form a general picture of the models' applicability. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched using WHO definitions and MeSH keywords. Papers about prediction models in infertility were evaluated. Results: Eighty one papers were eligible for the study. Papers covered years after 1986, and studies were designed both retrospectively and prospectively. IVF prediction models accounted for the largest share of papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be applied clinically if it can be statistically evaluated and is well validated for treatment success. To achieve better results, the physician's and the couple's estimates of the treatment success rate should be based on history, examination, and clinical tests. Models must be checked for a sound theoretical approach and appropriate validation. The advantages of applying prediction models include reduced cost and time, avoidance of painful treatments, assessment of the treatment approach for physicians, and support for decision making by health managers. The selection of the approach for designing and using these models is inevitable. PMID:27141461

  11. Precise methods for conducted EMI modeling, analysis, and prediction

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Focusing on the state-of-the-art conducted EMI prediction, this paper presents a noise source lumped circuit modeling and identification method, an EMI modeling method based on multiple slope approximation of switching transitions, and double Fourier integral method modeling PWM conversion units to achieve an accurate modeling of EMI noise source. Meanwhile, a new sensitivity analysis method, a general coupling model for steel ground loops, and a partial element equivalent circuit method are proposed to identify and characterize conducted EMI coupling paths. The EMI noise and propagation modeling provide an accurate prediction of conducted EMI in the entire frequency range (0–10 MHz) with good practicability and generality. Finally a new measurement approach is presented to identify the surface current of large dimensional metal shell. The proposed analytical modeling methodology is verified by experimental results.

  12. Precise methods for conducted EMI modeling, analysis, and prediction

    Institute of Scientific and Technical Information of China (English)

    MA WeiMing; ZHAO ZhiHua; MENG Jin; PAN QiJun; ZHANG Lei

    2008-01-01

    Focusing on the state-of-the-art conducted EMI prediction, this paper presents a noise source lumped circuit modeling and identification method, an EMI modeling method based on multiple slope approximation of switching transitions, and double Fourier integral method modeling PWM conversion units to achieve an accurate modeling of EMI noise source. Meanwhile, a new sensitivity analysis method, a general coupling model for steel ground loops, and a partial element equivalent circuit method are proposed to identify and characterize conducted EMI coupling paths. The EMI noise and propagation modeling provide an accurate prediction of conducted EMI in the entire frequency range (0-10 MHz) with good practicability and generality. Finally a new measurement approach is presented to identify the surface current of large dimensional metal shell. The proposed analytical modeling methodology is verified by experimental results.

  13. Using R In Generalized Linear Models

    Directory of Open Access Journals (Sweden)

    Mihaela Covrig

    2015-09-01

    Full Text Available This paper aims to approach the estimation of generalized linear models (GLM) on the basis of the glm routine in R. In particular, regression models are analyzed for those cases in which the explained variable follows a Poisson or a Negative Binomial distribution. The paper briefly presents the GLM methodology for count data, while the practical part revolves around estimating and comparing models in which the response variable is the number of claims in a portfolio of automobile insurance policies.
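
    The Poisson log-link fit that R's glm routine performs can be sketched outside R as well. The following Python reimplementation of the underlying iteratively reweighted least squares (IRLS) algorithm is illustrative only (the function name and the synthetic claim-count data are ours, not from the paper):

    ```python
    import numpy as np

    def poisson_glm_irls(X, y, n_iter=25):
        """Fit a Poisson GLM with log link by iteratively reweighted least
        squares, the same fitting scheme used by R's glm()."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            eta = X @ beta            # linear predictor
            mu = np.exp(eta)          # inverse link: mean of the Poisson response
            W = mu                    # working weights for the Poisson family
            z = eta + (y - mu) / mu   # working response
            beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
        return beta

    # Synthetic claim counts with log-mean 0.5 + 0.8 * x
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 2000)
    X = np.column_stack([np.ones_like(x), x])
    y = rng.poisson(np.exp(0.5 + 0.8 * x))
    beta_hat = poisson_glm_irls(X, y)
    ```

    The recovered coefficients approach the true values (0.5, 0.8) as the sample grows, mirroring what `glm(y ~ x, family = poisson)` would report in R.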

  14. Using a Prediction Model to Manage Cyber Security Threats

    Directory of Open Access Journals (Sweden)

    Venkatesh Jaganathan

    2015-01-01

    Full Text Available Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  15. Using a Prediction Model to Manage Cyber Security Threats.

    Science.gov (United States)

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  16. String Field Equations from Generalized Sigma Model

    Energy Technology Data Exchange (ETDEWEB)

    Bardakci, K.; Bernardo, L.M.

    1997-01-29

    We propose a new approach for deriving the string field equations from a general sigma model on the world-sheet. This approach leads to an equation which combines some of the attractive features of both the renormalization group method and the covariant beta function treatment of the massless excitations. It has the advantage of being covariant under a very general set of both local and non-local transformations in the field space. We apply it to the tachyon, massless and first massive level, and show that the resulting field equations reproduce the correct spectrum of a left-right symmetric closed bosonic string.

  17. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  18. A new general model for predicting melting thermodynamics of complementary and mismatched B-form duplexes containing locked nucleic acids: application to probe design for digital PCR detection of somatic mutations.

    Science.gov (United States)

    Hughesman, Curtis; Fakhfakh, Kareem; Bidshahri, Roza; Lund, H Louise; Haynes, Charles

    2015-02-17

    Advances in real-time polymerase chain reaction (PCR), as well as the emergence of digital PCR (dPCR) and useful modified nucleotide chemistries, including locked nucleic acids (LNAs), have created the potential to improve and expand clinical applications of PCR through their ability to better quantify and differentiate amplification products, but fully realizing this potential will require robust methods for designing dual-labeled hydrolysis probes and predicting their hybridization thermodynamics as a function of their sequence, chemistry, and template complementarity. We present here a nearest-neighbor thermodynamic model that accurately predicts the melting thermodynamics of a short oligonucleotide duplexed either to its perfect complement or to a template containing mismatched base pairs. The model may be applied to pure-DNA duplexes or to duplexes for which one strand contains any number and pattern of LNA substitutions. Perturbations to duplex stability arising from mismatched DNA:DNA or LNA:DNA base pairs are treated at the Gibbs energy level to maintain statistical significance in the regressed model parameters. This approach, when combined with the model's accounting of the temperature dependencies of the melting enthalpy and entropy, permits accurate prediction of T(m) values for pure-DNA homoduplexes or LNA-substituted heteroduplexes containing one or two independent mismatched base pairs. Terms accounting for changes in solution conditions and terminal addition of fluorescent dyes and quenchers are then introduced so that the model may be used to accurately predict and thereby tailor the T(m) of a pure-DNA or LNA-substituted hydrolysis probe when duplexed either to its perfect-match template or to a template harboring a noncomplementary base. 
The model, which builds on classic nearest-neighbor thermodynamics, should therefore be of use to clinicians and biologists who require probes that distinguish and quantify two closely related alleles in either a
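
    The paper's regressed LNA and mismatch parameters are not reproduced here, but the two-state nearest-neighbor Tm calculation the model builds on can be sketched with the standard unified DNA:DNA parameters (SantaLucia, 1998); treat the numbers below as illustrative:

    ```python
    import math

    # Unified DNA:DNA nearest-neighbor parameters (SantaLucia, 1998):
    # dH in kcal/mol, dS in cal/(mol*K). Illustrative only; the paper's
    # LNA substitution and mismatch terms are not included.
    NN = {
        "AA": (-7.9, -22.2), "AT": (-7.2, -20.4), "TA": (-7.2, -21.3),
        "CA": (-8.5, -22.7), "GT": (-8.4, -22.4), "CT": (-7.8, -21.0),
        "GA": (-8.2, -22.2), "CG": (-10.6, -27.2), "GC": (-9.8, -24.4),
        "GG": (-8.0, -19.9),
    }
    COMP = str.maketrans("ACGT", "TGCA")

    def tm_nearest_neighbor(seq, conc=0.25e-6):
        """Two-state melting temperature (deg C) of seq against its perfect
        complement: Tm = dH / (dS + R ln(CT/4)) for a non-self-complementary
        duplex at total strand concentration CT."""
        dH, dS = 0.0, 0.0
        for i in range(len(seq) - 1):
            pair = seq[i:i + 2]
            if pair not in NN:  # use the equivalent reverse-complement entry
                pair = pair.translate(COMP)[::-1]
            h, s = NN[pair]
            dH += h
            dS += s
        # terminal initiation terms (A/T ends are less stable than G/C ends)
        for end in (seq[0], seq[-1]):
            dH += 2.3 if end in "AT" else 0.1
            dS += 4.1 if end in "AT" else -2.8
        R = 1.987  # gas constant, cal/(mol*K)
        return dH * 1000.0 / (dS + R * math.log(conc / 4.0)) - 273.15

    tm = tm_nearest_neighbor("AGCGTACCGGTTAGC")
    ```

    The full model adds LNA stacking terms, mismatch Gibbs-energy corrections, salt adjustments, and dye/quencher end effects on top of this basic sum.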

  19. Improved Generalized Force Model considering the Comfortable Driving Behavior

    Directory of Open Access Journals (Sweden)

    De-Jie Xu

    2015-01-01

    Full Text Available This paper presents an improved generalized force model (IGFM) that considers the driver's comfortable driving behavior. Through theoretical analysis, we propose calculation methods for the comfortable driving distance and velocity. Then the stability condition of the model is obtained by linear stability analysis. The problem of unrealistic acceleration of the leading car found in previous models is solved. Furthermore, the simulation results show that IGFM can predict the correct delay time of car motion and kinematic wave speed at jam density, and it can exactly describe the driver's behavior in an urgent case where no collision occurs. The dynamic properties of IGFM also indicate that stability is improved compared to the generalized force model.
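
    The IGFM's comfortable-distance terms are not reproduced here, but the class of car-following models it belongs to can be sketched with a generic optimal-velocity update (the tanh velocity function and all parameter values below are common illustrative choices, not the paper's):

    ```python
    import numpy as np

    def optimal_velocity(dx, v_max=30.0, d_safe=25.0):
        """Generic optimal-velocity function: desired speed as a smooth,
        increasing function of the headway dx (a common tanh form)."""
        return 0.5 * v_max * (np.tanh((dx - d_safe) / 10.0) + np.tanh(d_safe / 10.0))

    def step(x, v, dt=0.1, kappa=0.6, ring=1000.0):
        """One explicit Euler step for N cars on a ring road: each driver
        relaxes toward the optimal velocity for the headway to the car ahead."""
        headway = (np.roll(x, -1) - x) % ring
        a = kappa * (optimal_velocity(headway) - v)
        return x + v * dt, v + a * dt

    # 20 evenly spaced cars relax to the uniform-flow equilibrium speed
    x = np.linspace(0.0, 1000.0, 20, endpoint=False)
    v = np.zeros(20)
    for _ in range(3000):
        x, v = step(x, v)
    ```

    Linear stability analysis of such models, as in the paper, asks when this uniform-flow state survives small perturbations instead of developing stop-and-go waves.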

  20. General functioning predicts reward and punishment learning in schizophrenia.

    Science.gov (United States)

    Somlai, Zsuzsanna; Moustafa, Ahmed A; Kéri, Szabolcs; Myers, Catherine E; Gluck, Mark A

    2011-04-01

    Previous studies investigating feedback-driven reinforcement learning in patients with schizophrenia have provided mixed results. In this study, we explored the clinical predictors of reward and punishment learning using a probabilistic classification learning task. Patients with schizophrenia (n=40) performed similarly to healthy controls (n=30) on the classification learning task. However, more severe negative and general symptoms were associated with lower reward-learning performance, whereas poorer general psychosocial functioning was correlated with both lower reward- and punishment-learning performances. Multiple linear regression analyses indicated that general psychosocial functioning was the only significant predictor of reinforcement learning performance when education, antipsychotic dose, and positive, negative and general symptoms were included in the analysis. These results suggest a close relationship between reinforcement learning and general psychosocial functioning in schizophrenia.

  1. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, the regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (the weight matrix of image pixels) to enhance the classification performance. GRR uses generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
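
    GRR's learned residual correlations and pixel weights are not reproduced here, but the CRC-style Tikhonov-regularized coding baseline it extends can be sketched in a few lines (function name and toy data are ours):

    ```python
    import numpy as np

    def crc_classify(train_X, train_y, test_x, lam=0.1):
        """Collaborative-representation style classifier: code the test sample
        over ALL training samples with a Tikhonov (ridge) penalty, then assign
        the class whose samples reconstruct it with the smallest residual."""
        D = train_X.T  # dictionary: columns are training samples
        alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ test_x)
        best, best_res = None, np.inf
        for c in np.unique(train_y):
            mask = train_y == c
            res = np.linalg.norm(test_x - D[:, mask] @ alpha[mask])
            if res < best_res:
                best, best_res = c, res
        return best

    rng = np.random.default_rng(1)
    # two 20-dimensional classes scattered around different prototypes
    proto = rng.normal(size=(2, 20))
    train_X = np.vstack([proto[c] + 0.1 * rng.normal(size=(15, 20)) for c in (0, 1)])
    train_y = np.repeat([0, 1], 15)
    pred = crc_classify(train_X, train_y, proto[1] + 0.1 * rng.normal(size=20))
    ```

    GRR replaces the plain ridge penalty with a generalized Tikhonov term that encodes residual correlations, and reweights pixels iteratively, but the classify-by-class-residual structure is the same.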

  2. Generalized Models for Rock Joint Surface Shapes

    Directory of Open Access Journals (Sweden)

    Shigui Du

    2014-01-01

    Full Text Available Generalized models of joint surface shapes are the foundation for mechanism studies on the mechanical effects of rock joint surface shapes. Based on extensive field investigations of rock joint surface shapes, generalized models for three level shapes named macroscopic outline, surface undulating shape, and microcosmic roughness were established through statistical analyses of 20,078 rock joint surface profiles. The relative amplitude of profile curves was used as a borderline for the division of different level shapes. The study results show that the macroscopic outline has three basic features such as planar, arc-shaped, and stepped; the surface undulating shape has three basic features such as planar, undulating, and stepped; and the microcosmic roughness has two basic features such as smooth and rough.

  3. Higher Dimensional Generalizations of the SYK Model

    CERN Document Server

    Berkooz, Micha; Rozali, Moshe; Simón, Joan

    2016-01-01

    We discuss a 1+1 dimensional generalization of the Sachdev-Ye-Kitaev model. The model contains $N$ Majorana fermions at each lattice site with a nearest-neighbour hopping term. The SYK random interaction is restricted to low momentum fermions of definite chirality within each lattice site. This gives rise to an ordinary 1+1 field theory above some energy scale and a low energy SYK-like behavior. We exhibit a class of low-pass filters which give rise to a rich variety of hyperscaling behaviour in the IR. We also discuss another set of generalizations which describes probing an SYK system with an external fermion, together with the new scaling behavior they exhibit in the IR.

  4. Generalized Penner models to all genera

    CERN Document Server

    Ambjørn, Jan; Kristjansen, C F

    1994-01-01

    We give a complete description of the genus expansion of the one-cut solution to the generalized Penner model. The solution is presented in a form which allows us in a very straightforward manner to localize critical points and to investigate the scaling behaviour of the model in the vicinity of these points. We carry out an analysis of the critical behaviour to all genera addressing all types of multi-critical points. In certain regions of the coupling constant space the model must be defined via analytical continuation. We show in detail how this works for the Penner model. Using analytical continuation it is possible to reach the fermionic 1-matrix model. We show that the critical points of the fermionic 1-matrix model can be indexed by an integer, $m$, as was the case for the ordinary hermitian 1-matrix model. Furthermore the $m$'th multi-critical fermionic model has to all genera the same value of $\gamma_{str}$ as the $m$'th multi-critical hermitian model. However, the coefficients of the topological...

  5. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed
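
    The SEA power balance underlying such predictions is a small linear system. A minimal sketch for two coupled subsystems, assuming illustrative loss factors (none of the values below come from the paper or from EN 12354):

    ```python
    import numpy as np

    def sea_energies(P_in, eta_d, eta_c, omega):
        """Steady-state SEA power balance for two coupled subsystems:
        omega * (L @ E) = P_in, where L collects the damping loss factors
        eta_d = (eta1, eta2) and coupling loss factors eta_c = (eta12, eta21)."""
        eta1, eta2 = eta_d
        eta12, eta21 = eta_c
        L = np.array([[eta1 + eta12, -eta21],
                      [-eta12, eta2 + eta21]])
        return np.linalg.solve(omega * L, P_in)

    # 1 W injected into subsystem 1 at 1 kHz
    omega = 2 * np.pi * 1000.0
    E = sea_energies(np.array([1.0, 0.0]), (0.01, 0.01), (0.005, 0.002), omega)
    ```

    Energy conservation is built in: the total dissipated power, omega*(eta1*E1 + eta2*E2), equals the injected power, and the directly excited subsystem carries the larger energy.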

  6. Parallel Computing of Ocean General Circulation Model

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper discusses the parallel computing of the third-generation Ocean General Circulation Model (OGCM) from the State Key Laboratory of Numerical Modeling for Atmospheric Science and Geophysical Fluid Dynamics (LASG), Institute of Atmospheric Physics (IAP). Meanwhile, several optimization strategies for parallel computing of the OGCM (POGCM) on a Scalable Shared Memory Multiprocessor (S2MP) are presented. Using the Message Passing Interface (MPI), we obtain superlinear speedup on the SGI Origin 2000 for the parallel OGCM (POGCM) after optimization.

  7. Modeling the pion Generalized Parton Distribution

    CERN Document Server

    Mezrag, C

    2015-01-01

    We compute the pion Generalized Parton Distribution (GPD) in a valence dressed quarks approach. We model the Mellin moments of the GPD using Ansätze for Green functions inspired by the numerical solutions of the Dyson-Schwinger Equations (DSE) and the Bethe-Salpeter Equation (BSE). Then, the GPD is reconstructed from its Mellin moments using the Double Distribution (DD) formalism. The agreement with available experimental data is very good.

  8. Massive Predictive Modeling using Oracle R Enterprise

    CERN Document Server

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
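
    The per-entity idea described above is independent of Oracle's tooling. A pure-Python sketch (not Oracle R Enterprise's API; the function and data are illustrative) of fitting one tiny model per entity:

    ```python
    import numpy as np
    from collections import defaultdict

    def fit_per_entity(records):
        """One model per entity (customer, zip code, ...): here an ordinary
        least-squares line fitted independently to each entity's data."""
        groups = defaultdict(list)
        for entity, x, y in records:
            groups[entity].append((x, y))
        models = {}
        for entity, pts in groups.items():
            x = np.array([p[0] for p in pts], dtype=float)
            y = np.array([p[1] for p in pts], dtype=float)
            A = np.column_stack([x, np.ones_like(x)])      # slope + intercept
            models[entity] = np.linalg.lstsq(A, y, rcond=None)[0]
        return models

    # two entities with different underlying behaviors
    records = [("a", x, 2.0 * x + 1.0) for x in range(10)] + \
              [("b", x, -0.5 * x + 3.0) for x in range(10)]
    models = fit_per_entity(records)
    ```

    Embedded R execution in Oracle R Enterprise parallelizes exactly this group-and-fit pattern over database-resident data.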

  9. A General Business Model for Marine Reserves

    Science.gov (United States)

    Sala, Enric; Costello, Christopher; Dougherty, Dawn; Heal, Geoffrey; Kelleher, Kieran; Murray, Jason H.; Rosenberg, Andrew A.; Sumaila, Rashid

    2013-01-01

    Marine reserves are an effective tool for protecting biodiversity locally, with potential economic benefits including enhancement of local fisheries, increased tourism, and maintenance of ecosystem services. However, fishing communities often fear short-term income losses associated with closures, and thus may oppose marine reserves. Here we review empirical data and develop bioeconomic models to show that the value of marine reserves (enhanced adjacent fishing + tourism) may often exceed the pre-reserve value, and that economic benefits can offset the costs in as little as five years. These results suggest the need for a new business model for creating and managing reserves, which could pay for themselves and turn a profit for stakeholder groups. Our model could be expanded to include ecosystem services and other benefits, and it provides a general framework to estimate costs and benefits of reserves and to develop such business models. PMID:23573192

  10. A general business model for marine reserves.

    Directory of Open Access Journals (Sweden)

    Enric Sala

    Full Text Available Marine reserves are an effective tool for protecting biodiversity locally, with potential economic benefits including enhancement of local fisheries, increased tourism, and maintenance of ecosystem services. However, fishing communities often fear short-term income losses associated with closures, and thus may oppose marine reserves. Here we review empirical data and develop bioeconomic models to show that the value of marine reserves (enhanced adjacent fishing + tourism) may often exceed the pre-reserve value, and that economic benefits can offset the costs in as little as five years. These results suggest the need for a new business model for creating and managing reserves, which could pay for themselves and turn a profit for stakeholder groups. Our model could be expanded to include ecosystem services and other benefits, and it provides a general framework to estimate costs and benefits of reserves and to develop such business models.

  11. A general business model for marine reserves.

    Science.gov (United States)

    Sala, Enric; Costello, Christopher; Dougherty, Dawn; Heal, Geoffrey; Kelleher, Kieran; Murray, Jason H; Rosenberg, Andrew A; Sumaila, Rashid

    2013-01-01

    Marine reserves are an effective tool for protecting biodiversity locally, with potential economic benefits including enhancement of local fisheries, increased tourism, and maintenance of ecosystem services. However, fishing communities often fear short-term income losses associated with closures, and thus may oppose marine reserves. Here we review empirical data and develop bioeconomic models to show that the value of marine reserves (enhanced adjacent fishing + tourism) may often exceed the pre-reserve value, and that economic benefits can offset the costs in as little as five years. These results suggest the need for a new business model for creating and managing reserves, which could pay for themselves and turn a profit for stakeholder groups. Our model could be expanded to include ecosystem services and other benefits, and it provides a general framework to estimate costs and benefits of reserves and to develop such business models.

  12. A Design of Generalized Predictive Control Systems Using a Memory-Based System Identification

    Science.gov (United States)

    Takao, Kenji; Yamamoto, Toru; Hinamoto, Takao

    In this paper, a new system identification scheme is proposed based on a memory-based modeling (MBM) method. According to the MBM method, local models are automatically generated using input/output data pairs of the controlled object stored in the database. In particular, it is well known that the MBM method works well on nonlinear systems. Therefore, even if the controlled object contains nonlinearities, accurate identification can be performed by the proposed method. Moreover, since the parameter estimates are easily applied to many existing controllers, good control results can be obtained for nonlinear systems. In this paper, generalized predictive control (GPC) is used as one such existing controller, because GPC is designed based on multi-step prediction and is effective for systems with large, ambiguous and/or time-variant time-delays. Finally, the effectiveness of the newly proposed control scheme is numerically evaluated on some simulation examples.
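
    The memory-based identification step can be sketched as a nearest-neighbor lookup followed by a local least-squares fit; the function name, plant, and parameters below are illustrative, not from the paper:

    ```python
    import numpy as np

    def local_linear_predict(database, query, k=20):
        """Memory-based modeling sketch: retrieve the k stored input/output
        pairs nearest the query and fit a local affine model to them."""
        X, y = database
        idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
        A = np.column_stack([X[idx], np.ones(k)])  # affine design matrix
        coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        return np.append(query, 1.0) @ coef

    # database of a nonlinear plant y = sin(u0) + 0.5*u1, stored as I/O pairs
    rng = np.random.default_rng(2)
    U = rng.uniform(-2, 2, size=(500, 2))
    Y = np.sin(U[:, 0]) + 0.5 * U[:, 1]
    y_hat = local_linear_predict((U, Y), np.array([0.3, -0.4]))
    ```

    Because each query gets its own local model, the scheme tracks nonlinearities that a single global linear model would miss, and the locally estimated parameters can then feed a controller such as GPC.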

  13. Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model

    Directory of Open Access Journals (Sweden)

    Ephraim Suhir

    2011-01-01

    Full Text Available Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and inference tools and means, to evaluate and to maintain a high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
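
    The arithmetic behind such an RUL estimate can be sketched with the simplest exponential law of reliability (the paper's bathtub-curve model is richer; the step-change in failure rate and all numbers below are illustrative assumptions):

    ```python
    import math

    def survival(lam, t):
        """Exponential law of reliability: probability of non-failure by time t."""
        return math.exp(-lam * t)

    def remaining_useful_life(lam_degraded, p_required=0.90):
        """Time after the detected degradation for which the conditional
        probability of non-failure still exceeds p_required, assuming the
        failure rate has stepped up to lam_degraded."""
        return -math.log(p_required) / lam_degraded

    # failure rate estimated to have risen to 4e-5 per hour after the anomaly
    rul = remaining_useful_life(4e-5)
    ```

    With the assumed post-anomaly rate, the device can be relied on (at the 90% non-failure level) for roughly 2600 more hours; a wear-out model with an increasing hazard would shorten this estimate.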

  14. A proposed general model of information behaviour.

    Directory of Open Access Journals (Sweden)

    2003-01-01

    Full Text Available Presents a critical description of Wilson's (1996) global model of information behaviour and proposes major modifications on the basis of research into the information behaviour of managers, conducted in Poland. The theoretical analysis and research results suggest that Wilson's model has certain imperfections, both in its conceptual content and in its graphical presentation. The model, for example, cannot be used to describe managers' information behaviour, since managers basically are not the end users of information services external to the organization or of computerized services, and they acquire information mainly through various intermediaries. Therefore, the model cannot be considered a general model, applicable to every category of information users. The proposed new model encompasses the main concepts of Wilson's model, such as: person-in-context, three categories of intervening variables (individual, social and environmental), activating mechanisms, the cyclic character of information behaviours, and the adoption of a multidisciplinary approach to explain them. However, the new model introduces several changes. They include: 1. identification of 'context' with the intervening variables; 2. immersion of the chain of information behaviour in the 'context', to indicate that the context variables influence behaviour at all stages of the process (identification of needs, looking for information, processing and using it); 3. stress put on the fact that the activating mechanisms can also occur at all stages of the information acquisition process; 4. introduction of two basic strategies of looking for information: personally and/or using various intermediaries.

  15. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Stability analysis of generalized predictive control based on Kleinman's controllers

    Institute of Scientific and Technical Information of China (English)

    DING Baocang; XI Yugeng

    2004-01-01

    Using Kleinman's controller, its extended form, and Riccati iteration as analysis tools, the stability of GPC under various parameter cases is discussed. Overall closed-loop stability conclusions for GPC, established through its equivalence with Kleinman's controller, are obtained; these cover some existing results and provide a theoretical foundation for the stable design of predictive control.

  8. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
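
    The posterior predictive checking workflow behind such studies can be sketched in miniature. The Bernoulli data, Beta posterior, and proportion-correct discrepancy below are illustrative stand-ins, not the multidimensional Bayesian-network models investigated in the study:

    ```python
    import random

    random.seed(0)

    # Observed data: assumed Bernoulli responses (a hypothetical stand-in
    # for the item responses modeled in a Bayesian network).
    observed = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

    def discrepancy(data):
        """A simple discrepancy measure: the proportion of correct responses."""
        return sum(data) / len(data)

    # Posterior draws of the success probability theta, here from the
    # conjugate Beta posterior under a uniform prior.
    posterior_draws = [random.betavariate(1 + sum(observed),
                                          1 + len(observed) - sum(observed))
                       for _ in range(1000)]

    # Posterior predictive p-value: fraction of replicated data sets whose
    # discrepancy is at least as extreme as the observed one.
    obs_d = discrepancy(observed)
    count = 0
    for theta in posterior_draws:
        replicated = [1 if random.random() < theta else 0 for _ in observed]
        if discrepancy(replicated) >= obs_d:
            count += 1
    ppp = count / len(posterior_draws)
    # A ppp near 0 or 1 signals data-model misfit; values near 0.5 indicate fit.
    ```

    PPMC frameworks differ mainly in the choice of discrepancy measure; the proportion-correct statistic here is only one of many candidates.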

  9. Using the Gamma-Poisson Model to Predict Library Circulations.

    Science.gov (United States)

    Burrell, Quentin L.

    1990-01-01

    Argues that the gamma mixture of Poisson processes, for all its perceived defects, can be used to make predictions regarding future library book circulations of a quality adequate for general management requirements. The use of the model is extensively illustrated with data from two academic libraries. (Nine references) (CLB)
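
    As a rough sketch of how a gamma mixture of Poisson processes yields circulation forecasts (the parameter values and the conjugate-updating form below are illustrative assumptions, not taken from the article):

    ```python
    # Gamma-Poisson sketch: each book's annual circulation is Poisson(lam),
    # with lam drawn from a Gamma(alpha, beta) distribution across the
    # collection. Parameters are invented for illustration.
    alpha, beta = 0.8, 2.0   # assumed gamma shape and rate

    def predicted_circulations(past_count, past_years, future_years):
        """Posterior-predictive mean circulations for one book.

        By conjugacy, lam | data ~ Gamma(alpha + past_count, beta + past_years),
        so the expected future count is future_years times the posterior mean.
        """
        posterior_mean_rate = (alpha + past_count) / (beta + past_years)
        return future_years * posterior_mean_rate

    # A book that circulated 6 times in the past 3 years:
    print(predicted_circulations(6, 3, 1))  # expected circulations next year
    ```

    Heavily used books get high predicted rates and rarely used books are shrunk toward the collection-wide mean, which is the behavior that makes the model adequate for general management purposes.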

  10. A Course in... Model Predictive Control.

    Science.gov (United States)

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  11. Online elicitation of Mamdani-type fuzzy rules via TSK-based generalized predictive control.

    Science.gov (United States)

    Mahfouf, M; Abbod, M F; Linkens, D A

    2003-01-01

    Many synergies have been proposed between soft-computing techniques, such as neural networks (NNs), fuzzy logic (FL), and genetic algorithms (GAs), which have shown that such hybrid structures can work well and also add robustness to the control system design. In this paper, a new control architecture is proposed whereby the on-line generated fuzzy rules relating to the self-organizing fuzzy logic controller (SOFLC) are obtained via integration with the popular generalized predictive control (GPC) algorithm using a Takagi-Sugeno-Kang (TSK)-based controlled autoregressive integrated moving average (CARIMA) model structure. In this approach, GPC replaces the performance index (PI) table which, as an incremental model, is traditionally used to discover, amend, and delete the rules. Because the GPC sequence is computed using predicted future outputs, the new hybrid approach handles time delays very well. The new generic approach, named generalized predictive self-organizing fuzzy logic control (GPSOFLC), is simulated on a well-known nonlinear chemical process, the distillation column, and is shown to produce an effective fuzzy rule-base in both qualitative (minimum number of generated rules) and quantitative (good rules) terms.

  12. Equivalency and unbiasedness of grey prediction models

    Institute of Scientific and Technical Information of China (English)

    Bo Zeng; Chuan Li; Guo Chen; Xianjun Long

    2015-01-01

    In order to investigate the structural discrepancies and modeling mechanisms among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x(0)(k) + az(1)(k) = b have the identical model structure and simulation precision. Moreover, unbiased simulation of a homogeneous exponential sequence can be accomplished. However, the models derived from dx(1)/dt + ax(1) = b only approximate those derived from x(0)(k) + az(1)(k) = b when |a| satisfies |a| < 0.1, and they cannot achieve unbiased simulation of a homogeneous exponential sequence. The above conclusions are proved and verified through theorems and examples.
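
    A minimal GM(1,1) sketch built around the model form x0(k) + a*z1(k) = b discussed above may make the mechanics concrete. The data series is an assumed homogeneous exponential sequence (ratio 1.2), chosen only for illustration:

    ```python
    import math

    x0 = [10.0, 12.0, 14.4, 17.28]   # assumed exponential data, ratio 1.2

    # 1-AGO (accumulated generating operation): x1(k) = sum of x0(1..k)
    x1 = []
    total = 0.0
    for v in x0:
        total += v
        x1.append(total)

    # Background values z1(k) = 0.5*(x1(k) + x1(k-1)); targets y(k) = x0(k)
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x1))]
    y = x0[1:]

    # Least-squares estimates of a and b in x0(k) = -a*z1(k) + b
    n = len(y)
    sz, sy = sum(z1), sum(y)
    szz = sum(z * z for z in z1)
    szy = sum(z * v for z, v in zip(z1, y))
    a = -(n * szy - sz * sy) / (n * szz - sz * sz)
    b = (sy + a * sz) / n

    def predict(k):
        """Recover x0_hat(k) from the whitening-equation time response by IAGO."""
        if k == 0:
            return x0[0]
        x1_hat = (x0[0] - b / a) * math.exp(-a * k) + b / a
        x1_prev = (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
        return x1_hat - x1_prev
    ```

    Because this classical formulation fits the discrete equation but predicts through the continuous whitening equation dx(1)/dt + ax(1) = b, its predictions for the exponential series are close but not exactly unbiased, consistent with the small-|a| approximation discussed above.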

  13. Gravitational Interactions in a General Multibrane Model

    CERN Document Server

    Bloomfield, Jolyon K

    2010-01-01

    The gravitational interactions of the four-dimensional effective theory describing a general N-brane model in five dimensions without radion stabilization are analyzed. The parameter space is constrained by requiring that there be no ghost modes in the theory, and that the Eddington parameterized post-Newtonian parameter gamma be consistent with observations. We show that we must reside on the brane on which the warp factor is maximized. The resultant theory contains N-1 radion modes in a nonlinear sigma model, with the target space being a subset of hyperbolic space. Imposing observational constraints on the relative strengths of gravitational interactions of dark and visible matter shows that at least 99.8% of the dark matter must live on our brane in this model.

  14. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.
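
    The core RTM computation, overlaying weighted environmental risk layers on a grid, can be sketched as follows. The layers, weights, and grid are invented for illustration and are not the Fort Worth model:

    ```python
    # Each layer is a rasterized indicator of one environmental risk factor;
    # the weights are assumed relative-risk values, not estimates from the study.
    GRID = 4
    liquor = [[0, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]]
    vacant = [[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
    weights = {"liquor": 1.8, "vacant": 1.2}

    # Composite risk terrain: weighted sum of the layers, cell by cell.
    risk = [[weights["liquor"] * liquor[r][c] + weights["vacant"] * vacant[r][c]
             for c in range(GRID)] for r in range(GRID)]

    # The highest-risk cell is a candidate area to prioritize prevention resources.
    best = max(((r, c) for r in range(GRID) for c in range(GRID)),
               key=lambda rc: risk[rc[0]][rc[1]])
    ```

    Real applications replace the binary indicators with distance- or density-based layers and estimate the weights from the relative importance of each risk factor.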

  15. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Full Text Available Abstract Background Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in
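
    One of the standard concordance metrics compared in such studies, Harrell's C-index, can be sketched in a few lines. The data are toy values, and this generic pure-Python illustration is not the paper's code:

    ```python
    def c_index(times, events, risks):
        """Fraction of comparable pairs where the higher-risk patient fails earlier.

        times:  observed times; events: 1 = death observed, 0 = censored;
        risks:  model-predicted risk scores (higher = worse prognosis).
        """
        concordant, comparable = 0.0, 0
        n = len(times)
        for i in range(n):
            for j in range(n):
                # A pair is comparable if i is observed to fail before j's time.
                if events[i] == 1 and times[i] < times[j]:
                    comparable += 1
                    if risks[i] > risks[j]:
                        concordant += 1
                    elif risks[i] == risks[j]:
                        concordant += 0.5      # ties count half
        return concordant / comparable

    times  = [2, 4, 5, 7, 9]
    events = [1, 1, 0, 1, 0]
    risks  = [0.9, 0.7, 0.6, 0.4, 0.1]   # perfectly anti-ordered with time
    print(c_index(times, events, risks))  # 1.0: perfect concordance
    ```

    A C-index of 0.5 corresponds to random risk scores; censored patients contribute only as the later member of a pair, which is one source of the metric conflicts the paper examines.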

  16. Application of a neural network for gentamicin concentration prediction in a general hospital population.

    Science.gov (United States)

    Corrigan, B W; Mayo, P R; Jamali, F

    1997-02-01

    Neural network (NN) computation is computer modeling based in part on simulation of the structure and function of the brain. These modeling techniques have been found useful as pattern recognition tools. In the present study, data including age, sex, height, weight, serum creatinine concentration, dose, dosing interval, and time of measurement were collected from 240 patients with various diseases being treated with gentamicin in a general hospital setting. The patient records were randomly divided into two sets: a training set of 220 patients used to develop relationships between input and output variables (peak and trough plasma concentrations) and a testing set (blinded from the NN) of 20 to test the NN. The network model was the back-propagation, feed-forward model. Various networks were tested, and the most accurate networks for peak and trough (calculated as mean percent error, root mean squared error of the testing group, and tau value between observed and predicted values) were reported. The results indicate that NNs can predict gentamicin serum concentrations accurately from various input data over a range of patient ages and renal function and may offer advantages over traditional dose prediction methods for gentamicin.
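
    A miniature version of a back-propagation, feed-forward network of the kind described: one hidden layer trained by stochastic gradient descent on an invented dose-to-concentration mapping. Nothing here reflects the study's actual data, inputs, or architecture:

    ```python
    import math
    import random

    random.seed(0)

    H = 4                                              # hidden units
    w1 = [random.uniform(-1.0, 1.0) for _ in range(H)]  # input -> hidden
    b1 = [0.0] * H
    w2 = [random.uniform(-1.0, 1.0) for _ in range(H)]  # hidden -> output
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
        return h, sum(w2[j] * h[j] for j in range(H)) + b2

    # Assumed toy data: roughly linear "dose" -> "concentration" relationship.
    data = [(x / 10.0, 0.8 * x / 10.0 + 0.1) for x in range(11)]

    lr = 0.05
    for _ in range(2000):                 # epochs of stochastic gradient descent
        for x, y in data:
            h, out = forward(x)
            err = out - y                 # derivative of 0.5 * (out - y)**2
            for j in range(H):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # back-propagated
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_h * x
                b1[j] -= lr * grad_h
            b2 -= lr * err

    _, pred = forward(0.5)                # true toy value: 0.8*0.5 + 0.1 = 0.5
    ```

    The study's network took eight clinical inputs and predicted two outputs (peak and trough concentrations); the training loop above shows only the shared mechanism of forward evaluation and error back-propagation.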

  17. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models

  19. Property predictions using microstructural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen's University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)

    2005-07-15

    Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc© are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during the aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.

  20. [Predicting suicide or predicting the unpredictable in an uncertain world: Reinforcement Learning Model-Based analysis].

    Science.gov (United States)

    Desseilles, Martin

    2012-01-01

    In general, it appears that the suicidal act is highly unpredictable with the current scientific means available. In this article, the author submits the hypothesis that predicting suicide is complex because it results in predicting a choice, in itself unpredictable. The article proposes a Reinforcement learning model-based analysis. In this model, we integrate on the one hand, four ascending modulatory neurotransmitter systems (acetylcholine, noradrenalin, serotonin, and dopamine) with their regions of respective projections and afferences, and on the other hand, various observations of brain imaging identified until now in the suicidal process.

  1. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed...... deterministic, and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
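
    The contrast drawn above, deterministic ODE prediction versus SDE-based randomness, can be illustrated with a one-compartment elimination model simulated by Euler-Maruyama. All parameters are invented for illustration:

    ```python
    import math
    import random

    random.seed(1)

    # Deterministic model: dC/dt = -k*C. SDE version adds a diffusion term:
    #   dC = -k*C dt + sigma*C dW
    k, sigma = 0.5, 0.1
    C0, T, n = 100.0, 4.0, 400
    dt = T / n

    def simulate_path():
        c = C0
        for _ in range(n):
            dW = random.gauss(0.0, math.sqrt(dt))   # Brownian increment
            c += -k * c * dt + sigma * c * dW       # Euler-Maruyama step
        return c

    # Unlike the ODE, repeated simulations spread around the deterministic
    # solution C0 * exp(-k*T); the mean of many paths approaches it.
    paths = [simulate_path() for _ in range(2000)]
    mean_end = sum(paths) / len(paths)
    ```

    For this linear SDE the expected endpoint equals the ODE solution, so the spread of individual paths, not the mean, is what the stochastic formulation adds to the prediction.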

  2. Precision Plate Plan View Pattern Predictive Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun

    2011-01-01

    According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head-end and tail-end predictive models was found and corrected. Based on the numerical simulation results of 120 different conditions, a precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing rolled plates with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.

  3. Modelling debris flows down general channels

    Directory of Open Access Journals (Sweden)

    S. P. Pudasaini

    2005-01-01

    Full Text Available This paper is an extension of the single-phase cohesionless dry granular avalanche model over curved and twisted channels proposed by Pudasaini and Hutter (2003). It is a generalisation of the Savage and Hutter (1989, 1991) equations based on simple channel topography to a two-phase fluid-solid mixture of debris material. Important terms emerging from the correct treatment of the kinematic and dynamic boundary condition, and the variable basal topography are systematically taken into account. For vanishing fluid contribution and torsion-free channel topography our new model equations exactly degenerate to the previous Savage-Hutter model equations while such a degeneration was not possible by the Iverson and Denlinger (2001) model, which, in fact, also aimed to extend the Savage and Hutter model. The model equations of this paper have been rigorously derived; they include the effects of the curvature and torsion of the topography, generally for arbitrarily curved and twisted channels of variable channel width. The equations are put into a standard conservative form of partial differential equations. From these one can easily infer the importance and influence of the pore-fluid-pressure distribution in debris flow dynamics. The solid-phase is modelled by applying a Coulomb dry friction law whereas the fluid phase is assumed to be an incompressible Newtonian fluid. Input parameters of the equations are the internal and bed friction angles of the solid particles, the viscosity and volume fraction of the fluid, the total mixture density and the pore pressure distribution of the fluid at the bed. Given the bed topography and initial geometry and the initial velocity profile of the debris mixture, the model equations are able to describe the dynamics of the depth profile and bed parallel depth-averaged velocity distribution from the initial position to the final deposit. A shock capturing, total variation diminishing numerical scheme is implemented to

  4. A swarm intelligence-based tuning method for the Sliding Mode Generalized Predictive Control.

    Science.gov (United States)

    Oliveira, J B; Boaventura-Cunha, J; Moura Oliveira, P B; Freire, H

    2014-09-01

    This work presents an automatic tuning method for the discontinuous component of the Sliding Mode Generalized Predictive Controller (SMGPC) subject to constraints. The strategy employs Particle Swarm Optimization (PSO) to minimize a second aggregated cost function. The continuous component is obtained by the standard procedure, by Quadratic Programming (QP), thus yielding an online dual optimization scheme. Simulations and performance indexes for common process models in industry, such as nonminimum phase and time delayed systems, result in a better performance, improving robustness and tracking accuracy.
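
    A stripped-down version of such a PSO tuning loop, with a one-dimensional gain and an invented quadratic cost standing in for the aggregated SMGPC cost function:

    ```python
    import random

    random.seed(0)

    def cost(gain):
        # Assumed stand-in cost surface; the minimizer (2.5) is arbitrary.
        return (gain - 2.5) ** 2 + 1.0

    n_particles, iters = 20, 60
    w, c1, c2 = 0.7, 1.5, 1.5             # inertia, cognitive, social weights

    pos = [random.uniform(0.0, 10.0) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                        # personal bests
    gbest = min(pos, key=cost)            # global best

    for _ in range(iters):
        for i in range(n_particles):
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] += vel[i]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
            if cost(pos[i]) < cost(gbest):
                gbest = pos[i]
    ```

    In the paper's setting each cost evaluation would run a closed-loop simulation of the controller with the candidate discontinuous-component parameters, and the swarm would search that multi-dimensional space instead of a scalar gain.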

  5. Quenching the anisotropic heisenberg chain: exact solution and generalized Gibbs ensemble predictions.

    Science.gov (United States)

    Wouters, B; De Nardis, J; Brockmann, M; Fioretto, D; Rigol, M; Caux, J-S

    2014-09-12

    We study quenches in integrable spin-1/2 chains in which we evolve the ground state of the antiferromagnetic Ising model with the anisotropic Heisenberg Hamiltonian. For this nontrivially interacting situation, an application of the first-principles-based quench-action method allows us to give an exact description of the postquench steady state in the thermodynamic limit. We show that a generalized Gibbs ensemble, implemented using all known local conserved charges, fails to reproduce the exact quench-action steady state and to correctly predict postquench equilibrium expectation values of physical observables. This is supported by numerical linked-cluster calculations within the diagonal ensemble in the thermodynamic limit.

  6. A generalized model for compact stars

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Abdul [Bodai High School (H.S.), Department of Physics, Kolkata, West Bengal (India); Ray, Saibal [Government College of Engineering and Ceramic Technology, Department of Physics, Kolkata, West Bengal (India); Rahaman, Farook [Jadavpur University, Department of Mathematics, Kolkata, West Bengal (India)

    2016-05-15

    By virtue of the maximum entropy principle, we get an Euler-Lagrange equation which is a highly nonlinear differential equation containing the mass function and its derivatives. Solving the equation by a homotopy perturbation method we derive a generalized expression for the mass which is a polynomial function of the radial distance. Using the mass function we find a partially stable configuration and its characteristics. We show that different physical features of the known compact stars, viz. Her X-1, RX J 1856-37, SAX J (SS1), SAX J (SS2), and PSR J 1614-2230, can be explained by the present model. (orig.)

  7. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    ...Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... to substantial differences in predictions. HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method: an arbitrary time-dependent concentration field is represented...

  8. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e. m(

  9. The General Linear Model as Structural Equation Modeling

    Science.gov (United States)

    Graham, James M.

    2008-01-01

    Statistical procedures based on the general linear model (GLM) share much in common with one another, both conceptually and practically. The use of structural equation modeling path diagrams as tools for teaching the GLM as a body of connected statistical procedures is presented. A heuristic data set is used to demonstrate a variety of univariate…

  10. THE PRINCIPLE OF ROBUSTNESS IN GENERALIZED PREDICTIVE CONTROL

    Institute of Scientific and Technical Information of China (English)

    Sun Mingwei; Chen Zengqiang; Yuan Zhuzhi

    1999-01-01

    This paper analyzes in depth the closed-loop nature of GPC in the framework of internal model control (IMC) theory. A new kind of relation is found in the feedback structure, so that the robustness of GPC can be satisfactorily explained. The result is significant because the previous conclusions apply only to open-loop stable plants (or models).

  11. Snow hydrology in a general circulation model

    Science.gov (United States)

    Marshall, Susan; Roads, John O.; Glatzmaier, Gary

    1994-01-01

    A snow hydrology has been implemented in an atmospheric general circulation model (GCM). The snow hydrology consists of parameterizations of snowfall and snow cover fraction, a prognostic calculation of snow temperature, and a model of the snow mass and hydrologic budgets. Previously, only snow albedo had been included by a specified snow line. A 3-year GCM simulation with this now more complete surface hydrology is compared to a previous GCM control run with the specified snow line, as well as with observations. In particular, the authors discuss comparisons of the atmospheric and surface hydrologic budgets and the surface energy budget for U.S. and Canadian areas. The new snow hydrology changes the annual cycle of the surface moisture and energy budgets in the model. There is a noticeable shift in the runoff maximum from winter in the control run to spring in the snow hydrology run. A substantial amount of GCM winter precipitation is now stored in the seasonal snowpack. Snow cover also acts as an important insulating layer between the atmosphere and the ground. Wintertime soil temperatures are much higher in the snow hydrology experiment than in the control experiment. Seasonal snow cover is important for dampening large fluctuations in GCM continental skin temperature during the Northern Hemisphere winter. Snow depths and snow extent show good agreement with observations over North America. The geographic distribution of maximum depths is not as well simulated by the model due, in part, to the coarse resolution of the model. The patterns of runoff are qualitatively and quantitatively similar to observed patterns of streamflow averaged over the continental United States. The seasonal cycles of precipitation and evaporation are also reasonably well simulated by the model, although their magnitudes are larger than is observed. This is due, in part, to a cold bias in this model, which results in a dry model atmosphere and enhances the hydrologic cycle everywhere.

  12. Attitude scale and general health questionnaire subscales predict depression?

    OpenAIRE

    Amrollah Ebrahimi; Hamid Afshar; Hamid Taher Neshat Doost; Seyed Ghafur Mousavi; Hoseyn Moolavi

    2012-01-01

    Background: According to Beck theory, dysfunctional attitude has a central role in emergence of depression. The aim of this study was to determine contributions of dysfunctional attitude and general health index to depression. Methods: In this case-control study, two groups of subjects participated. The first group consisted of 65 patients with major depression and dysthymic disorder, who were recruited from Noor and Navab Safavi Psychiatry Clinics in Isfa-han. The control group was consi...

  13. The epistemological status of general circulation models

    Science.gov (United States)

    Loehle, Craig

    2017-05-01

    Forecasts of both likely anthropogenic effects on climate and consequent effects on nature and society are based on large, complex software tools called general circulation models (GCMs). Forecasts generated by GCMs have been used extensively in policy decisions related to climate change. However, the relation between underlying physical theories and results produced by GCMs is unclear. In the case of GCMs, many discretizations and approximations are made, and simulating Earth system processes is far from simple and currently leads to some results with unknown energy balance implications. Statistical testing of GCM forecasts for degree of agreement with data would facilitate assessment of fitness for use. If model results need to be put on an anomaly basis due to model bias, then both visual and quantitative measures of model fit depend strongly on the reference period used for normalization, making testing problematic. Epistemology is here applied to problems of statistical inference during testing, the relationship between the underlying physics and the models, the epistemic meaning of ensemble statistics, problems of spatial and temporal scale, the existence or not of an unforced null for climate fluctuations, the meaning of existing uncertainty estimates, and other issues. Rigorous reasoning entails carefully quantifying levels of uncertainty.

  14. General linear matrix model, Minkowski spacetime and the Standard Model

    CERN Document Server

    Belyea, Chris

    2010-01-01

    The Hermitian matrix model with general linear symmetry is argued to decouple into a finite unitary matrix model that contains metastable multidimensional lattice configurations and a fermion determinant. The simplest metastable state is a Hermitian Weyl kinetic operator of either handedness on a 3+1 D lattice with general nonlocal interactions. The Hermiticity produces 16 effective Weyl fermions by species doubling, 8 left- and 8 right-handed. These are identified with a Standard Model generation. Only local non-anomalous gauge fields within the soup of general fluctuations can survive at long distances, and the degrees of freedom for gauge fields of an $SU(8)_L \times SU(8)_R$ GUT are present. Standard Model gauge symmetries associate with particular species symmetries, for example change of QCD color associates with permutation of doubling status amongst space directions. Vierbein gravity is probably also generated. While fundamental Higgs fields are not possible, low fermion current masses can arise from chira...

  15. Predictive Control, Competitive Model Business Planning, and Innovation ERP

    DEFF Research Database (Denmark)

    Nourani, Cyrus F.; Lauth, Codrina

    2015-01-01

    New optimality principles are put forth based on competitive model business planning. A Generalized MinMax local optimum dynamic programming algorithm is presented and applied to business model computing where predictive techniques can determine local optima. Based on a systems model, an enterprise is not viewed as the sum of its component elements, but as the product of their interactions. The paper starts by introducing a systems approach to business modeling. A competitive business modeling technique, based on the author's planning techniques, is applied. Systemic decisions are based on common...... loops, applied to complex management decisions. Predictive modeling specifics are briefed. A preliminary optimal game modeling technique is presented in brief with applications to innovation and R&D management. Conducting gap and risk analysis can assist with this process. Example application areas...

  16. Prediction and reconstruction of future and missing unobservable modified Weibull lifetime based on generalized order statistics

    Directory of Open Access Journals (Sweden)

    Amany E. Aly

    2016-04-01

    Full Text Available When a system consists of independent components of the same type, appropriate actions may be taken as soon as a portion of them has failed. It is, therefore, important to be able to predict later failure times from earlier ones. One of the well-known failure distributions commonly used to model component life is the modified Weibull distribution (MWD). In this paper, two pivotal quantities are proposed to construct prediction intervals for future unobservable lifetimes based on generalized order statistics (gos) from the MWD. Moreover, a pivotal quantity is developed to reconstruct missing observations at the beginning of the experiment. Furthermore, Monte Carlo simulation studies are conducted and numerical computations are carried out to investigate the efficiency of the presented results. Finally, two illustrative examples for real data sets are analyzed.
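    The abstract's idea of predicting a later failure time from earlier ones can be illustrated by simulation. The sketch below uses a plain two-parameter Weibull rather than the paper's modified Weibull, and replaces the pivotal-quantity construction with direct Monte Carlo: given the r-th observed failure time, the remaining components' lifetimes are sampled from the conditional survival function, and quantiles of the first subsequent failure give a prediction interval. All parameter values are illustrative.

    ```python
    import numpy as np

    def next_failure_interval(t_r, n_remaining, shape, scale,
                              alpha=0.10, n_sim=20000, seed=0):
        """Monte Carlo (1 - alpha) prediction interval for the time of the
        next failure after the r-th observed failure t_r, when n_remaining
        Weibull(shape, scale) components are still running."""
        rng = np.random.default_rng(seed)
        u = rng.uniform(size=(n_sim, n_remaining))
        # Conditional sampling of T | T > t_r by inverting the survival
        # function S(t) = exp(-(t/scale)**shape):
        #   t = scale * ((t_r/scale)**shape - ln U)**(1/shape)
        cond = scale * ((t_r / scale) ** shape - np.log(u)) ** (1.0 / shape)
        next_failures = cond.min(axis=1)   # first of the survivors to fail
        lo, hi = np.quantile(next_failures, [alpha / 2, 1 - alpha / 2])
        return lo, hi

    # Five components still running after a failure observed at t = 2.0.
    lo, hi = next_failure_interval(t_r=2.0, n_remaining=5, shape=1.5, scale=3.0)
    ```

    The interval necessarily starts above t_r, since every conditional draw exceeds the last observed failure.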

  17. General Versus Specific Trait Anxiety Measures in the Prediction of Fear of Snakes, Heights, and Darkness

    Science.gov (United States)

    Mellstrom, Martin, Jr.; And Others

    1976-01-01

    The relations between general and specific trait anxiety tests and fear measures in three actual situations were investigated. The results indicate that the specific tests were clearly superior to the general ones in predicting fear of snakes but only slightly superior in predicting fear of heights and darkness. (Author)

  18. Generalization Gradients in Human Predictive Learning: Effects of Discrimination Training and within-Subjects Testing

    Science.gov (United States)

    Vervliet, Bram; Iberico, Carlos; Vervoort, Ellen; Baeyens, Frank

    2011-01-01

    Generalization gradients have been investigated widely in animal conditioning experiments, but much less so in human predictive learning tasks. Here, we apply the experimental design of a recent study on conditioned fear generalization in humans (Lissek et al., 2008) to a predictive learning task, and examine the effects of a number of relevant…

  19. Prediction model for spring dust weather frequency in North China

    Institute of Scientific and Technical Information of China (English)

    LANG XianMei

    2008-01-01

    It is of great social and scientific importance, and also very difficult, to make reliable predictions of dust weather frequency (DWF) in North China. In this paper, the correlations between spring DWF at the Beijing and Tianjin observation stations, taken as examples in North China, and seasonally averaged surface air temperature, precipitation, the Arctic Oscillation, the Antarctic Oscillation, the Southern Oscillation, near-surface meridional wind, and the Eurasian westerly index are calculated so as to construct a prediction model for spring DWF in North China from these climatic factors. Two prediction models, i.e. model-I and model-II, are then set up, based respectively on observed climate data and on the 32-year (1970-2001) extra-seasonal hindcast data reproduced by the nine-level Atmospheric General Circulation Model developed at the Institute of Atmospheric Physics (IAP9L-AGCM). The correlation coefficient between the observed and predicted DWF reaches 0.933 for model-I, indicating high prediction skill one season ahead. The corresponding value is as high as 0.948 for model-II, which incorporates synchronous spring climate data reproduced by the IAP9L-AGCM in addition to the predictors of model-I. Model-II not only makes more precise predictions but also extends the lead time of real-time prediction from model-I's one season to half a year. Finally, the real-time predictability of the two models is evaluated. Both models display high prediction skill for the interannual variation and the linear trend of spring DWF in North China, and each has its own advantages. For model-II, the prediction skill is much higher than that of the original approach using the IAP9L-AGCM alone. Therefore, the prediction approach put forward here should be extended to other regions of China where dust weather occurs frequently.
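    The skill statistic quoted above (a correlation coefficient between observed and predicted DWF) comes from a multiple regression on climate predictors. A minimal sketch of that workflow, with synthetic stand-in data rather than the paper's station observations or IAP9L-AGCM hindcasts, looks like this:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic stand-ins for seasonally averaged predictors (temperature,
    # precipitation, circulation indices, ...) over a 32-year record.
    n_years, n_pred = 32, 4
    X = rng.normal(size=(n_years, n_pred))
    true_coef = np.array([1.5, -0.8, 0.6, 0.3])        # hypothetical weights
    dwf = X @ true_coef + rng.normal(scale=0.3, size=n_years)

    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(n_years), X])
    coef, *_ = np.linalg.lstsq(A, dwf, rcond=None)
    pred = A @ coef

    # Skill measured as the correlation between observed and predicted DWF,
    # the statistic reported in the abstract.
    r = np.corrcoef(dwf, pred)[0, 1]
    ```

    With strongly informative predictors the in-sample correlation is high; a real application would evaluate it on independent hindcast years, as the paper does.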

  1. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  2. Modelling Chemical Reasoning to Predict Reactions

    CERN Document Server

    Segler, Marwin H S

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...

  3. Efficient decoding algorithms for generalized hidden Markov model gene finders

    Directory of Open Access Journals (Sweden)

    Delcher Arthur L

    2005-01-01

    Full Text Available Abstract Background The Generalized Hidden Markov Model (GHMM has proven a useful framework for the task of computational gene prediction in eukaryotic genomes, due to its flexibility and probabilistic underpinnings. As the focus of the gene finding community shifts toward the use of homology information to improve prediction accuracy, extensions to the basic GHMM model are being explored as possible ways to integrate this homology information into the prediction process. Particularly prominent among these extensions are those techniques which call for the simultaneous prediction of genes in two or more genomes at once, thereby increasing significantly the computational cost of prediction and highlighting the importance of speed and memory efficiency in the implementation of the underlying GHMM algorithms. Unfortunately, the task of implementing an efficient GHMM-based gene finder is already a nontrivial one, and it can be expected that this task will only grow more onerous as our models increase in complexity. Results As a first step toward addressing the implementation challenges of these next-generation systems, we describe in detail two software architectures for GHMM-based gene finders, one comprising the common array-based approach, and the other a highly optimized algorithm which requires significantly less memory while achieving virtually identical speed. We then show how both of these architectures can be accelerated by a factor of two by optimizing their content sensors. We finish with a brief illustration of the impact these optimizations have had on the feasibility of our new homology-based gene finder, TWAIN. Conclusions In describing a number of optimizations for GHMM-based gene finders and making available two complete open-source software systems embodying these methods, it is our hope that others will be more enabled to explore promising extensions to the GHMM framework, thereby improving the state-of-the-art in gene prediction.
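    The decoding problem underlying GHMM gene finders generalizes the classical Viterbi algorithm for plain HMMs. The toy sketch below shows the plain-HMM version in log space (the GHMM case adds variable-length state durations, which the paper's optimized architectures handle); the two-state model and all probabilities are hypothetical and unrelated to TWAIN.

    ```python
    import math

    def viterbi(obs, states, start_p, trans_p, emit_p):
        """Most likely state path for a plain HMM (log space avoids underflow)."""
        V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
        back = [{}]
        for t in range(1, len(obs)):
            V.append({}); back.append({})
            for s in states:
                prev = max(states, key=lambda p: V[t-1][p] + math.log(trans_p[p][s]))
                V[t][s] = (V[t-1][prev] + math.log(trans_p[prev][s])
                           + math.log(emit_p[s][obs[t]]))
                back[t][s] = prev
        last = max(states, key=lambda s: V[-1][s])
        path = [last]
        for t in range(len(obs) - 1, 0, -1):   # backtrack
            path.append(back[t][path[-1]])
        return path[::-1]

    # Toy two-state model: state "A" mostly emits "x", state "B" mostly "y".
    states = ("A", "B")
    start = {"A": 0.9, "B": 0.1}
    trans = {"A": {"A": 0.8, "B": 0.2}, "B": {"A": 0.2, "B": 0.8}}
    emit  = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.1, "y": 0.9}}
    path = viterbi(["x", "x", "y", "y"], states, start, trans, emit)
    # path -> ["A", "A", "B", "B"]
    ```

    The array-based versus memory-optimized architectures discussed in the paper differ in how the V and back tables are stored, not in this recurrence.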

  4. Physical/chemical modeling for photovoltaic module life prediction

    Science.gov (United States)

    Moacanin, J.; Carroll, W. F.; Gupta, A.

    1979-01-01

    The paper presents a generalized methodology for identification and evaluation of potential degradation and failure of terrestrial photovoltaic encapsulation. Failure progression modeling and an interaction matrix are utilized to complement the conventional approach to failure degradation mode identification. Comparison of the predicted performance based on these models can produce: (1) constraints on system or component design, materials or operating conditions, (2) qualification (predicted satisfactory function), and (3) uncertainty. The approach has been applied to an investigation of an unexpected delamination failure; it is being used to evaluate thermomechanical interactions in photovoltaic modules and to study corrosion of contacts and interconnects.

  5. Statistical characteristics of irreversible predictability time in regional ocean models

    Directory of Open Access Journals (Sweden)

    P. C. Chu

    2005-01-01

    Full Text Available Probabilistic aspects of regional ocean model predictability are analyzed using the probability density function (PDF) of the irreversible predictability time (IPT) (called τ-PDF), computed from an unconstrained ensemble of stochastic perturbations in initial conditions, winds, and open boundary conditions. Two attractors (a chaotic attractor and a small-amplitude stable limit cycle) are found in the wind-driven circulation. The relationship between attractor residence time and IPT determines the τ-PDF for short (up to several weeks) and intermediate (up to two months) predictions. The τ-PDF is usually non-Gaussian but not multi-modal for red-noise perturbations in initial conditions and perturbations in the wind and open boundary conditions. Bifurcation of the τ-PDF occurs as the tolerance level varies. Generally, extremely successful predictions (corresponding to the tail of the τ-PDF toward the large-IPT domain) are not outliers and share the same statistics as the whole ensemble of predictions.

  6. Building a generalized distributed system model

    Science.gov (United States)

    Mukkamala, R.

    1992-01-01

    The key elements in the second year (1991-92) of our project are: (1) implementation of the distributed system prototype; (2) successful passing of the candidacy examination and a PhD proposal acceptance by the funded student; (3) design of storage efficient schemes for replicated distributed systems; and (4) modeling of gracefully degrading reliable computing systems. In the third year of the project (1992-93), we propose to: (1) complete the testing of the prototype; (2) enhance the functionality of the modules by enabling the experimentation with more complex protocols; (3) use the prototype to verify the theoretically predicted performance of locking protocols, etc.; and (4) work on issues related to real-time distributed systems. This should result in efficient protocols for these systems.

  7. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  8. Genetic models of homosexuality: generating testable predictions

    OpenAIRE

    Gavrilets, Sergey; Rice, William R.

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...

  9. The DSM-5 dimensional trait model and five-factor models of general personality.

    Science.gov (United States)

    Gore, Whitney L; Widiger, Thomas A

    2013-08-01

    The current study empirically tests the relationship of the dimensional trait model proposed for the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) with five-factor models of general personality. The DSM-5 maladaptive trait dimensional model proposal included 25 traits organized within five broad domains (i.e., negative affectivity, detachment, antagonism, disinhibition, and psychoticism). Consistent with the authors of the proposal, it was predicted that negative affectivity would align with five-factor model (FFM) neuroticism, detachment with FFM introversion, antagonism with FFM antagonism, and disinhibition with low FFM conscientiousness; contrary to the proposal, it was predicted that psychoticism would align with FFM openness. Three measures of alternative five-factor models of general personality were administered to 445 undergraduates along with the Personality Inventory for DSM-5. The results supported the hypothesis that all five domains of the DSM-5 dimensional trait model are maladaptive variants of general personality structure, including the domain of psychoticism.

  10. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  11. Multipath diffusion: A general numerical model

    Science.gov (United States)

    Lee, J. K. W.; Aldama, A. A.

    1992-06-01

    The effect of high-diffusivity pathways on bulk diffusion of a solute in a material has been modeled previously for simple geometries such as those in tracer diffusion experiments, but not for the geometries and boundary conditions appropriate for experiments involving bulk exchange. Using a coupled system of equations for simultaneous diffusion of a solute through two families of diffusion pathways with differing diffusivities, a general 1-D finite difference model written in FORTRAN has been developed which can be used to examine the effect of high-diffusivity paths on partial and total concentration profiles within a homogeneous isotropic sphere, infinite cylinder, and infinite slab. The partial differential equations are discretized using the θ-method/central-difference scheme, and an iterative procedure analogous to the Gauss-Seidel method is employed to solve the two systems of coupled equations. Using Fourier convergence analysis, the procedure is shown to be unconditionally convergent. Computer simulations demonstrate that a multipath diffusion mechanism can enhance significantly the bulk diffusivity of a diffusing solute species through a material. The amount of solute escaping from a material depends strongly on the exchange coefficients (κ1 and κ2), which govern the transfer of solute from the crystal lattice to the high-diffusivity paths and vice versa. In addition, the exchange coefficients seem to control not only the amount of solute that is lost, but also the shape of the concentration profile. If |κ1| < |κ2|, concentration profiles generally are non-Fickian in shape, typically having shallow concentration gradients near the center (radius r = 0) and steep gradients towards the outer boundary of the material (r = R). When |κ1| ⩾ |κ2|, a concentration profile is generated which resembles a Fickian (volume) diffusion profile with an apparent bulk diffusivity between that of the crystal lattice and that of the high-diffusivity pathways.
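    The θ-method discretization mentioned above can be sketched for the simplest case: a single pathway family in an infinite-slab geometry with zero-concentration boundaries (the paper couples two such systems through exchange terms). Grid size, λ, and step count below are illustrative, and θ = 0.5 gives the Crank-Nicolson scheme.

    ```python
    import numpy as np

    def theta_step(u, lam, theta):
        """One step of u_t = D u_xx on interior nodes of a 1-D slab with
        zero-concentration (Dirichlet) boundaries, using the theta-method.
        lam = D * dt / dx**2; theta = 0.5 is Crank-Nicolson."""
        n = u.size
        L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1))          # discrete Laplacian
        A = np.eye(n) - theta * lam * L              # implicit part
        B = np.eye(n) + (1.0 - theta) * lam * L      # explicit part
        return np.linalg.solve(A, B @ u)

    # Uniform initial concentration; solute escapes through both faces.
    u = np.ones(49)
    for _ in range(200):
        u = theta_step(u, lam=0.5, theta=0.5)
    remaining = u.sum() / 49.0   # fraction of solute still inside the slab
    ```

    Because the scheme is unconditionally stable for θ ≥ 0.5 (consistent with the Fourier convergence analysis cited in the abstract), λ can exceed the explicit-scheme limit of 0.5 without the solution blowing up.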

  12. Outcome Prediction in Mathematical Models of Immune Response to Infection.

    Directory of Open Access Journals (Sweden)

    Manuel Mai

    Full Text Available Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and to obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
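    The virtual-patient pipeline described above can be sketched end to end with a deliberately simple stand-in model (not one of the paper's ODE systems): each patient gets a randomly drawn growth rate, the outcome is clearance versus persistence, and a one-feature logistic regression trained by gradient descent predicts the outcome from an early pathogen measurement. All parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pathogen_load(r, c, t=2.0, dt=0.01, p0=0.01):
        """Euler-integrate a toy model dP/dt = r*P*(1-P) - c*P.
        The pathogen is cleared (P -> 0) when c > r, persists otherwise."""
        p = p0
        for _ in range(int(t / dt)):
            p += dt * (r * p * (1.0 - p) - c * p)
        return p

    # Virtual patients: growth rate varies patient to patient, clearance fixed.
    n, c = 200, 0.5
    r_vals = rng.normal(0.7, 0.4, size=n)
    outcome = (r_vals > c).astype(float)           # 1 = persistent infection
    feature = np.log10([pathogen_load(r, c) for r in r_vals])
    feature = feature - feature.mean()             # center for stable training

    # One-feature logistic regression fitted by gradient descent.
    w, b = 0.0, 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(w * feature + b)))
        w -= 0.5 * np.mean((p - outcome) * feature)
        b -= 0.5 * np.mean(p - outcome)
    pred = 1.0 / (1.0 + np.exp(-(w * feature + b))) > 0.5
    accuracy = np.mean(pred == (outcome == 1))
    ```

    With zero parameter noise beyond the growth-rate draw, the early measurement determines the outcome and accuracy is high; adding measurement noise or overlapping basins of attraction, as in the paper, degrades it.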

  13. Generalized reproduction numbers and the prediction of patterns in waterborne disease.

    Science.gov (United States)

    Gatto, Marino; Mari, Lorenzo; Bertuzzo, Enrico; Casagrandi, Renato; Righetto, Lorenzo; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea

    2012-11-27

    Understanding, predicting, and controlling outbreaks of waterborne diseases are crucial goals of public health policies, but pose challenging problems because infection patterns are influenced by spatial structure and temporal asynchrony. Although explicit spatial modeling is made possible by widespread data mapping of hydrology, transportation infrastructure, population distribution, and sanitation, the precise condition under which a waterborne disease epidemic can start in a spatially explicit setting is still lacking. Here we show that the requirement that all the local reproduction numbers R0 be larger than unity is neither necessary nor sufficient for outbreaks to occur when local settlements are connected by networks of primary and secondary infection mechanisms. To determine onset conditions, we derive general analytical expressions for a reproduction matrix G0, explicitly accounting for spatial distributions of human settlements and pathogen transmission via hydrological and human mobility networks. At disease onset, a generalized reproduction number Λ0 (the dominant eigenvalue of G0) must be larger than unity. We also show that geographical outbreak patterns in complex environments are linked to the dominant eigenvector and to spectral properties of G0. Tests against data and computations for the 2010 Haiti and 2000 KwaZulu-Natal cholera outbreaks, as well as against computations for metapopulation networks, demonstrate that eigenvectors of G0 provide a synthetic and effective tool for predicting the disease course in space and time. Networked connectivity models, describing the interplay between hydrology, epidemiology, and social behavior sustaining human mobility, thus prove to be key tools for emergency management of waterborne infections.
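    The onset criterion above reduces to an eigenvalue computation. The sketch below builds a hypothetical 4-community reproduction matrix (all entries illustrative, not fitted to the Haiti or KwaZulu-Natal data) and checks the abstract's central point: every local R0 (the diagonal) can be below 1 while the generalized reproduction number Λ0 still exceeds 1.

    ```python
    import numpy as np

    # Hypothetical reproduction matrix G0: entry (i, j) is the number of
    # secondary infections in community i caused by one infection in
    # community j via hydrological and mobility connections.
    G0 = np.array([
        [0.6, 0.3, 0.0, 0.1],
        [0.4, 0.5, 0.2, 0.0],
        [0.0, 0.3, 0.7, 0.2],
        [0.1, 0.0, 0.3, 0.5],
    ])

    # Generalized reproduction number: dominant eigenvalue of G0.
    eigvals, eigvecs = np.linalg.eig(G0)
    k = int(np.argmax(eigvals.real))
    lambda0 = eigvals[k].real

    # Dominant eigenvector ~ relative geographic pattern of the outbreak.
    pattern = np.abs(eigvecs[:, k].real)
    pattern = pattern / pattern.sum()

    outbreak = lambda0 > 1.0   # onset condition from the abstract
    ```

    Here every diagonal entry is below unity, yet the coupling terms push Λ0 above 1, so the "all local R0 > 1" condition is indeed neither necessary nor sufficient.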

  14. Early improvement with pregabalin predicts endpoint response in patients with generalized anxiety disorder: an integrated and predictive data analysis.

    Science.gov (United States)

    Montgomery, Stuart A; Lyndon, Gavin; Almas, Mary; Whalen, Ed; Prieto, Rita

    2017-01-01

    Generalized anxiety disorder (GAD), a common mental disorder, has several treatment options including pregabalin. Not all patients respond to treatment; quickly determining which patients will respond is an important treatment goal. Patient-level data were pooled from nine phase II and III randomized, double-blind, short-term, placebo-controlled trials of pregabalin for the treatment of GAD. Efficacy outcomes included the change from baseline in the Hamilton Anxiety Scale (HAM-A) total score and its psychic and somatic subscales. Predictive modelling assessed baseline characteristics and early clinical responses to determine those predictive of clinical improvement at endpoint. A total of 2155 patients were included in the analysis (1447 pregabalin, 708 placebo). Pregabalin significantly improved the HAM-A total score compared with placebo at endpoint, with a treatment difference (95% confidence interval) of -2.61 (-3.21 to -2.01). Early improvement was predictive of an endpoint improvement of greater than or equal to 50% in the HAM-A total score. Pregabalin is an effective treatment option for patients with GAD. Patients with an early response to pregabalin are more likely to respond significantly at endpoint.

  15. Consolidation of data base for Army generalized missile model

    Science.gov (United States)

    Klenke, D. J.; Hemsch, M. J.

    1980-01-01

    Data from plume interaction tests, nose mounted canard configuration tests, and high angle of attack tests on the Army Generalized Missile model are consolidated in a computer program which makes them readily accessible for plotting, listing, and evaluation. The program is written in FORTRAN and will run on an ordinary minicomputer. It has the capability of retrieving any coefficient from the existing DATAMAN tapes and displaying it in tabular or plotted form. Comparisons of data taken in several wind tunnels and of data with the predictions of Program MISSILE2 are also presented.

  16. Predictive model for segmented poly(urea)

    Directory of Open Access Journals (Sweden)

    Frankl P.

    2012-08-01

    Full Text Available Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is by mechanical activation of the glass transition. In order to enable design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high-rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM), a mean-field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the resulting equation of state and constitutive model predict response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.

  17. Prediction of benzodiazepines solubility using different cosolvency models.

    Science.gov (United States)

    Nokhodchi, A; Shokri, J; Barzegar-Jalali, M; Ghafourian, T

    2002-07-01

    The solubility of four benzodiazepines (BZPs), including diazepam (DIZ), lorazepam (LRZ), clonazepam (CLZ), and chlordiazepoxide (CHZ), in water-cosolvent (ethanol, propylene glycol, and polyethylene glycol 200) binary systems was studied. In general, increasing the volume fraction of cosolvent resulted in an increase in the solubility of the benzodiazepines. The mole fraction solubilities were fitted to various cosolvency models, namely the extended Hildebrand approach (EHA), excess free energy (EFE), combined nearly ideal binary solvent/Redlich-Kister (CNIBS/R-K), general single model (GSM), mixture response surface (MR-S), double log-log (DL-L), and linear double log-log (LDL-L). The results showed that the DL-L model was the best at predicting the solubility of all drugs in all the water-cosolvent mixtures (OAE% = 4.71). The minimum and maximum errors were observed for benzodiazepine solubility in water-propylene glycol and water-ethanol mixtures, at 2.67 and 11.78%, respectively. Three models (EFE, CNIBS/R-K and LDL-L) were chosen as general models for solubility description of these structurally similar drugs in each of the solvent systems. Among these, the EFE model was the best at predicting the solubility of benzodiazepines in binary solvent mixtures (OAE% = 11.19).
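    The specific cosolvency models compared in the abstract (EHA, EFE, DL-L, ...) each have their own functional form; the simplest member of this family, the log-linear rule, interpolates the logarithm of solubility between the pure solvents and is sketched below. The solubility values are hypothetical, not the paper's measurements.

    ```python
    import math

    def loglinear_solubility(s_water, s_cosolvent, f):
        """Log-linear cosolvency rule:
        ln S_mix = (1 - f) * ln S_w + f * ln S_c,
        where f is the cosolvent volume fraction (0 <= f <= 1)."""
        return math.exp((1.0 - f) * math.log(s_water)
                        + f * math.log(s_cosolvent))

    # Hypothetical mole-fraction solubilities of a drug in water and ethanol.
    s_w, s_c = 1e-5, 1e-2
    curve = [loglinear_solubility(s_w, s_c, f / 10) for f in range(11)]
    ```

    The rule recovers the pure-solvent solubilities at f = 0 and f = 1 and increases monotonically in between whenever the cosolvent is the better solvent, which matches the trend reported in the abstract.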

  18. Prediction Error Representation in Individuals With Generalized Anxiety Disorder During Passive Avoidance.

    Science.gov (United States)

    White, Stuart F; Geraci, Marilla; Lewis, Elizabeth; Leshin, Joseph; Teng, Cindy; Averbeck, Bruno; Meffert, Harma; Ernst, Monique; Blair, James R; Grillon, Christian; Blair, Karina S

    2017-02-01

    Deficits in reinforcement-based decision making have been reported in generalized anxiety disorder. However, the pathophysiology of these deficits is largely unknown; published studies have mainly examined adolescents, and the integrity of core functional processes underpinning decision making remains undetermined. In particular, it is unclear whether the representation of reinforcement prediction error (PE) (the difference between received and expected reinforcement) is disrupted in generalized anxiety disorder. This study addresses these issues in adults with the disorder. Forty-six unmedicated individuals with generalized anxiety disorder and 32 healthy comparison subjects group-matched on IQ, gender, and age performed a passive avoidance task while undergoing functional MRI. Data analyses were performed using a computational modeling approach. Behaviorally, individuals with generalized anxiety disorder showed impaired reinforcement-based decision making. Imaging results revealed that during feedback, individuals with generalized anxiety disorder relative to healthy subjects showed a reduced correlation between PE and activity within the ventromedial prefrontal cortex, ventral striatum, and other structures implicated in decision making. In addition, individuals with generalized anxiety disorder relative to healthy participants showed a reduced correlation between punishment PEs, but not reward PEs, and activity within the left and right lentiform nucleus/putamen. This is the first study to identify computational impairments during decision making in generalized anxiety disorder. PE signaling is significantly disrupted in individuals with the disorder and may lead to their decision-making deficits and excessive worry about everyday problems by disrupting the online updating ("reality check") of the current relationship between the expected values of current response options and the actual received rewards and punishments.
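    The prediction-error signal studied above is the delta term of standard reinforcement-learning models: delta = received reinforcement minus expected reinforcement. A minimal delta-rule learner on a two-option, passive-avoidance-style task is sketched below; the probabilities and learning rate are illustrative, and this is not the computational model fitted in the study.

    ```python
    import random

    random.seed(0)
    alpha = 0.1                    # learning rate
    V = [0.0, 0.0]                 # expected value of each option
    reward_prob = [0.8, 0.2]       # P(reward = +1); otherwise punishment = -1

    pes = []
    for trial in range(500):
        choice = trial % 2         # passively sample both options in turn
        r = 1.0 if random.random() < reward_prob[choice] else -1.0
        delta = r - V[choice]      # prediction error (PE)
        V[choice] += alpha * delta # update expectation toward the outcome
        pes.append(delta)
    ```

    After learning, V approaches the true expected reinforcements (+0.6 and -0.6 here), and the PEs shrink accordingly; the study's finding is that the neural correlate of delta, especially for punishments, is blunted in generalized anxiety disorder.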

  19. Testing general relativity with compact coalescing binaries: comparing exact and predictive methods to compute the Bayes factor

    CERN Document Server

    Del Pozzo, Walter; Mandel, Ilya; Vecchio, Alberto

    2014-01-01

    The second generation of gravitational-wave detectors is scheduled to start operations in 2015. Gravitational-wave signatures of compact binary coalescences could be used to accurately test the strong-field dynamical predictions of general relativity. Computationally expensive data analysis pipelines, including TIGER, have been developed to carry out such tests. As a means to cheaply assess whether a particular deviation from general relativity can be detected, Cornish et al. and Vallisneri recently proposed an approximate scheme to compute the Bayes factor between a general-relativity gravitational-wave model and a model representing a class of alternative theories of gravity parametrised by one additional parameter. This approximate scheme is based on only two easy-to-compute quantities: the signal-to-noise ratio of the signal and the fitting factor between the signal and the manifold of possible waveforms within general relativity. In this work, we compare the prediction from the approximate formula agains...

  20. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    Science.gov (United States)

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
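    The Q² statistic quoted above is the cross-validated analogue of R²: Q² = 1 − PRESS / SS_tot, where PRESS sums squared errors of predictions made for held-out compounds. A minimal sketch, assuming paired lists of observed values and leave-one-out predictions (the actual QSAR models use molecular descriptors with random forests or neural networks):

```python
# Sketch of the leave-one-out Q^2 statistic: Q^2 = 1 - PRESS / SS_tot.
# y_true: observed activities; y_pred_loo: predictions made with each
# compound held out of training. Input values here are illustrative.

def q2(y_true, y_pred_loo):
    mean_y = sum(y_true) / len(y_true)
    press = sum((t - p) ** 2 for t, p in zip(y_true, y_pred_loo))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - press / ss_tot
```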

  1. Functional methods in the generalized Dicke model

    Energy Technology Data Exchange (ETDEWEB)

    Alcalde, M. Aparicio; Lemos, A.L.L. de; Svaiter, N.F. [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil)]. E-mails: aparicio@cbpf.br; aluis@cbpf.br; nfuxsvai@cbpf.br

    2007-07-01

    The Dicke model describes an ensemble of N identical two-level atoms (qubits) coupled to a single quantized mode of a bosonic field. The fermion Dicke model is obtained by replacing the atomic pseudo-spin operators with a linear combination of Fermi operators. The generalized fermion Dicke model is defined by introducing different coupling constants between the single mode of the bosonic field and the reservoir, g₁ and g₂ for the rotating and counter-rotating terms respectively. In the limit N → ∞, the thermodynamics of the fermion Dicke model can be analyzed using the path integral approach with functional methods. The system exhibits a second order phase transition from normal to superradiant at some critical temperature with the presence of a condensate. We evaluate the critical transition temperature and present the spectrum of the collective bosonic excitations for the general case (g₁ ≠ 0 and g₂ ≠ 0). There is quantum critical behavior when the coupling constants g₁ and g₂ satisfy g₁ + g₂ = (ω₀ω)^(1/2), where ω₀ is the frequency of the mode of the field and ω is the energy gap between energy eigenstates of the qubits. Two particular situations are analyzed. First, we present the spectrum of the collective bosonic excitations in the case g₁ ≠ 0 and g₂ = 0, recovering the well known results. Second, the case g₁ = 0 and g₂ ≠ 0 is studied. In this last case, it is possible to have a superradiant phase when only virtual processes are introduced in the interaction Hamiltonian. Here also a quantum phase transition appears at the critical coupling g₂ = (ω₀ω)^(1/2), and for larger values of the coupling, the system enters this superradiant phase with a Goldstone mode. (author)
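    The quantum-critical condition quoted in the abstract, g₁ + g₂ = √(ω₀ω), is easy to check numerically. A trivial helper (illustrative, not from the paper):

```python
import math

# Sketch of the quantum-critical condition for the generalized fermion Dicke
# model: criticality when g1 + g2 = sqrt(omega0 * omega). Helper is
# illustrative only.

def is_supercritical(g1, g2, omega0, omega):
    """True when the couplings exceed the critical combination."""
    return g1 + g2 > math.sqrt(omega0 * omega)
```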

  2. A general framework for multivariate multi-index drought prediction based on Multivariate Ensemble Streamflow Prediction (MESP)

    Science.gov (United States)

    Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.

    2016-08-01

    Drought is among the costliest natural hazards worldwide and extreme drought events in recent years have caused huge losses to various sectors. Drought prediction is therefore critically important for providing early warning information to aid decision making to cope with drought. Due to the complicated nature of drought, it has been recognized that the univariate drought indicator may not be sufficient for drought characterization and hence multivariate drought indices have been developed for drought monitoring. Alongside the substantial effort in drought monitoring with multivariate drought indices, it is of equal importance to develop a drought prediction method with multivariate drought indices to integrate drought information from various sources. This study proposes a general framework for multivariate multi-index drought prediction that is capable of integrating complementary prediction skills from multiple drought indices. The Multivariate Ensemble Streamflow Prediction (MESP) is employed to sample from historical records for obtaining statistical prediction of multiple variables, which is then used as inputs to achieve multivariate prediction. The framework is illustrated with a linearly combined drought index (LDI), which is a commonly used multivariate drought index, based on climate division data in California and New York in the United States with different seasonality of precipitation. The predictive skill of LDI (represented with persistence) is assessed by comparison with the univariate drought index and results show that the LDI prediction skill is less affected by seasonality than the meteorological drought prediction based on SPI. Prediction results from the case study show that the proposed multivariate drought prediction outperforms the persistence prediction, implying a satisfactory performance of multivariate drought prediction. The proposed method would be useful for drought prediction to integrate drought information from various sources.
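    The MESP idea is to resample joint blocks of the historical record so that dependence between variables is preserved, then feed each sampled trace into a combined index. A minimal sketch; the variable pair, block sampling scheme, and LDI weights are illustrative assumptions:

```python
import random

# Sketch of Multivariate Ensemble Streamflow Prediction (MESP): sample
# historical sequences jointly across variables, preserving cross-variable
# dependence. Records and weights below are illustrative.

def mesp_ensemble(history, n_members, horizon, seed=0):
    """history: list of (precip, soil_moisture) records, one per time step."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        start = rng.randrange(len(history) - horizon + 1)
        members.append(history[start:start + horizon])  # joint block sample
    return members

def linear_drought_index(record, w_precip=0.5, w_soil=0.5):
    """Linearly combined drought index (LDI) for one record."""
    precip, soil = record
    return w_precip * precip + w_soil * soil
```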

  4. Generalized effective medium resistivity model for low resistivity reservoir

    Institute of Scientific and Technical Information of China (English)

    SONG YanJie; TANG XiaoMin

    2008-01-01

    With the advancement in oil exploration, producible oil and gas are being found in low resistivity reservoirs, which may otherwise be erroneously thought as water zones from their resistivity. However, the evaluation of low resistivity reservoirs remains difficult from log interpretation. Since low resistivity in hydrocarbon bearing sands can be caused by dispersed clay, laminated shale, conductive matrix grains, microscopic capillary pores and high saline water, a new resistivity model is required for more accurate hydrocarbon saturation prediction for low resistivity formations. Herein, a generalized effective medium resistivity model has been proposed for low resistivity reservoirs, based on experimental measurements on artificial low resistivity shaly sand samples, symmetrical anisotropic effective medium theory for resistivity interpretations, and geneses and conductance mechanisms of low resistivity reservoirs. By analyzing effects of some factors on the proposed model, we show theoretically the model can describe conductance mechanisms of low resistivity reservoirs with five geneses. Also, shale distribution largely affects water saturation predicted by the model. Resistivity index decreases as fraction and conductivity of laminated shale, or fraction of dispersed clay, or conductivity of rock matrix grains increases. Resistivity index decreases as matrix percolation exponent, or percolation rate of capillary bound water increases, and as percolation exponent of capillary bound water, or matrix percolation rate, or free water percolation rate decreases. Rock sample data from low resistivity reservoirs with different geneses and interpretation results for log data show that the proposed model can be applied in low resistivity reservoirs containing high salinity water, dispersed clay, microscopic capillary pores, laminated shale and conductive matrix grains, and thus is considered as a generalized resistivity model for low resistivity reservoir evaluation.

  5. A Novel Trigger Model for Sales Prediction with Data Mining Techniques

    Directory of Open Access Journals (Sweden)

    Wenjie Huang

    2015-05-01

    Previous research on sales prediction has always used a single prediction model. However, no single model can perform the best for all kinds of merchandise. Accurate prediction results for just one commodity are meaningless to sellers. A general prediction for all commodities is needed. This paper illustrates a novel trigger system that can match certain kinds of commodities with a prediction model to give better prediction results for different kinds of commodities. We find some related factors for classification. Several classical prediction models are included as basic models for classification. We compared the results of the trigger model with other single models. The results show that the accuracy of the trigger model is better than that of a single model. This has implications for business in that sellers can utilize the proposed system to effectively predict the sales of several commodities.
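    The trigger idea reduces to a classify-then-dispatch pattern: derive a class from commodity features, then route the history to the base model suited to that class. A sketch; the feature names, classes, thresholds, and base models are all illustrative assumptions:

```python
# Sketch of a "trigger" system: classify a commodity by simple features,
# then dispatch to a matching base prediction model. All names, thresholds,
# and base models here are illustrative assumptions.

def classify_commodity(features):
    if features["seasonality"] > 0.5:
        return "seasonal"
    return "steady" if features["volatility"] < 0.2 else "volatile"

BASE_MODELS = {
    "seasonal": lambda history: history[-12],           # same month last year
    "steady":   lambda history: sum(history[-3:]) / 3,  # short moving average
    "volatile": lambda history: history[-1],            # naive last value
}

def trigger_predict(features, history):
    return BASE_MODELS[classify_commodity(features)](history)
```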

  6. Bayesian Calibration of Generalized Pools of Predictive Distributions

    Directory of Open Access Journals (Sweden)

    Roberto Casarin

    2016-03-01

    Decision-makers often consult different experts to build reliable forecasts on variables of interest. Combining more opinions and calibrating them to maximize the forecast accuracy is consequently a crucial issue in several economic problems. This paper applies a Bayesian beta mixture model to derive a combined and calibrated density function using random calibration functionals and random combination weights. In particular, it compares the application of linear, harmonic and logarithmic pooling in the Bayesian combination approach. The three combination schemes, i.e., linear, harmonic and logarithmic, are studied in simulation examples with multimodal densities and an empirical application with a large database of stock data. All of the experiments show that in a beta mixture calibration framework, the three combination schemes are substantially equivalent, achieving calibration, and no clear preference for one of them appears. The financial application shows that the linear pooling together with beta mixture calibration achieves the best results in terms of calibrated forecast.
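    For a single event probability p_i from each expert with weights w_i (summing to one), the three pooling schemes compared above take simple closed forms. A sketch; the renormalisation of the logarithmic pool against the complement event is an assumption of this sketch:

```python
# Sketch of the three combination schemes for one event probability:
#   linear:      sum_i w_i * p_i
#   harmonic:    1 / sum_i (w_i / p_i)
#   logarithmic: prod_i p_i**w_i, renormalised against the complement.

def linear_pool(p, w):
    return sum(wi * pi for wi, pi in zip(w, p))

def harmonic_pool(p, w):
    return 1.0 / sum(wi / pi for wi, pi in zip(w, p))

def log_pool(p, w):
    num, comp = 1.0, 1.0
    for wi, pi in zip(w, p):
        num *= pi ** wi
        comp *= (1.0 - pi) ** wi
    return num / (num + comp)   # renormalisation choice is an assumption
```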

  8. Towards a general theory of neural computation based on prediction by single neurons.

    Science.gov (United States)

    Fiorillo, Christopher D

    2008-10-01

    Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons
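    The two learning rules described above can be caricatured in a few lines: excitatory weights grow with reward (Hebbian selection of reward-correlated inputs), while prior-information weights adapt to cancel whatever is predictable, driving the membrane-voltage "prediction error" toward zero (anti-Hebbian in effect). The linear voltage model and learning rates are illustrative assumptions, not the paper's formal model:

```python
# Caricature of the model neuron described above. Voltage signals the
# difference between current (excitatory) and prior information; prior
# weights adapt to cancel the predictable part. Rates are illustrative.

def neuron_step(w_exc, w_prior, x_now, x_past, reward, lr=0.05):
    voltage = (sum(w * x for w, x in zip(w_exc, x_now))
               - sum(w * x for w, x in zip(w_prior, x_past)))  # "surprise"
    w_exc = [w + lr * reward * x for w, x in zip(w_exc, x_now)]       # Hebbian
    w_prior = [w + lr * voltage * x for w, x in zip(w_prior, x_past)] # cancels PE
    return voltage, w_exc, w_prior
```

    With a fixed, fully predictable input and no reward, the voltage response decays as the prior weights learn to anticipate the excitation.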

  9. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) together with micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors, such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristic curves to examine the robustness of the predictive power of these factors.
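    A logistic credit-scoring model of the kind examined maps micro and macro inputs through the logistic function to a default probability. A sketch; every coefficient value and variable choice below is illustrative, not fitted from the paper's data:

```python
import math

# Sketch of a logistic credit-scoring model: default probability from micro
# (TCRI score, asset growth) and macro (GDP growth) inputs. All coefficients
# are illustrative assumptions.

def default_probability(tcri, asset_growth, gdp_growth,
                        b0=-2.0, b_tcri=0.4, b_asset=-1.5, b_gdp=-0.8):
    z = b0 + b_tcri * tcri + b_asset * asset_growth + b_gdp * gdp_growth
    return 1.0 / (1.0 + math.exp(-z))   # logistic link
```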

  10. Calibrated predictions for multivariate competing risks models.

    Science.gov (United States)

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.

  11. General Study of Perturbations in Bouncing and Cyclic Models

    Science.gov (United States)

    Mayes, Riley; Biswas, Tirthabir; Lattyak, Colleen

    2015-04-01

    Perturbations are important in both understanding and evaluating the importance of bounces and turnarounds in models that predict a cyclic evolution of our Universe. Moreover, tracking these perturbations through the entirety of the cycle is important as it provides an outlet for a qualitative comparison with Cosmic Microwave Background (CMB) observations. However, tracking these perturbations through each cycle proves difficult as the physics to describe bounces and turnarounds is not well established. Therefore, we first studied general analytical and numerical techniques in order to understand the evolution of fluctuations in simple cosmological models where the physics is better understood. In our research, we developed analytical techniques from background solutions to establish a solid foundation for describing super-Hubble fluctuations in our early Universe. These analytical solutions were developed for both bounces and turnarounds, allowing us to numerically verify and then further investigate the consequences of these solutions in models such as bounce inflation and cyclic inflation.

  12. A General Mechanistic Model of Solid Oxide Fuel Cells

    Institute of Scientific and Technical Information of China (English)

    SHI Yixiang; CAI Ningsheng

    2006-01-01

    A comprehensive model considering all forms of polarization was developed. The model considers the intricate interdependency among the electrode microstructure, the transport phenomena, and the electrochemical processes. The active three-phase boundary surface was expressed as a function of electrode microstructure parameters (porosity, coordination number, contact angle, etc.). The exchange current densities used in the simulation were obtained by fitting a general formulation to the polarization curves proposed as a function of cell temperature and oxygen partial pressure. A validation study shows good agreement with published experimental data. Distributions of overpotentials, gas component partial pressures, and electronic/ionic current densities have been calculated. The effects of a porous electrode structure and of various operation conditions on cell performance were also predicted. The mechanistic model proposed can be used to interpret experimental observations and optimize cell performance by incorporating reliable experimental data.

  13. Aeolian Sediment Transport Integration in General Stratigraphic Forward Modeling

    Directory of Open Access Journals (Sweden)

    T. Salles

    2011-01-01

    A large number of numerical models have been developed to simulate the physical processes involved in saltation, and, recently, to investigate the interaction between soil vegetation cover and aeolian transport. These models are generally constrained to saltation of monodisperse particles, while natural saltation occurs over mixed soils. We present a three-dimensional numerical model of steady-state saltation that can simulate aeolian erosion, transport and deposition for unvegetated mixed soils. Our model simulates the motion of saltating particles using a cellular automata algorithm. A simple set of rules is used and takes into account an erosion formula, a transport model, a wind exposition function, and an avalanching process. The model is coupled to the stratigraphic forward model Sedsim that accounts for a larger number of geological processes. The numerical model predicts a wide range of typical dune shapes, which have qualitative correspondence to real systems. The model reproduces the internal structure and composition of the resulting aeolian deposits. It shows the complex formation of dune systems with cross-bedding strata development, bounding surfaces overlaid by fine sediment and inverse grading deposits. We aim to use it to simulate the complex interactions between different sediment transport processes and their resulting geological morphologies.
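    The cellular-automata approach can be caricatured in one dimension: grains erode probabilistically, hop a fixed downwind distance, deposit, and avalanche when a cell towers over its neighbour. This toy rule set is far simpler than the model described above; every parameter is an illustrative assumption:

```python
import random

# Toy 1D cellular-automaton sketch of saltation: probabilistic erosion,
# fixed downwind hop, deposition, and avalanching past an angle of repose.
# Rules and parameters are illustrative, not the paper's model.

def step(heights, hop=3, p_erode=0.3, repose=2, rng=random):
    n = len(heights)
    h = list(heights)
    for i in range(n):                      # erosion + downwind transport
        if h[i] > 0 and rng.random() < p_erode:
            h[i] -= 1
            h[(i + hop) % n] += 1           # periodic boundary
    for i in range(n):                      # avalanche toward lower neighbour
        j = (i + 1) % n
        if h[i] - h[j] > repose:
            h[i] -= 1
            h[j] += 1
    return h
```

    Note that both rules conserve total sediment mass, which is a useful invariant to check when iterating such automata.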

  14. Global Solar Dynamo Models: Simulations and Predictions

    Indian Academy of Sciences (India)

    Mausumi Dikpati; Peter A. Gilman

    2008-03-01

    Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun’s memory about its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.

  15. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    Science.gov (United States)

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the...
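    The mechanics of model averaging are compact: each model gets a weight from an information criterion (exp(−Δ/2) of its score relative to the best model, normalised), and the averaged prediction is the weighted sum. A sketch under the assumption of BIC/KIC-like criterion values (smaller = better); the numbers in the test are illustrative:

```python
import math

# Sketch of maximum likelihood Bayesian model averaging (MLBMA): weights
# from information-criterion values (smaller = better), then a weighted
# average of per-model predictions. Criterion values are illustrative.

def mlbma_weights(criteria):
    dmin = min(criteria)
    raw = [math.exp(-0.5 * (c - dmin)) for c in criteria]
    total = sum(raw)
    return [r / total for r in raw]

def mlbma_predict(criteria, predictions):
    return sum(w * p for w, p in zip(mlbma_weights(criteria), predictions))
```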

  16. Model Predictive Control of Sewer Networks

    Science.gov (United States)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world’s population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is taken by analysing a simplified design model based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.
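    The receding-horizon logic behind MPC can be sketched for a single storage basin: at each step, search candidate outflow sequences over the horizon, score predicted levels against a setpoint subject to a capacity constraint, and apply only the first move. Dynamics, costs, and the brute-force candidate grid are illustrative assumptions, not the Barcelona benchmark formulation:

```python
import itertools

# Sketch of receding-horizon MPC for one storage basin. The mass-balance
# dynamics, quadratic cost, and candidate grid are illustrative assumptions.

def mpc_step(level, inflows, setpoint=2.0, capacity=10.0,
             candidates=(0.0, 1.0, 2.0, 3.0)):
    best_cost, best_u0 = float("inf"), 0.0
    for seq in itertools.product(candidates, repeat=len(inflows)):
        x, cost, feasible = level, 0.0, True
        for u, q_in in zip(seq, inflows):
            x = x + q_in - u                    # mass balance
            if x < 0.0 or x > capacity:         # hard constraints
                feasible = False
                break
            cost += (x - setpoint) ** 2         # track the setpoint
        if feasible and cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0                              # apply first move only
```

    Real formulations solve a quadratic program instead of enumerating candidates, but the structure (predict, optimise, apply first move, repeat) is the same.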

  17. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS to within the few-percent amplitude limitations set by several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration.
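    The reason multi-fold coude paths create polarization cross-talk is that the system Mueller matrix is the ordered product of each element's matrix, so frame rotations between reflections mix the Stokes Q/U/V components. A sketch using only textbook rotation Mueller matrices (idealised; real modeling also needs the coating-dependent mirror matrices, which are not reproduced here):

```python
import math

# Sketch: compose a system Mueller matrix as an ordered product. Only the
# textbook 4x4 rotation Mueller matrix is used here; real coude modeling
# multiplies in coating-dependent mirror matrices as well.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rotation_mueller(theta):
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[1, 0, 0, 0],
            [0, c, s, 0],
            [0, -s, c, 0],
            [0, 0, 0, 1]]

def system_mueller(angles):
    m = [[float(i == j) for j in range(4)] for i in range(4)]
    for th in angles:
        m = matmul(rotation_mueller(th), m)   # apply each element in order
    return m
```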

  18. Modelling Chemical Reasoning to Predict Reactions

    OpenAIRE

    Segler, Marwin H. S.; Waller, Mark P.

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...

  19. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  20. Raman Model Predicting Hardness of Covalent Crystals

    OpenAIRE

    Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian

    2009-01-01

    Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...

  1. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the abstracts of the presentations given at the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts

  2. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  4. Prediction modelling for population conviction data

    NARCIS (Netherlands)

    Tollenaar, N.

    2017-01-01

    In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk assessment scale based on these data is discussed. When false positives are weighted as severely as false negatives, 70% of cases can be classified correctly.

  5. A Predictive Model for MSSW Student Success

    Science.gov (United States)

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  6. Predictability of extreme values in geophysical models

    NARCIS (Netherlands)

    Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.

    2012-01-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical model

  7. A revised prediction model for natural conception

    NARCIS (Netherlands)

    Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,

    2017-01-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis

  8. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...

  10. Leptogenesis in minimal predictive seesaw models

    CERN Document Server

    Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F

    2015-01-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  11. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R-package sensR (http://www.cran.r-project.org/package=sensR/). This includes the basic power and sample size calculations for these four discrimination tests.
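
    The sensR package cited in this record is an R package; as a rough Python illustration of the same Thurstonian idea (not the package's API), the 2-AFC decision rule links the proportion of correct responses pc to the sensory difference through the probit function, pc = Φ(d'/√2), which can be inverted directly:

```python
from scipy.stats import norm

def dprime_2afc(pc):
    """Thurstonian d' for the 2-AFC protocol: the decision rule gives
    pc = Phi(d'/sqrt(2)), inverted here with the probit function."""
    if not 0.5 < pc < 1.0:
        raise ValueError("proportion correct must lie in (0.5, 1)")
    return 2 ** 0.5 * norm.ppf(pc)

# Example: 75% correct answers observed in a 2-AFC test
d = dprime_2afc(0.75)
```

Each protocol (triangle, duo-trio, 3-AFC) has its own decision rule and hence its own link between pc and the sensory difference; only the simplest case is sketched here.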

  12. An evaporation duct prediction model coupled with the MM5

    Institute of Scientific and Technical Information of China (English)

    JIAO Lin; ZHANG Yonggang

    2015-01-01

    Evaporation duct is an abnormal refractive phenomenon in the marine atmospheric boundary layer. It is generally accepted that the evaporation duct prominently affects the performance of electronic equipment over the sea because of its wide distribution and frequent occurrence, and it has become a research focus of navies all over the world. At present, the diagnostic models of the evaporation duct are all based on the Monin-Obukhov similarity theory, differing only in the flux and characteristic-scale calculations in the surface layer. These models are applicable to stationary and uniform open sea areas and do not consider alongshore effects. This paper introduces the nonlinear factor av and the gust wind term wg into the Babin model, and thus extends the evaporation duct diagnostic model to offshore areas under extremely low wind speed. In addition, an evaporation duct prediction model is designed and coupled with the fifth-generation mesoscale model (MM5). Tower observations and radar data at the Pingtan island of Fujian Province on May 25–26, 2002 were used to validate the forecast results. The outputs of the prediction model agree with the observations from 0 to 48 h. The relative error of the predicted evaporation duct height is 19.3% and the prediction results are consistent with the radar detection.

  13. Implementation of a PETN failure model using ARIA's general chemistry framework

    Energy Technology Data Exchange (ETDEWEB)

    Hobbs, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-01

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.

  14. Specialized Language Models using Dialogue Predictions

    CERN Document Server

    Popovici, C; Popovici, Cosmin; Baggia, Paolo

    1996-01-01

    This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods of increasing performance, such as automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...

  15. A generalized methodology to characterize composite materials for pyrolysis models

    Science.gov (United States)

    McKinnon, Mark B.

    The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to

  16. General analysis of dark radiation in sequestered string models

    Energy Technology Data Exchange (ETDEWEB)

    Cicoli, Michele [ICTP,Strada Costiera 11, Trieste 34014 (Italy); Dipartimento di Fisica e Astronomia, Università di Bologna,via Irnerio 46, 40126 Bologna (Italy); INFN, Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy); Muia, Francesco [Dipartimento di Fisica e Astronomia, Università di Bologna,via Irnerio 46, 40126 Bologna (Italy); INFN, Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy)

    2015-12-22

    We perform a general analysis of axionic dark radiation produced from the decay of the lightest modulus in the sequestered LARGE Volume Scenario. We discuss several cases depending on the form of the Kähler metric for visible sector matter fields and the mechanism responsible for achieving a de Sitter vacuum. The leading decay channels which determine dark radiation predictions are to hidden sector axions, visible sector Higgses and SUSY scalars depending on their mass. We show that in most of the parameter space of split SUSY-like models squarks and sleptons are heavier than the lightest modulus. Hence dark radiation predictions previously obtained for MSSM-like cases hold more generally also for split SUSY-like cases since the decay channel to SUSY scalars is kinematically forbidden. However the inclusion of string loop corrections to the Kähler potential gives rise to a parameter space region where the decay channel to SUSY scalars opens up, leading to a significant reduction of dark radiation production. In this case, the simplest model with a shift-symmetric Higgs sector can suppress the excess of dark radiation ΔN_eff to values as small as 0.14, in perfect agreement with current experimental bounds. Depending on the exact mass of the SUSY scalars all values in the range 0.14 ≲ ΔN_eff ≲ 1.6 are allowed. Interestingly dark radiation overproduction can be avoided also in the absence of a Giudice-Masiero coupling.

  17. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, PreViser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p = 0.000). The Cariogram is the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavourable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. PreViser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p = 0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.
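
    The model-agreement analysis described here can be reproduced in outline with standard nonparametric tests; the sketch below uses scipy with made-up ordinal risk ratings (1 = low, 2 = medium, 3 = high), not the study's data:

```python
from scipy.stats import friedmanchisquare

# Hypothetical risk classifications for ten patients from three
# assessment models; illustrative data only, not the study's.
cariogram = [2, 2, 1, 2, 2, 3, 2, 1, 2, 2]
previser  = [3, 2, 2, 3, 3, 3, 3, 2, 3, 3]
cat       = [3, 2, 2, 3, 3, 3, 3, 3, 3, 3]

# Friedman test: do the three models classify the same patients differently?
stat, p_value = friedmanchisquare(cariogram, previser, cat)
```

A significant Friedman result would then justify pairwise follow-up comparisons, as the study does with the Wilcoxon signed-rank test.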

  18. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  19. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...
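
    As a generic numpy sketch of the idea (not the authors' controller), a regularized l2 FIR predictive control problem can be posed as least squares in the future inputs, with a move-suppression penalty standing in for explicit input-rate constraints; the FIR coefficients and weights below are illustrative assumptions:

```python
import numpy as np

# Hypothetical FIR coefficients of the plant (unit steady-state gain);
# these numbers are illustrative, not taken from the paper.
h = np.array([0.1, 0.4, 0.3, 0.15, 0.05])
N = 15                    # prediction horizon
r = np.ones(N)            # setpoint trajectory
lam = 0.1                 # move-suppression (regularization) weight

# Prediction matrix: y = G @ u for zero initial conditions
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        if i - j < len(h):
            G[i, j] = h[i - j]

# First-difference operator, so lam * ||D @ u||^2 penalizes input moves
D = np.eye(N) - np.eye(N, k=-1)

# Regularized least squares: min_u ||G u - r||^2 + lam ||D u||^2
u = np.linalg.solve(G.T @ G + lam * D.T @ D, G.T @ r)
y = G @ u
```

In a receding-horizon implementation only the first element of u would be applied before re-solving; adding the input and input-rate constraints of the paper turns this least-squares problem into a quadratic program.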

  20. A general mixture model for sediment laden flows

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping; Bombardelli, Fabián

    2017-09-01

    A mixture model for general description of sediment-laden flows is developed based on an Eulerian-Eulerian two-phase flow theory, with the aim of gaining computational speed in the prediction while preserving the accuracy of the complete two-fluid model. The basic equations of the model include the mass and momentum conservation equations for the sediment-water mixture, and the mass conservation equation for sediment. However, a newly-obtained expression for the slip velocity between phases allows for the computation of the sediment motion, without the need of solving the momentum equation for sediment. The turbulent motion is represented for both the fluid and the particulate phases. A modified k-ε model is used to describe the fluid turbulence while an algebraic model is adopted for turbulent motion of particles. A two-dimensional finite difference method based on the SMAC scheme was used to numerically solve the mathematical model. The model is validated through simulations of fluid and suspended sediment motion in steady open-channel flows, both in equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulent kinetic energy of the mixture are all shown to be in good agreement with available experimental data, and importantly, this is done at a fraction of the computational efforts required by the complete two-fluid model.

  1. Generalized total least squares prediction algorithm for universal 3D similarity transformation

    Science.gov (United States)

    Wang, Bin; Li, Jiancheng; Liu, Chao; Yu, Jie

    2017-02-01

    Three-dimensional (3D) similarity datum transformation is extensively applied to transform coordinates from a GNSS-based datum to a local coordinate system. Recently, some total least squares (TLS) algorithms have been successfully developed to solve the universal 3D similarity transformation problem (possibly with large rotation angles and an arbitrary scale ratio). However, their procedures of parameter estimation and new point (non-common point) transformation were implemented separately, and the statistical correlation which often exists between the common and new points in the original coordinate system was not considered. In this contribution, a generalized total least squares prediction (GTLSP) algorithm, which implements the parameter estimation and new point transformation jointly, is proposed. All of the random errors in the original and target coordinates, and their variance-covariance information, are considered. The 3D transformation model in this case is abstracted as a kind of generalized errors-in-variables (EIV) model and the equation for new point transformation is incorporated into the functional model as well. Then the iterative solution is derived based on the Gauss-Newton approach of nonlinear least squares. The performance of the GTLSP algorithm is verified in terms of a simulated experiment, and the results show that the GTLSP algorithm can improve the statistical accuracy of the transformed coordinates compared with the existing TLS algorithms for 3D similarity transformation.
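
    For orientation, the ordinary least-squares baseline that the GTLSP generalizes can be sketched with Umeyama's SVD method; this is explicitly not the GTLSP algorithm (it ignores errors in the source coordinates and all covariance information), just the standard similarity-transform estimate:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate s, R, t with dst_i ~ s * R @ src_i + t by ordinary least
    squares (Umeyama's SVD method). A baseline illustration only: the
    GTLSP algorithm additionally models random errors in both coordinate
    sets, their covariances, and the new-point transformation."""
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / n)        # cross-covariance SVD
    flip = np.array([1.0, 1.0,
                     np.sign(np.linalg.det(U) * np.linalg.det(Vt))])
    R = (U * flip) @ Vt                          # proper rotation, no reflection
    s = (S * flip).sum() / ((A ** 2).sum() / n)  # least-squares scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Given noise-free common points, this recovers the scale, rotation and translation exactly; the GTLSP's value shows up when both coordinate sets are noisy and correlated.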

  2. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  3. Climatology of the HOPE-G global ocean general circulation model - Sea ice general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Legutke, S. [Deutsches Klimarechenzentrum (DKRZ), Hamburg (Germany); Maier-Reimer, E. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany)

    1999-12-01

    The HOPE-G global ocean general circulation model (OGCM) climatology, obtained in a long-term forced integration, is described. HOPE-G is a primitive-equation z-level ocean model which contains a dynamic-thermodynamic sea-ice model. It is formulated on a 2.8° grid with increased resolution in low latitudes in order to better resolve equatorial dynamics. The vertical resolution is 20 layers. The purpose of the integration was both to investigate the model's ability to reproduce the observed general circulation of the world ocean and to obtain an initial state for coupled atmosphere - ocean - sea-ice climate simulations. The model was driven with daily mean data of a 15-year integration of the atmosphere general circulation model ECHAM4, the atmospheric component in later coupled runs. Thereby, a maximum of the flux variability that is expected to appear in coupled simulations is included already in the ocean spin-up experiment described here. The model was run for more than 2000 years until a quasi-steady state was achieved. It reproduces the major current systems and the main features of the so-called conveyor belt circulation. The observed distribution of water masses is reproduced reasonably well, although with a saline bias in the intermediate water masses and a warm bias in the deep and bottom water of the Atlantic and Indian Oceans. The model underestimates the meridional transport of heat in the Atlantic Ocean. The simulated heat transport in the other basins, though, is in good agreement with observations. (orig.)

  5. Gas explosion prediction using CFD models

    Energy Technology Data Exchange (ETDEWEB)

    Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)

    2006-07-15

    A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)

  6. Genetic models of homosexuality: generating testable predictions.

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.
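
    The overdominance model contrasted here can be made concrete with the standard one-locus selection recursion; the fitness values below are arbitrary illustrations (not from the paper), chosen so the heterozygote is fittest and a stable polymorphism is maintained:

```python
def next_freq(p, w_AA, w_Aa, w_aa):
    """One generation of viability selection at a single diploid locus;
    returns the frequency of allele A after selection."""
    q = 1.0 - p
    w_bar = p * p * w_AA + 2.0 * p * q * w_Aa + q * q * w_aa
    return (p * p * w_AA + p * q * w_Aa) / w_bar

# Overdominance: heterozygote advantage with illustrative fitnesses
p = 0.01                          # allele A starts nearly absent
for _ in range(500):
    p = next_freq(p, 0.9, 1.0, 0.8)
# Stable interior equilibrium: p* = (w_Aa - w_aa)/(2*w_Aa - w_AA - w_aa) = 2/3
```

Under sexually antagonistic selection the analogous recursion tracks sex-specific fitnesses, which is what generates the contrasting predictions (e.g., about chromosomal location) discussed in the abstract.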

  7. A Study On Distributed Model Predictive Consensus

    CERN Document Server

    Keviczky, Tamas

    2008-01-01

    We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.

  8. Modeling and predicting page-view dynamics on Wikipedia

    CERN Document Server

    Thij, Marijn ten; Laniado, David; Kaltenbrunner, Andreas

    2012-01-01

    The simplicity of producing and consuming online content makes it difficult to estimate how much attention Internet users will devote to any given content. This work presents a general overview of temporal patterns in the access to content on a huge collaborative platform. We propose a model for predicting the popularity of promoted content, inspired by the analysis of the page-view dynamics on Wikipedia. Compared to previous studies, the observed popularity patterns are more complex; however, our model uses just a few parameters to describe them fully. The model is validated through empirical measurements.

  9. Adaptive modelling of structured molecular representations for toxicity prediction

    Science.gov (United States)

    Bertinetto, Carlo; Duce, Celia; Micheli, Alessio; Solaro, Roberto; Tiné, Maria Rosaria

    2012-12-01

    We investigated the possibility of modelling structure-toxicity relationships by direct treatment of the molecular structure (without using descriptors) through an adaptive model able to retain the appropriate structural information. With respect to traditional descriptor-based approaches, this provides a more general and flexible way to tackle prediction problems that is particularly suitable when little or no background knowledge is available. Our method employs a tree-structured molecular representation, which is processed by a recursive neural network (RNN). To explore the realization of RNN modelling in toxicological problems, we employed a data set containing growth impairment concentrations (IGC50) for Tetrahymena pyriformis.

  10. Appropriate model selection methods for nonstationary generalized extreme value models

    Science.gov (United States)

    Kim, Hanbeen; Kim, Sooyoung; Shin, Hongjoon; Heo, Jun-Haeng

    2017-04-01

    Several hydrologic data series have been found to be nonstationary in nature, which has prompted many studies in the area of nonstationary frequency analysis. Nonstationary probability distribution models involve parameters that vary over time; therefore, applying conventional goodness-of-fit tests to the selection of an appropriate nonstationary probability distribution model is not straightforward. Tests generally recommended for such a selection include the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), the Bayesian information criterion (BIC), and the likelihood ratio test (LRT). In this study, Monte Carlo simulations were performed to compare the performances of these four tests with regard to nonstationary as well as stationary generalized extreme value (GEV) distributions. Proper model selection ratios and sample sizes were taken into account to evaluate the performances of all four tests. The BIC demonstrated the best performance with regard to stationary GEV models. In the case of nonstationary GEV models, the AIC proved to be better than the other three methods when relatively small sample sizes were considered. With larger sample sizes, the AIC, BIC, and LRT presented the best performances for GEV models with nonstationary location and/or scale parameters. The simulation results were then evaluated by applying all four tests to annual maximum rainfall data of selected sites, as observed by the Korea Meteorological Administration.
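The four criteria compared in the study can be illustrated on a toy stationary-versus-nonstationary comparison. The sketch below assumes Gaussian errors and a linear trend as the "nonstationary" alternative (the study itself applies these criteria to GEV distributions, which need a numerical likelihood maximizer); the data and parameter values are illustrative:

```python
import math, random

random.seed(1)
n = 200
y = [0.05 * i + random.gauss(0.0, 1.0) for i in range(n)]  # trending series

def gauss_loglik(residuals):
    m = len(residuals)
    s2 = sum(r * r for r in residuals) / m          # MLE of the error variance
    return -0.5 * m * (math.log(2 * math.pi * s2) + 1)

# Stationary model: constant mean (k = 2 parameters: mean, variance).
mu = sum(y) / n
ll0 = gauss_loglik([yi - mu for yi in y])

# Nonstationary model: linear trend via OLS (k = 3: intercept, slope, variance).
xbar = (n - 1) / 2
sxx = sum((i - xbar) ** 2 for i in range(n))
slope = sum((i - xbar) * (y[i] - mu) for i in range(n)) / sxx
intercept = mu - slope * xbar
ll1 = gauss_loglik([y[i] - (intercept + slope * i) for i in range(n)])

def aic(ll, k):  return 2 * k - 2 * ll
def aicc(ll, k): return aic(ll, k) + 2 * k * (k + 1) / (n - k - 1)
def bic(ll, k):  return k * math.log(n) - 2 * ll

lrt = 2 * (ll1 - ll0)   # compare against a chi-squared(1) critical value, e.g. 3.84
```

With a clear trend in the data, all four criteria favor the nonstationary model; the study's point is that their relative reliability changes with sample size and with which GEV parameters are allowed to vary.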

  11. Mathematical models for predicting indoor air quality from smoking activity.

    Science.gov (United States)

    Ott, W R

    1999-05-01

    Much progress has been made over four decades in developing, testing, and evaluating the performance of mathematical models for predicting pollutant concentrations from smoking in indoor settings. Although largely overlooked by the regulatory community, these models provide regulators and risk assessors with practical tools for quantitatively estimating the exposure level that people receive indoors for a given level of smoking activity. This article reviews the development of the mass balance model and its application to predicting indoor pollutant concentrations from cigarette smoke and derives the time-averaged version of the model from the basic laws of conservation of mass. A simple table is provided of computed respirable particulate concentrations for any indoor location for which the active smoking count, volume, and concentration decay rate (deposition rate combined with air exchange rate) are known. Using the indoor ventilatory air exchange rate causes slightly higher indoor concentrations and therefore errs on the side of protecting health, since it excludes particle deposition effects, whereas using the observed particle decay rate gives a more accurate prediction of indoor concentrations. This table permits easy comparisons of indoor concentrations with air quality guidelines and indoor standards for different combinations of active smoking counts and air exchange rates. The published literature on mathematical models of environmental tobacco smoke also is reviewed and indicates that these models generally give good agreement between predicted concentrations and actual indoor measurements.
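At steady state, the well-mixed mass balance model described above reduces to a ratio of source strength to removal capacity. A minimal sketch with illustrative parameter values (not taken from the article):

```python
def steady_state_rsp(n_smokers, emission_mg_per_h, volume_m3, decay_per_h):
    """Steady-state respirable-particle concentration (ug/m3) from a
    well-mixed mass balance: C = source strength / (volume * removal rate).
    decay_per_h combines air exchange and particle deposition (1/h)."""
    source_mg_per_h = n_smokers * emission_mg_per_h
    return 1000.0 * source_mg_per_h / (volume_m3 * decay_per_h)

# Illustrative numbers only: 2 active smokers emitting 10 mg/h each,
# in a 50 m3 room with a combined decay rate of 2 per hour.
c = steady_state_rsp(2, 10.0, 50.0, 2.0)   # -> 200.0 ug/m3
```

As the article notes, using only the air exchange rate for the decay term (omitting deposition) lowers the denominator and thus over-predicts the concentration, which errs on the side of protecting health.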

  12. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-03-01

    Full Text Available A new algorithm for model predictive control is presented. The algorithm uses a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with one that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, decreasing the computation time for the control moves. Simulation results are presented and show satisfactory performance of the algorithm.

  13. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    Directory of Open Access Journals (Sweden)

    Wu Chi-Yeh

    2010-01-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as the support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieve. On the other hand, logic based classifiers such as the decision tree, whose constructed model is interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to that delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G2DE.

  14. Formalization of the model of the enterprise insolvency risk prediction

    Directory of Open Access Journals (Sweden)

    Elena V. Shirinkina

    2015-12-01

    Full Text Available Objective: to improve the conceptual apparatus and analytical procedures of insolvency risk identification. Methods: general scientific methods of systemic and comparative analysis; economic-statistical and dynamic analysis of economic processes and phenomena. Results: nowadays, managing the insolvency risk is relevant for any company, regardless of the economic sector. Instability manifests itself through the uncertainty of the directions of external environment changes and their high frequency. Analysis of the economic literature showed that currently there is no single approach to the systematization of methods for insolvency risk prediction, which means that there is no objective view on the tools that can be used to monitor the insolvency risk. In this respect, the scientific and practical search for representative indicators for the formalization of models predicting insolvency is very important. Therefore, the study has solved the following tasks: defined the nature of the insolvency risk and its identification in the process of financial relations in the management system; proved the representativeness of the indicators in insolvency risk prediction; and formed the model of insolvency risk prediction. Scientific novelty: grounding the model of insolvency risk prediction. Practical significance: development of a theoretical framework to address issues arising in the diagnosis of insolvent enterprises, and application of the results obtained in the practice of bankruptcy institution bodies. The presented model allows predicting the insolvency risk of the enterprise through the general development trend and the fluctuation boundaries of bankruptcy risk, determining the significance of each indicator (factor) and its quantitative impact, and therefore avoiding the risk of enterprise insolvency.

  15. On Modeling and Constrained Model Predictive Control of Open Irrigation Canals

    Directory of Open Access Journals (Sweden)

    Lihui Cen

    2017-01-01

    Full Text Available This paper proposes a model predictive control of open irrigation canals with constraints. The Saint-Venant equations are widely used in hydraulics to model an open canal. As a set of hyperbolic partial differential equations, they cannot be solved explicitly, which makes it difficult to design optimal control algorithms. In this work, a prediction model of an open canal is developed by discretizing the Saint-Venant equations in both space and time. Based on the prediction model, a constrained model predictive control was first investigated for the case of a single-pool canal and then generalized to the case of a cascaded canal with multiple pools. The hydraulic software SICC was used to simulate the canal and test the algorithms with application to a real-world irrigation canal of the Yehe irrigation area located in Hebei province.

  16. Fermion masses and mixing in general warped extra dimensional models

    Science.gov (United States)

    Frank, Mariana; Hamzaoui, Cherif; Pourtolami, Nima; Toharia, Manuel

    2015-06-01

    We analyze fermion masses and mixing in a general warped extra dimensional model, where all the Standard Model (SM) fields, including the Higgs, are allowed to propagate in the bulk. In this context, a slightly broken flavor symmetry imposed universally on all fermion fields, without distinction, can generate the full flavor structure of the SM, including quarks, charged leptons and neutrinos. For quarks and charged leptons, the exponential sensitivity of their wave functions to small flavor breaking effects yields hierarchical masses and mixing, as is usual in warped models with fermions in the bulk. In the neutrino sector, the exponential wave-function factors can be flavor blind and thus insensitive to the small flavor symmetry breaking effects, directly linking their masses and mixing angles to the flavor symmetric structure of the five-dimensional neutrino Yukawa couplings. The Higgs must be localized in the bulk, and the model is more successful in generalized warped scenarios where the metric background solution differs from five-dimensional anti-de Sitter (AdS5). We study these features in two simple frameworks, flavor complementarity and flavor democracy, which provide specific predictions and correlations between quarks and leptons, testable as more precise data in the neutrino sector become available.

  17. Generalized Semi-Analytical Models of Supernova Light Curves

    CERN Document Server

    Chatzopoulos, Emmanouil; Vinko, Jozsef

    2011-01-01

    We present generalized supernova (SN) light curve (LC) models for a variety of power inputs. We provide an expression for the power input that is produced by self-similar forward and reverse shocks in SN ejecta - circumstellar matter (CSM) interaction. We find that this ejecta-CSM interaction luminosity is in agreement with results from multi-dimensional radiation hydrodynamics simulations in the optically-thin case. We develop a model for the case of an optically-thick CSM by invoking an approximation for the effects of radiative diffusion. In the context of this model, we provide predictions for the time of forward shock break-out from the optically-thick part of the CSM envelope. We also introduce a hybrid LC model that incorporates ejecta-CSM interaction plus Ni-56 and Co-56 radioactive decay input. We fit this hybrid model to the LC of the Super-Luminous Supernova (SLSN) 2006gy. We find that this model provides a better fit to the LC of this event than previously presented models. We also address the rel...

  18. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

    Full Text Available Management by metrics is what IT service providers are expected to practice to stay differentiated. Given a project and its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are reactive and come too late in the life cycle; root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we proactively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  19. Development of Neural-Based Generalized Predictive Control System of Strip Shape for a Reversal 6-High Mill

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Shape control in strip rolling is intractable because of the multivariable, non-linear, time-varying and coupled nature of the process. In this paper, a generalized predictive control algorithm based on a BP neural network model is introduced for on-line control of the strip flatness of a reversal UC mill. Comparative experiments were performed against a conventional feedback control based on a linear regression model. The results clearly demonstrate the advantage of the proposed scheme.

  20. Using General Outcome Measures to Predict Student Performance on State-Mandated Assessments: An Applied Approach for Establishing Predictive Cutscores

    Science.gov (United States)

    Leblanc, Michael; Dufore, Emily; McDougal, James

    2012-01-01

    Cutscores for reading and math (general outcome measures) to predict passage of New York state-mandated assessments were created using a freely available Excel workbook. The authors used linear regression to create the cutscores, and diagnostic indicators were provided. A rationale and procedure for using this method are outlined. This method…

  1. A Chemical Containment Model for the General Purpose Work Station

    Science.gov (United States)

    Flippen, Alexis A.; Schmidt, Gregory K.

    1994-01-01

    Contamination control is a critical safety requirement imposed on experiments flying on board the Spacelab. The General Purpose Work Station, a Spacelab support facility used for life sciences space flight experiments, is designed to remove volatile compounds from its internal airpath and thereby minimize contamination of the Spacelab. This is accomplished through the use of a large, multi-stage filter known as the Trace Contaminant Control System. Many experiments planned for the Spacelab require the use of toxic, volatile fixatives in order to preserve specimens prior to postflight analysis. The NASA-Ames Research Center SLS-2 payload, in particular, necessitated the use of several toxic, volatile compounds in order to accomplish the many inflight experiment objectives of this mission. A model was developed based on earlier theories and calculations which provides conservative predictions of the resultant concentrations of these compounds given various spill scenarios. This paper describes the development and application of this model.

  2. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers of the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control actions in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  3. General Theory versus ENA Theory: Comparing Their Predictive Accuracy and Scope.

    Science.gov (United States)

    Ellis, Lee; Hoskin, Anthony; Hartley, Richard; Walsh, Anthony; Widmayer, Alan; Ratnasingam, Malini

    2015-12-01

    General theory attributes criminal behavior primarily to low self-control, whereas evolutionary neuroandrogenic (ENA) theory envisions criminality as being a crude form of status-striving promoted by high brain exposure to androgens. General theory predicts that self-control will be negatively correlated with risk-taking, while ENA theory implies that these two variables should actually be positively correlated. According to ENA theory, traits such as pain tolerance and muscularity will be positively associated with risk-taking and criminality while general theory makes no predictions concerning these relationships. Data from Malaysia and the United States are used to test 10 hypotheses derived from one or both of these theories. As predicted by both theories, risk-taking was positively correlated with criminality in both countries. However, contrary to general theory and consistent with ENA theory, the correlation between self-control and risk-taking was positive in both countries. General theory's prediction of an inverse correlation between low self-control and criminality was largely supported by the U.S. data but only weakly supported by the Malaysian data. ENA theory's predictions of positive correlations between pain tolerance, muscularity, and offending were largely confirmed. For the 10 hypotheses tested, ENA theory surpassed general theory in predictive scope and accuracy.

  4. Computer program to predict noise of general aviation aircraft: User's guide

    Science.gov (United States)

    Mitchell, J. A.; Barton, C. K.; Kisner, L. S.; Lyon, C. A.

    1982-01-01

    Program NOISE predicts General Aviation Aircraft far-field noise levels at FAA FAR Part 36 certification conditions. It will also predict near-field and cabin noise levels for turboprop aircraft and static engine component far-field noise levels.

  5. Generalized Predictive Control of Dynamic Systems with Rigid-Body Modes

    Science.gov (United States)

    Kvaternik, Raymond G.

    2013-01-01

    Numerical simulations to assess the effectiveness of Generalized Predictive Control (GPC) for active control of dynamic systems having rigid-body modes are presented. GPC is a linear, time-invariant, multi-input/multi-output predictive control method that uses an ARX model to characterize the system and to design the controller. Although the method can accommodate both embedded (implicit) and explicit feedforward paths for incorporation of disturbance effects, only the case of embedded feedforward in which the disturbances are assumed to be unknown is considered here. Results from numerical simulations using mathematical models of both a free-free three-degree-of-freedom mass-spring-dashpot system and the XV-15 tiltrotor research aircraft are presented. In regulation mode operation, which calls for zero system response in the presence of disturbances, the simulations showed reductions of nearly 100%. In tracking mode operations, where the system is commanded to follow a specified path, the GPC controllers produced the desired responses, even in the presence of disturbances.
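GPC builds its predictor from an identified ARX model. The identification step can be sketched for the simplest case, a first-order single-input single-output ARX structure y_k = a·y_{k−1} + b·u_{k−1} fit by least squares; the controller design itself, and the multi-input/multi-output structure used in the paper, are omitted, and all values below are illustrative:

```python
import random

random.seed(0)
a_true, b_true = 0.8, 0.5

# Simulate a noise-free first-order ARX system driven by a random input.
u = [random.uniform(-1, 1) for _ in range(200)]
y = [0.0]
for k in range(1, 200):
    y.append(a_true * y[k - 1] + b_true * u[k - 1])

# Least-squares fit of y_k = a*y_{k-1} + b*u_{k-1} via the 2x2 normal equations.
syy  = sum(y[k - 1] ** 2 for k in range(1, 200))
suu  = sum(u[k - 1] ** 2 for k in range(1, 200))
syu  = sum(y[k - 1] * u[k - 1] for k in range(1, 200))
sy_t = sum(y[k] * y[k - 1] for k in range(1, 200))
su_t = sum(y[k] * u[k - 1] for k in range(1, 200))
det = syy * suu - syu * syu
a_hat = (sy_t * suu - su_t * syu) / det   # Cramer's rule
b_hat = (su_t * syy - sy_t * syu) / det
```

With noise-free data the fit recovers the true coefficients exactly; the identified model is then iterated forward over the prediction horizon to compute the control moves.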

  6. Pressure prediction model for compression garment design.

    Science.gov (United States)

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

    Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burning scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garments' manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose more design procedures in clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
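The prediction rests on Laplace's law, P = T/r: wall tension divided by the radius of curvature of the limb. A minimal sketch under simplifying assumptions (linear elasticity, circular limb cross-section, tension per unit width T = E·ε·A/w); the article's model additionally uses the measured non-linear force-extension data, and all numbers below are illustrative:

```python
import math

def garment_pressure_pa(youngs_modulus_pa, strain, fabric_cross_section_m2,
                        fabric_width_m, body_circumference_m):
    """Laplace's law estimate: P = T / r, with tension per unit width
    T = E * strain * (fabric cross-sectional area / sample width)."""
    radius = body_circumference_m / (2 * math.pi)   # circular cross-section assumed
    tension_n_per_m = (youngs_modulus_pa * strain
                       * fabric_cross_section_m2 / fabric_width_m)
    return tension_n_per_m / radius

# Illustrative values only (not from the article):
p = garment_pressure_pa(2.0e5, 0.2, 5e-6, 0.05, 0.30)
p_mmhg = p / 133.322
```

Because the model is linear in the applied strain, doubling the reduction-factor-derived strain doubles the predicted pressure; the anisotropy reported in the article enters through the bias-angle dependence of the Young's modulus.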

  7. A Note on the Identifiability of Generalized Linear Mixed Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with a dispersion parameter are identifiable when equipped with the standard parametrization.

  8. Statistical assessment of predictive modeling uncertainty

    Science.gov (United States)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto- and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area extending from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities, taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values with better statistical significance, and may help sharpen the identification of the best-fitting geophysical models.
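The key step, adding the model covariance to the data covariance before forming χ², can be sketched for the simple case of independent (diagonal) uncertainties; the article uses full covariance matrices estimated from empirical covariance functions, and the numbers below are illustrative:

```python
def chi2(residuals, data_var, model_var=None):
    """Chi-squared with combined data + model variances (diagonal case only)."""
    mv = model_var if model_var is not None else [0.0] * len(residuals)
    return sum(r * r / (vd + vm) for r, vd, vm in zip(residuals, data_var, mv))

r  = [0.8, -1.2, 0.5]        # model-minus-data residuals
vd = [0.25, 0.25, 0.25]      # data (e.g. GPS) variances
vm = [0.15, 0.15, 0.15]      # estimated model variances

chi2_data_only = chi2(r, vd)       # model uncertainty ignored
chi2_combined  = chi2(r, vd, vm)   # always smaller, as reported in the study
```

Enlarging the denominator with the model variance can never increase χ², which is why accounting for model uncertainty yields lower, more statistically defensible test values.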

  9. Seasonal Predictability in a Model Atmosphere.

    Science.gov (United States)

    Lin, Hai

    2001-07-01

    The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  10. 140 Characters to Victory?: Using Twitter to Predict the UK 2015 General Election

    CERN Document Server

    Burnap, Pete; Sloan, Luke; Southern, Rosalynd; Williams, Matthew

    2015-01-01

    The election forecasting 'industry' is a growing one, both in the volume of scholars producing forecasts and in methodological diversity. In recent years a new approach has emerged that relies on social media, and particularly Twitter data, to predict election outcomes. While some studies have shown the method to hold a surprising degree of accuracy, there has been criticism over the lack of consistency and clarity in the methods used, along with inevitable problems of population bias. In this paper we set out a 'baseline' model for using Twitter as an election forecasting tool that we then apply to the UK 2015 General Election. The paper builds on existing literature by extending the use of Twitter as a forecasting tool to the UK context and identifying its limitations, particularly with regard to its application in a multi-party environment with geographic concentration of power for minor parties.

  11. Prediction of chronic critical illness in a general intensive care unit

    Directory of Open Access Journals (Sweden)

    Sérgio H. Loss

    2013-06-01

    Full Text Available OBJECTIVE: To assess the incidence, costs, and mortality associated with chronic critical illness (CCI), and to identify clinical predictors of CCI in a general intensive care unit. METHODS: This was a prospective observational cohort study. All patients receiving supportive treatment for over 20 days were considered chronically critically ill and eligible for the study. After applying the exclusion criteria, 453 patients were analyzed. RESULTS: There was an 11% incidence of CCI. Total length of hospital stay, costs, and mortality were significantly higher among patients with CCI. Mechanical ventilation, sepsis, Glasgow score < 15, inadequate calorie intake, and higher body mass index were independent predictors of CCI in the multivariate logistic regression model. CONCLUSIONS: CCI affects a distinctive population in intensive care units with higher mortality, costs, and prolonged hospitalization. Factors identifiable at the time of admission or during the first week in the intensive care unit can be used to predict CCI.

  12. Basic understanding of computable general equilibrium (CGE) model analysis

    Directory of Open Access Journals (Sweden)

    Mardiyah Hayati

    2013-11-01

    Full Text Available This short paper aims to give a basic understanding of computable general equilibrium (CGE) modelling. It covers the history of CGE, the assumptions of the CGE model, the strengths and weaknesses of the CGE model, and the construction of a simple CGE model for a closed economy. The CGE model is well suited to assessing the impact of implementing a new policy, because it rests on the theory of general equilibrium, which explains the interrelations among markets in the economic system. The CGE model was introduced in the 1960s as the Johansen model. It was later extended into various models such as the ORANI model, the Global Trade Analysis Project (GTAP) model, and the Applied General Equilibrium (AGE) model. In Indonesia, there are the CGE ORANI, Wayang, Indonesia-E3 and IRCGE models. The CGE model is built on the assumption of perfect competition: consumers maximize utility, producers maximize profit, and firms operate under a zero-profit condition.

  13. A kinetic model for predicting biodegradation.

    Science.gov (United States)

    Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O

    2007-01-01

    Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
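The first-order kinetics assumption gives familiar closed forms for the extent of degradation and the half-life. A minimal sketch with an illustrative rate constant (the model in the article layers this kinetics on CATABOL's probabilistic pathway predictions, which are not reproduced here):

```python
import math

def extent_degraded(k_per_day, t_days):
    """Fraction degraded under first-order kinetics: 1 - exp(-k t)."""
    return 1.0 - math.exp(-k_per_day * t_days)

def half_life_days(k_per_day):
    """Half-life of a first-order process: ln(2) / k."""
    return math.log(2) / k_per_day

k = 0.2                            # illustrative rate constant (1/day)
hl = half_life_days(k)             # ~3.47 days
ten_day = extent_degraded(k, 10)   # fraction degraded within a 10-day window
```

Evaluating the extent at a fixed 10-day horizon is the kind of calculation behind the '10-day window' ready-biodegradability criterion mentioned above.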

  14. A general route diversity model for convergent terrestrial microwave links

    Science.gov (United States)

    Paulson, Kevin S.; Usman, Isa S.; Watson, Robert J.

    2006-06-01

    This research examines route diversity as a fade mitigation technique in the presence of rain for convergent, terrestrial, microwave links. A general model is derived which predicts the joint distribution of rain attenuation on arbitrary pairs of convergent microwave links, directly from the link parameters. It is assumed that pairs of links have joint rain attenuation distributions that are bilognormally distributed. Four of the five distribution parameters can be estimated from International Telecommunication Union recommendation models. A maximum likelihood estimation method was used in a previous paper to estimate the fifth parameter, that is, the covariance or correlation. In this paper an empirical model is reported, linking the correlation of log rain fade with the geometry and radio parameters of the pair of links. From these distributions, the advantage due to route diversity may be calculated for arbitrary fade margins. Furthermore, the predicted diversity statistics vary smoothly and yield plausible extrapolations into low-probability scenarios. Diversity improvement is calculated for a set of example link scenarios.
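The diversity advantage follows from the joint (bilognormal) fade distribution: the lower the correlation between the two links' log-fades, the less likely both exceed their margins at once. A minimal stdlib Monte Carlo sketch, assuming standard log-normal fades and an illustrative correlation; the actual model estimates the correlation from the link geometry and takes the other four distribution parameters from ITU-R recommendation models:

```python
import math, random

random.seed(42)

def joint_outage_prob(rho, margin_db, trials=100_000):
    """P(both links exceed the fade margin) for correlated log-normal fades."""
    hits = 0
    for _ in range(trials):
        g1 = random.gauss(0, 1)
        g2 = rho * g1 + math.sqrt(1 - rho * rho) * random.gauss(0, 1)
        fade1, fade2 = math.exp(g1), math.exp(g2)   # log-normal fade depths (dB)
        if fade1 > margin_db and fade2 > margin_db:
            hits += 1
    return hits / trials

p_single = joint_outage_prob(1.0, 6.0)   # fully correlated: no diversity gain
p_joint  = joint_outage_prob(0.5, 6.0)   # partial correlation: fewer joint outages
```

The ratio of these two probabilities is one way to express the diversity advantage for a given fade margin.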

  15. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL’s IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  16. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
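    The receding-horizon principle at the heart of NMPC can be sketched with a deliberately naive controller. Brute-force enumeration over a coarse input grid stands in for the nonlinear optimization routine, and the dynamics, cost, and input grid are all invented toy examples, not taken from the book's accompanying software:

    ```python
    import itertools

    def nmpc_step(x, f, stage_cost, inputs, horizon):
        """One receding-horizon step: search all input sequences over the horizon
        and return the first input of the cheapest sequence."""
        best_u, best_cost = None, float("inf")
        for seq in itertools.product(inputs, repeat=horizon):
            xk, cost = x, 0.0
            for u in seq:
                cost += stage_cost(xk, u)
                xk = f(xk, u)
            if cost < best_cost:
                best_cost, best_u = cost, seq[0]
        return best_u

    # Toy nonlinear sampled-data system and quadratic stage cost (illustrative).
    f = lambda x, u: x + 0.1 * (-x**3 + u)      # Euler-discretized dynamics
    cost = lambda x, u: x * x + 0.1 * u * u
    inputs = [-2.0, -1.0, 0.0, 1.0, 2.0]

    x, traj = 2.0, []
    for _ in range(30):
        u = nmpc_step(x, f, cost, inputs, horizon=4)   # no terminal constraint here
        x = f(x, u)
        traj.append(x)
    ```

    This variant has no stabilizing terminal constraint; the closed loop is still driven toward the origin by the horizon-4 cost, illustrating the unconstrained case the book analyzes.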

  17. Assessment of specific characteristics of abnormal general movements : does it enhance the prediction of cerebral palsy?

    NARCIS (Netherlands)

    Hamer, Elisa G.; Bos, Arend F.; Hadders-Algra, Mijna

    2011-01-01

    AIM Abnormal general movements at around 3 months corrected age indicate a high risk of cerebral palsy (CP). We aimed to determine whether specific movement characteristics can improve the predictive power of definitely abnormal general movements. METHOD Video recordings of 46 infants with definitel

  18. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  19. NUCLEAR AND HEAVY ION PHYSICS: α-decay half-lives of superheavy nuclei and general predictions

    Science.gov (United States)

    Dong, Jian-Min; Zhang, Hong-Fei; Wang, Yan-Zhao; Zuo, Wei; Su, Xin-Ning; Li, Jun-Qing

    2009-08-01

    The generalized liquid drop model (GLDM) and the cluster model have been employed to calculate the α-decay half-lives of superheavy nuclei (SHN) using the experimental α-decay Q values. The results of the cluster model are slightly poorer than those from the GLDM when experimental Q values are used. The predictive power of the two models with theoretical Q values from Audi et al. (QAudi) and Muntian et al. (QM) was tested, showing that the cluster model with QAudi and QM provides reliable results for Z > 112, while the GLDM with QAudi does so for Z <= 112. The half-lives of some still unknown nuclei are predicted by these two models, and the results may be useful for future experimental assignment and identification.

  20. Optimal model-free prediction from multivariate time series.

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced, utilizing a causal preselection step that drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of the El Niño Southern Oscillation.

  1. General Analysis of Dark Radiation in Sequestered String Models

    CERN Document Server

    Cicoli, Michele

    2015-01-01

    We perform a general analysis of axionic dark radiation produced from the decay of the lightest modulus in the sequestered LARGE Volume Scenario. We discuss several cases depending on the form of the Kähler metric for visible sector matter fields and the mechanism responsible for achieving a de Sitter vacuum. The leading decay channels which determine dark radiation predictions are to hidden sector axions, visible sector Higgses, and SUSY scalars, depending on their mass. We show that in most of the parameter space of split SUSY-like models, squarks and sleptons are heavier than the lightest modulus. Hence dark radiation predictions previously obtained for MSSM-like cases hold more generally also for split SUSY-like cases, since the decay channel to SUSY scalars is kinematically forbidden. However, the inclusion of string loop corrections to the Kähler potential gives rise to a parameter space region where the decay channel to SUSY scalars opens up, leading to a significant reduction of dark radiation production. ...

  2. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  3. Probabilistic prediction models for aggregate quarry siting

    Science.gov (United States)

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, the Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be good predictors of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.

  4. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing s...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations and focus is on estimating vertical structural response to single person loading....

  5. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...

  6. Nonlinear continuous-time generalized predictive control of solar power plant

    Directory of Open Access Journals (Sweden)

    Khoukhi Billal

    2015-01-01

    Full Text Available This paper presents an application of nonlinear continuous-time generalized predictive control (GPC) to the distributed collector field of a solar power plant. The major characteristic of a solar power plant is that the primary energy source, solar radiation, cannot be manipulated. Solar radiation varies throughout the day, causing changes in plant dynamics and strong perturbations in the process. A brief description of the solar power plant and its simulator is given. After that, basic concepts of predictive control and continuous-time generalized predictive control are introduced. A new control strategy, named nonlinear continuous-time generalized predictive control (NCGPC), is then derived to control the process. The simulation results show that the NCGPC gives greater flexibility to achieve performance goals and better perturbation rejection than classical control.

  7. Modeling local item dependence with the hierarchical generalized linear model.

    Science.gov (United States)

    Jiao, Hong; Wang, Shudong; Kamata, Akihito

    2005-01-01

    Local item dependence (LID) can emerge when the test items are nested within common stimuli or item groups. This study proposes a three-level hierarchical generalized linear model (HGLM) to model LID when LID is due to such contextual effects. The proposed three-level HGLM was examined by analyzing simulated data sets and was compared with the Rasch-equivalent two-level HGLM that ignores such a nested structure of test items. The results demonstrated that the proposed model could capture LID and estimate its magnitude. Also, the two-level HGLM resulted in larger mean absolute differences between the true and the estimated item difficulties than those from the proposed three-level HGLM. Furthermore, it was demonstrated that the proposed three-level HGLM estimated the ability distribution variance unaffected by the LID magnitude, while the two-level HGLM with no LID consideration increasingly underestimated the ability variance as the LID magnitude increased.

  8. Predictive In Vivo Models for Oncology.

    Science.gov (United States)

    Behrens, Diana; Rolff, Jana; Hoffmann, Jens

    2016-01-01

    Experimental oncology research and preclinical drug development both substantially require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has necessitated a substantial restructuring of the test systems for the different stages of development. To cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models, and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome the current limitations posed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.

  9. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is widely discussed by the scientific community, with results that remain unsatisfactory owing to the difficulty dynamical models have in reproducing the behavior of the Inter-Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of oceanic, atmospheric, and health-related parameters for which sea surface temperature is a defining factor of variability.

  10. Prediction of age-related macular degeneration in the general population: The three continent AMD consortium

    NARCIS (Netherlands)

    G.H.S. Buitendijk (Gabrielle); E. Rochtchina (Elena); C.E. Myers (Chelsea); C.M. van Duijn (Cock); K.E. Lee (Kristine); B.E.K. Klein (Barbara); S.M. Meuer (Stacy); P.T.V.M. de Jong (Paulus); E.G. Holliday (Elizabeth); A.G. Tan (Ava); A.G. Uitterlinden (André); T.A. Sivakumaran (Theru); J. Attia (John); A. Hofman (Albert); P. Mitchell (Paul); J.R. Vingerling (Hans); S.K. Iyengar (Sudha); A.C.J.W. Janssens (Cécile); J.J. Wang (Jie Jin); B.E.K. Klein (Barbara); C.C.W. Klaver (Caroline)

    2013-01-01

    textabstractPurpose Prediction models for age-related macular degeneration (AMD) based on case-control studies have a tendency to overestimate risks. The aim of this study is to develop a prediction model for late AMD based on data from population-based studies. Design Three population-based studies

  11. Meteorological Drought Prediction Using a Multi-Model Ensemble Approach

    Science.gov (United States)

    Chen, L.; Mo, K. C.; Zhang, Q.; Huang, J.

    2013-12-01

    In the United States, drought is among the costliest natural hazards, with an annual average of 6 billion dollars in damage. Drought prediction on monthly to seasonal time scales is of critical importance to disaster mitigation, agricultural planning, and multi-purpose reservoir management. Since December 2012, the NOAA Climate Prediction Center (CPC) has been providing operational Standardized Precipitation Index (SPI) Outlooks using the National Multi-Model Ensemble (NMME) forecasts, to support CPC's monthly drought outlooks and briefing activities. The current NMME system consists of six model forecasts from U.S. and Canadian modeling centers: the CFSv2, CM2.1, GEOS-5, CCSM3.0, CanCM3, and CanCM4 models. In this study, we conduct an assessment of meteorological drought predictability using the retrospective NMME forecasts for the period from 1982 to 2010. Before predicting SPI, monthly-mean precipitation (P) forecasts from each model were bias corrected and spatially downscaled (BCSD) to regional grids of 0.5-degree resolution over the contiguous United States, based on probability distribution functions derived from the hindcasts. The corrected P forecasts were then appended to the CPC Unified Precipitation Analysis to form a P time series for computing 3-month and 6-month SPIs. The ensemble SPI forecasts are the equally weighted mean of the six model forecasts. Two performance measures, the anomaly correlation and the root-mean-square error against observations, are used to evaluate forecast skill. For P forecasts, errors vary among models and skill generally is low after the second month. All model P forecasts have higher skill in winter and lower skill in summer. In wintertime, BCSD improves both P and SPI forecast skill. Most improvements are over the western mountainous regions and along the Great Lakes. Overall, SPI predictive skill is regionally and seasonally dependent. The six-month SPI forecasts are skillful out to four months.
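    As a rough illustration of the SPI computation underlying these forecasts, the sketch below accumulates precipitation over running 3-month windows and standardizes it. Operational SPI fits a gamma distribution per calendar window and maps its CDF to a standard normal; the plain z-score and the synthetic record here are simplifications for illustration only:

    ```python
    import statistics

    def spi3_simplified(monthly_precip):
        """Simplified 3-month SPI: accumulate precipitation over running
        3-month windows and standardize against the record's own climatology."""
        acc = [sum(monthly_precip[i - 2:i + 1]) for i in range(2, len(monthly_precip))]
        mu, sd = statistics.mean(acc), statistics.stdev(acc)
        return [(a - mu) / sd for a in acc]

    # Synthetic record: steady rainfall with one dry spell (values in mm/month).
    precip = [80, 90, 85, 75, 88, 20, 15, 18, 82, 86, 79, 90]
    spi = spi3_simplified(precip)
    driest = min(range(len(spi)), key=lambda i: spi[i])   # most negative SPI value
    ```

    Negative SPI values flag drier-than-normal accumulations; in the study, the same index is computed from the bias-corrected ensemble-mean precipitation forecasts.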

  12. An evaluation of prior influence on the predictive ability of Bayesian model averaging.

    Science.gov (United States)

    St-Louis, Véronique; Clayton, Murray K; Pidgeon, Anna M; Radeloff, Volker C

    2012-03-01

    Model averaging is gaining popularity among ecologists for making inference and predictions. Methods for combining models include Bayesian model averaging (BMA) and Akaike's Information Criterion (AIC) model averaging. BMA can be implemented with different prior model weights, including the Kullback-Leibler prior associated with AIC model averaging, but it is unclear how the prior model weight affects model results in a predictive context. Here, we implemented BMA using the Bayesian Information Criterion (BIC) approximation to Bayes factors for building predictive models of bird abundance and occurrence in the Chihuahuan Desert of New Mexico. We examined how model predictive ability differed across four prior model weights, and how averaged coefficient estimates, standard errors, and coefficients' posterior probabilities varied for 16 bird species. We also compared the predictive ability of BMA models to a best single-model approach. Overall, Occam's prior of parsimony provided the best predictive models; the Kullback-Leibler prior, in contrast, generally favored complex models of lower predictive ability. BMA performed better than a best single-model approach independently of the prior model weight for 6 of 16 species. For 6 other species, the choice of the prior model weight affected whether BMA was better than the best single-model approach. Our results demonstrate that parsimonious priors may be favorable over priors that favor complexity for making predictions. The approach we present has direct applications in ecology for better predicting patterns of species' abundance and occurrence.
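    Under equal prior model weights, the BIC approximation to Bayes factors used here reduces to weighting each model by exp(-BIC/2). A minimal sketch (the BIC values and per-model predictions are invented):

    ```python
    import math

    def bma_weights(bics):
        """Posterior model probabilities from BIC values under equal prior
        model weights: w_i proportional to exp(-BIC_i / 2)."""
        base = min(bics)  # subtract the best BIC for numerical stability
        raw = [math.exp(-(b - base) / 2.0) for b in bics]
        total = sum(raw)
        return [r / total for r in raw]

    def bma_predict(predictions, bics):
        """Model-averaged prediction: each model's prediction weighted by its
        posterior model probability."""
        return sum(w * p for w, p in zip(bma_weights(bics), predictions))

    # Three candidate models (illustrative): the best model dominates the
    # average but does not fully determine the abundance prediction.
    bics = [100.0, 102.0, 110.0]
    preds = [5.0, 7.0, 12.0]
    avg = bma_predict(preds, bics)
    ```

    Different prior model weights enter this scheme by multiplying each exp(-BIC/2) term by the model's prior probability before normalizing, which is exactly the knob the study varies.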

  13. Direct simulation of diatomic gases using the generalized hard sphere model

    Science.gov (United States)

    Hash, D. B.; Hassan, H. A.

    1993-01-01

    The generalized hard sphere model, which incorporates the effects of attraction and repulsion, is used to predict flow measurements in tests involving extremely low freestream temperatures. For the two cases considered, a Mach 26 nitrogen shock and a Mach 20 nitrogen flow over a flat plate, only rotational excitation is deemed important, and appropriate modifications to the Borgnakke-Larsen procedure are developed. In general, for the cases considered, the present model performed better than the variable hard sphere model.

  14. Prediction of blast-induced air overpressure: a hybrid AI-based predictive model.

    Science.gov (United States)

    Jahed Armaghani, Danial; Hajihassani, Mohsen; Marto, Aminaton; Shirani Faradonbeh, Roohollah; Mohamad, Edy Tonnizam

    2015-11-01

    Blast operations in the vicinity of residential areas usually produce significant environmental problems which may cause severe damage to the nearby areas. Blast-induced air overpressure (AOp) is one of the most important environmental impacts of blast operations which needs to be predicted to minimize the potential risk of damage. This paper presents an artificial neural network (ANN) optimized by the imperialist competitive algorithm (ICA) for the prediction of AOp induced by quarry blasting. For this purpose, 95 blasting operations were precisely monitored in a granite quarry site in Malaysia and AOp values were recorded in each operation. Furthermore, the most influential parameters on AOp, including the maximum charge per delay and the distance between the blast-face and monitoring point, were measured and used to train the ICA-ANN model. Based on the generalized predictor equation and considering the measured data from the granite quarry site, a new empirical equation was developed to predict AOp. For comparison purposes, conventional ANN models were developed and compared with the ICA-ANN results. The results demonstrated that the proposed ICA-ANN model is able to predict blast-induced AOp more accurately than other presented techniques.

  15. A Bayesian modeling approach for generalized semiparametric structural equation models.

    Science.gov (United States)

    Song, Xin-Yuan; Lu, Zhao-Hua; Cai, Jing-Heng; Ip, Edward Hak-Sing

    2013-10-01

    In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types: continuous, count, ordered, and unordered categorical. This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.

  16. Urban background noise mapping: the general model

    NARCIS (Netherlands)

    Wei, W.; Botteldooren, D.; Renterghem, T. van; Hornikx, M.; Forssen, J.; Salomons, E.; Ogren, M.

    2014-01-01

    Surveys show that inhabitants of dwellings exposed to high noise levels benefit from having access to a quiet side. However, current practice in noise prediction often underestimates the noise levels at a shielded façade. Multiple reflections between façades in street canyons and inner yards are com

  18. Hyperbolic value addition and general models of animal choice.

    Science.gov (United States)

    Mazur, J E

    2001-01-01

    Three mathematical models of choice--the contextual-choice model (R. Grace, 1994), delay-reduction theory (N. Squires & E. Fantino, 1971), and a new model called the hyperbolic value-added model--were compared in their ability to predict the results from a wide variety of experiments with animal subjects. When supplied with 2 or 3 free parameters, all 3 models made fairly accurate predictions for a large set of experiments that used concurrent-chain procedures. One advantage of the hyperbolic value-added model is that it is derived from a simpler model that makes accurate predictions for many experiments using discrete-trial adjusting-delay procedures. Some results favor the hyperbolic value-added model and delay-reduction theory over the contextual-choice model, but more data are needed from choice situations for which the models make distinctly different predictions.
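    The hyperbolic discounting form underlying the hyperbolic value-added model, V = A/(1 + kD), can be sketched directly; the amounts, delays, and k value below are illustrative, not fitted to the experiments reviewed:

    ```python
    def hyperbolic_value(amount, delay, k=0.2):
        """Hyperbolic discounting: V = A / (1 + k*D), the value function on
        which the hyperbolic value-added model builds its choice predictions."""
        return amount / (1.0 + k * delay)

    # Preference reversal, a signature prediction of hyperbolic discounting:
    # the small-sooner reward wins at short delays, the large-later one when a
    # common delay is added to both options.
    near = (hyperbolic_value(2.0, 1.0), hyperbolic_value(3.0, 5.0))    # delays 1 vs 5
    far  = (hyperbolic_value(2.0, 21.0), hyperbolic_value(3.0, 25.0))  # add 20 to both
    ```

    Exponential discounting cannot produce this reversal, which is one reason hyperbolic forms fit the discrete-trial adjusting-delay data mentioned above.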

  19. General review on in vitro hepatocyte models and their applications.

    Science.gov (United States)

    Guguen-Guillouzo, Christiane; Guillouzo, Andre

    2010-01-01

    In vitro hepatocyte models represent very useful systems in both fundamental research and various application areas. Primary hepatocytes appear as the closest model of the liver in vivo. However, they are phenotypically unstable, have a limited life span, and, in addition, exhibit large interdonor variability when of human origin. Hepatoma cell lines appear as an alternative, but only the HepaRG cell line exhibits various functions, including major cytochrome P450 activities, at levels close to those found in primary hepatocytes. In vitro hepatocyte models have brought a substantial contribution to the understanding of the biochemistry, physiology, and cell biology of the normal and diseased liver, and to various application domains such as xenobiotic metabolism and toxicity, virology, parasitology, and, more generally, cell therapies. In the future, new well-differentiated hepatocyte cell lines derived from tumors or from either embryonic or adult stem cells may be expected. Although hepatocytes will continue to be used in various fields, these in vitro liver models should allow marked advances, especially in cell-based therapies and in predictive and mechanistic hepatotoxicity studies of new drugs and other chemicals. All models will benefit from new developments in throughput screening based on cell chips coupled with high-content imaging, and in toxicogenomics technologies.

  20. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We developed a model that was able to accurately predict CDR with a sensitivity of 83.3%, a specificity of 63.1%, and an area under the curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see wider application.

  1. Gamma-Ray Pulsars: Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  2. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, early estimation of concrete strength is highly desirable. This study presents an effort to apply neural-network-based system identification techniques to predict the compressive strength of concrete from the concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model was successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the outputs have absolute errors below 10%. A parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
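A hedged sketch of the kind of network described above, assuming a scikit-learn back-propagation regressor and synthetic mix data (with a toy Abrams-like strength law) in place of the literature data sets:

```python
# Illustrative sketch, not the paper's trained network: a small
# back-propagation network mapping mix proportions, maximum aggregate
# size (MAS) and slump to compressive strength.
import numpy as np
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300
# columns: cement, water, coarse agg., fine agg. (kg/m^3), MAS (mm), slump (mm)
X = rng.uniform([250, 140, 900, 600, 10, 25],
                [500, 220, 1200, 900, 40, 200], size=(n, 6))
w_c = X[:, 1] / X[:, 0]                              # water/cement ratio
y = 80 - 70 * w_c + rng.normal(scale=2, size=n)      # toy strength law (MPa)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=1))
model.fit(X, y)
r2 = r2_score(y, model.predict(X))
print(f"in-sample R^2 = {r2:.3f}")
```

The dominance of w/c in the synthetic law mirrors the paper's parametric finding; a real application would train and test on measured mix data.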

  3. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters in attenuation relations are peak ground acceleration and spectral acceleration, because these parameters provide the information needed for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained from new data recorded by the Georgian seismic network and by networks in neighboring countries. The models are estimated by classical statistical regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.

  4. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices, which are considered as alternatives to leading-edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure-side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far-field noise is modeled through each component's spectral function, far-field directivity, Mach number dependence, amplitude, and other parametric trends. Preliminary validations are carried out using small-scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise with respect to design parameters, while the latter reveals its importance relative to other airframe noise components.

  5. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR), since they allow an estimation of the regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to their occurrence, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  6. Generalization of Random Intercept Multilevel Models

    Directory of Open Access Journals (Sweden)

    Rehan Ahmad Khan

    2013-10-01

    Full Text Available The concept of random intercept models in a multilevel model developed by Goldstein (1986) has been extended to k levels. The random variation in intercepts at the individual level is marginally split into components by incorporating higher levels of hierarchy into the single-level model. One can thus control the random variation in intercepts by incorporating the higher levels into the model.

  7. Regression Model to Predict Global Solar Irradiance in Malaysia

    Directory of Open Access Journals (Sweden)

    Hairuniza Ahmed Kutty

    2015-01-01

    Full Text Available A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed from the available meteorological parameters, including temperature, cloud cover, rain precipitation, relative humidity, wind speed, pressure, and gust speed, by regression analysis. This paper reports the details of the analysis of the effect of each prediction parameter to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and coefficient of determination (R2) with other models available in the literature. Seven models based on single parameters (PM1 to PM7) and five multiple-parameter models (PM8 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R2 ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia has insufficient influence when included in multiple-parameter models, although it performs fairly well in single-parameter prediction models.
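As an illustration of the reported error measures, a single-parameter fit with RMSE, MBE, and R2 can be computed as follows. The cloud-cover and irradiance values are synthetic placeholders, not the Malaysian observations used in the paper:

```python
# Minimal sketch: least-squares fit of irradiance against cloud cover,
# scored by the paper's three metrics (RMSE, MBE, R^2). Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
cloud = rng.uniform(0.2, 0.9, size=48)                   # monthly cloud cover
H = 7.0 - 4.0 * cloud + rng.normal(scale=0.2, size=48)   # toy irradiance

# least-squares fit H = a + b * cloud
A = np.column_stack([np.ones_like(cloud), cloud])
a, b = np.linalg.lstsq(A, H, rcond=None)[0]
pred = a + b * cloud

rmse = np.sqrt(np.mean((pred - H) ** 2))
mbe = np.mean(pred - H)
r2 = 1 - np.sum((H - pred) ** 2) / np.sum((H - H.mean()) ** 2)
print(f"a={a:.2f} b={b:.2f} RMSE={rmse:.3f} MBE={mbe:+.4f} R2={r2:.3f}")
```

Note that an ordinary least-squares fit with an intercept has zero in-sample MBE by construction; the paper's nonzero MBE values come from evaluating the models on independent data.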

  8. Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.

    Science.gov (United States)

    Proppe, Jonny; Reiher, Markus

    2017-07-11

    One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, of the prediction of an observable based on calculations. As highly accurate first-principles calculations are in general infeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the (57)Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s(-1) and 0.04-0.05 mm s(-1), respectively, the latter being close to the average experimental uncertainty of 0.02 mm s(-1). Furthermore, we show that both the model parameters and the prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r(2), or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical M
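The bootstrapping idea can be illustrated with a minimal linear calibration model. The reference data below are synthetic stand-ins for the isomer-shift/contact-density pairs; only the data-set size (44) is taken from the abstract:

```python
# Sketch of bootstrapped calibration: refit a linear property model on
# resampled reference pairs and read off the spread of a prediction.
import numpy as np

rng = np.random.default_rng(3)
n = 44                                   # size of the reference set
rho = rng.uniform(-1, 1, size=n)         # stand-in for contact density
delta = 0.5 - 0.3 * rho + rng.normal(scale=0.05, size=n)  # toy isomer shift

def fit(x, y):
    # ordinary least squares for delta = a + b * rho
    A = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(A, y, rcond=None)[0]

x_new = 0.25
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)     # resample reference pairs
    a, b = fit(rho[idx], delta[idx])
    boot.append(a + b * x_new)
boot = np.array(boot)
print(f"prediction {boot.mean():.3f} +/- {boot.std(ddof=1):.3f}")
```

The standard deviation across bootstrap refits is the locally resolved prediction uncertainty; repeating this for different x_new shows how the uncertainty varies across the calibration range.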

  9. Optimal feedback scheduling of model predictive controllers

    Institute of Scientific and Technical Information of China (English)

    Pingfang ZHOU; Jianying XIE; Xiaolong DENG

    2006-01-01

    Model predictive control (MPC) could not previously be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against variation in the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.

  10. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly constrained parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), previously applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to applying the methodology to an NWP model is presented in this study. The challenges in transferring the methodology from RCM to NWP are not restricted to the higher resolution and different time scales: the sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the computing resources required to calibrate an NWP model. Three free model parameters, mainly affecting turbulence parameterization schemes, were originally selected with respect to their influence on the variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the calibration is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.

  11. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic, prediction-based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically generated and real images, we show that in a semi-controlled environment, predicting the detectability of image features can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probabilities of failing to find a pose and of finding an inaccurate pose are minimized.

  12. Smooth-Threshold Multivariate Genetic Prediction with Unbiased Model Selection.

    Science.gov (United States)

    Ueki, Masao; Tamiya, Gen

    2016-04-01

    We develop a new genetic prediction method, smooth-threshold multivariate genetic prediction, using single nucleotide polymorphism (SNP) data from genome-wide association studies (GWASs). Our method consists of two stages. At the first stage, unlike the usual discontinuous SNP screening used in the gene score method, our method continuously screens SNPs based on the output of a standard univariate analysis of each SNP's marginal association. At the second stage, the predictive model is built by generalized ridge regression simultaneously using the screened SNPs, with SNP weights determined by the strength of marginal association. Continuous SNP screening by smooth thresholding not only makes prediction stable but also leads to a closed-form expression for the generalized degrees of freedom (GDF). The GDF leads to Stein's unbiased risk estimate (SURE), which enables a data-dependent choice of the optimal SNP screening cutoff without cross-validation. Our method is very rapid because the computationally expensive genome-wide scan is required only once, in contrast to penalized regression methods such as the lasso and elastic net. Simulation studies that mimic real GWAS data with quantitative and binary traits demonstrate that the proposed method outperforms the gene score method and genomic best linear unbiased prediction (GBLUP), and shows comparable or sometimes improved performance relative to the lasso and elastic net, which are known to have good predictive ability but heavy computational cost. Application to whole-genome sequencing (WGS) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) shows that the proposed method has higher predictive power than the gene score and GBLUP methods.
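A schematic sketch (not the authors' code) of the two-stage procedure: simulated genotypes, a simple continuous screening weight standing in for the exact smooth-thresholding function, and a plain generalized ridge solve on the weighted SNPs:

```python
# Two-stage sketch: continuous (smooth-threshold-style) SNP screening by
# marginal association, then weighted ridge regression. Data are simulated;
# the weight function and lambda are illustrative choices.
import numpy as np

rng = np.random.default_rng(4)
n, p, causal = 200, 500, 10
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # genotypes 0/1/2
beta = np.zeros(p); beta[:causal] = 0.5
y = X @ beta + rng.normal(size=n)

# stage 1: marginal association (absolute correlation) per SNP
Xc = X - X.mean(axis=0)
yc = y - y.mean()
r = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)

# smooth weight instead of a hard include/exclude cutoff: 0 below tau, -> 1 above
tau = np.quantile(r, 0.9)
w = np.clip(1 - (tau / (r + 1e-12)) ** 2, 0, None)

# stage 2: generalized ridge on the weighted columns
Xw = Xc * w
lam = 10.0
coef = np.linalg.solve(Xw.T @ Xw + lam * np.eye(p), Xw.T @ yc)
pred = Xw @ coef + y.mean()
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"in-sample R^2 = {r2:.2f}, SNPs with nonzero weight: {(w > 0).sum()}")
```

In the paper the screening cutoff (here tau) is chosen by SURE via the generalized degrees of freedom rather than fixed at a quantile.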

  13. Hybrid Model for Early Onset Prediction of Driver Fatigue with Observable Cues

    Directory of Open Access Journals (Sweden)

    Mingheng Zhang

    2014-01-01

    Full Text Available This paper presents a hybrid model for early onset prediction of driver fatigue, a major cause of severe traffic accidents. The proposed method divides the prediction problem into three stages: an SVM-based model for predicting the early-onset driver fatigue state, a GA-based model for optimizing the parameters of the SVM, and a PCA-based model for reducing the dimensionality of the complex feature data sets. The model and algorithm are illustrated with driving experiment data, and comparison results show that the hybrid method generally provides better performance for driver fatigue state prediction.
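The structure of the three-stage pipeline can be sketched as below; a grid search stands in for the paper's GA-based parameter optimization, and the driving-cue features are simulated rather than taken from the experiment:

```python
# Pipeline sketch: PCA for dimensionality reduction feeding an SVM
# classifier, with hyperparameters tuned by search (grid search here,
# GA in the paper). Features and labels are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 240
X = rng.normal(size=(n, 12))   # hypothetical cues: eyelid closure, steering entropy, ...
X[:, :2] *= 3.0                # the two informative cues vary most
y = (X[:, 0] - X[:, 1] + rng.normal(size=n) > 0).astype(int)  # fatigued or not

pipe = Pipeline([("pca", PCA(n_components=5)), ("svm", SVC())])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10],
                           "svm__gamma": ["scale", 0.1]}, cv=5)
grid.fit(X, y)
print(f"best cross-validated accuracy: {grid.best_score_:.2f}")
```

A genetic algorithm would explore the same C/gamma space stochastically instead of exhaustively; the surrounding PCA-SVM structure is unchanged.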

  14. Model predictive control of MSMPR crystallizers

    Science.gov (United States)

    Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc

    2005-02-01

    A multi-input, multi-output (MIMO) control problem for isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size, the solute concentration, and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found quite adequate for model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be controlled nearly separately by the residence time and the inlet solute concentration, respectively. With seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study show that linear MPC is an adaptable and feasible controller for continuous crystallizers.

  15. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect under reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for the material's anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparing numerical and experimental springback results for a DP600 straight U-channel test.

  16. Large eddy simulation subgrid model for soot prediction

    Science.gov (United States)

    El-Asrag, Hossam Abd El-Raouf Mostafa

    Soot prediction in realistic systems is one of the most challenging problems in theoretical and applied combustion. Soot formation as a chemical process is very complicated and not fully understood. The major difficulty stems from the chemical complexity of the soot formation process as well as its strong coupling with the other thermochemical and fluid processes that occur simultaneously. Soot is a major byproduct of incomplete combustion, having a strong impact on the environment as well as on combustion efficiency. Therefore, innovative methods are needed to predict soot in realistic configurations in an accurate and yet computationally efficient way. In the current study, a new soot formation subgrid model is developed and reported. The new model is designed to be used within the Large Eddy Simulation (LES) framework, combined with Linear Eddy Mixing (LEM) as a subgrid combustion model. The final model can be applied equally to premixed and non-premixed flames over any required geometry and flow conditions in the free, transition, and continuum regimes. The soot dynamics are predicted using a Method of Moments approach with Lagrangian Interpolative Closure (MOMIC) for the fractional moments. Since no prior knowledge of the particle distribution is required, the model is generally applicable. The current model accounts for the basic soot transport phenomena, such as transport by molecular diffusion and thermophoretic forces. The model is first validated against experimental results for non-sooting swirling non-premixed and partially premixed flames. Next, a set of canonical premixed sooting flames is simulated, where the effects of turbulence, binary diffusivity, and C/O ratio on soot formation are studied. Finally, the model is validated against a non-premixed sooting jet flame. The effect of the flame structure on the different soot formation stages as well as on the particle size distribution is described. Good results are predicted with

  17. Predicting the future completing models of observed complex systems

    CERN Document Server

    Abarbanel, Henry

    2013-01-01

    Predicting the Future: Completing Models of Observed Complex Systems provides a general framework for the discussion of model building and validation across a broad spectrum of disciplines. This is accomplished through the development of an exact path integral for use in transferring information from observations to a model of the observed system. Through many illustrative examples drawn from models in neuroscience, fluid dynamics, geosciences, and nonlinear electrical circuits, the concepts are exemplified in detail. Practical numerical methods for approximate evaluations of the path integral are explored, and their use in designing experiments and determining a model's consistency with observations is investigated. Using highly instructive examples, the problems of data assimilation and the means to treat them are clearly illustrated. This book will be useful for students and practitioners of physics, neuroscience, regulatory networks, meteorology and climate science, network dynamics, fluid dynamics, and o...

  18. Predictive Control, Competitive Model Business Planning, and Innovation ERP

    DEFF Research Database (Denmark)

    Nourani, Cyrus F.; Lauth, Codrina

    2015-01-01

    New optimality principles are put forth based on competitive model business planning. A generalized MinMax local optimum dynamic programming algorithm is presented and applied to business model computing, where predictive techniques can determine local optima. Based on a systems model, an enterprise is not viewed as the sum of its component elements but as the product of their interactions. The paper starts by introducing a systems approach to business modeling. A competitive business modeling technique based on the author's planning techniques is applied. Systemic decisions are based on common organizational goals, and as such, business planning and resource assignments should strive to satisfy higher organizational goals. It is critical to understand how different decisions affect and influence one another. Here, a business planning example is presented using the systems thinking technique, using Causal...

  19. Predictive modelling of ferroelectric tunnel junctions

    Science.gov (United States)

    Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.

    2016-05-01

    Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.

  20. Simple predictions from multifield inflationary models.

    Science.gov (United States)

    Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C

    2014-04-25

    We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges ns∈(0.9455, 0.9534), α∈(-9.741, -7.047)×10^-4, r∈(0.1445, 0.1449), and r_iso∈(0.02137, 3.510)×10^-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.

  1. Assortativity in generalized preferential attachment models

    CERN Document Server

    Krot, Alexander

    2016-01-01

    In this paper, we analyze assortativity of preferential attachment models. We deal with a wide class of preferential attachment models (PA-class). It was previously shown that the degree distribution in all models of the PA-class follows a power law. Also, the global and the average local clustering coefficients were analyzed. We expand these results by analyzing the assortativity property of the PA-class of models. Namely, we analyze the behavior of $d_{nn}(d)$ which is the average degree of a neighbor of a vertex of degree $d$.
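The quantity $d_{nn}(d)$ can be estimated on a toy Barabási-Albert-style graph, one concrete member of the PA-class. The construction below is a simplified sketch, not the paper's general model:

```python
# Build a small preferential attachment graph (each new vertex attaches
# to m existing vertices with probability proportional to degree), then
# estimate d_nn(d): the mean neighbor degree of vertices of degree d.
import random
from collections import defaultdict

random.seed(6)
m = 2                                     # edges added per new vertex
edges = [(0, 1), (0, 2), (1, 2)]          # small seed triangle
targets = [0, 1, 2, 0, 1, 2]              # vertices repeated by degree
for v in range(3, 2000):
    chosen = set()
    while len(chosen) < m:
        chosen.add(random.choice(targets))  # degree-proportional sampling
    for u in chosen:
        edges.append((v, u))
        targets.extend([v, u])

adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

# d_nn(d): average neighbor degree over all vertices of degree d
by_degree = defaultdict(list)
for v, nbrs in adj.items():
    by_degree[len(nbrs)].append(sum(len(adj[u]) for u in nbrs) / len(nbrs))

dnn = {d: sum(vals) / len(vals) for d, vals in by_degree.items()}
print(f"d_nn(2) = {dnn[2]:.1f} over {len(by_degree[2])} degree-2 vertices")
```

In this graph the low-degree vertices have high-degree neighbors (d_nn(2) well above 2), the disassortative trend typically reported for preferential attachment models.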

  2. Economic model predictive control theory, formulations and chemical process applications

    CERN Document Server

    Ellis, Matthew; Christofides, Panagiotis D

    2017-01-01

    This book presents general methods for the design of economic model predictive control (EMPC) systems for broad classes of nonlinear systems that address key theoretical and practical considerations, including recursive feasibility, closed-loop stability, closed-loop performance, and computational efficiency. Specifically, the book proposes: Lyapunov-based EMPC methods for nonlinear systems; two-tier EMPC architectures that are highly computationally efficient; and EMPC schemes that explicitly handle uncertainty, time-varying cost functions, time delays, and multiple-time-scale dynamics. The proposed methods employ a variety of tools, ranging from nonlinear systems analysis, through Lyapunov-based control techniques, to nonlinear dynamic optimization. The applicability and performance of the proposed methods are demonstrated through a number of chemical process examples. The book presents state-of-the-art methods for the design of economic model predictive control systems for chemical processes. In addition to being...

  3. Modeling and Prediction of Hot Deformation Flow Curves

    Science.gov (United States)

    Mirzadeh, Hamed; Cabrera, Jose Maria; Najafizadeh, Abbas

    2012-01-01

    The modeling of hot flow stress and the prediction of flow curves for unseen deformation conditions are important in metal-forming processes because any feasible mathematical simulation needs an accurate flow description. In the current work, in an attempt to summarize, generalize, and introduce efficient methods, the dynamic recrystallization (DRX) flow curves of a 17-4 PH martensitic precipitation hardening stainless steel, a medium carbon microalloyed steel, and a 304 H austenitic stainless steel were modeled and predicted using (1) a hyperbolic sine equation with strain-dependent constants, (2) a developed constitutive equation in a simple normalized stress-normalized strain form and its modified version, and (3) a feed-forward artificial neural network (ANN). These methods are critically discussed. The ANN technique was found to be the best for modeling the available flow curves; however, the developed constitutive equation performed slightly better than the ANN, and gave significantly better predicted values than the hyperbolic sine equation, in predicting flow curves for unseen deformation conditions.

  4. Particle model with generalized Poincaré symmetry

    Science.gov (United States)

    Smith, A.

    2017-08-01

    Using the techniques of nonlinear coset realization with a generalized Poincaré group, we construct a relativistic particle model, invariant under the generalized symmetries, providing a dynamical realization of the B5 algebra.

  5. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule, such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher-order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...
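The training-and-classification scheme can be sketched with first-order Markov chains, one per structural class, assigning a segment to the class of highest likelihood. The alphabet and training sequences below are toy stand-ins for real amino-acid data:

```python
# Sketch: per-class first-order Markov chains over a toy alphabet,
# classification by maximum log-likelihood. Not the paper's model.
import math

ALPHABET = "ACDE"  # tiny toy alphabet standing in for the 20 amino acids

def train(seqs):
    # transition counts with add-one (Laplace) smoothing
    counts = {a: {b: 1 for b in ALPHABET} for a in ALPHABET}
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return {a: {b: counts[a][b] / sum(counts[a].values()) for b in ALPHABET}
            for a in ALPHABET}

def loglik(model, s):
    return sum(math.log(model[a][b]) for a, b in zip(s, s[1:]))

# synthetic "helix" training text favors A->A runs; "coil" is closer to uniform
models = {"helix": train(["AAAACAAAAEAAAA", "AAADAAAACAAA"]),
          "coil": train(["ACDECADE", "DECAEDCA", "CEDAECDA"])}

def classify(segment):
    return max(models, key=lambda c: loglik(models[c], segment))

print(classify("AAAAAA"), classify("CDEACD"))
```

A real implementation would use the 20-letter amino-acid alphabet, three classes (sheet, helix, coil), and annotated training sequences; the directional information mentioned in the abstract comes from the asymmetry of the transition matrices.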

  6. A Modified Model Predictive Control Scheme

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Hu; Wen-Hua Chen

    2005-01-01

    In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.
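
    The receding-horizon idea behind any MPC scheme (optimize a control sequence over a finite horizon, apply only its first move, repeat) can be illustrated by brute force on a toy scalar system. This sketch does not reproduce the paper's LMI iteration or its offline database of feasible sequences; it only shows the basic mechanism.

```python
import itertools

def mpc_first_move(x0, horizon=4, controls=(-1.0, 0.0, 1.0)):
    """Receding-horizon control for the toy system x_{k+1} = x_k + u_k:
    enumerate all control sequences over the horizon, score them with the
    quadratic cost sum(x_k^2 + 0.1 * u_k^2), and return only the first move.
    A brute-force stand-in for the QP/LMI machinery of a real MPC scheme."""
    best_cost, best_first = float("inf"), 0.0
    for seq in itertools.product(controls, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = x + u
            cost += x * x + 0.1 * u * u
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first
```

The controller drives the state toward the origin from either side and does nothing once there.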

  7. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid....

  8. Explicit model predictive control accuracy analysis

    OpenAIRE

    Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano

    2015-01-01

    Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. The MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapping convex regions, with an affine control law associated with each region of the partition. An actual implementation of this explicit MPC in low-cost micro-controllers requires the data to be "quantized", i.e. repre...

  9. Generalized Plasma Skimming Model for Cells and Drug Carriers in the Microvasculature

    CERN Document Server

    Lee, Tae-Rin; Yang, Jiho

    2016-01-01

    In microvascular transport, where both blood and drug carriers are involved, plasma skimming plays a key role in changing the hematocrit level and drug carrier concentration in capillary beds after continuous vessel bifurcation in the microvasculature. While there have been numerous studies on modeling the plasma skimming of blood, previous works lacked consideration of its interaction with drug carriers. In this paper, a generalized plasma skimming model is suggested to predict the redistributions of both the cells and drug carriers at each bifurcation. In order to examine its applicability, this new model was applied to a single bifurcation system to predict the redistribution of red blood cells and drug carriers. Furthermore, this model was tested at the microvascular network level under different plasma skimming conditions for predicting the concentration of drug carriers. Based on these results, the applicability of this generalized plasma skimming model is fully discussed and future works along with the model'...

  10. Stratospheric General Circulation with Chemistry Model (SGCCM)

    Science.gov (United States)

    Rood, Richard B.; Douglass, Anne R.; Geller, Marvin A.; Kaye, Jack A.; Nielsen, J. Eric; Rosenfield, Joan E.; Stolarski, Richard S.

    1990-01-01

    In the past two years constituent transport and chemistry experiments have been performed using both simple single constituent models and more complex reservoir species models. Winds for these experiments have been taken from the data assimilation effort, Stratospheric Data Analysis System (STRATAN).

  11. Generalized coupling in the Kuramoto model

    DEFF Research Database (Denmark)

    Filatrella, G.; Pedersen, Niels Falsig; Wiesenfeld, K.

    2007-01-01

    We propose a modification of the Kuramoto model to account for the effective change in the coupling constant among the oscillators, as suggested by some experiments on Josephson junctions, laser arrays, and mechanical systems, where the active elements are turned on one by one. The resulting model...... with the behavior of Josephson junctions coupled via a cavity....
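
    For reference, the standard Kuramoto dynamics that the abstract modifies can be simulated in a few lines; here the coupling K is a fixed constant, whereas the proposed model would let the effective coupling change as oscillators are switched on. Step size and parameter values are illustrative.

```python
import math

def kuramoto_step(phases, omegas, K, dt=0.01):
    """One forward-Euler step of the standard Kuramoto model
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    return [th + dt * (w + K / n * sum(math.sin(ph - th) for ph in phases))
            for th, w in zip(phases, omegas)]

def order_parameter(phases):
    """Kuramoto order parameter r = |sum_j exp(i*theta_j)| / N (1 = full sync)."""
    n = len(phases)
    return math.hypot(sum(math.cos(t) for t in phases) / n,
                      sum(math.sin(t) for t in phases) / n)
```

With identical natural frequencies and positive coupling, the order parameter rises toward 1 (synchronization).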

  12. Critical conceptualism in environmental modeling and prediction.

    Science.gov (United States)

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.

  13. Detecting dynamical interdependence and generalized synchrony through mutual prediction in a neural ensemble

    Science.gov (United States)

    Schiff, Steven J.; So, Paul; Chang, Taeun; Burke, Robert E.; Sauer, Tim

    1996-12-01

    A method to characterize dynamical interdependence among nonlinear systems is derived based on mutual nonlinear prediction. Systems with nonlinear correlation will show mutual nonlinear prediction when standard analysis with linear cross correlation might fail. Mutual nonlinear prediction also provides information on the directionality of the coupling between systems. Furthermore, the existence of bidirectional mutual nonlinear prediction in unidirectionally coupled systems implies generalized synchrony. Numerical examples studied include three classes of unidirectionally coupled systems: systems with identical parameters, nonidentical parameters, and stochastic driving of a nonlinear system. This technique is then applied to the activity of motoneurons within a spinal cord motoneuron pool. The interrelationships examined include single neuron unit firing, the total number of neurons discharging at one time as measured by the integrated monosynaptic reflex, and intracellular measurements of integrated excitatory postsynaptic potentials (EPSPs). Dynamical interdependence, perhaps generalized synchrony, was identified in this neuronal network between simultaneous single unit firings, between units and the population, and between units and intracellular EPSPs.
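
    The core of mutual nonlinear prediction can be sketched as a zeroth-order nearest-neighbour scheme: embed one series, find the nearest past neighbour of the current state, and use the other series at the analogous time as the prediction. The embedding dimension and error measure below are simplified choices, not the paper's exact construction.

```python
import math

def cross_prediction_error(x, y, dim=2, horizon=1):
    """Zeroth-order mutual prediction: embed x with delay vectors of length
    `dim`; for each time t find the nearest neighbour s of x's state among
    earlier times, and predict y[t + horizon] by y[s + horizon]. A small
    error suggests dynamical interdependence between the two series."""
    def emb(series, t):
        return tuple(series[t - k] for k in range(dim))
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    errors = []
    for t in range(dim, len(x) - horizon):
        s = min(range(dim - 1, t), key=lambda i: d2(emb(x, i), emb(x, t)))
        errors.append(abs(y[s + horizon] - y[t + horizon]))
    return sum(errors) / len(errors)
```

A coupled (here, identical) pair of series yields a much smaller cross-prediction error than an unrelated pair.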

  14. Generalized bottom-tau unification, neutrino oscillations and dark matter: Predictions from a lepton quarticity flavor approach

    Science.gov (United States)

    Centelles Chuliá, Salvador; Srivastava, Rahul; Valle, José W. F.

    2017-10-01

    We propose an A4 extension of the Standard Model with a Lepton Quarticity symmetry correlating dark matter stability with the Dirac nature of neutrinos. The flavor symmetry predicts (i) a generalized bottom-tau mass relation involving all families; (ii) small neutrino masses induced à la seesaw; (iii) significant CP violation in neutrino oscillations; (iv) an atmospheric angle θ23 lying in the second octant; and (v) only the normal neutrino mass ordering.

  15. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
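
    A minimal tabulation of the six PCMM elements might look as follows. Aggregating by the minimum ("weakest link") is an illustrative choice made here for the sketch; the PCMM report itself assesses each element separately and does not prescribe a single aggregate score.

```python
PCMM_ELEMENTS = (
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
)

def pcmm_assessment(levels):
    """levels: element -> maturity level 0..3 (four increasing levels).
    Returns the weakest element and its level; the min-aggregation is an
    illustrative convention, not part of the PCMM itself."""
    if set(levels) != set(PCMM_ELEMENTS):
        raise ValueError("scores required for all six PCMM elements")
    if not all(v in (0, 1, 2, 3) for v in levels.values()):
        raise ValueError("maturity levels must be integers 0-3")
    weakest = min(PCMM_ELEMENTS, key=lambda e: levels[e])
    return {"weakest_element": weakest, "maturity_floor": levels[weakest]}
```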

  16. Generalized versus non-generalized neural network model for multi-lead inflow forecasting at Aswan High Dam

    Directory of Open Access Journals (Sweden)

    A. El-Shafie

    2011-03-01

    Full Text Available Artificial neural networks (ANN) have been found efficient, particularly in problems where characteristics of the processes are stochastic and difficult to describe using explicit mathematical models. However, time series prediction based on ANN algorithms is fundamentally difficult and faces several problems. One of the major shortcomings is the search for the optimal input pattern in order to enhance the forecasting capabilities for the output. The second challenge is the over-fitting problem during the training procedure, which occurs when the ANN loses its generalization. In this research, autocorrelation and cross-correlation analyses are suggested as a method for searching the optimal input pattern. On the other hand, two generalized methods, namely Regularized Neural Network (RNN) and Ensemble Neural Network (ENN) models, are developed to overcome the drawbacks of classical ANN models. Using the Generalized Neural Network (GNN) helped avoid the over-fitting of training data which was observed as a limitation of classical ANN models. Real inflow data collected over the last 130 years at Lake Nasser were used to train, test and validate the proposed model. Results show that the proposed GNN model outperforms non-generalized neural network and conventional auto-regressive models and could provide accurate inflow forecasting.
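
    The autocorrelation analysis suggested for input-pattern selection can be sketched as follows: compute the sample autocorrelation at each lag and keep the lags whose magnitude exceeds a threshold as candidate ANN inputs. The threshold and maximum lag are arbitrary illustrative values.

```python
def autocorrelation(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t - lag] - mean) for t in range(lag, n))
    return cov / var

def select_input_lags(x, max_lag=12, threshold=0.2):
    """Propose candidate input lags: keep those whose autocorrelation
    magnitude exceeds the threshold. A bare-bones stand-in for the
    correlation analysis the abstract suggests."""
    return [k for k in range(1, max_lag + 1)
            if abs(autocorrelation(x, k)) >= threshold]
```

For a strongly periodic series, the procedure keeps lags at (multiples of) the half-period and period and discards uncorrelated lags.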

  17. The Survival Probability in Generalized Poisson Risk Model

    Institute of Scientific and Technical Information of China (English)

    GONG Ri-zhao

    2003-01-01

    In this paper we generalize the aggregated premium income process from a constant-rate process to a Poisson process for the classical compound Poisson risk model. Then, for both the generalized model and the classical compound Poisson risk model, we derive the survival probability over a finite time period in the case of exponential claim amounts.

  18. Description of the General Equilibrium Model of Ecosystem Services (GEMES)

    Science.gov (United States)

    Travis Warziniack; David Finnoff; Jenny Apriesnig

    2017-01-01

    This paper serves as documentation for the General Equilibrium Model of Ecosystem Services (GEMES). GEMES is a regional computable general equilibrium model that is composed of values derived from natural capital and ecosystem services. It models households, producing sectors, and governments, linked to one another through commodity and factor markets. GEMES was...

  19. Evidence for a General Factor Model of ADHD in Adults

    Science.gov (United States)

    Gibbins, Christopher; Toplak, Maggie E.; Flora, David B.; Weiss, Margaret D.; Tannock, Rosemary

    2012-01-01

    Objective: To examine factor structures of "Diagnostic and Statistical Manual of Mental Disorders" (4th ed.) symptoms of ADHD in adults. Method: Two sets of models were tested: (a) models with inattention and hyperactivity/impulsivity as separate but correlated latent constructs and (b) hierarchical general factor models with a general factor for…

  20. BEYOND SEM: GENERAL LATENT VARIABLE MODELING

    National Research Council Canada - National Science Library

    Muthén, Bengt O

    2002-01-01

    This article gives an overview of statistical analysis with latent variables. Using traditional structural equation modeling as a starting point, it shows how the idea of latent variables captures a wide variety of statistical concepts...

  1. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 – 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper...... presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time...... recovery on the track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is applied for a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50

  2. Generalized circuit model for coupled plasmonic systems

    CERN Document Server

    Benz, Felix; Tserkezis, Christos; Chikkaraddy, Rohit; Sigle, Daniel O; Pukenas, Laurynas; Evans, Stephen D; Aizpurua, Javier; Baumberg, Jeremy J

    2015-01-01

    We develop an analytic circuit model for coupled plasmonic dimers separated by small gaps that provides a complete account of the optical resonance wavelength. Using a suitable equivalent circuit, it shows how partially conducting links can be treated and provides quantitative agreement with both experiment and full electromagnetic simulations. The model highlights how, in the conducting regime, the kinetic inductance of the linkers sets the spectral blue-shift of the coupled plasmon.

  3. Forced versus coupled dynamics in Earth system modelling and prediction

    Directory of Open Access Journals (Sweden)

    B. Knopf

    2005-01-01

    Full Text Available We compare coupled nonlinear climate models and their simplified forced counterparts with respect to predictability and phase space topology. Various types of uncertainty plague climate change simulation, which is, in turn, a crucial element of Earth System modelling. Since the currently preferred strategy for simulating the climate system, or the Earth System at large, is the coupling of sub-system modules (representing, e.g., atmosphere, oceans, global vegetation), this paper explicitly addresses the errors and indeterminacies generated by the coupling procedure. The focus is on a comparison of forced dynamics as opposed to fully, i.e. intrinsically, coupled dynamics. The former represents a particular type of simulation, where the time behaviour of one complex system component is prescribed by data or some other external information source. Such a simplifying technique is often employed in Earth System models in order to save computing resources, in particular when massive model inter-comparisons need to be carried out. Our contribution to the debate is based on the investigation of two representative model examples, namely (i) a low-dimensional coupled atmosphere-ocean simulator, and (ii) a replica-like simulator embracing corresponding components. Whereas in general the forced version (ii) is able to mimic its fully coupled counterpart (i), we show in this paper that for a considerable fraction of parameter- and state-space, the two approaches qualitatively differ. Here we take up a phenomenon concerning the predictability of coupled versus forced models that was reported earlier in this journal: the observation that the time series of the forced version display artificial predictive skill. We present an explanation in terms of nonlinear dynamical theory. In particular we observe an intermittent version of artificial predictive skill, which we call on-off synchronization, and trace it back to the appearance of unstable periodic orbits. We also

  4. A predictive fitness model for influenza

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
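
    The core propagation step of a fitness model of this kind — reweight each strain's frequency by the exponential of its inferred fitness and renormalize — can be written in a few lines. The frequencies and fitness values below are illustrative, not inferred from influenza data.

```python
import math

def propagate_frequencies(freqs, fitness):
    """One-generation update of strain frequencies, x_i' proportional to
    x_i * exp(f_i): each strain is reweighted by the exponential of its
    fitness and the distribution is renormalized."""
    weighted = [x * math.exp(f) for x, f in zip(freqs, fitness)]
    total = sum(weighted)
    return [w / total for w in weighted]
```

Starting from equal frequencies, the fitter strain's predicted share grows at the expense of the other.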

  5. Predictive Model of Radiative Neutrino Masses

    CERN Document Server

    Babu, K S

    2013-01-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: the hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with δCP = π; and the effective mass in neutrinoless double beta decay lies in a narrow range, mββ = (17.6–18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan β, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...

  6. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  7. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  8. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
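
    The simplest instance of the general linear model that such a program handles — one predictor, least-squares coefficients, and a fit statistic — can be sketched directly. This is a generic OLS sketch, not the GLMR program's algorithm or output format.

```python
def simple_regression(x, y):
    """Ordinary least squares for y = b0 + b1 * x, returning the
    coefficient estimates and the coefficient of determination R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((u - mx) * (w - my) for u, w in zip(x, y))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    ss_res = sum((w - (b0 + b1 * u)) ** 2 for u, w in zip(x, y))
    ss_tot = sum((w - my) ** 2 for w in y)
    return b0, b1, 1.0 - ss_res / ss_tot
```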

  9. A generalized theoretical model for "continuous particle separation in a microchannel having asymmetrically arranged multiple branches"

    DEFF Research Database (Denmark)

    Andersen, Karsten Brandt; Levinsen, Simon; Svendsen, Winnie Edith;

    2009-01-01

    In this article we present a generalized theoretical model for the continuous separation of particles using the pinched flow fractionation method. So far the theoretical models have not been able to predict the separation of particles without the use of correction factors. In this article we pres...

  10. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...
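
    The one-step-ahead predictions that a prediction-error criterion scores can be illustrated with a scalar Kalman filter; the state-space realization from delayed transfer functions in the paper is not reproduced, and the system below is a minimal stand-in.

```python
def kalman_predictions(y, a, c, q, r, x0=0.0, p0=1.0):
    """One-step-ahead output predictions from a scalar Kalman filter for
    x_{k+1} = a*x_k + w_k (var q),  y_k = c*x_k + v_k (var r).
    Returns the sequence of predicted outputs c*x_{k|k-1}."""
    x, P, preds = x0, p0, []
    for yk in y:
        preds.append(c * x)            # predict y_k before observing it
        S = c * P * c + r              # innovation variance
        K = P * c / S                  # Kalman gain
        x = x + K * (yk - c * x)       # measurement update
        P = (1.0 - K * c) * P
        x = a * x                      # time update
        P = a * P * a + q
    return preds
```

For a constant observed signal, the predictions start at the prior mean and converge to the signal's value.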

  11. Robust Continuous-time Generalized Predictive Control for Large Time-delay System

    Institute of Scientific and Technical Information of China (English)

    WEI Huan; PAN Li-deng; ZHEN Xin-ping

    2008-01-01

    A simple delay-predictive continuous-time generalized predictive controller with filter (F-SDCGPC) is proposed. By using a modified predictive output signal and cost function, the delay compensator is incorporated in the control law with an observer structure, and a filter is added to enhance robustness. The design of the filter does not affect the nominal set-point response, and it is more flexible than the design of the observer polynomial. The analysis and simulation results show that the F-SDCGPC has better robustness than the observer structure without a filter when a large time-delay error is considered.

  12. Provably Safe and Robust Learning-Based Model Predictive Control

    CERN Document Server

    Aswani, Anil; Sastry, S Shankar; Tomlin, Claire

    2011-01-01

    Controller design for systems typically faces a trade-off between robustness and performance, and the reliability of linear controllers has caused many control practitioners to focus on the former. However, there is a renewed interest in improving system performance to deal with growing energy and pollution constraints. This paper describes a learning-based model predictive control (MPC) scheme. The MPC provides deterministic guarantees on robustness and safety, and the learning is used to identify richer models of the system to improve controller performance. Our scheme uses a linear model with bounds on its uncertainty to construct invariant sets which help to provide the guarantees, and it can be generalized to other classes of models and to pseudo-spectral methods. This framework allows us to handle state and input constraints and optimize system performance with respect to a cost function. The learning occurs through the use of an oracle which returns the value and gradient of unmodeled dynamics at discr...

  13. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
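
    A two-compartment infected-cell structure of the kind described (an "eclipse" class transitioning into virus production) can be sketched as a forward-Euler integration of the standard viral dynamics equations. All parameter values below are illustrative round numbers, not values fitted to the SIV data.

```python
def simulate_eclipse_model(T0=1e6, V0=10.0, beta=2e-7, k=1.0, delta=0.5,
                           p=100.0, c=5.0, dt=0.01, days=10.0):
    """Forward-Euler integration of the standard viral dynamics model
    extended with an eclipse (not-yet-producing) infected class E:
      dT/dt = -beta*T*V
      dE/dt =  beta*T*V - k*E
      dI/dt =  k*E - delta*I
      dV/dt =  p*I - c*V
    Parameters are illustrative placeholders. Returns final (T, E, I, V)."""
    T, E, I, V = T0, 0.0, 0.0, V0
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dE = beta * T * V - k * E
        dI = k * E - delta * I
        dV = p * I - c * V
        T += dT * dt
        E += dE * dt
        I += dI * dt
        V += dV * dt
    return T, E, I, V
```

With these (supercritical) parameters the viral load grows from the inoculum while target cells are gradually consumed.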

  14. A general approach for predicting the filtration of soft and permeable colloids: the milk example.

    Science.gov (United States)

    Bouchoux, Antoine; Qu, Peng; Bacchin, Patrice; Gésan-Guiziou, Geneviève

    2014-01-14

    Membrane filtration operations (ultra-, microfiltration) are now extensively used for concentrating or separating an ever-growing variety of colloidal dispersions. However, the phenomena that determine the efficiency of these operations are not yet fully understood. This is especially the case when dealing with colloids that are soft, deformable, and permeable. In this paper, we propose a methodology for building a model that is able to predict the performance (flux, concentration profiles) of the filtration of such objects in relation to the operating conditions. This is done by focusing on the case of milk filtration, all experiments being performed with dispersions of milk casein micelles, which are a sort of "natural" colloidal microgels. Using this example, we develop the general idea that a filtration model can always be built for a given colloidal dispersion as long as this dispersion has been characterized in terms of osmotic pressure Π and hydraulic permeability k. For soft and permeable colloids, the major issue is that the permeability k cannot be assessed in a trivial way, as it can be for hard-sphere colloids. To get around this difficulty, we follow two distinct approaches to actually measure k: a direct approach, involving osmotic stress experiments, and a reverse-calculation approach, which consists of estimating k through well-controlled filtration experiments. The resulting filtration model is then validated against experimental measurements obtained from combined milk filtration/SAXS experiments. We also give precise examples of how the model can be used, as well as a brief discussion on the possible universality of the approach presented here.
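
    The driving-force balance underlying models of this kind can be sketched as a Darcy-type flux in which the osmotic pressure of the accumulated colloids opposes the applied pressure. The symbols and numbers below are illustrative; the paper's actual model uses concentration-dependent Π(φ) and k(φ) laws measured for casein micelles.

```python
def permeate_flux(delta_P, delta_Pi, k, eta=1e-3, thickness=1e-4):
    """Darcy-type flux through a concentrated colloid layer:
      J = k * (delta_P - delta_Pi) / (eta * thickness)
    with permeability k (m^2), viscosity eta (Pa.s), layer thickness (m),
    applied pressure delta_P and osmotic counter-pressure delta_Pi (Pa).
    Illustrative placeholder form, not the paper's full model."""
    return k * (delta_P - delta_Pi) / (eta * thickness)
```

The flux vanishes when the osmotic pressure of the concentrated layer balances the applied pressure, and grows as the osmotic counter-pressure decreases.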

  15. Toward a general psychological model of tension and suspense

    Directory of Open Access Journals (Sweden)

    Moritz eLehne

    2015-02-01

    Full Text Available Tension and suspense are powerful emotional experiences that occur in a wide variety of contexts (e.g., in music, film, literature, and everyday life). The omnipresence of tension experiences suggests that they build on very basic cognitive and affective mechanisms. However, the psychological underpinnings of tension experiences remain largely unexplained, and tension and suspense are rarely discussed from a general, domain-independent perspective. In this paper, we argue that tension experiences in different contexts (e.g., musical tension or suspense in a movie) build on the same underlying psychological processes. We discuss key components of tension experiences and propose a domain-independent model of tension and suspense. According to this model, tension experiences originate from states of conflict, instability, dissonance, or uncertainty that trigger predictive processes directed at future events of emotional significance. We also discuss possible neural mechanisms underlying experiences of tension. The model provides a theoretical framework that can inform future empirical research on tension phenomena.

  16. A generalized model for estimating the energy density of invertebrates

    Science.gov (United States)

    James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.

    2012-01-01

    Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r2 = 0.96), with predictions comparable to those from bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
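    As a sketch of the kind of model the abstract describes, a regression of ED on pDM can be illustrated on synthetic data. All numbers below (coefficients, units, noise level) are invented for illustration; the paper's actual fitted model is not reproduced here.

```python
import numpy as np

# Hypothetical illustration: predict invertebrate energy density (ED)
# from the proportion of dry-to-wet mass (pDM) by ordinary least squares.
# The "true" coefficients and the data are made up.
rng = np.random.default_rng(0)
pdm = rng.uniform(0.05, 0.40, size=200)          # proportion dry-to-wet mass
true_slope, true_intercept = 22_000.0, -500.0    # assumed, illustration only
ed = true_intercept + true_slope * pdm + rng.normal(0, 100, size=200)

# Fit ED = a + b * pDM by least squares.
X = np.column_stack([np.ones_like(pdm), pdm])
coef, *_ = np.linalg.lstsq(X, ed, rcond=None)
intercept, slope = coef

# Coefficient of determination (r^2); residuals are mean-zero with intercept.
resid = ed - X @ coef
r2 = 1 - resid.var() / ed.var()
print(f"slope={slope:.0f}, intercept={intercept:.0f}, r2={r2:.3f}")
```

    The appeal of such a model, as the abstract notes, is that pDM is cheap to measure, so a single well-validated regression can replace many bomb-calorimetry runs.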

  17. Invariance Properties for General Diagnostic Classification Models

    Science.gov (United States)

    Bradshaw, Laine P.; Madison, Matthew J.

    2016-01-01

    In item response theory (IRT), the invariance property states that item parameter estimates are independent of the examinee sample, and examinee ability estimates are independent of the test items. While this property has long been established and understood by the measurement community for IRT models, the same cannot be said for diagnostic…

  18. A General Thermal Equilibrium Discharge Flow Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO; Min-fu; ZHANG; Dong-xu; LV; Yu-feng

    2015-01-01

    Under isentropic and thermal equilibrium assumptions, a discharge flow model was derived, which unified the rules of normal temperature water discharge, high temperature and high pressure water discharge, two-phase critical flow, saturated steam and superheated steam critical

  19. A generalized network model for polymeric liquids

    NARCIS (Netherlands)

    Jongschaap, R.J.J.; Kamphuis, H.; Doeksen, D.K.

    1983-01-01

    A kinetic model was developed for relating the molecular structure and the rheological behaviour of polymer-like materials in which bonds are being created and broken. In particular, the stress contribution of molecules that are not part of the network was taken into account. In two limiting cases

  20. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

    Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient

  1. Convex foundations for generalized MaxEnt models

    Science.gov (United States)

    Frongillo, Rafael; Reid, Mark D.

    2014-12-01

    We present an approach to maximum entropy models that highlights the convex geometry and duality of generalized exponential families (GEFs) and their connection to Bregman divergences. Using our framework, we are able to resolve a puzzling aspect of the bijection of Banerjee and coauthors between classical exponential families and what they call regular Bregman divergences. Their regularity condition rules out all but Bregman divergences generated from log-convex generators. We recover their bijection and show that a much broader class of divergences correspond to GEFs via two key observations: 1) Like classical exponential families, GEFs have a "cumulant" C whose subdifferential contains the mean: E_{o~p_θ}[φ(o)] ∈ ∂C(θ); 2) Generalized relative entropy is a C-Bregman divergence between parameters: D_F(p_θ, p_θ') = D_C(θ, θ'), where D_F becomes the KL divergence for F = -H. We also show that every incomplete market with cost function C can be expressed as a complete market, where the prices are constrained to be a GEF with cumulant C. This provides an entirely new interpretation of prediction markets, relating their design back to the principle of maximum entropy.
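    The second observation can be checked numerically in the classical special case the abstract mentions (F = -H): the Bregman divergence generated by the negative Shannon entropy is exactly the KL divergence. A minimal sketch with arbitrary example distributions (not taken from the paper):

```python
import numpy as np

# Bregman divergence: D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>.
# With F = -H (negative Shannon entropy), D_F(p, q) = KL(p || q),
# because the linear correction terms cancel when p and q both sum to 1.

def neg_entropy(p):
    return float(np.sum(p * np.log(p)))

def grad_neg_entropy(q):
    return np.log(q) + 1.0

def bregman(F, gradF, p, q):
    return F(p) - F(q) - float(gradF(q) @ (p - q))

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
d_bregman = bregman(neg_entropy, grad_neg_entropy, p, q)
d_kl = kl(p, q)
print(d_bregman, d_kl)
```

    The identity holds because Σ(p − q) = 0 on the probability simplex, which kills the extra term contributed by the gradient of the generator.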

  2. A Model for Predicting Thermomechanical Response of Large Space Structures.

    Science.gov (United States)

    1984-06-01

    …[94] for predicting the buckling loads associated with general instability of beam-like lattice trusses. Bazant and Christensen [95] present a

  3. Comparative model accuracy of a data-fitted generalized Aw-Rascle-Zhang model

    CERN Document Server

    Fan, Shimao; Seibold, Benjamin

    2013-01-01

    The Aw-Rascle-Zhang (ARZ) model can be interpreted as a generalization of the Lighthill-Whitham-Richards (LWR) model, possessing a family of fundamental diagram curves, each of which represents a class of drivers with a different empty road velocity. A weakness of this approach is that different drivers possess vastly different densities at which traffic flow stagnates. This drawback can be overcome by modifying the pressure relation in the ARZ model, leading to the generalized Aw-Rascle-Zhang (GARZ) model. We present an approach to determine the parameter functions of the GARZ model from fundamental diagram measurement data. The predictive accuracy of the resulting data-fitted GARZ model is compared to other traffic models by means of a three-detector test setup, employing two types of data: vehicle trajectory data, and sensor data. This work also considers the extension of the ARZ and the GARZ models to models with a relaxation term, and conducts an investigation of the optimal relaxation time.
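    For reference, the ARZ model that the abstract generalizes is commonly written as follows (this is the standard textbook notation, assumed here rather than quoted from the paper):

```latex
\begin{aligned}
&\partial_t \rho + \partial_x(\rho u) = 0, \\
&\partial_t\bigl(u + h(\rho)\bigr) + u\,\partial_x\bigl(u + h(\rho)\bigr) = 0,
\end{aligned}
```

    where ρ is the traffic density, u the velocity, and h(ρ) the "pressure" (hesitation) function. The GARZ modification described above replaces the single curve h(ρ) with a family of curves parameterized by the driver-specific empty-road velocity, so that each driver class follows its own fundamental diagram.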

  4. On A General Frame For Macroeconomic Modelling

    Directory of Open Access Journals (Sweden)

    Emil DINGA

    2012-03-01

    Full Text Available The purpose of the research project was to identify the methodological bases for the aggregate description of the Romanian national economy, both logically and in terms of the sources of empirical data for modelling. The specific objectives of the project were: (a) description of the economic markets in correlation with the logical description of the economic behaviours; (b) determination of the sectoral blocks of the Romanian economy, on the basis of the homogeneity of the economic activity and behaviour; (c) association of the sectoral blocks with the national accounts, so as to ensure the sources of empirical data for the calibration and utilisation of the model; (d) association of the sectoral blocks with the economic markets; (e) association of the national accounts with the economic markets; (f) identification of the classes of interactions between the determined sectoral blocks.

  5. Generalized Quadratic Linearization of Machine Models

    OpenAIRE

    Parvathy Ayalur Krishnamoorthy; Kamaraj Vijayarajan; Devanathan Rajagopalan

    2011-01-01

    In the exact linearization of involutive nonlinear system models, the issue of singularity needs to be addressed in practical applications. The approximate linearization technique due to Krener, based on Taylor series expansion, apart from being applicable to noninvolutive systems, allows the singularity issue to be circumvented. But approximate linearization, while removing terms up to certain order, also introduces terms of higher order than those removed into the system. To overcome th...

  6. Predictive Ability of the General Ability Index (GAI) versus the Full Scale IQ among Gifted Referrals

    Science.gov (United States)

    Rowe, Ellen W.; Kingsley, Jessica M.; Thompson, Dawna F.

    2010-01-01

    The General Ability Index (GAI) is a composite ability score for the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) that minimizes the impact of tasks involving working memory and processing speed. The goal of the current study was to compare the degree to which the Full Scale IQ (FSIQ) and the GAI predict academic achievement…

  7. Predictive value of the official cancer alarm symptoms in general practice

    DEFF Research Database (Denmark)

    Krasnik Huggenberger, Ivan; Andersen, John Sahl

    2015-01-01

    Introduction: The objective of this study was to investigate the evidence for positive predictive value (PPV) of alarm symptoms and combinations of symptoms for colorectal cancer, breast cancer, prostate cancer and lung cancer in general practice. Methods: This study is based on a literature search...

  8. Comparison of H-infinity control and generalized predictive control for a laser scanner system

    DEFF Research Database (Denmark)

    Ordys, A.W.; Stoustrup, Jakob; Smillie, I.

    2000-01-01

    This paper describes tests performed on a laser scanner system to assess the feasibility of H-infinity control and generalized predictive control design techniques in achieving the required performance in a trajectory following problem. The two methods are compared with respect to achieved scan times...

  9. Decadal prediction skill using a high-resolution climate model

    Science.gov (United States)

    Monerie, Paul-Arthur; Coquart, Laure; Maisonnave, Éric; Moine, Marie-Pierre; Terray, Laurent; Valcke, Sophie

    2017-02-01

    The ability of a high-resolution coupled atmosphere-ocean general circulation model (with a horizontal resolution of a quarter of a degree in the ocean and of about 0.5° in the atmosphere) to predict the annual means of temperature, precipitation, sea-ice volume and extent is assessed based on initialized hindcasts over the 1993-2009 period. Significant skill in predicting sea surface temperatures is obtained, especially over the North Atlantic, the tropical Atlantic and the Indian Ocean. Sea-ice extent and volume are also reasonably predicted in winter (March) and summer (September). The model skill is mainly due to the external forcing associated with well-mixed greenhouse gases. A decrease in the global warming rate associated with a negative phase of the Pacific Decadal Oscillation is simulated by the model over a suite of 10-year periods when initialized from starting dates between 1999 and 2003. The model's ability to predict regional change is investigated by focusing on the mid-1990s Atlantic Ocean subpolar gyre warming. The model simulates the North Atlantic warming associated with a meridional heat transport increase, a strengthening of the North Atlantic current and a deepening of the mixed layer over the Labrador Sea. The atmosphere plays a role in the warming through a modulation of the North Atlantic Oscillation: a negative sea level pressure anomaly, located south of the subpolar gyre, is associated with a wind speed decrease over the subpolar gyre. This leads to reduced oceanic heat loss and favors a northward displacement of anomalously warm and salty subtropical water, which both contribute to the subpolar gyre warming. We finally conclude that the subpolar gyre warming is mainly triggered by ocean dynamics, with a possible contribution of atmospheric circulation favoring its persistence.

  10. Prediction of survival with alternative modeling techniques using pseudo values.

    Directory of Open Access Journals (Sweden)

    Tjeerd van der Ploeg

    Full Text Available BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM) and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root of the mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) to predict 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising
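    The jackknife pseudo-value construction underlying this study can be sketched as follows: for an estimator θ̂ of survival at time t, the pseudo value for subject i is n·θ̂(t) − (n−1)·θ̂₋ᵢ(t). For simplicity the sketch below uses the empirical survival function (i.e., no censoring), in which case the pseudo values reduce exactly to the indicators 1(Tᵢ > t); the study itself would plug in a censoring-aware estimator such as Kaplan-Meier. The data are synthetic.

```python
import numpy as np

# Jackknife pseudo values turn a censored survival outcome into one
# numeric value per subject, usable by generic regression/ML methods.

def survival_at(times, t):
    """Empirical survival function S(t) = fraction of subjects with T > t."""
    times = np.asarray(times, dtype=float)
    return float(np.mean(times > t))

def pseudo_values(times, t):
    times = np.asarray(times, dtype=float)
    n = len(times)
    s_full = survival_at(times, t)
    pv = np.empty(n)
    for i in range(n):
        s_loo = survival_at(np.delete(times, i), t)  # leave-one-out estimate
        pv[i] = n * s_full - (n - 1) * s_loo
    return pv

times = np.array([12.0, 45.0, 60.0, 72.0, 90.0, 30.0])
pv = pseudo_values(times, 60.0)
print(pv)
```

    With censoring, the same leave-one-out formula applied to the Kaplan-Meier estimator yields pseudo values that are no longer just 0/1 indicators, which is precisely what lets censored observations contribute to models like RPART or SVM.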

  11. Estimating the magnitude of prediction uncertainties for the APLE model

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  12. Foreign Exchange Rate Prediction Based on Generalized Exponential Predictor Models with a Weighted Loss Function

    Institute of Scientific and Technical Information of China (English)

    尹伟; 严威; 缪柏其

    2012-01-01

    A generalized exponential predictor model for exchange rate forecasting based on a weighted loss function is proposed. The method first constructs a finite set of exponential predictors using different smoothing parameters. A weighted loss function combining absolute loss and squared loss is then proposed as the variable-selection criterion, under which the exponential predictors are linearly combined to build the generalized exponential predictor model for exchange rate forecasting. Finally, a comparison with existing methods on weekly GBP/USD exchange rate data shows that the proposed model substantially improves forecast precision.
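    A hedged sketch of the general recipe the abstract describes: build several exponential-smoothing predictors with different smoothing parameters, then combine them under a loss mixing absolute and squared error. The smoothing parameters, loss weighting, grid search, and data below are all invented; the paper's exact selection criterion is not reproduced.

```python
import numpy as np

def exp_smooth_forecast(y, alpha):
    """One-step-ahead forecasts from simple exponential smoothing."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def weighted_loss(err, lam=0.5):
    # weighted combination of absolute and squared loss
    return lam * np.mean(np.abs(err)) + (1 - lam) * np.mean(err ** 2)

rng = np.random.default_rng(1)
y = 1.6 + np.cumsum(rng.normal(0, 0.005, size=300))  # synthetic rate series

alphas = [0.2, 0.5, 0.8]
preds = np.array([exp_smooth_forecast(y, a) for a in alphas])

# Grid search over convex combinations of the three predictors.
best = None
for w1 in np.linspace(0, 1, 21):
    for w2 in np.linspace(0, 1 - w1, 21):
        w = np.array([w1, w2, 1 - w1 - w2])
        loss = weighted_loss(w @ preds - y)
        if best is None or loss < best[0]:
            best = (loss, w)
print("best weights:", best[1], "loss:", best[0])
```

    Because the search grid contains the three corner weights, the combined predictor can never do worse (on the training loss) than the best single exponential predictor.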

  13. A Unified Model of All Generalizations from the Jones Polynomial

    Institute of Scientific and Technical Information of China (English)

    QIAN Shang-Wu; GU Zhi-Yu

    2001-01-01

    From the basic properties of skein systems, we build a generalized tangle algebra (GTA). The elements of GTA are four basic tangles. There are three operations, which are connection, splicing and scalar multiplication. From GTA we derive two generalized recursion formulae (GRF) and prove the existence of a generalized skein relation which satisfies GRF. The obtained generalized skein relation epitomizes all generalizations from the Jones polynomial and thus forms a unified model. Two important topological parameters, twisting measure and loop values, appear explicitly in the expressions of the unified model, and this fact greatly simplifies the operations.

  14. Utility of the Montreal Cognitive Assessment and Mini-Mental State Examination in predicting general intellectual abilities.

    Science.gov (United States)

    Sugarman, Michael A; Axelrod, Bradley N

    2014-09-01

    To determine whether scores from 2 commonly used cognitive screening tests can help predict general intellectual functioning in older adults. Cutoff scores for determining cognitive impairment have been validated for both the Montreal Cognitive Assessment (MoCA) and the Mini-Mental State Examination (MMSE). However, less is known about how the 2 measures relate to general intellectual functioning as measured by the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). A sample of 186 older adults referred for neuropsychological assessment completed the MoCA, MMSE, and WAIS-IV. Regression equations determined how accurately the screening measures could predict the WAIS-IV Full Scale Intelligence Quotient (FSIQ). We also determined how predictive the MoCA and MMSE were when combined with 2 premorbid estimates of FSIQ: the Test of Premorbid Functioning (TOPF) (a reading test of phonetically irregular words) and a predicted TOPF score based on demographic variables. MoCA and MMSE both correlated moderately with WAIS-IV FSIQ. Hierarchical regression models containing the MoCA or MMSE combined with TOPF scores accounted for 58% and 49%, respectively, of the variance in obtained FSIQ. Both regression equations accurately estimated FSIQ to within 10 points in >75% of the sample. Both the MoCA and MMSE provide reasonable estimates of FSIQ. Prediction improves when these measures are combined with other estimates of FSIQ. We provide 4 equations designed to help clinicians interpret these screening measures.
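    The hierarchical-regression logic described above (screening score entered first, premorbid estimate added second) can be illustrated on synthetic data. All scores, effect sizes, and noise levels below are invented; only the R² increment pattern is the point.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(42)
n = 186  # matches the sample size quoted in the abstract
screen = rng.normal(26, 3, n)    # hypothetical MoCA/MMSE-like screening score
topf = rng.normal(100, 15, n)    # hypothetical premorbid (TOPF-like) estimate
fsiq = 20 + 1.5 * screen + 0.4 * topf + rng.normal(0, 8, n)  # synthetic outcome

r2_step1 = r_squared(screen.reshape(-1, 1), fsiq)                 # step 1
r2_step2 = r_squared(np.column_stack([screen, topf]), fsiq)       # step 2
print(f"R2 screening only: {r2_step1:.2f}; with premorbid estimate: {r2_step2:.2f}")
```

    In-sample R² cannot decrease when a predictor is added, so the informative comparison is the size of the increment, mirroring the study's finding that combining the screening measure with a premorbid estimate improves prediction of FSIQ.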

  15. The General Optimal Market Area Model

    Science.gov (United States)

    1988-06-01

    Spatial Competition, American Economic Review 68 (1978) 896. [19] G.M. Carter, J.M. Chaiken, and E. Ignall, Response Areas for Two Emergency Units… [25] B.C. Eaton and R.G. Lipsey, The Non-Uniqueness of Equilibrium in the Löschian Location Model, American Economic Review 66 (1976) 77. [26] B.C. … 4 (1972) 154. [86] S. Valavanis, Lösch on Location, American Economic Review 45 (1955) 637. [87] B. Von Hohenbalken and D.S. West, Manhattan versus

  16. Prediction of Catastrophes: an experimental model

    CERN Document Server

    Peters, Randall D; Pomeau, Yves

    2012-01-01

    Catastrophes of all kinds can be roughly defined as short duration-large amplitude events following and followed by long periods of "ripening". Major earthquakes surely belong to the class of 'catastrophic' events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it could be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in both of two dynamical models. The first is an "abstract" model in which a time dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...

  17. Factors influencing protein tyrosine nitration--structure-based predictive models.

    Science.gov (United States)

    Bayden, Alexander S; Yakovlev, Vasily A; Graves, Paul R; Mikkelsen, Ross B; Kellogg, Glen E

    2011-03-15

    Models for exploring tyrosine nitration in proteins have been created based on 3D structural features of 20 proteins for which high-resolution X-ray crystallographic or NMR data are available and for which nitration of 35 total tyrosines has been experimentally proven under oxidative stress. Factors suggested in previous work to enhance nitration were examined with quantitative structural descriptors. The role of neighboring acidic and basic residues is complex: for the majority of tyrosines that are nitrated the distance to the heteroatom of the closest charged side chain corresponds to the distance needed for suspected nitrating species to form hydrogen bond bridges between the tyrosine and that charged amino acid. This suggests that such bridges play a very important role in tyrosine nitration. Nitration is generally hindered for tyrosines that are buried and for those tyrosines for which there is insufficient space for the nitro group. For in vitro nitration, closed environments with nearby heteroatoms or unsaturated centers that can stabilize radicals are somewhat favored. Four quantitative structure-based models, depending on the conditions of nitration, have been developed for predicting site-specific tyrosine nitration. The best model, relevant for both in vitro and in vivo cases, predicts 30 of 35 tyrosine nitrations (positive predictive value) and has a sensitivity of 60/71 (11 false positives). Copyright © 2010 Elsevier Inc. All rights reserved.

  18. Predictive modeling of low solubility semiconductor alloys

    Science.gov (United States)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.

  19. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  20. Leptogenesis in minimal predictive seesaw models

    Science.gov (United States)

    Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.

    2015-10-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to ( ν e , ν μ , ν τ ) proportional to (0, 1, 1) and (1, n, n - 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A 4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.

  1. Fractional-Order Generalized Predictive Control: Application for Low-Speed Control of Gasoline-Propelled Cars

    Directory of Open Access Journals (Sweden)

    M. Romero

    2013-01-01

    Full Text Available There is an increasing interest in applying fractional calculus to control theory, generalizing classical control strategies such as the PID controller and developing new ones, with the intention of taking advantage of the characteristics supplied by this mathematical tool for the controller definition. In this work, the fractional generalization of the successful and widespread control strategy known as model predictive control is applied to drive a gasoline-propelled vehicle autonomously at low speeds. The vehicle is a Citroën C3 Pluriel that was modified to act on the throttle and brake pedals. Its highly nonlinear dynamics are an excellent test bed for applying the beneficial characteristics of the fractional predictive formulation to compensate for unmodeled dynamics and external disturbances.

  2. Nostradamus 2014 prediction, modeling and analysis of complex systems

    CERN Document Server

    Suganthan, Ponnuthurai; Chen, Guanrong; Snasel, Vaclav; Abraham, Ajith; Rössler, Otto

    2014-01-01

    The prediction of the behavior of complex systems and the analysis and modeling of their structure is a vitally important problem in engineering, economics and generally in science today. Examples of such systems can be seen in the world around us (including our bodies) and of course in almost every scientific discipline, including such "exotic" domains as the earth's atmosphere, turbulent fluids, economics (exchange rates and stock markets), population growth, physics (control of plasma), information flow in social networks and its dynamics, chemistry and complex networks. To understand such complex dynamics, which often exhibit strange behavior, and to use it in research or industrial applications, it is paramount to create models of it. For this purpose there exists a rich spectrum of methods, from classical ones such as ARMA models or the Box-Jenkins method to modern ones like evolutionary computation, neural networks, fuzzy logic, geometry and deterministic chaos, amongst others. This proceedings book is a collection of accepted ...

  3. Modeling electrokinetics in ionic liquids: General

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chao [Physical and Computational Science Directorate, Pacific Northwest National Laboratory, Richland WA USA; Bao, Jie [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland WA USA; Pan, Wenxiao [Department of Mechanical Engineering, University of Wisconsin-Madison, Madison WI USA; Sun, Xin [Physical and Computational Science Directorate, Pacific Northwest National Laboratory, Richland WA USA

    2017-04-07

    Using direct numerical simulations, we provide a thorough study on the electrokinetics of ionic liquids. In particular, the modified Poisson-Nernst-Planck (MPNP) equations are solved to capture the crowding and overscreening effects that are characteristic of an ionic liquid. For modeling electrokinetic flows in an ionic liquid, the MPNP equations are coupled with the Navier-Stokes equations to study the coupling of ion transport, hydrodynamics, and electrostatic forces. Specifically, we consider the ion transport between two parallel plates, charging dynamics in a 2D straight-walled pore, electro-osmotic flow in a nano-channel, electroconvective instability on a plane ion-selective surface, and electroconvective flow on a curved ion-selective surface. We discuss how the crowding and overscreening effects and their interplay affect the electrokinetic behaviors of ionic liquids in these application problems.

  4. Active generalized predictive control of turbine tip clearance for aero-engines

    Institute of Scientific and Technical Information of China (English)

    Peng Kai; Fan Ding; Yang Fan; Fu Qiang; Li Yong

    2013-01-01

    Active control of turbine blade tip clearance continues to be a concern in the design and control of gas turbines. Ever increasing demands for improved efficiency and higher operating temperatures require more stringent tolerances on turbine tip clearance. In this paper, a turbine tip clearance control apparatus and a model of turbine tip clearance are proposed; an implicit active generalized predictive control (GPC), with auto-regressive (AR) error modification and fuzzy adjustment on the control horizon, is presented, as well as a quantitative analysis method of the robust perturbation radius of the system. The active clearance control (ACC) of aero-engine turbine tip clearance is evaluated in a lapse-rate take-off transient, along with a comparative and quantitative analysis of the stability and robustness of the active tip clearance control system. The results show that the resultant active tip clearance control system with the improved GPC has favorable steady-state and dynamic performance and benefits of increased efficiency, reduced specific fuel consumption, and additional service life.

  5. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  6. Reliable prediction of electric spark sensitivity of nitramines: a general correlation with detonation pressure.

    Science.gov (United States)

    Keshavarz, Mohammad Hossein; Pouretedal, Hamid Reza; Semnani, Abolfazl

    2009-08-15

    For nitramines, a general correlation has been introduced to predict electric spark sensitivity through detonation pressure. The new method uses the maximum obtainable detonation pressure as a fundamental relation, which can be corrected for nitramines with specific molecular structures. There is no need to use the crystal density and heat of formation of nitramine explosives for predicting detonation pressure and electric spark sensitivity. The predicted electric spark sensitivities are compared with results calculated on the basis of quantum mechanical computations for those nitramines to which the latter can be applied. The root mean square (rms) deviations from experiment for the new method and for the predicted results of the complicated quantum mechanical method are 1.18 and 3.49 J, respectively.

  7. A generalized model via random walks for information filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)

    2016-08-06

    A simple general mechanism may lurk beneath the collaborative filtering and interdisciplinary physics approaches that have been successfully applied to online e-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of random walks on bipartite networks. By taking degree information into account, the generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and even far-reaching extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information in the random-walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to improve the precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing random-walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with hybrid degree information improves the precision of the recommendation.
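The degree-weighted random walk the abstract describes can be illustrated with the standard mass-diffusion/heat-conduction hybrid on a user-object bipartite network. The sketch below is a generic illustration of that family of methods, not the paper's exact generalized model; the function and parameter names are mine.

```python
import numpy as np

def hybrid_scores(A, lam=0.5):
    """Random-walk recommendation on a user-object bipartite network.

    A: user x object adjacency matrix. lam tunes how object degree
    enters the normalisation (lam=1 gives pure mass diffusion).
    Illustrative sketch only.
    """
    ku = A.sum(axis=1)                      # user degrees
    ko = A.sum(axis=0)                      # object degrees
    # M[i, j] = sum over users l of A[l, i] * A[l, j] / k(user l)
    M = (A / ku[:, None]).T @ A
    # hybrid degree normalisation over source and target objects
    W = M / (ko[:, None] ** (1 - lam) * ko[None, :] ** lam)
    return A @ W.T                          # object scores per user

# toy example: 3 users, 4 objects
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
scores = hybrid_scores(A, lam=0.5)
```

Objects a user has already collected keep high scores; in practice those entries are masked out and the remaining objects are ranked by score.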

  8. A general and simple method for obtaining R2 from generalized linear mixed‐effects models

    National Research Council Canada - National Science Library

    Nakagawa, Shinichi; Schielzeth, Holger; O'Hara, Robert B

    2013-01-01

    The use of both linear and generalized linear mixed‐effects models ( LMM s and GLMM s) has become popular not only in social and medical sciences, but also in biological sciences, especially in the field of ecology and evolution...
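The truncated abstract refers to the marginal and conditional R² that this paper popularised. Given variance components from a fitted mixed model, the computation itself is elementary (variable names here are illustrative):

```python
def r2_glmm(var_fixed, var_random, var_resid):
    """Marginal and conditional R2 in the spirit of Nakagawa &
    Schielzeth (2013), from the variance components of a fitted
    (G)LMM: variance of the fixed-effect predictions, summed
    random-effect variances, and residual variance."""
    total = var_fixed + var_random + var_resid
    r2_marginal = var_fixed / total                    # fixed effects only
    r2_conditional = (var_fixed + var_random) / total  # fixed + random
    return r2_marginal, r2_conditional

# hypothetical variance components from a fitted LMM
m, c = r2_glmm(var_fixed=2.0, var_random=1.0, var_resid=1.0)
# m = 0.5, c = 0.75
```

For GLMMs the residual term is replaced by a distribution-specific variance, which is where most of the paper's care goes.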

  9. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing ... on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...

  10. Quality guaranteed aggregation based model predictive control and stability analysis

    Institute of Scientific and Technical Information of China (English)

    LI DeWei; XI YuGeng

    2009-01-01

    The input aggregation strategy can reduce the online computational burden of a model predictive controller, but in general an aggregation-based MPC controller may suffer poor control quality. Therefore, a new concept, equivalent aggregation, is proposed to guarantee the control quality of aggregation-based MPC. Within the general framework of linear input aggregation, design methods for equivalent aggregation are developed for unconstrained and terminal-zero-constrained MPC, which guarantee that the actual control inputs are exactly equal to those of the original MPC. For constrained MPC, quasi-equivalent aggregation strategies are also discussed, aiming to make the difference between the control inputs of the aggregation-based MPC and the original MPC as small as possible. Stability conditions are given for the quasi-equivalent aggregation-based MPC as well.
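Input aggregation can be made concrete on a toy unconstrained MPC quadratic program: restrict the horizon-length input sequence to U = S v with a tall aggregation matrix S and solve the smaller problem in v. The sketch below uses illustrative dimensions and a hypothetical "hold the last move" aggregation, not the paper's equivalent-aggregation construction; it shows why the reduced problem is cheaper and why quality degrades unless the aggregation subspace contains the original optimum.

```python
import numpy as np

np.random.seed(0)
N = 10                                    # prediction horizon
H = np.eye(N) + 0.1 * np.ones((N, N))     # positive-definite QP Hessian
g = np.random.randn(N)                    # linear term of the QP

# original MPC: min_U 0.5 U'HU + g'U  ->  U* = -H^{-1} g
U_full = -np.linalg.solve(H, g)

# aggregation: first 3 moves free, remaining moves repeat the third
S = np.vstack([np.eye(3), np.tile(np.eye(3)[-1], (N - 3, 1))])
v = -np.linalg.solve(S.T @ H @ S, S.T @ g)   # 3-variable reduced problem
U_agg = S @ v

cost = lambda U: 0.5 * U @ H @ U + g @ U
```

The aggregated cost is never below the original optimum; equivalent aggregation is exactly the case where the gap is zero because the original optimizer lies in the range of S.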

  11. Committee neural network model for rock permeability prediction

    Science.gov (United States)

    Bagheripour, Parisa

    2014-05-01

    A quantitative formulation between conventional well log data and rock permeability, undoubtedly the most critical parameter of a hydrocarbon reservoir, would be a potent tool for solving problems in almost all tasks of petroleum engineering. The present study proposes a novel approach in the quest for a high-accuracy method of permeability prediction. In the first stage, overlap among the conventional well log data (inputs) was eliminated by means of principal component analysis (PCA). Subsequently, rock permeability was predicted from the extracted PCs using a multi-layer perceptron (MLP), a radial basis function (RBF) network, and a generalized regression neural network (GRNN). Finally, a committee neural network (CNN) was constructed by virtue of a genetic algorithm (GA) to enhance the precision of the ultimate permeability prediction. The permeability values derived from the MLP, RBF, and GRNN models were used as inputs to the CNN. The proposed CNN combines the results of the different ANNs to reap the advantages of all models and consequently produces more accurate estimates. The GA, embedded in the structure of the CNN, assigns each ANN a weight factor that reflects its relative contribution to the overall prediction of rock permeability from the PCs of conventional well logs. The proposed methodology was applied to the Kangan and Dalan Formations, the major carbonate reservoir rocks of the South Pars Gas Field, Iran. A group of 350 data points was used to establish the CNN model, and a group of 245 data points was employed to assess its reliability. Results showed that the CNN method performed better than the individual intelligent systems acting alone.
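The committee idea is simply a weighted blend of the member networks' outputs, with the weights tuned to minimise error. In the sketch below, three synthetic "experts" with different noise levels stand in for the MLP/RBF/GRNN predictions, and a crude random search stands in for the genetic algorithm; all data and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
y_true = rng.normal(size=200)                 # synthetic target (permeability stand-in)
# three "expert" predictions with different noise levels
P = np.stack([y_true + rng.normal(scale=s, size=200) for s in (0.3, 0.5, 0.8)],
             axis=1)

best_w, best_err = None, np.inf
for _ in range(2000):                         # random search in place of the GA
    w = rng.random(3)
    w /= w.sum()                              # convex combination of experts
    err = np.mean((P @ w - y_true) ** 2)
    if err < best_err:
        best_w, best_err = w, err

committee = P @ best_w                        # committee (CNN-style) prediction
```

Because the experts' errors are partly independent, a tuned convex combination typically beats each member alone, which is the rationale the abstract gives for the committee.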

  12. North Atlantic thermohaline circulation predictability in a coupled ocean-atmosphere model

    CERN Document Server

    Griffies, S M; Griffies, Stephen M.; Bryan, Kirk

    1995-01-01

    Predictability of the North Atlantic thermohaline circulation (THC) variability as simulated in the GFDL coupled ocean-atmosphere general circulation model is established using a set of ensemble experiments. The ensembles consist of identical oceanic initial conditions underneath a model atmosphere chosen randomly from the model climatology. This experimental design is based on the separation of time scales present in the model, which motivates the assumption that the predictability deduced from these ensembles provides an upper limit to the model's THC predictability. The climatology is taken from a multi-century model integration whose THC variability has power concentrated at the 40-60 year time scale. A linear stochastic perspective is shown to be generally consistent with the ensemble statistics. The linear theory suggests a natural measure of ensemble predictability as the time at which the ensemble variance becomes a subjectively defined fraction (0.5 used here) of the climatological variance. It is furth...

  13. A General Low-Cost Indirect Branch Prediction Using Target Address Pointers

    Institute of Scientific and Technical Information of China (English)

    谢子超; 佟冬; 黄明凯

    2014-01-01

    Energy efficiency has become the first design metric in chip development. To pursue higher energy efficiency, processor architects should reduce or eliminate unnecessary energy dissipation. Indirect-branch prediction has become a performance bottleneck, especially for applications written in object-oriented languages. Previous hardware-based indirect-branch predictors are generally inefficient: they either require significant hardware storage or predict indirect-branch targets slowly. In this paper, we propose an energy-efficient indirect-branch prediction technique called TAP (target address pointer) prediction. Its key idea has two parts: utilizing specific hardware pointers to accelerate the indirect-branch prediction flow, and reusing existing processor components to reduce additional hardware cost and power consumption. When fetching an indirect branch, TAP prediction first obtains specific pointers, called target address pointers, from the conditional branch predictor, and then uses these pointers to generate virtual addresses that index the indirect-branch targets. This technique takes a similar time to dedicated-storage techniques without requiring large amounts of additional storage. Our evaluation shows that TAP prediction with representative state-of-the-art branch predictors improves performance significantly over the baseline processor. Compared with hardware-based indirect-branch predictors, the TAP-Perceptron scheme achieves a performance improvement equivalent to that of an 8K-entry TTC predictor, and also outperforms the VPC predictor.
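The TAP hardware itself cannot be reproduced in a few lines, but the baseline structure it is compared against, a target cache indexed by the branch address hashed with recent branch history, can be sketched in software. All constants and the hash below are illustrative, not the paper's design.

```python
class TargetCache:
    """Toy indirect-branch target cache: table entries are indexed by
    branch PC XOR a folded global history (illustrative sketch)."""

    def __init__(self, bits=10):
        self.size = 1 << bits
        self.table = [None] * self.size
        self.history = 0

    def _index(self, pc):
        return (pc ^ self.history) % self.size

    def predict(self, pc):
        return self.table[self._index(pc)]      # None on a cold miss

    def update(self, pc, target):
        self.table[self._index(pc)] = target
        # fold the resolved target into the global history
        self.history = ((self.history << 2) ^ target) % self.size

# repeating trace of two indirect branches: after warm-up, both hit
tc = TargetCache()
trace = [(0x40, 0x100), (0x80, 0x200)] * 3
hits = 0
for pc, target in trace:
    if tc.predict(pc) == target:
        hits += 1
    tc.update(pc, target)
```

History-based indexing lets one branch hold several targets (one per history), which is what makes such predictors accurate on virtual calls; the storage cost of the dedicated table is what TAP avoids by reusing the conditional predictor.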

  14. A Predictive Model of Geosynchronous Magnetopause Crossings

    CERN Document Server

    Dmitriev, A; Chao, J -K

    2013-01-01

    We have developed a model that predicts whether or not the magnetopause crosses geosynchronous orbit at a given location for a given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF), and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes during which geosynchronous satellites of the GOES and LANL series were located in the magnetosheath (so-called MSh intervals) from 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz, and SYM-H allows describing both the dawn-dusk asymmetry of the magnetopause and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...

  15. Predictive modeling for EBPC in EBDW

    Science.gov (United States)

    Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent

    2009-10-01

    We demonstrate a flow for e-beam proximity correction (EBPC) in e-beam direct write (EBDW) wafer manufacturing processes, covering all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, through e-beam model fitting and proximity effect correction (PEC), to verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility of complementing it with experimental data, and with the goal of preparing EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show results of model-based geometric correction as is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies with respect to short- and long-range effects.
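The proximity effects being corrected are conventionally modelled with a double-Gaussian point spread function (forward-scattering width α, backscattering width β, backscatter ratio η), and the deposited dose is the pattern convolved with that PSF. The sketch below does this with an FFT; all parameter values are illustrative, not those of the paper's calibrated e-beam model.

```python
import numpy as np

def double_gaussian_psf(shape, px, alpha=0.03, beta=3.0, eta=0.7):
    """Classic double-Gaussian e-beam PSF on a pixel grid (um units).
    alpha: forward-scatter width, beta: backscatter width, eta: ratio."""
    ny, nx = shape
    y = (np.arange(ny) - ny // 2) * px
    x = (np.arange(nx) - nx // 2) * px
    r2 = y[:, None] ** 2 + x[None, :] ** 2
    psf = (np.exp(-r2 / alpha ** 2) / alpha ** 2
           + eta * np.exp(-r2 / beta ** 2) / beta ** 2) / (np.pi * (1 + eta))
    return psf / psf.sum()                  # normalise on the grid

pattern = np.zeros((128, 128))
pattern[48:80, 48:80] = 1.0                 # 32x32-pixel square feature
psf = double_gaussian_psf(pattern.shape, px=0.1)   # 0.1 um pixels

# deposited dose = pattern (*) PSF, via FFT (circular convolution)
dose = np.real(np.fft.ifft2(np.fft.fft2(pattern) *
                            np.fft.fft2(np.fft.ifftshift(psf))))
```

Dose correction then amounts to iteratively adjusting the written dose so that this convolution matches the target inside the feature; geometric correction instead moves the pattern edges.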

  16. Numerical modeling capabilities to predict repository performance

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes available in-house at Golder Associates and Lawrence Livermore Laboratories, as well as those generally available within industry and universities. The first listing of programs covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used.

  17. SVM model for estimating the parameters of the probability-integral method of predicting mining subsidence

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hua; WANG Yun-jia; LI Yong-feng

    2009-01-01

    A new mathematical model to estimate the parameters of the probability-integral method for mining subsidence prediction is proposed. Based on least squares support vector machine (LS-SVM) theory, it is capable of improving the precision and reliability of mining subsidence prediction. Many of the geological and mining factors involved are related in a nonlinear way. The new model is based on statistical learning theory (SLT) and the empirical risk minimization (ERM) principle. Typical data collected from observation stations were used as the learning and training samples. The calculated results from the LS-SVM model were compared with the prediction results of a back propagation neural network (BPNN) model. The results show that the parameters were predicted more precisely by the LS-SVM model than by the BPNN model. The LS-SVM model was faster in computation and had better generalization performance. It provides a highly effective method for calculating the prediction parameters of the probability-integral method.
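The computational appeal of LS-SVM, as opposed to a standard SVM's quadratic program, is that training reduces to a single linear system in the dual variables. A generic LS-SVM regression sketch follows (RBF kernel; all hyperparameters and the toy data are illustrative, and this is not the paper's subsidence model):

```python
import numpy as np

def rbf(X1, X2, sigma=0.2):
    """Gaussian (RBF) kernel matrix between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.2):
    """LS-SVM regression: solve the bordered KKT linear system
    [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                       # sum-of-alphas constraint row
    A[1:, 0] = 1.0                       # bias column
    A[1:, 1:] = K + np.eye(n) / gamma    # regularised kernel block
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]               # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=0.2):
    return rbf(X_new, X_train, sigma) @ alpha + b

# toy 1-D regression problem
X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
b, alpha = lssvm_fit(X, y)
y_hat = lssvm_predict(X, b, alpha, X)
```

Replacing the SVM's inequality constraints with equalities is what turns training into one `solve` call, which is the speed advantage the abstract reports over the BPNN.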

  18. General Description of Fission Observables: GEF Model Code

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, K.-H. [CENBG, CNRS/IN2 P3, Chemin du Solarium, B.P. 120, F-33175 Gradignan (France); Jurado, B., E-mail: jurado@cenbg.in2p3.fr [CENBG, CNRS/IN2 P3, Chemin du Solarium, B.P. 120, F-33175 Gradignan (France); Amouroux, C. [CEA, DSM-Saclay (France); Schmitt, C., E-mail: schmitt@ganil.fr [GANIL, Bd. Henri Becquerel, B.P. 55027, F-14076 Caen Cedex 05 (France)

    2016-01-15

    The GEF (“GEneral description of Fission observables”) model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is

  19. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous work by the co-authors has shown that the critical-layer dynamics occurring near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of the neglected nonlinear terms on downslope winds and mountain wave patterns. In these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  20. Intermediate-generalized Chaplygin gas inflationary universe model

    Energy Technology Data Exchange (ETDEWEB)

    Herrera, Ramon; Olivares, Marco; Videla, Nelson [Pontificia Universidad Catolica de Valparaiso, Instituto de Fisica, Casilla, Valparaiso (Chile)

    2013-01-15

    An intermediate inflationary universe model in the context of a generalized Chaplygin gas is considered. For the matter we consider two different energy densities: a standard scalar field and a tachyon field, respectively. In general, we discuss the conditions for an inflationary epoch in these models. We also use recent astronomical observations from the Wilkinson Microwave Anisotropy Probe seven-year data to constrain the parameters appearing in our models. (orig.)
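For context, the "intermediate" label refers to an expansion law lying between power-law and de Sitter inflation; a standard form of the scale factor (a general background statement, not taken from this abstract's Chaplygin-gas setup) is:

```latex
% intermediate inflation: faster than any power law, slower than exponential
a(t) = a_0 \exp\!\left( A\, t^{f} \right), \qquad A > 0, \quad 0 < f < 1 .
```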