WorldWideScience

Sample records for model predictions based

  1. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models and the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...
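
    For reference, the statistical energy analysis framework referred to here rests on a power balance between coupled subsystems. A minimal two-subsystem form in standard SEA notation (not taken from this abstract) is

    $$
    P_{1,\mathrm{in}} = \omega\,\eta_1 E_1 + \omega\left(\eta_{12} E_1 - \eta_{21} E_2\right), \qquad
    P_{2,\mathrm{in}} = \omega\,\eta_2 E_2 + \omega\left(\eta_{21} E_2 - \eta_{12} E_1\right),
    $$

    where $E_i$ are the subsystem energies, $\eta_i$ the damping loss factors, $\eta_{ij}$ the coupling loss factors and $\omega$ the band centre frequency; the consistency relation $n_1 \eta_{12} = n_2 \eta_{21}$ links the coupling loss factors through the modal densities $n_i$.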

  2. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime in the West African region. The predictability of the seasonal cycle of rainfall is a field widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty of dynamical models to reproduce the behavior of the Inter Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically the Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric and health related parameters influenced by the temperature of the sea surface as a defining factor of variability.
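
    The discriminant step at the core of an S4CAST-style statistical forecast is a Maximum Covariance Analysis between the SST predictor field and the predictand field. The Python sketch below uses simulated anomaly fields (array sizes, the random data and the final regression step are illustrative assumptions, not values or code from the study) to show the SVD of the cross-covariance matrix and the coupled expansion coefficients.

    ```python
    import numpy as np

    # Minimal Maximum Covariance Analysis (MCA) sketch: couple SST anomalies
    # (predictor field) with rainfall anomalies (predictand field) through the
    # SVD of their cross-covariance matrix. Shapes are illustrative.
    rng = np.random.default_rng(0)
    n_years, n_sst, n_rain = 30, 500, 200          # time samples, grid points
    sst = rng.standard_normal((n_years, n_sst))
    rain = rng.standard_normal((n_years, n_rain))

    # Remove the time mean so both fields are anomalies
    sst -= sst.mean(axis=0)
    rain -= rain.mean(axis=0)

    # Cross-covariance matrix and its SVD give coupled spatial patterns
    C = sst.T @ rain / (n_years - 1)               # (n_sst, n_rain)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)

    # Expansion coefficients (time series) of the leading coupled mode
    k = 0
    sst_ec = sst @ U[:, k]
    rain_ec = rain @ Vt[k, :]

    # A simple forecast step: regress the rainfall coefficient on the SST one
    slope = (sst_ec @ rain_ec) / (sst_ec @ sst_ec)
    rain_ec_hat = slope * sst_ec
    print("leading squared covariance fraction:", s[0]**2 / np.sum(s**2))
    print("regressed rainfall coefficient, year 1:", round(float(rain_ec_hat[0]), 3))
    ```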

  3. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...
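
    As an illustration of this class of controller, the Python sketch below assembles a regularized least-squares FIR predictive control step. The impulse response, horizons and weights are invented, and the input constraints and the free response from past inputs are omitted, so this is a skeleton of the approach rather than the paper's implementation.

    ```python
    import numpy as np

    # Minimal sketch of a regularized FIR-based predictive controller (no
    # input constraints, constant output-disturbance feedback). All numbers
    # are illustrative.
    h = np.array([0.0, 0.1, 0.25, 0.2, 0.1, 0.05])   # FIR impulse response
    N, M = 20, 5                                      # prediction / control horizon
    lam = 0.1                                         # input regularization weight

    # Prediction matrix mapping future inputs to future outputs
    Phi = np.zeros((N, M))
    for i in range(N):
        for j in range(M):
            k = i - j
            if 0 <= k < len(h):
                Phi[i, j] = h[k]

    def mpc_step(r, y_meas, y_model):
        """One controller step with constant output-disturbance feedback."""
        d = y_meas - y_model          # disturbance estimate from the model error
        e = np.full(N, r - d)         # target seen by the FIR model
        # (free response from past inputs omitted for brevity)
        # Regularized least squares: min ||Phi u - e||^2 + lam ||u||^2
        u = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ e)
        return u[0]                   # apply only the first move

    print(mpc_step(r=1.0, y_meas=0.2, y_model=0.15))
    ```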

  4. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing... on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...

  5. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
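
    A minimal sketch of the modelling-and-scoring workflow described in this abstract, using simulated data: a logistic regression on three self-reported predictors evaluated by AUC on a held-out validation set. The variable names and effect sizes below are invented, not the NHATS variables or coefficients.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Simulate a parsimonious self-report fall-risk data set (age, balance
    # problem, prior fall) and score the model by AUC on a validation split.
    rng = np.random.default_rng(1)
    n = 5000
    age = rng.normal(75, 7, n)
    balance_problem = rng.integers(0, 2, n)
    prior_fall = rng.integers(0, 2, n)
    logit = -4 + 0.03 * age + 0.8 * balance_problem + 1.1 * prior_fall
    fell = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([age, balance_problem, prior_fall])
    X_dev, X_val, y_dev, y_val = train_test_split(X, fell, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"validation AUC: {auc:.2f}")
    ```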

  7. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including the multivariate nonlinear regression (MNLR) model, the artificial neural network (ANN) model, and the Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that a further direction for developing performance prediction models is to combine the advantages of the different models to obtain better accuracy.
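
    To illustrate the Markov chain (MC) option discussed above, the sketch below propagates a distribution over discrete faulting states with an assumed, uncalibrated transition matrix; in practice the matrix would be estimated from repeated condition surveys.

    ```python
    import numpy as np

    # Markov-chain pavement deterioration sketch: faulting is binned into
    # discrete condition states and propagated year by year. The transition
    # probabilities below are illustrative, not calibrated values.
    P = np.array([
        [0.85, 0.15, 0.00, 0.00],   # state 1: low faulting
        [0.00, 0.80, 0.20, 0.00],   # state 2
        [0.00, 0.00, 0.75, 0.25],   # state 3
        [0.00, 0.00, 0.00, 1.00],   # state 4: severe faulting (absorbing)
    ])

    state = np.array([1.0, 0.0, 0.0, 0.0])   # all sections start in state 1
    for year in range(1, 11):
        state = state @ P
        print(f"year {year:2d}: {np.round(state, 3)}")
    ```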

  8. Noncausal spatial prediction filtering based on an ARMA model

    Institute of Scientific and Technical Information of China (English)

    Liu Zhipeng; Chen Xiaohong; Li Jingye

    2009-01-01

    Conventional f-x prediction filtering methods are based on an autoregressive model. The error section is first computed as a source noise but is removed as additive noise to obtain the signal, which results in an assumption inconsistency before and after filtering. In this paper, an autoregressive moving-average (ARMA) model is employed to avoid the model inconsistency. Based on the ARMA model, a noncausal prediction filter is computed and a self-deconvolved projection filter is used for estimating additive noise in order to suppress random noise. The 1-D ARMA model is also extended to the 2-D spatial domain, which is the basis for noncausal spatial prediction filtering for random noise attenuation on 3-D seismic data. Synthetic and field data processing indicate this method can suppress random noise more effectively while preserving the signal, and does much better than other conventional prediction filtering methods.
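
    For context, the conventional prediction-filtering baseline that this paper extends to an ARMA/noncausal form can be sketched as an autoregressive filter estimated by least squares. The 1-D example below uses a synthetic trace and shows only the AR baseline, not the paper's ARMA projection filter.

    ```python
    import numpy as np

    # AR prediction-filter sketch: estimate AR coefficients by least squares
    # and use them to predict (and hence separate) the coherent part of a trace.
    rng = np.random.default_rng(2)
    n, p = 400, 4                                   # samples, AR order
    signal = np.sin(0.07 * np.arange(n))            # coherent component
    data = signal + 0.3 * rng.standard_normal(n)    # plus random noise

    # Build the regression: data[k] ~ sum_j a[j] * data[k-1-j]
    X = np.column_stack([data[p - 1 - j : n - 1 - j] for j in range(p)])
    y = data[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)

    predicted = X @ a                               # estimate of the coherent part
    residual = y - predicted                        # treated as random noise
    print("noise variance estimate:", residual.var())
    ```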

  9. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel;

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions... day (using the area under the receiver operating characteristic curve (AUC) and kappa statistics) and by assessing consistency in predictions of range size changes under future climate (using cluster analysis). Results Our analyses show significant differences between predictions from different models..., with predicted changes in range size by 2030 differing in both magnitude and direction (e.g. from 92% loss to 322% gain). We explain differences with reference to two characteristics of the modelling techniques: data input requirements (presence/absence vs. presence-only approaches) and assumptions made by each...

  10. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering the benefits, yet the excessive pursuit of accuracy makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than the complex numerical forecasting models, which occupy large computation resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  11. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, and the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data proved that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  12. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model are based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between the ChB model and the experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  13. New Approaches for Channel Prediction Based on Sinusoidal Modeling

    Directory of Open Access Journals (Sweden)

    Ekman Torbjörn

    2007-01-01

    Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies for future wireless communication systems. The prediction of Rayleigh fading channels is studied in the framework of sinusoidal modeling in this paper. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed. Three different predictors based on the statistical sinusoidal model are proposed. These methods outperform the standard linear predictor (LP) in Monte Carlo simulations, but underperform with real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving average and sinusoidal (JMAS) prediction model and the associated joint least-squares (LS) predictor are proposed. It combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.

  14. Traffic Prediction Scheme based on Chaotic Models in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Xiangrong Feng

    2013-09-01

    Full Text Available Based on the local support vector algorithm for chaotic time series analysis, the Hannan-Quinn information criterion and SAX symbolization are introduced, and a novel prediction algorithm is proposed and successfully applied to the prediction of wireless network traffic. For the problem of correctly predicting short-term flow from small data sets, the weaknesses of existing algorithms during model construction are analyzed through study and comparison with the LDK prediction algorithm. It is verified that the Hannan-Quinn information criterion can be used to calculate the number of neighbor points, replacing the previous empirical method and yielding a more accurate prediction model. Finally, actual flow data are used to confirm the accuracy of the proposed algorithm, LSDHQ. Our experiments also show that it adapts better than the LDK algorithm.

  15. Signature prediction for model-based automatic target recognition

    Science.gov (United States)

    Keydel, Eric R.; Lee, Shung W.

    1996-06-01

    The moving and stationary target recognition (MSTAR) model-based automatic target recognition (ATR) system utilizes a paradigm which matches features extracted from an unknown SAR target signature against predictions of those features generated from models of the sensing process and candidate target geometries. The candidate target geometry yielding the best match between predicted and extracted features defines the identity of the unknown target. MSTAR will extend the current model-based ATR state-of-the-art in a number of significant directions. These include: use of Bayesian techniques for evidence accrual, reasoning over target subparts, coarse-to-fine hypothesis search strategies, and explicit reasoning over target articulation, configuration, occlusion, and lay-over. These advances also imply significant technical challenges, particularly for the MSTAR feature prediction module (MPM). In addition to accurate electromagnetics, the MPM must provide traceback between input target geometry and output features, on-line target geometry manipulation, target subpart feature prediction, explicit models for local scene effects, and generation of sensitivity and uncertainty measures for the predicted features. This paper describes the MPM design which is being developed to satisfy these requirements. The overall module structure is presented, along with the specific design elements focused on MSTAR requirements. Particular attention is paid to design elements that enable on-line prediction of features within the time constraints mandated by model-driven ATR. Finally, the current status, development schedule, and further extensions in the module design are described.

  16. Support vector machine-based multi-model predictive control

    Institute of Scientific and Technical Information of China (English)

    Zhejing BA; Youxian SUN

    2008-01-01

    In this paper, a support vector machine-based multi-model predictive control is proposed, in which SVM classification combines well with SVM regression. At first, each working environment is modeled by SVM regression and the support vector machine network-based model predictive control (SVMN-MPC) algorithm corresponding to each environment is developed, and then a multi-class SVM model is established to recognize multiple operating conditions. As for control, the current environment is identified by the multi-class SVM model and then the corresponding SVMN-MPC controller is activated at each sampling instant. The proposed modeling, switching and controller design is demonstrated in simulation results.
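
    A small sketch of the modelling-and-switching layer described here: an SVM classifier recognizes the operating condition and a per-condition SVR model supplies the prediction. The plant, data and regime definitions below are invented, and the predictive-control optimization itself is omitted.

    ```python
    import numpy as np
    from sklearn.svm import SVC, SVR

    # Multi-model idea: a multi-class SVM identifies the operating condition,
    # and one SVR model per condition is used for prediction.
    rng = np.random.default_rng(3)

    def plant(u, regime):
        gain = 1.0 if regime == 0 else 2.5          # two operating conditions
        return gain * u + 0.05 * rng.standard_normal(len(u))

    u0, u1 = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
    X0 = np.column_stack([u0, plant(u0, 0)])
    X1 = np.column_stack([u1, plant(u1, 1)])

    # Classifier recognizes the regime from (input, output) pairs
    clf = SVC().fit(np.vstack([X0, X1]), np.r_[np.zeros(200), np.ones(200)])

    # One SVR model per operating condition: output as a function of input
    regressors = [SVR().fit(u0.reshape(-1, 1), X0[:, 1]),
                  SVR().fit(u1.reshape(-1, 1), X1[:, 1])]

    # At a sampling instant: identify the regime, then use the matching model
    sample = np.array([[0.6, 1.55]])                # (current input, current output)
    regime = int(clf.predict(sample)[0])
    prediction = regressors[regime].predict([[0.7]])[0]
    print(f"active regime: {regime}, predicted output for u=0.7: {prediction:.2f}")
    ```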

  17. Cloud Based Metalearning System for Predictive Modeling of Biomedical Data

    Directory of Open Access Journals (Sweden)

    Milan Vukićević

    2014-01-01

    Full Text Available Rapid growth and storage of biomedical data has enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework for ranking and selection of the best predictive algorithms for the data at hand with open source big data technologies for the analysis of biomedical data.

  18. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from...

  19. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII), are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory preprocessing [Dau et al., 1997. J. Acoust. Soc. Am. 102, 2892-2905] with a simple central stage that describes the similarity of the test signal with the corresponding reference signal at a level of the internal representation of the signals. The model was compared with previous approaches, whereby a speech in noise experiment was used for training and an ideal binary mask experiment was used for evaluation. All three models were able to capture the trends in the speech in noise training data well, but the proposed model provides a better prediction of the binary mask test data, particularly when the binary...

  20. Fuzzy-Based Trust Prediction Model for Routing in WSNs

    Directory of Open Access Journals (Sweden)

    X. Anita

    2014-01-01

    Full Text Available The cooperative nature of multihop wireless sensor networks (WSNs) makes it vulnerable to varied types of attacks. The sensitive application environments and resource constraints of WSNs mandate the requirement of lightweight security scheme. The earlier security solutions were based on historical behavior of neighbor but the security can be enhanced by predicting the future behavior of the nodes in the network. In this paper, we proposed a fuzzy-based trust prediction model for routing (FTPR) in WSNs with minimal overhead in regard to memory and energy consumption. FTPR incorporates a trust prediction model that predicts the future behavior of the neighbor based on the historical behavior, fluctuations in trust value over a period of time, and recommendation inconsistency. In order to reduce the control overhead, FTPR received recommendations from a subset of neighbors who had the maximum number of interactions with the requestor. Theoretical analysis and simulation results of the FTPR protocol demonstrate higher packet delivery ratio, higher network lifetime, lower end-to-end delay, and lower memory and energy consumption than the traditional and existing trust-based routing schemes.

  1. Neural Network Based Model for Predicting Housing Market Performance

    Institute of Scientific and Technical Information of China (English)

    Ahmed Khalafallah

    2008-01-01

    The United States real estate market is currently facing its worst hit in two decades due to the slowdown of housing sales. The most affected by this decline are real estate investors and home developers who are currently struggling to break even financially on their investments. For these investors, it is of utmost importance to evaluate the current status of the market and predict its performance over the short-term in order to make appropriate financial decisions. This paper presents the development of artificial neural network based models to support real estate investors and home developers in this critical task. The paper describes the decision variables, design methodology, and the implementation of these models. The models utilize historical market performance data sets to train the artificial neural networks in order to predict unforeseen future performances. An application example is analyzed to demonstrate the model capabilities in analyzing and predicting the market performance. The model testing and validation showed that the error in prediction is in the range between -2% and +2%.

  2. AC Synchronous Servo Based On The Armature Voltage Prediction Model

    Science.gov (United States)

    Hoshino, Akihiro; Kuromaru, Hiroshi; Kobayashi, Shinichi

    1987-10-01

    A new control method for the AC synchronous servo-system (brushless DC servo-system) is discussed. The new system is based on the armature voltage prediction model. Without a resolver-digital-converter or a tachometer-generator, the resolver directly provides the following three signals to the system: the current command, the induced voltage, and the rotor speed. The new method realizes a simple hardware configuration. Experimental results show a good performance of the system.

  3. Network Based Prediction Model for Genomics Data Analysis*

    OpenAIRE

    Huang, Ying; Wang, Pei

    2012-01-01

    Biological networks, such as genetic regulatory networks and protein interaction networks, provide important information for studying gene/protein activities. In this paper, we propose a new method, NetBoosting, for incorporating a priori biological network information in analyzing high dimensional genomics data. Specifically, we are interested in constructing prediction models for disease phenotypes of interest based on genomics data, and at the same time identifying disease susceptible genes. ...

  4. Adaptive quality prediction of batch processes based on PLS model

    Institute of Scientific and Technical Information of China (English)

    LI Chun-fu; ZHANG Jie; WANG Gui-zeng

    2006-01-01

    There are usually no on-line product quality measurements in batch and semi-batch processes, which makes the process control task very difficult. In this paper, a model for predicting the end-product quality from the available on-line process variables at the early stage of a batch is developed using the partial least squares (PLS) method. Furthermore, some available mid-course quality measurements are used to rectify the final prediction results. To deal with the problem that the process may change with time, a recursive PLS (RPLS) algorithm is used to update the model based on the new batch data and the old model parameters after each batch. An application to a simulated batch MMA polymerization process demonstrates the effectiveness of the proposed method.
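
    The prediction step can be sketched as an ordinary PLS regression of end-product quality on early-stage process variables, refit as each new batch completes (a plain retraining stand-in for the recursive RPLS update). The data below are simulated.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # PLS quality-prediction sketch: regress end-of-batch quality on early-stage
    # process variables, then refit when a new batch finishes.
    rng = np.random.default_rng(4)
    n_batches, n_vars = 40, 12
    X = rng.standard_normal((n_batches, n_vars))           # early-stage measurements
    true_w = rng.standard_normal(n_vars)
    y = X @ true_w + 0.1 * rng.standard_normal(n_batches)  # end-product quality

    pls = PLSRegression(n_components=3).fit(X[:30], y[:30])
    print("prediction for a new batch:", float(pls.predict(X[30:31])[0, 0]))

    # When batch 31 completes, include it and refit (recursive update stand-in)
    pls = PLSRegression(n_components=3).fit(X[:31], y[:31])
    ```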

  5. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    Science.gov (United States)

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling due to the different characteristics of HS images in their spectral and shape domain of panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit distribution of the known pixel vector, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as the additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated like it is an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102

  6. Factors influencing protein tyrosine nitration--structure-based predictive models.

    Science.gov (United States)

    Bayden, Alexander S; Yakovlev, Vasily A; Graves, Paul R; Mikkelsen, Ross B; Kellogg, Glen E

    2011-03-15

    Models for exploring tyrosine nitration in proteins have been created based on 3D structural features of 20 proteins for which high-resolution X-ray crystallographic or NMR data are available and for which nitration of 35 total tyrosines has been experimentally proven under oxidative stress. Factors suggested in previous work to enhance nitration were examined with quantitative structural descriptors. The role of neighboring acidic and basic residues is complex: for the majority of tyrosines that are nitrated the distance to the heteroatom of the closest charged side chain corresponds to the distance needed for suspected nitrating species to form hydrogen bond bridges between the tyrosine and that charged amino acid. This suggests that such bridges play a very important role in tyrosine nitration. Nitration is generally hindered for tyrosines that are buried and for those tyrosines for which there is insufficient space for the nitro group. For in vitro nitration, closed environments with nearby heteroatoms or unsaturated centers that can stabilize radicals are somewhat favored. Four quantitative structure-based models, depending on the conditions of nitration, have been developed for predicting site-specific tyrosine nitration. The best model, relevant for both in vitro and in vivo cases, predicts 30 of 35 tyrosine nitrations (positive predictive value) and has a sensitivity of 60/71 (11 false positives). Copyright © 2010 Elsevier Inc. All rights reserved.

  7. Purely optical navigation with model-based state prediction

    Science.gov (United States)

    Sendobry, Alexander; Graber, Thorsten; Klingauf, Uwe

    2010-10-01

    State-of-the-art Inertial Navigation Systems (INS) based on Micro-Electro-Mechanical Systems (MEMS) lack precision, especially in GPS-denied environments such as urban canyons or in pure indoor missions. The proposed Optical Navigation System (ONS) provides bias-free ego-motion estimates using triple-redundant sensor information. In combination with a model-based state prediction, our system is able to estimate the velocity, position and attitude of an arbitrary aircraft. Simulating a high-performance flow-field estimator, the algorithm can compete with conventional low-cost INS. By using measured velocities instead of accelerations, the drift behavior of the system states is not as pronounced as for an INS.

  8. Human Posture and Movement Prediction based on Musculoskeletal Modeling

    DEFF Research Database (Denmark)

    Farahani, Saeed Davoudabadi

    2014-01-01

    This thesis explores an optimization-based formulation, so-called inverse-inverse dynamics, for the prediction of human posture and motion dynamics performing various tasks. It is explained how this technique enables us to predict natural kinematic and kinetic patterns for human posture and motion using the AnyBody Modeling System (AMS). AMS uses inverse dynamics to analyze musculoskeletal systems and is, therefore, limited by its dependency on input kinematics. We propose to alleviate this dependency by assuming that voluntary postures and movement strategies in humans are guided by a desire... investigated, a scaling to the mean height and body mass may be sufficient, while other questions require subject-specific models. The movement is parameterized by means of time functions controlling selected degrees-of-freedom (DOF). Subsequently, the parameters of these functions, usually referred...

  9. Prediction model based on decision tree analysis for laccase mediators.

    Science.gov (United States)

    Medina, Fabiola; Aguila, Sergio; Baratto, Maria Camilla; Martorana, Andrea; Basosi, Riccardo; Alderete, Joel B; Vazquez-Duhalt, Rafael

    2013-01-10

    A Structure Activity Relationship (SAR) study for laccase mediator systems was performed in order to correctly classify different natural phenolic mediators. Decision tree (DT) classification models with a set of five quantum-chemical calculated molecular descriptors were used. These descriptors included redox potential (ɛ°), ionization energy (E(i)), pK(a), enthalpy of formation of radical (Δ(f)H), and OH bond dissociation energy (D(O-H)). The rationale for selecting these descriptors is derived from the laccase-mediator mechanism. To validate the DT predictions, the kinetic constants of different compounds as laccase substrates, their ability for pesticide transformation as laccase-mediators, and radical stability were experimentally determined using Coriolopsis gallica laccase and the pesticide dichlorophen. The prediction capability of the DT model based on three proposed descriptors showed a complete agreement with the obtained experimental results. Copyright © 2012 Elsevier Inc. All rights reserved.
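
    A minimal decision-tree sketch over the five descriptors named in the abstract. The descriptor values, class labels and tree depth below are invented solely to show the model structure; they are not the paper's data or model.

    ```python
    from sklearn.tree import DecisionTreeClassifier

    # Decision-tree classification on the five quantum-chemical descriptors:
    # redox potential, ionization energy, pKa, radical formation enthalpy, and
    # O-H bond dissociation energy. Values and labels are invented placeholders.
    feature_names = ["redox_potential", "ionization_energy", "pKa",
                     "dHf_radical", "D_OH"]
    X = [
        [0.79, 8.1, 10.0, 35.0, 88.0],
        [0.55, 7.6,  9.4, 30.0, 82.0],
        [0.95, 8.6, 10.5, 40.0, 92.0],
        [0.60, 7.8,  9.8, 31.0, 83.0],
        [1.05, 8.9, 10.9, 43.0, 95.0],
        [0.50, 7.5,  9.2, 29.0, 81.0],
    ]
    y = [1, 1, 0, 1, 0, 1]   # 1 = effective laccase mediator, 0 = not

    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(clf.predict([[0.7, 7.9, 9.9, 33.0, 85.0]]))
    ```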

  10. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. According to the fact that there are lots of settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximate nonhomogeneous index sequence and has excellent application value in settlement prediction.
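
    For orientation, the classical GM(1,1) grey model that the NGM (1,1,k,c) model generalizes can be implemented in a few lines. The settlement series below is invented, and the NGM-specific k and c terms are not reproduced.

    ```python
    import numpy as np

    # Classical GM(1,1) grey model: accumulate the series (AGO), fit the
    # whitenization equation dx1/dt + a*x1 = b by least squares, then predict.
    x0 = np.array([2.1, 2.6, 3.0, 3.5, 4.1, 4.8])   # illustrative settlement series
    x1 = np.cumsum(x0)                               # accumulated (AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values

    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

    def predict(k):
        """Prediction of the original series at step k (k = 0 is the first point)."""
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return x0[0] if k == 0 else x1_hat - x1_prev

    print([round(predict(k), 2) for k in range(8)])   # fit plus two steps ahead
    ```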

  11. Provably Safe and Robust Learning-Based Model Predictive Control

    CERN Document Server

    Aswani, Anil; Sastry, S Shankar; Tomlin, Claire

    2011-01-01

    Controller design for systems typically faces a trade-off between robustness and performance, and the reliability of linear controllers has caused many control practitioners to focus on the former. However, there is a renewed interest in improving system performance to deal with growing energy and pollution constraints. This paper describes a learning-based model predictive control (MPC) scheme. The MPC provides deterministic guarantees on robustness and safety, and the learning is used to identify richer models of the system to improve controller performance. Our scheme uses a linear model with bounds on its uncertainty to construct invariant sets which help to provide the guarantees, and it can be generalized to other classes of models and to pseudo-spectral methods. This framework allows us to handle state and input constraints and optimize system performance with respect to a cost function. The learning occurs through the use of an oracle which returns the value and gradient of unmodeled dynamics at discr...

  12. Model Predictive Control-Based Fast Charging for Vehicular Batteries

    Directory of Open Access Journals (Sweden)

    Zhibin Song

    2011-08-01

    Full Text Available Battery fast charging is one of the most significant and difficult techniques affecting the commercialization of electric vehicles (EVs). In this paper, we propose a fast charge framework based on model predictive control, with the aim of simultaneously reducing the charge duration, which represents the out-of-service time of vehicles, and the increase in temperature, which represents safety and energy efficiency during the charge process. The RC model is employed to predict the future State of Charge (SOC). A single mode lumped-parameter thermal model and a neural network trained by real experimental data are also applied to predict the future temperature in simulations and experiments respectively. A genetic algorithm is then applied to find the best charge sequence under a specified fitness function, which consists of two objectives: minimizing the charging duration and minimizing the increase in temperature. Both simulation and experiment demonstrate that the Pareto front of the proposed method dominates that of the most popular constant current constant voltage (CCCV) charge method.

  13. Quality guaranteed aggregation based model predictive control and stability analysis

    Institute of Scientific and Technical Information of China (English)

    LI DeWei; XI YuGeng

    2009-01-01

    The input aggregation strategy can reduce the online computational burden of the model predictive controller, but in general an aggregation-based MPC controller may lead to poor control quality. Therefore, a new concept, equivalent aggregation, is proposed to guarantee the control quality of aggregation-based MPC. From the general framework of input linear aggregation, the design methods of equivalent aggregation are developed for unconstrained and terminal zero-constrained MPC, which guarantee the actual control inputs to be exactly equal to those of the original MPC. For constrained MPC, quasi-equivalent aggregation strategies are also discussed, aiming to make the difference between the control inputs of aggregation-based MPC and the original MPC as small as possible. The stability conditions are given for the quasi-equivalent aggregation based MPC as well.

  14. [Predicting suicide or predicting the unpredictable in an uncertain world: Reinforcement Learning Model-Based analysis].

    Science.gov (United States)

    Desseilles, Martin

    2012-01-01

    In general, it appears that the suicidal act is highly unpredictable with the current scientific means available. In this article, the author submits the hypothesis that predicting suicide is complex because it results in predicting a choice, in itself unpredictable. The article proposes a Reinforcement learning model-based analysis. In this model, we integrate on the one hand, four ascending modulatory neurotransmitter systems (acetylcholine, noradrenalin, serotonin, and dopamine) with their regions of respective projections and afferences, and on the other hand, various observations of brain imaging identified until now in the suicidal process.

  15. Using connectome-based predictive modeling to predict individual behavior from brain connectivity.

    Science.gov (United States)

    Shen, Xilin; Finn, Emily S; Scheinost, Dustin; Rosenberg, Monica D; Chun, Marvin M; Papademetris, Xenophon; Constable, R Todd

    2017-03-01

    Neuroimaging is a fast-developing research area in which anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale data sets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: (i) feature selection, (ii) feature summarization, (iii) model building, and (iv) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a considerable amount of the variance in these measures. It has been demonstrated that the CPM protocol performs as well as or better than many of the existing approaches in brain-behavior prediction. As CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement these protocols. Depending on the volume of data to be processed, the protocol can take 10-100 min for model building, 1-48 h for permutation testing, and 10-20 min for visualization of results.
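
    The four protocol steps can be condensed into a short sketch using simulated connectivity data and leave-one-out cross-validation; the permutation test of step (iv) is omitted, and the selection threshold and data sizes are illustrative only.

    ```python
    import numpy as np

    # Connectome-based predictive modeling (CPM) sketch: (i) feature selection,
    # (ii) feature summarization, (iii) model building, (iv) prediction assessment.
    rng = np.random.default_rng(5)
    n_subj, n_edges = 60, 1000
    conn = rng.standard_normal((n_subj, n_edges))            # vectorized connectivity
    behavior = conn[:, :10].sum(axis=1) + rng.standard_normal(n_subj)

    predictions = np.zeros(n_subj)
    for i in range(n_subj):                                   # leave one subject out
        train = np.delete(np.arange(n_subj), i)
        X, y = conn[train], behavior[train]

        # (i) feature selection: edges correlated with behavior in training data
        r = np.array([np.corrcoef(X[:, e], y)[0, 1] for e in range(n_edges)])
        selected = np.abs(r) > 0.3

        # (ii) feature summarization: one summary score per subject
        score_train = X[:, selected].sum(axis=1)
        score_test = conn[i, selected].sum()

        # (iii) model building: simple linear fit of behavior on the summary score
        slope, intercept = np.polyfit(score_train, y, deg=1)
        predictions[i] = slope * score_test + intercept

    # (iv) assessment: correlation of predicted with observed behavior
    print("prediction r:", np.corrcoef(predictions, behavior)[0, 1])
    ```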

  16. A nonlinear regression model-based predictive control algorithm.

    Science.gov (United States)

    Dubay, R; Abu-Ayyad, M; Hernandez, J M

    2009-04-01

    This paper presents a unique approach for designing a nonlinear regression model-based predictive controller (NRPC) for single-input-single-output (SISO) and multi-input-multi-output (MIMO) processes that are common in industrial applications. The innovation of this strategy is that the controller structure allows nonlinear open-loop modeling to be conducted while closed-loop control is executed every sampling instant. Consequently, the system matrix is regenerated every sampling instant using a continuous function, providing a more accurate prediction of the plant. Computer simulations are carried out on nonlinear plants, demonstrating that the new approach is easily implemented and provides tight control. Also, the proposed algorithm is implemented on two real-time SISO applications, a DC motor and a plastic injection molding machine, and on a nonlinear MIMO thermal system comprising three temperature zones to be controlled with interacting effects. The experimental closed-loop responses of the proposed algorithm were compared to a multi-model dynamic matrix controller (MPC) with improved results for various set point trajectories. Good disturbance rejection was attained, resulting in improved tracking of multi-set point profiles in comparison to multi-model MPC.

  17. Predicting chick body mass by artificial intelligence-based models

    Directory of Open Access Journals (Sweden)

    Patricia Ferreira Ponciano Ferraz

    2014-07-01

    Full Text Available The objective of this work was to develop, validate, and compare 190 artificial intelligence-based models for predicting the body mass of chicks from 2 to 21 days of age subjected to different duration and intensities of thermal challenge. The experiment was conducted inside four climate-controlled wind tunnels using 210 chicks. A database containing 840 datasets (from 2- to 21-day-old chicks) - with the variables dry-bulb air temperature, duration of thermal stress (days), chick age (days), and the daily body mass of chicks - was used for network training, validation, and tests of models based on artificial neural networks (ANNs) and neuro-fuzzy networks (NFNs). The ANNs were most accurate in predicting the body mass of chicks from 2 to 21 days of age after they were subjected to the input variables, and they showed an R² of 0.9993 and a standard error of 4.62 g. The ANNs enable the simulation of different scenarios, which can assist in managerial decision-making, and they can be embedded in the heating control systems.

  18. Predictive Multiscale Modeling of Nanocellulose Based Materials and Systems

    Science.gov (United States)

    Kovalenko, Andriy

    2014-08-01

    Cellulose Nanocrystals (CNC) is a renewable biodegradable biopolymer with outstanding mechanical properties made from a highly abundant natural source, and therefore is very attractive as a reinforcing additive to replace petroleum-based plastics in biocomposite materials, foams, and gels. Large-scale applications of CNC are currently limited due to its low solubility in non-polar organic solvents used in existing polymerization technologies. The solvation properties of CNC can be improved by chemical modification of its surface. Development of effective surface modifications has been rather slow because extensive chemical modifications destabilize the hydrogen bonding network of cellulose and deteriorate the mechanical properties of CNC. We employ predictive multiscale theory, modeling, and simulation to gain a fundamental insight into the effect of CNC surface modifications on hydrogen bonding, CNC crystallinity, solvation thermodynamics, and CNC compatibilization with the existing polymerization technologies, so as to rationally design green nanomaterials with improved solubility in non-polar solvents, controlled liquid crystal ordering and optimized extrusion properties. An essential part of this multiscale modeling approach is the statistical-mechanical 3D-RISM-KH molecular theory of solvation, coupled with quantum mechanics, molecular mechanics, and multistep molecular dynamics simulation. The 3D-RISM-KH theory provides predictive modeling of both polar and non-polar solvents, solvent mixtures, and electrolyte solutions in a wide range of concentrations and thermodynamic states. It properly accounts for effective interactions in solution such as steric effects, hydrophobicity and hydrophilicity, hydrogen bonding, salt bridges, buffer, co-solvent, and successfully predicts solvation effects and processes in bulk liquids, solvation layers at solid surface, and in pockets and other inner spaces of macromolecules and supramolecular assemblies. This methodology

  19. Stand diameter distribution modelling and prediction based on Richards function.

    Directory of Open Access Journals (Sweden)

    Ai-guo Duan

    Full Text Available The objective of this study was to introduce application of the Richards equation on modelling and prediction of stand diameter distribution. The long-term repeated measurement data sets, consisted of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in the southern China, were used. Also, 150 stands were used as fitting data, the other 159 stands were used for testing. Nonlinear regression method (NRM) or maximum likelihood estimates method (MLEM) were applied to estimate the parameters of models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) R distribution presented a more accurate simulation than three-parametric Weibull function; (2) the parameters p, q and r of R distribution proved to be its scale, location and shape parameters, and have a deep relationship with stand characteristics, which means the parameters of R distribution have good theoretical interpretation; (3) the ordinate of inflection point of R distribution has significant relativity with its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed diameter distributions of unknown stands can be well estimated by applying R distribution based on PRM or the combination of PPM and PRM under the condition that only quadratic mean DBH or plus stand age are known, and the non-rejection rates were near 80%, which are higher than the 72.33% non-rejection rate of three-parametric Weibull function based on the combination of PPM and PRM.
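
    For reference, one common three-parameter Richards form for a cumulative diameter distribution is given below; mapping its parameters onto the paper's p (scale), q (location) and r (shape) is an assumption, not taken from the text.

    $$
    F(d) = \left(1 + q\,e^{-p\,d}\right)^{-1/r}, \qquad 0 \le F(d) \le 1,
    $$

    where $d$ is the diameter at breast height and $p, q, r > 0$.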

  20. Stand diameter distribution modelling and prediction based on Richards function.

    Science.gov (United States)

    Duan, Ai-guo; Zhang, Jian-guo; Zhang, Xiong-qing; He, Cai-yun

    2013-01-01

    The objective of this study was to introduce application of the Richards equation on modelling and prediction of stand diameter distribution. The long-term repeated measurement data sets, consisted of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in the southern China, were used. Also, 150 stands were used as fitting data, the other 159 stands were used for testing. Nonlinear regression method (NRM) or maximum likelihood estimates method (MLEM) were applied to estimate the parameters of models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) R distribution presented a more accurate simulation than three-parametric Weibull function; (2) the parameters p, q and r of R distribution proved to be its scale, location and shape parameters, and have a deep relationship with stand characteristics, which means the parameters of R distribution have good theoretical interpretation; (3) the ordinate of inflection point of R distribution has significant relativity with its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed diameter distributions of unknown stands can be well estimated by applying R distribution based on PRM or the combination of PPM and PRM under the condition that only quadratic mean DBH or plus stand age are known, and the non-rejection rates were near 80%, which are higher than the 72.33% non-rejection rate of three-parametric Weibull function based on the combination of PPM and PRM.

  1. Predictive Model of Graphene Based Polymer Nanocomposites: Electrical Performance

    Science.gov (United States)

    Manta, Asimina; Gresil, Matthieu; Soutis, Constantinos

    2017-04-01

    In this computational work, a new simulation tool for the electrical response of graphene/polymer nanocomposites is developed based on the finite element method (FEM). This approach is built on a multi-scale, multi-physics format, consisting of a unit cell and a representative volume element (RVE). The FE methodology is proven to be a reliable and flexible tool for simulating the electrical response without introducing the complexity of raw programming codes, while it is able to model any geometry, and thus the response of any component. This characteristic is supported by its ability, at a preliminary stage, to accurately predict the percolation threshold of experimental material structures and by its sensitivity to the effect of different manufacturing methodologies. In particular, the percolation threshold of two material structures of the same constituents (PVDF/Graphene) prepared with different methods was predicted, highlighting the effect of the material preparation on the filler distribution, percolation probability and percolation threshold. The assumption of a random filler distribution was proven to be efficient for modelling material structures obtained by solution methods, while a through-the-thickness normal particle distribution was more appropriate for nanocomposites produced by film hot-pressing. Moreover, the parametric analysis examines the effect of each parameter on the variables of the percolation law. These graphs could be used as a preliminary design tool for more effective material system manufacturing.

  2. Modeling Morphogenesis in silico and in vitro: Towards Quantitative, Predictive, Cell-based Modeling

    NARCIS (Netherlands)

    R.M.H. Merks (Roeland); P. Koolwijk

    2009-01-01

    Cell-based, mathematical models help make sense of morphogenesis—i.e. cells organizing into shape and pattern—by capturing cell behavior in simple, purely descriptive models. Cell-based models then predict the tissue-level patterns the cells produce collectively. The first

  3. Scanpath Based N-Gram Models for Predicting Reading Behavior

    DEFF Research Database (Denmark)

    Mishra, Abhijit; Bhattacharyya, Pushpak; Carl, Michael

    2013-01-01

    Predicting reading behavior is a difficult task. Reading behavior depends on various linguistic factors (e.g. sentence length, structural complexity, etc.) and other factors (e.g. an individual's reading style, age, etc.). Ideally, a reading model should be similar to a language model where the model i...

  4. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies to sell the right product to the right customer, at the right time, and for the right price. Therefore the challenge for any company is to determine how much to sell, at what price, and to which market segment while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a proper dynamical system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.

  5. Prediction of blast-induced air overpressure: a hybrid AI-based predictive model.

    Science.gov (United States)

    Jahed Armaghani, Danial; Hajihassani, Mohsen; Marto, Aminaton; Shirani Faradonbeh, Roohollah; Mohamad, Edy Tonnizam

    2015-11-01

    Blast operations in the vicinity of residential areas usually produce significant environmental problems which may cause severe damage to the nearby areas. Blast-induced air overpressure (AOp) is one of the most important environmental impacts of blast operations which needs to be predicted to minimize the potential risk of damage. This paper presents an artificial neural network (ANN) optimized by the imperialist competitive algorithm (ICA) for the prediction of AOp induced by quarry blasting. For this purpose, 95 blasting operations were precisely monitored in a granite quarry site in Malaysia and AOp values were recorded in each operation. Furthermore, the most influential parameters on AOp, including the maximum charge per delay and the distance between the blast-face and monitoring point, were measured and used to train the ICA-ANN model. Based on the generalized predictor equation and considering the measured data from the granite quarry site, a new empirical equation was developed to predict AOp. For comparison purposes, conventional ANN models were developed and compared with the ICA-ANN results. The results demonstrated that the proposed ICA-ANN model is able to predict blast-induced AOp more accurately than other presented techniques.
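
    The underlying regression can be sketched as a plain feed-forward ANN mapping the two measured inputs (maximum charge per delay and distance to the monitoring point) to AOp. The imperialist competitive algorithm used in the paper to optimize the network is not reproduced, and the data below are simulated.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    # ANN regression sketch for blast-induced air overpressure (AOp): two
    # inputs (charge per delay, distance) and one output (AOp in dB). The
    # data generator below is purely illustrative.
    rng = np.random.default_rng(6)
    n = 95
    charge = rng.uniform(50, 500, n)        # kg per delay (illustrative range)
    distance = rng.uniform(100, 800, n)     # m
    aop = 120 + 25 * np.log10(charge) - 20 * np.log10(distance) \
          + rng.normal(0, 1.5, n)           # dB, illustrative generator

    X = StandardScaler().fit_transform(np.column_stack([charge, distance]))
    model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                         max_iter=5000, random_state=0).fit(X[:75], aop[:75])
    print("test R^2:", round(model.score(X[75:], aop[75:]), 3))
    ```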

  6. CLINICAL DATABASE ANALYSIS USING DMDT BASED PREDICTIVE MODELLING

    Directory of Open Access Journals (Sweden)

    Srilakshmi Indrasenan

    2013-04-01

    Full Text Available In recent years, predictive data mining techniques have played a vital role in the field of medical informatics. These techniques help medical practitioners predict various classes, which is useful in selecting treatment. One such major difficulty is the prediction of the survival rate of breast cancer patients. Breast cancer is a common disease these days, and fighting it is a tough battle for both surgeons and patients. To predict the survivability rate of breast cancer patients, which helps the medical practitioner select the type of treatment, a predictive data mining technique called Diversified Multiple Decision Tree (DMDT) classification is used. Additionally, to avoid difficulties arising from outliers and skewed data, it is also proposed to improve the training space by outlier filtering and oversampling. As a result, this novel approach gives the survivability rate of the cancer patients, based on which medical practitioners can choose the type of treatment.

  7. Biomass prediction model in maize based on satellite images

    Science.gov (United States)

    Mihai, Herbei; Florin, Sala

    2016-06-01

    Monitoring of crops by satellite techniques is very useful in the context of precision agriculture, for both crop management and agricultural production. The present study evaluated the interrelationship between maize biomass production and satellite indices (NDVI and NDBR) during five development stages (BBCH code), highlighting different levels of correlation. Recorded biomass production ranged between 2.39±0.005 t ha-1 (BBCH code 12-13) and 51.92±0.028 t ha-1 (BBCH code 83-85), in relation to the vegetation stages studied. Chlorophyll content ranged from 24.1±0.25 SPAD units (BBCH code 12-13) to 58.63±0.47 SPAD units (BBCH code 71-73), and the obtained satellite indices ranged from 0.035641±0.002 to 0.320839±0.002 for the NDVI index and from 0.035095±0.034 to 0.491038±0.018 for the NDBR index. Regression analysis yielded statistically sound predictive models of maize biomass based on the satellite indices. The most accurate prediction was obtained with the NDBR index (R2 = 0.986, F = 144.23, p < 0.001, RMSE = 1.446), followed by chlorophyll content (R2 = 0.834, F = 16.14, p = 0.012, RMSE = 6.927) and the NDVI index (R2 = 0.682, F = 3.869, p = 0.116, RMSE = 12.178).
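
    A minimal sketch of the regression step described above, fitting biomass against a satellite index and reporting R2 and RMSE; the NDBR and biomass values below are rounded, illustrative stand-ins rather than the study's data.

```python
import numpy as np

# Hypothetical NDBR index values and maize biomass (t/ha) across development stages
ndbr = np.array([0.035, 0.12, 0.22, 0.38, 0.49])
biomass = np.array([2.4, 9.8, 21.5, 38.0, 51.9])

# Ordinary least-squares fit: biomass = a * NDBR + b
a, b = np.polyfit(ndbr, biomass, 1)
pred = a * ndbr + b

ss_res = np.sum((biomass - pred) ** 2)
ss_tot = np.sum((biomass - biomass.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(np.mean((biomass - pred) ** 2))
print(f"R2 = {r2:.3f}, RMSE = {rmse:.3f} t/ha")
```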

  8. Validation of a base deficit-based trauma prediction model and comparison with TRISS and ASCOT

    NARCIS (Netherlands)

    Lam, S. W.; Lingsma, H. F.; van Beeck, E. F.; Leenen, L. P H

    2016-01-01

    Background: Base deficit provides a more objective indicator of physiological stress following injury as compared with vital signs constituting the revised trauma score (RTS). We have previously developed a base deficit-based trauma survival prediction model [base deficit and injury severity score m

  9. Validation of a base deficit-based trauma prediction model and comparison with TRISS and ASCOT

    NARCIS (Netherlands)

    Lam, S. W.; Lingsma, H. F.; van Beeck, E. F.; Leenen, L. P H

    2016-01-01

    Background: Base deficit provides a more objective indicator of physiological stress following injury as compared with vital signs constituting the revised trauma score (RTS). We have previously developed a base deficit-based trauma survival prediction model [base deficit and injury severity score

  10. Embryo quality predictive models based on cumulus cells gene expression

    Directory of Open Access Journals (Sweden)

    Devjak R

    2016-06-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) into the clinical practice of infertility treatment, indicators of high-quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they surround, and therefore specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe improvement in the predictive value for embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. This study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] for five genes, analyzed according to embryo quality level, was performed. Two prediction models were tested for embryo quality prediction: a binary logistic model and a decision tree model. As the main outcome, gene expression levels for the five genes were taken and the area under the curve (AUC) for the two prediction models was calculated. Among the tested genes, AMHR2 and LIF showed a significant expression difference between high-quality and low-quality embryos. These two genes were used for the construction of the two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model yielded an AUC of 0.73 ± 0.03. The two prediction models yielded similar predictive power to differentiate high- and low-quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.

  11. Validation of Biomarker-based risk prediction models

    OpenAIRE

    Taylor, Jeremy M.G.; Ankerst, Donna P.; Andridge, Rebecca R.

    2008-01-01

    The increasing availability and use of predictive models to facilitate informed decision making highlights the need for careful assessment of the validity of these models. In particular, models involving biomarkers require careful validation for two reasons: issues with overfitting when complex models involve a large number of biomarkers, and inter-laboratory variation in assays used to measure biomarkers. In this paper we distinguish between internal and external statistical validation. Inte...

  12. A CHAID Based Performance Prediction Model in Educational Data Mining

    Directory of Open Access Journals (Sweden)

    R. Bhaskaran

    2010-01-01

    Full Text Available Performance in higher secondary school education in India is a turning point in the academic lives of all students. As this academic performance is influenced by many factors, it is essential to develop a predictive data mining model of students' performance so as to identify slow learners and study the influence of the dominant factors on their academic performance. In the present investigation, a survey-cum-experimental methodology was adopted to generate a database constructed from a primary and a secondary source. While the primary data were collected from regular students, the secondary data were gathered from the schools and the office of the Chief Educational Officer (CEO). A total of 1000 datasets for the year 2006 from five different schools in three different districts of Tamilnadu were collected. The raw data were preprocessed by filling in missing values, transforming values from one form into another, and selecting relevant attributes/variables. As a result, 772 student records remained, which were used to construct the CHAID prediction model. A set of prediction rules was extracted from the CHAID prediction model and the efficiency of the generated CHAID prediction model was determined. The accuracy of the present model was compared with that of another model and found to be satisfactory.
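
    A small illustrative sketch of building an interpretable tree-based performance predictor as in the record above; CHAID itself (chi-squared splitting) is not available in scikit-learn, so a CART decision tree is used as a stand-in, and the student attributes and labels are invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical encoded student attributes: [parental education, study hours, attendance %]
X = np.array([[1, 2, 70], [3, 5, 95], [2, 3, 80], [1, 1, 60], [3, 4, 90], [2, 2, 75]])
y = np.array([0, 1, 1, 0, 1, 0])  # 1 = high performance, 0 = otherwise

# CART tree as a stand-in for CHAID (CHAID uses chi-squared splits; not in scikit-learn)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["parent_edu", "study_hours", "attendance"]))
```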

  13. Experience-based model predictive control using reinforcement learning

    NARCIS (Netherlands)

    Negenborn, R.R.; De Schutter, B.; Wiering, M.A.; Hellendoorn, J.

    2004-01-01

    Model predictive control (MPC) is becoming an increasingly popular method to select actions for controlling dynamic systems. TraditionallyMPC uses a model of the system to be controlled and a performance function to characterize the desired behavior of the system. The MPC agent finds actions over a

  14. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    in noise experiment was used for training and an ideal binary mask experiment was used for evaluation. All three models were able to capture the trends in the speech in noise training data well, but the proposed model provides a better prediction of the binary mask test data, particularly when the binary...... masks degenerate to a noise vocoder....

  15. UAV Formation Flight Based on Nonlinear Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Zhou Chao

    2012-01-01

    Full Text Available We designed a distributed collision-free formation flight control law in the framework of nonlinear model predictive control. Formation configuration is determined in the virtual reference point coordinate system. Obstacle avoidance is guaranteed by cost penalty, and intervehicle collision avoidance is guaranteed by cost penalty combined with a new priority strategy.

  16. Predictability of Shanghai Stock Market by Agent-based Mix-game Model

    CERN Document Server

    Gou, C

    2005-01-01

    This paper reports the effort of using an agent-based mix-game model to predict financial time series. It introduces the prediction methodology based on the mix-game model and gives an example of its application to forecasting the Shanghai Index. The results show that this prediction methodology is effective and that the agent-based mix-game model is potentially a good model for predicting financial market time series.

  17. Prediction of Geological Subsurfaces Based on Gaussian Random Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamsen, Petter

    1997-12-31

    During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.

  18. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the ''philosophy'' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  19. Structure-Based Predictive model for Coal Char Combustion.

    Energy Technology Data Exchange (ETDEWEB)

    Hurt, R.; Colo, J [Brown Univ., Providence, RI (United States). Div. of Engineering; Essenhigh, R.; Hadad, C [Ohio State Univ., Columbus, OH (United States). Dept. of Chemistry; Stanley, E. [Boston Univ., MA (United States). Dept. of Physics

    1997-09-24

    During the third quarter of this project, progress was made on both major technical tasks. Progress was made in the chemistry department at OSU on the calculation of thermodynamic properties for a number of model organic compounds. Modelling work was carried out at Brown to adapt a thermodynamic model of carbonaceous mesophase formation, originally applied to pitch carbonization, to the prediction of coke texture in coal combustion. This latter work makes use of the FG-DVC model of coal pyrolysis developed by Advanced Fuel Research to specify the pool of aromatic clusters that participate in the order/disorder transition. This modelling approach shows promise for the mechanistic prediction of the rank dependence of char structure and will therefore be pursued further. Crystalline ordering phenomena were also observed in a model char prepared from phenol-formaldehyde carbonized at 900°C and 1300°C using high-resolution TEM fringe imaging. Dramatic changes occur in the structure between 900 and 1300°C, making this char a suitable candidate for upcoming in situ work on the hot stage TEM. Work also proceeded on molecular dynamics simulations at Boston University and on equipment modification and testing for the combustion experiments with widely varying flame types at Ohio State.

  20. Optimality principles for model-based prediction of human gait.

    Science.gov (United States)

    Ackermann, Marko; van den Bogert, Antonie J

    2010-04-19

    Although humans have a large repertoire of potential movements, gait patterns tend to be stereotypical and appear to be selected according to optimality principles such as minimal energy. When applied to dynamic musculoskeletal models such optimality principles might be used to predict how a patient's gait adapts to mechanical interventions such as prosthetic devices or surgery. In this paper we study the effects of different performance criteria on predicted gait patterns using a 2D musculoskeletal model. The associated optimal control problem for a family of different cost functions was solved utilizing the direct collocation method. It was found that fatigue-like cost functions produced realistic gait, with stance phase knee flexion, as opposed to energy-related cost functions which avoided knee flexion during the stance phase. We conclude that fatigue minimization may be one of the primary optimality principles governing human gait.

  1. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in buildings is one of the key issues from an environmental point of view, as it is in the industrial, transportation and residential sectors. Half of the total energy consumption in a building is accounted for by HVAC (Heating, Ventilating and Air Conditioning) systems. To realize energy conservation in HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach combining a physical model and a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for learning the load prediction model are scarce; (2) it has a self-checking function that continuously monitors whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something is wrong in the load prediction procedure; (3) it can adjust the load prediction in real time against sudden changes in model parameters and environmental conditions. The proposed method is evaluated with real operation data from an existing building, and the improvement in load prediction performance is illustrated.

  2. Evaluation of Artificial Intelligence Based Models for Chemical Biodegradability Prediction

    Directory of Open Access Journals (Sweden)

    Aleksandar Sabljic

    2004-12-01

    Full Text Available This study presents a review of biodegradability modeling efforts including a detailed assessment of two models developed using an artificial intelligence based methodology. Validation results for these models using an independent, quality reviewed database, demonstrate that the models perform well when compared to another commonly used biodegradability model, against the same data. The ability of models induced by an artificial intelligence methodology to accommodate complex interactions in detailed systems, and the demonstrated reliability of the approach evaluated by this study, indicate that the methodology may have application in broadening the scope of biodegradability models. Given adequate data for biodegradability of chemicals under environmental conditions, this may allow for the development of future models that include such things as surface interface impacts on biodegradability for example.

  3. Application of Nonlinear Predictive Control Based on RBF Network Predictive Model in MCFC Plant

    Institute of Scientific and Technical Information of China (English)

    CHEN Yue-hua; CAO Guang-yi; ZHU Xin-jian

    2007-01-01

    This paper describes a nonlinear model predictive controller for regulating a molten carbonate fuel cell (MCFC). A detailed mechanistic model of the output voltage of an MCFC is presented first. However, this model is too complicated to be used in a control system. Consequently, an offline radial basis function (RBF) network is introduced to build a nonlinear predictive model. The optimal control sequences are then obtained by applying the golden mean method. The models and controller have been realized in the MATLAB environment. Simulation results indicate that the proposed algorithm exhibits a satisfying control effect even when the current densities vary widely.
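
    A minimal sketch of the golden mean (golden-section) search used above to pick the control input that minimises a one-step-ahead predictive cost; the RBF-network predictive model is replaced by a simple hypothetical surrogate function, and all numerical values are assumptions.

```python
import math

def predicted_voltage(u):
    # Stand-in for the RBF-network one-step-ahead voltage prediction (hypothetical shape)
    return 0.75 - 0.002 * (u - 120.0) ** 2 / 100.0

def cost(u, v_ref=0.72):
    # Squared tracking error between predicted output voltage and its reference
    return (predicted_voltage(u) - v_ref) ** 2

def golden_section(f, a, b, tol=1e-4):
    # Golden-mean search for the minimiser of f on [a, b]
    gr = (math.sqrt(5) - 1) / 2
    c, d = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

u_opt = golden_section(cost, 80.0, 160.0)
print(f"optimal control input ~ {u_opt:.2f}")
```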

  4. Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction

    Science.gov (United States)

    Deshpande, Manohar D.; Cravey, Robin L.

    2002-01-01

    A new procedure is presented to develop multi-variable model-based parameter estimation (MBPE) model to predict far field intensity of antenna. By performing MBPE model development procedure on a single variable at a time, the present method requires solution of smaller size matrices. The utility of the present method is demonstrated by determining far field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and elevation angle range of 0-90 degrees.

  5. Construct Method of Predicting Satisfaction Model Based on Technical Characteristics

    Institute of Scientific and Technical Information of China (English)

    YANG Xiao-an; DENG Qian; SUN Guan-long; ZHANG Wei-she

    2011-01-01

    In order to construct an objective mapping-relationship model between customer requirements and product technical characteristics, this paper puts forward a novel approach based on mining customer satisfaction information from case products and expert satisfaction information on technical characteristics. In this method, technical characteristic evaluation values are expressed as rough numbers, and the technical characteristic target sequence is determined on the basis of efficiency-type, cost-type and middle-type criteria. The calculated satisfactions of customers and of technical characteristics are used as input and output elements to construct a BP network model, and MATLAB is used to simulate this BP network model on a case study of electric bicycles.

  6. Ontology-based tools to expedite predictive model construction.

    Science.gov (United States)

    Haug, Peter; Holmen, John; Wu, Xinzi; Mynam, Kumar; Ebert, Matthew; Ferraro, Jeffrey

    2014-01-01

    Large amounts of medical data are collected electronically during the course of caring for patients using modern medical information systems. This data presents an opportunity to develop clinically useful tools through data mining and observational research studies. However, the work necessary to make sense of this data and to integrate it into a research initiative can require substantial effort from medical experts as well as from experts in medical terminology, data extraction, and data analysis. This slows the process of medical research. To reduce the effort required for the construction of computable, diagnostic predictive models, we have developed a system that hybridizes a medical ontology with a large clinical data warehouse. Here we describe components of this system designed to automate the development of preliminary diagnostic models and to provide visual clues that can assist the researcher in planning for further analysis of the data behind these models.

  7. SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating.

    Science.gov (United States)

    Lee, Young-Joo; Cho, Soojin

    2016-03-02

    Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed.
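
    A toy sketch of the FE model updating step described above: a global stiffness factor is adjusted so that model natural frequencies match frequencies identified from SHM data; the surrogate frequency model and all numerical values are illustrative assumptions, not the paper's bridge model.

```python
import numpy as np
from scipy.optimize import least_squares

# Identified natural frequencies (Hz) from ambient-vibration SHM data (hypothetical)
f_measured = np.array([2.10, 5.85])

def model_frequencies(k_scale, f_initial=np.array([2.30, 6.40])):
    # Toy FE surrogate: frequencies scale with the square root of a global stiffness factor
    return f_initial * np.sqrt(k_scale)

# Least-squares update of the stiffness factor to match the identified frequencies
res = least_squares(lambda k: model_frequencies(k[0]) - f_measured, x0=[1.0])
print("updated stiffness scale factor:", res.x[0])
```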

  8. Improving Computational Efficiency of Prediction in Model-based Prognostics Using the Unscented Transform

    Data.gov (United States)

    National Aeronautics and Space Administration — Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of...

  9. The Prediction Model of Dam Uplift Pressure Based on Random Forest

    Science.gov (United States)

    Li, Xing; Su, Huaizhi; Hu, Jiang

    2017-09-01

    The prediction of dam uplift pressure is of great significance in dam safety monitoring. Based on comprehensive consideration of various factors, 18 parameters are selected as the main factors affecting uplift pressure. Using the actual monitoring data of uplift pressure as the evaluation data, dam uplift pressure prediction models are built with the random forest algorithm and the support vector machine, and the predictive performance of the two models is compared and analyzed. At the same time, based on the established random forest prediction model, the significance of each factor is analyzed and the importance of each factor in the prediction model is calculated with the importance function. The results show that: (1) the RF prediction model can quickly and accurately predict the uplift pressure value from the influencing factors, with an average prediction accuracy above 96%; compared with the support vector machine (SVM) model, the random forest model has better robustness, better prediction precision and faster convergence, and is more robust to missing and unbalanced data. (2) Water level has the largest effect on uplift pressure, while rainfall has the smallest influence compared with the other factors.
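
    A minimal sketch of the random-forest prediction and factor-importance analysis described above, using scikit-learn; the 18 influencing factors and the dominance of water level are emulated with synthetic data, so the numbers are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the 18 influencing factors (water level, rainfall, temperature, ...)
X = rng.normal(size=(200, 18))
# Synthetic uplift pressure dominated by the first factor ("water level"), mimicking the paper's finding
y = 3.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.2, size=200)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("importance of first three factors:", rf.feature_importances_[:3])
```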

  10. Optimization of condition-based asset management using a predictive health model

    NARCIS (Netherlands)

    Bajracharya, G.; Koltunowicz, T.; Negenborn, R.R.; Papp, Z.; Djairam, D.; De Schutter, B.; Smit, J.J.

    2009-01-01

    In this paper, a model predictive framework is used to optimize the operation and maintenance actions of power system equipment based on the predicted health state of this equipment. In particular, this framework is used to predict the health state of transformers based on their usage. The health sta

  11. Nonlinear model predictive control based on collective neurodynamic optimization.

    Science.gov (United States)

    Yan, Zheng; Wang, Jun

    2015-04-01

    In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach.

  12. SVM with Quadratic Polynomial Kernel Function Based Nonlinear Model One-step-ahead Predictive Control

    Institute of Scientific and Technical Information of China (English)

    钟伟民; 何国龙; 皮道映; 孙优贤

    2005-01-01

    A nonlinear one-step-ahead model predictive controller based on a support vector machine (SVM) with a quadratic polynomial kernel function is presented. The SVM-based predictive model is established with a black-box identification method. By solving a cubic equation in the feature space, an explicit predictive control law is obtained through the predictive control mechanism. The effect of the controller is demonstrated on a recognized benchmark problem and on the control of a continuous stirred-tank reactor (CSTR). Simulation results show that the predictive controller based on the SVM with a quadratic polynomial kernel can be applied well to nonlinear systems, with good performance in following reference trajectories as well as in disturbance rejection.
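
    A minimal sketch of identifying a one-step-ahead predictive model with a quadratic polynomial kernel SVM, as in the record above; the toy plant dynamics and hyperparameters are assumptions, and the explicit predictive control law derived in the paper is not reproduced.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Black-box identification data: regressors [y(k), u(k)] -> y(k+1) for a toy nonlinear plant
u = rng.uniform(-1, 1, 300)
y = np.zeros(301)
for k in range(300):
    y[k + 1] = 0.6 * y[k] + 0.3 * u[k] - 0.1 * y[k] ** 2  # hypothetical CSTR-like dynamics

X = np.column_stack([y[:-1], u])
target = y[1:]

# Quadratic polynomial kernel SVM as the one-step-ahead predictive model
svm = SVR(kernel="poly", degree=2, C=10.0, epsilon=0.01).fit(X, target)
print("predicted y(k+1):", svm.predict([[0.2, 0.5]]))
```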

  13. A physiologically based in silico kinetic model predicting plasma cholesterol concentrations in humans

    NARCIS (Netherlands)

    Pas, van de N.C.A.; Woutersen, R.A.; Ommen, van B.; Rietjens, I.M.C.M.; Graaf, de A.A.

    2012-01-01

    Increased plasma cholesterol concentration is associated with increased risk of cardiovascular disease. This study describes the development, validation, and analysis of a physiologically based kinetic (PBK) model for the prediction of plasma cholesterol concentrations in humans. This model was dire

  14. A physiologically based in silico kinetic model predicting plasma cholesterol concentrations in humans

    NARCIS (Netherlands)

    Pas, N.C.A. van de; Woutersen, R.A.; Ommen, B. van; Rietjens, I.M.C.M.; Graaf, A.A. de

    2012-01-01

    Increased plasma cholesterol concentration is associated with increased risk of cardiovascular disease. This study describes the development, validation, and analysis of a physiologically based kinetic (PBK) model for the prediction of plasma cholesterol concentrations in humans. This model was

  15. A CHAID Based Performance Prediction Model in Educational Data Mining

    CERN Document Server

    Ramaswami, M

    2010-01-01

    The performance in higher secondary school education in India is a turning point in the academic lives of all students. As this academic performance is influenced by many factors, it is essential to develop predictive data mining model for students' performance so as to identify the slow learners and study the influence of the dominant factors on their academic performance. In the present investigation, a survey cum experimental methodology was adopted to generate a database and it was constructed from a primary and a secondary source. While the primary data was collected from the regular students, the secondary data was gathered from the school and office of the Chief Educational Officer (CEO). A total of 1000 datasets of the year 2006 from five different schools in three different districts of Tamilnadu were collected. The raw data was preprocessed in terms of filling up missing values, transforming values in one form into another and relevant attribute/ variable selection. As a result, we had 772 student r...

  16. State Prediction of Chaotic System Based on ANN Model

    Institute of Scientific and Technical Information of China (English)

    YUE Yi-hong; HAN Wen-xiu

    2002-01-01

    The choice of time delay and embedding dimension is very important for the phase space reconstruction of any chaotic time series. In this paper, we determine the optimal time delay by computing the autocorrelation function of the time series. The optimal embedding dimension is given by means of the relation between the embedding dimension and the correlation dimension of the chaotic time series. Based on the methods above, we choose an ANN model to approximate the given true system. At the same time, a new algorithm is applied to determine the network weights. At the end of this paper, the theory above is demonstrated through the study of a time series generated by the logistic map.

  17. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the ''philosophy'' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  18. Multi-model ensemble-based probabilistic prediction of tropical cyclogenesis using TIGGE model forecasts

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati; Pal, P. K.

    2016-10-01

    An extended-range tropical cyclogenesis forecast model has been developed using the forecasts of global models available from the TIGGE portal. A scheme has been developed to detect the signatures of cyclogenesis in the global model forecast fields [i.e., the mean sea level pressure and surface winds (10 m horizontal winds)]. For this, a wind matching index was determined between synthetic cyclonic wind fields and the forecast wind fields. Thresholds of 0.4 for the wind matching index and 1005 hPa for pressure were determined to detect cyclonic systems. The detected cyclonic systems in the study region are classified into different cyclone categories based on their intensity (maximum wind speed). Forecasts of up to 15 days from three global models, viz. ECMWF, NCEP and UKMO, have been used to predict cyclogenesis based on a multi-model ensemble approach. The occurrence of cyclonic events of different categories at all forecast steps in the gridded region (10 × 10 km2) was used to estimate the probability of cyclogenesis. The probability of cyclogenesis was estimated by computing the grid score using the wind matching index of each model at each forecast step and convolving it with a Gaussian filter. The proposed method is used to predict the cyclogenesis of five named tropical cyclones formed during 2013 in the north Indian Ocean. The cyclogenesis of these systems was predicted 6-8 days in advance using this approach. The mean lead prediction time of the proposed model for cyclogenesis events was found to be 7 days.
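
    A minimal sketch of the detection and smoothing steps described above: grid points with wind matching index above 0.4 and mean sea level pressure below 1005 hPa are flagged, and the resulting grid score is convolved with a Gaussian filter; the forecast fields here are random stand-ins for the TIGGE model fields.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical forecast fields on a lat-lon grid
wind_match_index = np.random.default_rng(2).uniform(0, 1, (50, 50))
mslp_hpa = np.random.default_rng(3).uniform(995, 1015, (50, 50))

# Thresholds from the paper: wind matching index > 0.4 and MSLP < 1005 hPa flag a cyclonic system
detected = (wind_match_index > 0.4) & (mslp_hpa < 1005.0)

# Convolve the grid score with a Gaussian filter to obtain a smooth cyclogenesis probability field
probability = gaussian_filter(detected.astype(float), sigma=2.0)
print("max cyclogenesis score:", probability.max())
```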

  19. STRUCTURE-BASED PREDICTIVE MODEL FOR COAL CHAR COMBUSTION

    Energy Technology Data Exchange (ETDEWEB)

    CHRISTOPHER M. HADAD; JOSEPH M. CALO; ROBERT H. ESSENHIGH; ROBERT H. HURT

    1998-06-04

    During the past quarter of this project, significant progress continued to be made on both major technical tasks. Progress was made at OSU on advancing the application of computational chemistry to oxidative attack on model polyaromatic hydrocarbons (PAHs) and graphitic structures. This work is directed at the application of quantitative ab initio molecular orbital theory to address the decomposition products and mechanisms of coal char reactivity. Previously, it was shown that the "hybrid" B3LYP method can be used to provide quantitative information concerning the stability of the corresponding radicals that arise by hydrogen atom abstraction from monocyclic aromatic rings. In the most recent quarter, these approaches have been extended to larger carbocyclic ring systems, such as coronene, in order to compare the properties of a large carbonaceous PAH to those of the smaller, monocyclic aromatic systems. It was concluded that, at least for bond dissociation energy considerations, the properties of the large PAHs can be modeled reasonably well by smaller systems. In addition to the preceding work, investigations were initiated on the interaction of selected radicals in the "radical pool" with the different types of aromatic structures. In particular, the different pathways for addition vs. abstraction to benzene and furan by H and OH radicals were examined. Thus far, the addition channel appears to be significantly favored over abstraction on both kinetic and thermochemical grounds. Experimental work at Brown University in support of the development of predictive structural models of coal char combustion was focused on elucidating the role of coal mineral matter impurities on reactivity. An "inverse" approach was used in which a carbon material was doped with coal mineral matter. The carbon material was derived from a high-carbon-content fly ash (Fly Ash 23 from the Salem Basin Power Plant). The ash was obtained from Pittsburgh #8 coal (PSOC 1451). Doped

  20. A Comprehensive Propagation Prediction Model Comprising Microfacet Based Scattering and Probability Based Coverage Optimization Algorithm

    OpenAIRE

    A. S. M. Zahid Kausar; Ahmed Wasif Reza; Lau Chun Wo; Harikrishnan Ramiah

    2014-01-01

    Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving the goal of optimum coverage, which is a key part in designing wireless network. In this paper, an accelerated technique of three-dimensional ray tracing is presented, where rough surface scattering is included for making a more accurate ray tracing technique. Here, the rough surface scattering is represented...

  1. A Comprehensive Propagation Prediction Model Comprising Microfacet Based Scattering and Probability Based Coverage Optimization Algorithm

    OpenAIRE

    Kausar, A. S. M. Zahid; Reza, Ahmed Wasif; Wo, Lau Chun; Ramiah, Harikrishnan

    2014-01-01

    Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving the goal of optimum coverage, which is a key part in designing wireless network. In this paper, an accelerated technique of three-dimensional ray tracing is presented, where rough surface scattering is included for making a more accurate ray tracing technique. Here, the rough surface scattering is represented ...

  2. Structure Based Predictive Model for Coal Char Combustion

    Energy Technology Data Exchange (ETDEWEB)

    Robert Hurt; Joseph Calo; Robert Essenhigh; Christopher Hadad

    2000-12-30

    This unique collaborative project has taken a very fundamental look at the origin of structure and combustion reactivity of coal chars. It was a combined experimental and theoretical effort involving three universities and collaborators from universities outside the U.S. and from U.S. National Laboratories and contract research companies. The project goal was to improve our understanding of char structure and behavior by examining the fundamental chemistry of its polyaromatic building blocks. The project team investigated the elementary oxidative attack on polyaromatic systems, coupled with a study of the assembly processes that convert these polyaromatic clusters to mature carbon materials (or chars). We believe that the work done in this project has defined a powerful new science-based approach to the understanding of char behavior. The work on aromatic oxidation pathways made extensive use of computational chemistry and was led by Professor Christopher Hadad in the Department of Chemistry at Ohio State University. Laboratory experiments on char structure, properties, and combustion reactivity were carried out at both OSU and Brown, led by Principal Investigators Joseph Calo, Robert Essenhigh, and Robert Hurt. Modeling activities were divided into two parts: first, unique models of crystal structure development were formulated by the team at Brown (PIs Hurt and Calo) with input from Boston University and significant collaboration with Dr. Alan Kerstein at Sandia and with Dr. Zhong-Ying Chen at SAIC. Second, new combustion models were developed and tested, led by Professor Essenhigh at OSU, Dieter Foertsch (a collaborator at the University of Stuttgart), and Professor Hurt at Brown. One product of this work is the CBK8 model of carbon burnout, which has already found practical use in CFD codes and in other numerical models of pulverized fuel combustion processes, such as EPRI's NOxLOI Predictor. The remainder of the report consists of detailed

  3. Predictive Models of Alcohol Use Based on Attitudes and Individual Values

    Science.gov (United States)

    Del Castillo Rodríguez, José A. García; López-Sánchez, Carmen; Soler, M. Carmen Quiles; Del Castillo-López, Álvaro García; Pertusa, Mónica Gázquez; Campos, Juan Carlos Marzo; Inglés, Cándido J.

    2013-01-01

    Two predictive models are developed in this article: the first is designed to predict people's attitudes to alcoholic drinks, while the second sets out to predict the use of alcohol in relation to selected individual values. University students (N = 1,500) were recruited through stratified sampling based on sex and academic discipline. The…

  4. Predictive Models of Alcohol Use Based on Attitudes and Individual Values

    Science.gov (United States)

    Del Castillo Rodríguez, José A. García; López-Sánchez, Carmen; Soler, M. Carmen Quiles; Del Castillo-López, Álvaro García; Pertusa, Mónica Gázquez; Campos, Juan Carlos Marzo; Inglés, Cándido J.

    2013-01-01

    Two predictive models are developed in this article: the first is designed to predict people's attitudes to alcoholic drinks, while the second sets out to predict the use of alcohol in relation to selected individual values. University students (N = 1,500) were recruited through stratified sampling based on sex and academic discipline. The…

  5. Predictive analytics technology review: Similarity-based modeling and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, James; Doan, Don; Gandhi, Devang; Nieman, Bill

    2010-09-15

    Over 11 years ago, SmartSignal introduced Predictive Analytics for eliminating equipment failures, using its patented SBM technology. SmartSignal continues to lead and dominate the market and, in 2010, went one step further and introduced Predictive Diagnostics. Now, SmartSignal is combining Predictive Diagnostics with RCM methodology and industry expertise. FMEA logic reengineers maintenance work management, eliminates unneeded inspections, and focuses efforts on the real issues. This integrated solution significantly lowers maintenance costs, protects against critical asset failures, and improves commercial availability, and reduces work orders 20-40%. Learn how.

  6. An Efficient Deterministic Approach to Model-based Prediction Uncertainty

    Data.gov (United States)

    National Aeronautics and Space Administration — Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the...

  7. BAYESIAN FORECASTS COMBINATION TO IMPROVE THE ROMANIAN INFLATION PREDICTIONS BASED ON ECONOMETRIC MODELS

    Directory of Open Access Journals (Sweden)

    Mihaela Simionescu

    2014-12-01

    Full Text Available There are many types of econometric models used to predict the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts' forecasts are utilized as prior information; for Romania these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models such as VAR, Bayesian VAR, a simultaneous equations model, a dynamic model and a log-linear model. The Bayesian combinations that used experts' predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, also outperforming zero-weight and equal-weight predictions and naïve forecasts.
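
    A minimal sketch of one common form of Bayesian shrinkage combination consistent with the description above: the combined forecast equals the econometric-model forecast when the shrinkage parameter is zero and tends to the experts' prior as the parameter grows. The exact estimator used in the paper is not reproduced, and the numerical forecasts below are invented, so treat this as an assumption-laden illustration.

```python
import numpy as np

def shrinkage_combination(model_forecast, expert_prior, lam):
    """One common shrinkage form: the combined forecast moves from the econometric-model
    forecast (lam = 0) towards the expert prior (lam -> infinity)."""
    return (model_forecast + lam * expert_prior) / (1.0 + lam)

# Hypothetical quarterly inflation forecasts (%) from an econometric model and an expert panel
model_forecast = np.array([3.2, 3.0, 2.8, 2.7])
expert_prior = np.array([3.5, 3.3, 3.1, 2.9])

for lam in (0.0, 1.0, 100.0):
    print(lam, shrinkage_combination(model_forecast, expert_prior, lam))
```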

  8. Bayesian model-based cluster analysis for predicting macrofaunal communities

    NARCIS (Netherlands)

    Braak, ter C.J.F.; Hoijtink, H.; Akkermans, W.; Verdonschot, P.F.M.

    2003-01-01

    To predict macrofaunal community composition from environmental data a two-step approach is often followed: (1) the water samples are clustered into groups on the basis of the macrofauna data and (2) the groups are related to the environmental data, e.g. by discriminant analysis. For the cluster ana

  9. Prediction Uncertainty Analyses for the Combined Physically-Based and Data-Driven Models

    Science.gov (United States)

    Demissie, Y. K.; Valocchi, A. J.; Minsker, B. S.; Bailey, B. A.

    2007-12-01

    The unavoidable simplification associated with physically-based mathematical models can result in biased parameter estimates and correlated model calibration errors, which in return affect the accuracy of model predictions and the corresponding uncertainty analyses. In this work, a physically-based groundwater model (MODFLOW) together with error-correcting artificial neural networks (ANN) are used in a complementary fashion to obtain an improved prediction (i.e. prediction with reduced bias and error correlation). The associated prediction uncertainty of the coupled MODFLOW-ANN model is then assessed using three alternative methods. The first method estimates the combined model confidence and prediction intervals using first-order least- squares regression approximation theory. The second method uses Monte Carlo and bootstrap techniques for MODFLOW and ANN, respectively, to construct the combined model confidence and prediction intervals. The third method relies on a Bayesian approach that uses analytical or Monte Carlo methods to derive the intervals. The performance of these approaches is compared with Generalized Likelihood Uncertainty Estimation (GLUE) and Calibration-Constrained Monte Carlo (CCMC) intervals of the MODFLOW predictions alone. The results are demonstrated for a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study comprises structural, parameter, and measurement uncertainties. The preliminary results indicate that the proposed three approaches yield comparable confidence and prediction intervals, thus making the computationally efficient first-order least-squares regression approach attractive for estimating the coupled model uncertainty. These results will be compared with GLUE and CCMC results.

  10. A CBR-Based and MAHP-Based Customer Value Prediction Model for New Product Development

    Directory of Open Access Journals (Sweden)

    Yu-Jie Zhao

    2014-01-01

    Full Text Available In the fierce market environment, the enterprise which wants to meet customer needs and boost its market profit and share must focus on the new product development. To overcome the limitations of previous research, Chan et al. proposed a dynamic decision support system to predict the customer lifetime value (CLV for new product development. However, to better meet the customer needs, there are still some deficiencies in their model, so this study proposes a CBR-based and MAHP-based customer value prediction model for a new product (C&M-CVPM. CBR (case based reasoning can reduce experts’ workload and evaluation time, while MAHP (multiplicative analytic hierarchy process can use actual but average influencing factor’s effectiveness in stimulation, and at same time C&M-CVPM uses dynamic customers’ transition probability which is more close to reality. This study not only introduces the realization of CBR and MAHP, but also elaborates C&M-CVPM’s three main modules. The application of the proposed model is illustrated and confirmed to be sensible and convincing through a stimulation experiment.

  11. A CBR-Based and MAHP-Based Customer Value Prediction Model for New Product Development

    Science.gov (United States)

    Zhao, Yu-Jie; Luo, Xin-xing; Deng, Li

    2014-01-01

    In the fierce market environment, the enterprise which wants to meet customer needs and boost its market profit and share must focus on the new product development. To overcome the limitations of previous research, Chan et al. proposed a dynamic decision support system to predict the customer lifetime value (CLV) for new product development. However, to better meet the customer needs, there are still some deficiencies in their model, so this study proposes a CBR-based and MAHP-based customer value prediction model for a new product (C&M-CVPM). CBR (case based reasoning) can reduce experts' workload and evaluation time, while MAHP (multiplicative analytic hierarchy process) can use actual but average influencing factor's effectiveness in stimulation, and at same time C&M-CVPM uses dynamic customers' transition probability which is more close to reality. This study not only introduces the realization of CBR and MAHP, but also elaborates C&M-CVPM's three main modules. The application of the proposed model is illustrated and confirmed to be sensible and convincing through a stimulation experiment. PMID:25162050

  12. A CBR-based and MAHP-based customer value prediction model for new product development.

    Science.gov (United States)

    Zhao, Yu-Jie; Luo, Xin-xing; Deng, Li

    2014-01-01

    In the fierce market environment, the enterprise which wants to meet customer needs and boost its market profit and share must focus on the new product development. To overcome the limitations of previous research, Chan et al. proposed a dynamic decision support system to predict the customer lifetime value (CLV) for new product development. However, to better meet the customer needs, there are still some deficiencies in their model, so this study proposes a CBR-based and MAHP-based customer value prediction model for a new product (C&M-CVPM). CBR (case based reasoning) can reduce experts' workload and evaluation time, while MAHP (multiplicative analytic hierarchy process) can use actual but average influencing factor's effectiveness in stimulation, and at same time C&M-CVPM uses dynamic customers' transition probability which is more close to reality. This study not only introduces the realization of CBR and MAHP, but also elaborates C&M-CVPM's three main modules. The application of the proposed model is illustrated and confirmed to be sensible and convincing through a stimulation experiment.

  13. An Integrative Pathway-based Clinical-genomic Model for Cancer Survival Prediction.

    Science.gov (United States)

    Chen, Xi; Wang, Lily; Ishwaran, Hemant

    2010-09-01

    Prediction models that use gene expression levels are now being proposed for personalized treatment of cancer, but building accurate models that are easy to interpret remains a challenge. In this paper, we describe an integrative clinical-genomic approach that combines both genomic pathway and clinical information. First, we summarize information from genes in each pathway using Supervised Principal Components (SPCA) to obtain pathway-based genomic predictors. Next, we build a prediction model based on clinical variables and pathway-based genomic predictors using Random Survival Forests (RSF). Our rationale for this two-stage procedure is that the underlying disease process may be influenced by environmental exposure (measured by clinical variables) and perturbations in different pathways (measured by pathway-based genomic variables), as well as their interactions. Using two cancer microarray datasets, we show that the pathway-based clinical-genomic model outperforms gene-based clinical-genomic models, with improved prediction accuracy and interpretability.

  14. Prediction model for permeability index by integrating case-based reasoning with adaptive particle swarm optimization

    Institute of Scientific and Technical Information of China (English)

    Zhu Hongqiu; Yang Chunhua; Gui Weihua

    2009-01-01

    To effectively predict the permeability index of the smelting process in the imperial smelting furnace, an intelligent prediction model is proposed. It integrates case-based reasoning (CBR) with adaptive particle swarm optimization (PSO). The number of nearest neighbors and the feature weight vector are optimized online using the adaptive PSO to improve the prediction accuracy of CBR. An adaptive inertia weight and a mutation operation are used to overcome premature convergence of the PSO. The proposed method is validated and compared with the basic weighted CBR. The results show that the proposed model has higher prediction accuracy and better performance than the basic CBR model.
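
    A minimal sketch of the CBR retrieval step described above, in which the number of nearest neighbours k and the feature weights are exactly the quantities the paper tunes online with adaptive PSO (the PSO itself is not implemented here); the smelting-case features and permeability values are invented.

```python
import numpy as np

def cbr_predict(query, cases, outcomes, feature_weights, k):
    """Weighted nearest-neighbour retrieval: distance-weighted average of the k most similar cases."""
    d = np.sqrt(((cases - query) ** 2 * feature_weights).sum(axis=1))
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    return float((w * outcomes[idx]).sum() / w.sum())

# Hypothetical historical smelting cases: [blast volume, blast temperature, top pressure]
cases = np.array([[3200, 950, 78], [3100, 940, 75], [3300, 960, 80], [3150, 945, 76]], float)
permeability = np.array([1.95, 1.88, 2.02, 1.90])
weights = np.array([0.5, 0.3, 0.2])  # candidate feature weights (these would be PSO-optimised)

print(cbr_predict(np.array([3250, 955, 79.0]), cases, permeability, weights, k=3))
```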

  15. Predicting the acute neurotoxicity of diverse organic solvents using probabilistic neural networks based QSTR modeling approaches.

    Science.gov (United States)

    Basant, Nikita; Gupta, Shikha; Singh, Kunwar P

    2016-03-01

    Organic solvents are widely used chemicals and the neurotoxic properties of some are well established. In this study, we established nonlinear qualitative and quantitative structure-toxicity relationship (STR) models for predicting neurotoxic classes and neurotoxicity of structurally diverse solvents in rodent test species following OECD guideline principles for model development. Probabilistic neural network (PNN) based qualitative and generalized regression neural network (GRNN) based quantitative STR models were constructed using neurotoxicity data from rat and mouse studies. Further, interspecies correlation based quantitative activity-activity relationship (QAAR) and global QSTR models were also developed using the combined data set of both rodent species for predicting the neurotoxicity of solvents. The constructed models were validated through deriving several statistical coefficients for the test data and the prediction and generalization abilities of these models were evaluated. The qualitative STR models (rat and mouse) yielded classification accuracies of 92.86% in the test data sets, whereas, the quantitative STRs yielded correlation (R(2)) of >0.93 between the measured and model predicted toxicity values in both the test data (rat and mouse). The prediction accuracies of the QAAR (R(2) 0.859) and global STR (R(2) 0.945) models were comparable to those of the independent local STR models. The results suggest the ability of the developed QSTR models to reliably predict binary neurotoxicity classes and the endpoint neurotoxicities of the structurally diverse organic solvents.

  16. A Method for Driving Route Predictions Based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Ning Ye

    2015-01-01

    Full Text Available We present a driving route prediction method based on the Hidden Markov Model (HMM). This method can accurately predict a vehicle's entire route as early in a trip's lifetime as possible without inputting origins and destinations beforehand. Firstly, we propose the route recommendation system architecture, in which route predictions play an important role. Secondly, we define a road network model, normalize each driving route in the rectangular coordinate system, and build the HMM to prepare for route predictions using a method of training-set extension based on K-means++ and the add-one (Laplace) smoothing technique. Thirdly, we present the route prediction algorithm. Finally, experimental results on the effectiveness of the HMM-based route predictions are shown.
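
    A minimal sketch of the training step described above, building a road-segment transition matrix with add-one (Laplace) smoothing and using it for a greedy next-segment prediction; this is a first-order Markov simplification, so the hidden-state machinery of the full HMM and the K-means++ training-set extension are not reproduced, and the trip data are invented.

```python
import numpy as np

# Hypothetical road-segment IDs observed along historical trips
trips = [[0, 1, 2, 4], [0, 1, 3, 4], [0, 2, 4, 5], [1, 2, 4, 5]]
n_segments = 6

# Transition counts with add-one (Laplace) smoothing, as described for the training step
counts = np.ones((n_segments, n_segments))
for trip in trips:
    for a, b in zip(trip[:-1], trip[1:]):
        counts[a, b] += 1
transition = counts / counts.sum(axis=1, keepdims=True)

# Greedy next-segment prediction from the current segment
current = 2
print("most likely next segment:", int(np.argmax(transition[current])))
```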

  17. Physics-based Modeling Tools for Life Prediction and Durability Assessment of Advanced Materials Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The technical objectives of this program are: (1) to develop a set of physics-based modeling tools to predict the initiation of hot corrosion and to address pit and...

  18. Prediction and Research on Vegetable Price Based on Genetic Algorithm and Neural Network Model

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Considering the complexity of vegetable price forecasting, a vegetable price prediction model was set up by applying a neural network based on a genetic algorithm, exploiting the complementary characteristics of genetic algorithms and neural networks. Taking mushrooms as an example, the parameters of the model are analyzed through experiment. In the end, the results of the genetic algorithm-based neural network and the BP neural network are compared. The results show that the absolute error of the prediction data is on the order of 10%; within the ranges where the absolute error of the prediction data is within 20% and 15%, the accuracy of the genetic algorithm-based neural network is higher than that of the BP neural network model, especially where the absolute error is within 20%. The accuracy of the genetic algorithm-based neural network is clearly better than that of the BP neural network model, which indicates the favorable generalization capability of the model.

  19. Predictive functional control based on fuzzy T-S model for HVAC systems temperature control

    Institute of Scientific and Technical Information of China (English)

    Hongli LÜ; Lei JIA; Shulan KONG; Zhaosheng ZHANG

    2007-01-01

    In heating, ventilating and air-conditioning (HVAC) systems, there exist severe nonlinearity, time-varying behavior, disturbances and uncertainties. A new predictive functional control based on a Takagi-Sugeno (T-S) fuzzy model was proposed to control HVAC systems. The T-S fuzzy model of the stabilized controlled process was obtained using the least squares method; then, on the basis of the global linear predictive model derived from the T-S fuzzy model, the process was controlled by the predictive functional controller. In particular, the feedback regulation part was developed to compensate for uncertainties of the fuzzy predictive model. Finally, simulation results in HVAC system control applications showed that the proposed fuzzy model predictive functional control improves tracking performance and robustness. Compared with the conventional PID controller, this control strategy has the advantages of less overshoot and shorter settling time, etc.

  20. Thematic and spatial resolutions affect model-based predictions of tree species distribution.

    Directory of Open Access Journals (Sweden)

    Yu Liang

    Full Text Available Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different thematic (different numbers of land types) and spatial resolutions combinations, and then statistically examined the differences of species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution.

  1. Thematic and spatial resolutions affect model-based predictions of tree species distribution.

    Science.gov (United States)

    Liang, Yu; He, Hong S; Fraser, Jacob S; Wu, ZhiWei

    2013-01-01

    Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different thematic (different numbers of land types) and spatial resolutions combinations, and then statistically examined the differences of species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution.

  2. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, which is based on the analysis of the characteristics and defects of the genetic algorithm and the support vector machine. In the cloud computing environment, firstly, SVM parameters are optimized by the parallel genetic algorithm, and then this optimized parallel SVM model is used to predict traffic flow. On the basis of the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and the parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
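
    A serial sketch of the GA-SVM idea, tuning SVR hyperparameters (log-scaled C and gamma) by cross-validation with a tiny genetic loop; the data, population size, and operators are assumptions, and the cloud/MPI parallel evaluation described in the record is not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def fitness(ind, X, y):
    C, gamma = 10 ** ind[0], 10 ** ind[1]            # log-scaled search space
    model = SVR(C=C, gamma=gamma)
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

def ga_tune(X, y, pop=12, gens=15):
    """Tiny serial genetic algorithm over (log10 C, log10 gamma)."""
    P = rng.uniform([-1, -3], [3, 1], size=(pop, 2))
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in P])
        parents = P[np.argsort(scores)[-pop // 2:]]                    # truncation selection
        idx = rng.integers(len(parents), size=(pop - len(parents), 2))
        children = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2.0     # arithmetic crossover
        children += rng.normal(scale=0.2, size=children.shape)         # mutation
        P = np.vstack([parents, children])
    best = P[np.argmax([fitness(ind, X, y) for ind in P])]
    return 10 ** best[0], 10 ** best[1]

# Hypothetical stand-in for detector traffic-flow features and targets.
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)
print(ga_tune(X, y))
```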

  3. A model of urban rational growth based on grey prediction

    Science.gov (United States)

    Xiao, Wenjing

    2017-04-01

    Smart growth focuses on building sustainable cities, using compact development to prevent urban sprawl. This paper establishes a series of models to implement smart growth theories in city design, and two specific city design cases are presented. Firstly, we establish a Smart Growth Measure Model to measure the success of smart growth of a city, and we use the Full Permutation Polygon Synthetic Indicator Method to calculate the Comprehensive Indicator (CI), which quantifies this success. Secondly, this paper uses the principles of smart growth to develop a new growth plan for two cities. We establish an optimization model to maximize the CI value, and the Particle Swarm Optimization (PSO) algorithm is used to solve the model. Combined with the calculation results and the specific circumstances of the cities, we devise their smart growth plans respectively.
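
    A minimal particle swarm optimization loop of the kind used to maximize the Comprehensive Indicator; the objective function, bounds, and PSO constants below are illustrative stand-ins rather than the paper's actual CI formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def comprehensive_indicator(x):
    """Hypothetical stand-in for the Full Permutation Polygon Synthetic Indicator:
    a smooth function of the planning variables with a single optimum."""
    return -np.sum((x - 0.6) ** 2)

def pso_maximise(f, dim=5, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    X = rng.uniform(0, 1, size=(particles, dim))      # planning variables in [0, 1]
    V = np.zeros_like(X)
    pbest, pbest_val = X.copy(), np.array([f(x) for x in X])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=X.shape), rng.uniform(size=X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, 0, 1)
        vals = np.array([f(x) for x in X])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, f(gbest)

print(pso_maximise(comprehensive_indicator))
```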

  4. Camera-based model to predict the total difference between effect coatings under directional illumination

    Institute of Scientific and Technical Information of China (English)

    Zhongning Huang; Haisong Xu; M.Rounier Luo

    2011-01-01

    A camera-based model is established to predict the total difference for samples of metallic panels with effect coatings under directional illumination, and the testing results indicate that the model can precisely predict the total difference between samples with metallic coatings, with satisfactory consistency with the visual data. Due to the limited number of testing samples, the model performance should be further developed by increasing the training and testing samples.

  5. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Science.gov (United States)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
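
    A small illustration of the binarization-to-rules idea using brute-force frequent-itemset counting (the record's implementation uses the Lattice model with the Apriori algorithm in R); the event names, transactions, and thresholds are hypothetical.

```python
from itertools import combinations

# Hypothetical binarized machine-event log: each transaction is the set of
# attributes (causes and failure types) that were "1" for one machine cycle.
transactions = [
    {"high_vibration", "overheat", "spindle_failure"},
    {"high_vibration", "spindle_failure"},
    {"overheat", "coolant_low"},
    {"high_vibration", "overheat", "spindle_failure"},
    {"coolant_low"},
]

def frequent_itemsets(transactions, min_support=0.4, max_len=3):
    """Brute-force support counting over small candidate itemsets."""
    items = sorted(set().union(*transactions))
    n = len(transactions)
    freq = {}
    for k in range(1, max_len + 1):
        for itemset in combinations(items, k):
            support = sum(set(itemset) <= t for t in transactions) / n
            if support >= min_support:
                freq[frozenset(itemset)] = support
    return freq

def rules(freq, min_confidence=0.8):
    """IF-THEN rules: antecedent -> consequent with confidence = supp(A ∪ C)/supp(A)."""
    out = []
    for itemset, supp in freq.items():
        for r in range(1, len(itemset)):
            for antecedent in combinations(itemset, r):
                a = frozenset(antecedent)
                if a in freq and supp / freq[a] >= min_confidence:
                    out.append((set(a), set(itemset - a), supp / freq[a]))
    return out

for a, c, conf in rules(frequent_itemsets(transactions)):
    print(f"IF {a} THEN {c}  (confidence {conf:.2f})")
```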

  6. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective non-intrusive, prediction of H.264 encoded video for all content types combining parameters both in the physical and application layer over Universal Mobile Telecommunication Systems (UMTS networks. In order to characterize the Quality of Service (QoS level, a learning model based on Adaptive Neural Fuzzy Inference System (ANFIS and a second model based on non-linear regression analysis is proposed to predict the video quality in terms of the Mean Opinion Score (MOS. The objective of the paper is two-fold. First, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video. Second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both the models are trained with a combination of physical and application layer parameters and validated with unseen dataset. Preliminary results show that good prediction accuracy was obtained from both the models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.

  7. A Physics-Informed Machine Learning Framework for RANS-based Predictive Turbulence Modeling

    Science.gov (United States)

    Xiao, Heng; Wu, Jinlong; Wang, Jianxun; Ling, Julia

    2016-11-01

    Numerical models based on the Reynolds-averaged Navier-Stokes (RANS) equations are widely used in turbulent flow simulations in support of engineering design and optimization. In these models, turbulence modeling introduces significant uncertainties in the predictions. In light of the decades-long stagnation encountered by the traditional approach of turbulence model development, data-driven methods have been proposed as a promising alternative. We will present a data-driven, physics-informed machine-learning framework for predictive turbulence modeling based on RANS models. The framework consists of three components: (1) prediction of discrepancies in RANS modeled Reynolds stresses based on machine learning algorithms, (2) propagation of improved Reynolds stresses to quantities of interests with a modified RANS solver, and (3) quantitative, a priori assessment of predictive confidence based on distance metrics in the mean flow feature space. Merits of the proposed framework are demonstrated in a class of flows featuring massive separations. Significant improvements over the baseline RANS predictions are observed. The favorable results suggest that the proposed framework is a promising path toward RANS-based predictive turbulence in the era of big data. (SAND2016-7435 A).

  8. Can multivariate models based on MOAKS predict OA knee pain? Data from the Osteoarthritis Initiative

    Science.gov (United States)

    Luna-Gómez, Carlos D.; Zanella-Calzada, Laura A.; Galván-Tejada, Jorge I.; Galván-Tejada, Carlos E.; Celaya-Padilla, José M.

    2017-03-01

    Osteoarthritis is the most common rheumatic disease in the world. Knee pain is the most disabling symptom of the disease, and the prediction of pain is one of the targets of preventive medicine, which can be applied to new therapies or treatments. Using magnetic resonance imaging and the grading scales, a multivariate model based on genetic algorithms is presented. Such a predictive model can be useful for associating minor structural changes in the joint with future knee pain. Results suggest that multivariate models can be predictive of future chronic knee pain. All models (T0, T1 and T2) were statistically significant; all p values were 0.60.

  9. DEVELOPMENT OF A CRASH RISK PROBABILITY MODEL FOR FREEWAYS BASED ON HAZARD PREDICTION INDEX

    Directory of Open Access Journals (Sweden)

    Md. Mahmud Hasan

    2014-12-01

    Full Text Available This study presents a method for the identification of hazardous situations on freeways. The hazard identification is done using a crash risk probability model. For this study, an approximately 18 km long section of the Eastern Freeway in Melbourne (Australia) is selected as a test bed. Two categories of data, i.e. traffic and accident record data, are used for the analysis and modelling. In developing the crash risk probability model, a Hazard Prediction Index is formulated in this study from the differences between traffic parameters and threshold values. Seven different prediction indices are examined and the best one is selected as the crash risk probability model based on prediction error minimisation.

  10. Structure-Based Predictive Model for Coal Char Combustion

    Energy Technology Data Exchange (ETDEWEB)

    Christopher Hadad; Joseph Calo; Robert Essenhigh; Robert Hurt

    1998-04-08

    Progress was made this period on a number of separate experimental and modelling activities. At Brown, the models of carbon nanostructure evolution were expanded to consider high-rank materials with initial anisotropy. The report presents detailed results of Monte Carlo simulations with non-zero initial layer length and with statistically oriented initial states. The expanded simulations are now capable of describing the development of nanostructure during carbonization of most coals. Work next quarter will address the remaining challenge of isotropic coke-forming coals. Experiments at Brown yielded important data on the "memory loss" phenomenon in carbon annealing, and on the effect of mineral matter on high-temperature reactivity. The experimental aspects of the Brown work will be discussed in detail in the next report.

  11. Online prediction model based on the SVD-KPCA method.

    Science.gov (United States)

    Elaissi, Ilyes; Jaffel, Ines; Taouali, Okba; Messaoud, Hassani

    2013-01-01

    This paper proposes a new method for online identification of a nonlinear system modelled on a Reproducing Kernel Hilbert Space (RKHS). The proposed SVD-KPCA method uses the Singular Value Decomposition (SVD) technique to update the principal components. Then Reduced Kernel Principal Component Analysis (RKPCA) is used to approximate the principal components that represent the observations selected by the KPCA method.

  12. A Rule-Based Model for Bankruptcy Prediction Based on an Improved Genetic Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2013-01-01

    Full Text Available In this paper, we proposed a hybrid system to predict corporate bankruptcy. The whole procedure consists of the following four stages: first, sequential forward selection was used to extract the most important features; second, a rule-based model was chosen to fit the given dataset since it can present physical meaning; third, a genetic ant colony algorithm (GACA) was introduced; the fitness scaling strategy and the chaotic operator were incorporated with GACA, forming a new algorithm, fitness-scaling chaotic GACA (FSCGACA), which was used to seek the optimal parameters of the rule-based model; and finally, the stratified K-fold cross-validation technique was used to enhance the generalization of the model. Simulation experiments on data from 1000 corporations collected from 2006 to 2009 demonstrated that the proposed model was effective. It selected the 5 most important factors as “net income to stock broker’s equality,” “quick ratio,” “retained earnings to total assets,” “stockholders’ equity to total assets,” and “financial expenses to sales.” The total misclassification error of the proposed FSCGACA was only 7.9%, better than the results of the genetic algorithm (GA), the ant colony algorithm (ACA), and GACA. The average computation time of the model is 2.02 s.

  13. Settlement prediction model of slurry suspension based on sedimentation rate attenuation

    Directory of Open Access Journals (Sweden)

    Shuai-jie GUO

    2012-03-01

    Full Text Available This paper introduces a slurry suspension settlement prediction model for cohesive sediment in a still water environment. With no sediment input and a still water environment, the controlling forces between settling particles differ significantly as the sedimentation rate attenuates, and the settlement process comprises the free sedimentation stage, the log-linear attenuation stage, and the stable consolidation stage according to sedimentation rate attenuation. Settlement equations for sedimentation height and time were established based on the sedimentation rate attenuation properties of the different sedimentation stages. Finally, a slurry suspension settlement prediction model based on slurry parameters was set up, with the model parameters determined from the basic parameters of the slurry. The results of the settlement prediction model show good agreement with those of the settlement column experiment and reflect the main characteristics of cohesive sediment. The model can be applied to the prediction of cohesive soil settlement in still water environments.

  14. Bearing Degradation Process Prediction Based on the Support Vector Machine and Markov Model

    Directory of Open Access Journals (Sweden)

    Shaojiang Dong

    2014-01-01

    Full Text Available Predicting the degradation process of bearings before they reach the failure threshold is extremely important in industry. This paper proposes a novel method based on the support vector machine (SVM) and the Markov model to achieve this goal. Firstly, features are extracted by time-domain and time-frequency-domain methods. However, the extracted original features are still high dimensional and include superfluous information, so the nonlinear multi-feature fusion technique LTSA is used to merge the features and reduce the dimension. Then, based on the extracted features, the SVM model is used to predict the bearing degradation process, and Cao's method is used to determine the embedding dimension of the SVM model. After the bearing degradation process is predicted by the SVM model, the Markov model is used to improve the prediction accuracy. The proposed method was validated by two bearing run-to-failure experiments, and the results proved the effectiveness of the methodology.

  15. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

    Full Text Available Accurate long-term prediction of time series data (TSD) is a very useful research challenge in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. The linear traditional models such as autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) preserve data trend to some extent, at the cost of prediction accuracy. Non-linear models like ANN maintain prediction accuracy by sacrificing data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing, partitioning and interpolation (PI) technique are incorporated by the proposed model. Some existing models and the proposed model are applied on selected NSE India stock market data. Performance results show that for multi-step ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preserving data trend.
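
    A sketch of the moving-average pre-filter followed by an ARIMA fit, using statsmodels; the series, window length, and ARIMA order are assumptions, and the record's partitioning-interpolation stage and GARCH component are omitted.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical daily closing prices standing in for the NSE series used in the paper.
rng = np.random.default_rng(3)
price = pd.Series(100 + np.cumsum(rng.normal(0.1, 1.0, size=300)))

# Moving-average pre-filter to suppress high-frequency noise before modelling,
# as in the pre-processing step (the window length here is an assumption).
smoothed = price.rolling(window=5, center=True).mean().dropna()

# Fit a plain ARIMA on the smoothed series and forecast several steps ahead.
model = ARIMA(smoothed, order=(2, 1, 1)).fit()
print(model.forecast(steps=10))
```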

  16. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  17. Nonlinear model predictive control with guaranteed stability based on pseudolinear neural networks

    Institute of Scientific and Technical Information of China (English)

    WANG Yongji; WANG Hong

    2004-01-01

    A nonlinear model predictive control problem based on a pseudo-linear neural network (PNN) is discussed, in which a second-order online optimization method is adopted. The recursive computation of the Jacobian matrix is investigated. The stability of the closed-loop model predictive control system is analyzed based on Lyapunov theory to obtain a sufficient condition for the asymptotic stability of the neural predictive control system. A simulation was carried out for an exothermic first-order reaction in a continuous stirred tank reactor. It is demonstrated that the proposed control strategy is applicable to a class of nonlinear systems.

  18. Likelihood based observability analysis and confidence intervals for predictions of dynamic models

    CERN Document Server

    Kreutz, Clemens; Timmer, Jens

    2011-01-01

    Mechanistic dynamic models of biochemical networks such as Ordinary Differential Equations (ODEs) contain unknown parameters like the reaction rate constants and the initial concentrations of the compounds. The large number of parameters as well as their nonlinear impact on the model responses hamper the determination of confidence regions for parameter estimates. At the same time, classical approaches translating the uncertainty of the parameters into confidence intervals for model predictions are hardly feasible. In this article it is shown that a so-called prediction profile likelihood yields reliable confidence intervals for model predictions, despite arbitrarily complex and high-dimensional shapes of the confidence regions for the estimated parameters. Prediction confidence intervals of the dynamic states allow a data-based observability analysis. The approach renders the issue of sampling a high-dimensional parameter space into evaluating one-dimensional prediction spaces. The method is also applicable ...

  19. Copula based prediction models: an application to an aortic regurgitation study

    Directory of Open Access Journals (Sweden)

    Shoukri Mohamed M

    2007-06-01

    Full Text Available Abstract Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = - 0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula based prediction model is estimated as: Post-operative ejection fraction = - 0.0933 + 0

  20. Prediction model of interval grey number based on DGM(1,1)

    Institute of Scientific and Technical Information of China (English)

    Bo Zeng; Sifeng Liu; Naiming Xie

    2010-01-01

    In grey system theory, the studies in the field of grey prediction models are focused on real number sequences, rather than grey number ones. Hereby, a prediction model based on interval grey number sequences is proposed. By mining the geometric features of interval grey number sequences on a two-dimensional surface, all the interval grey numbers are converted into real numbers by means of a certain algorithm, and then the prediction model is established based on those real number sequences. The entire process avoids the algebraic operations of grey numbers, and the prediction problem of interval grey numbers is usefully solved. Ultimately, through an example's program simulation, the validity and practicability of this novel model are verified.
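
    A compact DGM(1,1) implementation on a single real-valued sequence, which is the building block the record describes after interval grey numbers are converted to real sequences; the sample values are hypothetical.

```python
import numpy as np

def dgm11_forecast(x0, steps=3):
    """Discrete grey model DGM(1,1): fit x1(k+1) = b1*x1(k) + b2 on the
    accumulated series and difference back to the original scale."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    B = np.column_stack([x1[:-1], np.ones(len(x1) - 1)])
    Y = x1[1:]
    b1, b2 = np.linalg.lstsq(B, Y, rcond=None)[0]
    # Extrapolate the accumulated series, then restore by first differences.
    x1_hat = list(x1)
    for _ in range(steps):
        x1_hat.append(b1 * x1_hat[-1] + b2)
    x1_hat = np.array(x1_hat)
    return np.diff(x1_hat)[len(x0) - 1:]        # forecasts on the original scale

# The record converts each interval grey number into real sequences (e.g. the
# lower and upper bounds) and models each with DGM(1,1); only one bound is shown.
lower_bounds = [2.87, 3.03, 3.24, 3.47, 3.72]
print(dgm11_forecast(lower_bounds, steps=3))
```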

  1. Machine Learning Based Statistical Prediction Model for Improving Performance of Live Virtual Machine Migration

    Directory of Open Access Journals (Sweden)

    Minal Patel

    2016-01-01

    Full Text Available Service can be delivered anywhere and anytime in cloud computing using virtualization. The main issue to handle virtualized resources is to balance ongoing workloads. The migration of virtual machines has two major techniques: (i) reducing dirty pages using CPU scheduling and (ii) compressing memory pages. The available techniques for live migration are not able to predict dirty pages in advance. In the proposed framework, time series based prediction techniques are developed using historical analysis of past data. The time series is generated with transferring of memory pages iteratively. Here, two different regression based models of time series are proposed. The first model is developed using a statistical probability based regression model and it is based on the ARIMA (autoregressive integrated moving average) model. The second one is developed using a statistical learning based regression model and it uses the SVR (support vector regression) model. These models are tested on a real data set of Xen to compute downtime, total number of pages transferred, and total migration time. The ARIMA model is able to predict dirty pages with 91.74% accuracy and the SVR model is able to predict dirty pages with 94.61% accuracy, which is higher than ARIMA.
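
    A sketch of the SVR-based regression on lagged dirty-page counts; the page counts, lag length, and SVR settings are assumptions, and the companion ARIMA model from the record is not shown.

```python
import numpy as np
from sklearn.svm import SVR

def make_lagged(series, lags=3):
    """Turn the per-iteration dirty-page counts into (lag-vector -> next value) pairs."""
    X = [series[i:i + lags] for i in range(len(series) - lags)]
    y = series[lags:]
    return np.array(X), np.array(y)

# Hypothetical dirty-page counts recorded over pre-copy iterations of a migration.
pages = np.array([5200, 4100, 3600, 3300, 3100, 2950, 2800, 2700, 2650, 2600], dtype=float)

X, y = make_lagged(pages, lags=3)
svr = SVR(kernel="rbf", C=100.0, epsilon=5.0).fit(X, y)

# One-step-ahead prediction of the next iteration's dirty pages, which a
# migration controller could use to decide when to stop-and-copy.
next_pages = svr.predict(pages[-3:].reshape(1, -1))[0]
print(round(next_pages))
```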

  2. Comparison of short term rainfall forecasts for model based flow prediction in urban drainage systems

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Poulsen, Troels Sander; Bøvith, Thomas;

    2012-01-01

    Forecast based flow prediction in drainage systems can be used to implement real time control of drainage systems. This study compares two different types of rainfall forecasts – a radar rainfall extrapolation based nowcast model and a numerical weather prediction model. The models are applied...... as input to an urban runoff model predicting the inlet flow to a waste water treatment plant. The modelled flows are auto-calibrated against real time flow observations in order to certify the best possible forecast. Results show that it is possible to forecast flows with a lead time of 24 hours. The best...... performance of the system is found using the radar nowcast for the short lead times and the weather model for larger lead times....

  3. Comparison Of Short Term Rainfall Forecasts For Model Based Flow Prediction In Urban Drainage Systems

    DEFF Research Database (Denmark)

    Thorndahl, Søren Liedtke; Poulsen, Troels Sander; Bøvith, Thomas;

    2012-01-01

    Forecast based flow prediction in drainage systems can be used to implement real time control of drainage systems. This study compares two different types of rainfall forecasts – a radar rainfall extrapolation based nowcast model and a numerical weather prediction model. The models are applied...... as input to an urban runoff model predicting the inlet flow to a waste water treatment plant. The modelled flows are auto-calibrated against real time flow observations in order to certify the best possible forecast. Results show that it is possible to forecast flows with a lead time of 24 hours. The best...... performance of the system is found using the radar nowcast for the short lead times and the weather model for larger lead times....

  6. Research on power grid loss prediction model based on Granger causality property of time series

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J. [North China Electric Power Univ., Beijing (China); State Grid Corp., Beijing (China); Yan, W.P.; Yuan, J. [North China Electric Power Univ., Beijing (China); Xu, H.M.; Wang, X.L. [State Grid Information and Telecommunications Corp., Beijing (China)

    2009-03-11

    This paper described a method of predicting power transmission line losses using the Granger causality property of time series. The stationarity of the time series was investigated using unit root tests. The Granger causality relationship between line losses and other variables was then determined. Granger-caused time series were then used to create the following 3 prediction models: (1) a model based on line loss binomials that used electricity sales as the predictor variable, (2) a model that considered both power sales and grid capacity, and (3) a model based on autoregressive distributed lag (ARDL) approaches that incorporated both power sales and the square of power sales as variables. A case study of data from China's electric power grid between 1980 and 2008 was used to evaluate model performance. Results of the study showed that the model error rates ranged between 2.7 and 3.9 percent. 6 refs., 3 tabs., 1 fig.
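
    A sketch of the two preparatory steps the record describes, unit-root testing followed by a Granger causality test, using statsmodels on hypothetical stand-in series for line losses and electricity sales.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

# Hypothetical yearly series standing in for electricity sales and line losses.
rng = np.random.default_rng(4)
sales = pd.Series(np.cumsum(rng.normal(1.0, 0.5, size=29)))
losses = 0.03 * sales.shift(1).fillna(0) + rng.normal(0, 0.02, size=29)

# Step 1: unit-root (ADF) tests on the differenced series.
for name, s in [("sales", sales), ("losses", losses)]:
    print(name, "ADF p-value:", round(adfuller(s.diff().dropna())[1], 3))

# Step 2: does sales Granger-cause line losses?  grangercausalitytests expects the
# candidate cause in the second column and the effect in the first column.
data = pd.concat([losses, sales], axis=1).diff().dropna().to_numpy()
result = grangercausalitytests(data, maxlag=2)
print("lag-1 F-test p-value:", round(result[1][0]["ssr_ftest"][1], 3))
```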

  7. Building a Tax Predictive Model Based on the Cloud Neural Network

    Institute of Scientific and Technical Information of China (English)

    田永青; 李志; 朱仲英

    2003-01-01

    Tax is very important to the whole country, so a scientific tax predictive model is needed. This paper introduces the theory of the cloud model. On this basis, it presents a cloud neural network and analyzes the main factors which influence the tax revenue. Then it proposes a tax predictive model based on the cloud neural network. The model combines the strong points of the cloud model and the neural network. The experiment and simulation results show the effectiveness of the algorithm in this paper.

  8. Multirule Based Diagnostic Approach for the Fog Predictions Using WRF Modelling Tool

    Directory of Open Access Journals (Sweden)

    Swagata Payra

    2014-01-01

    Full Text Available The prediction of fog onset remains difficult despite the progress in numerical weather prediction. It is a complex process and requires adequate representation of the local perturbations in weather prediction models. It mainly depends upon microphysical and mesoscale processes that act within the boundary layer. This study utilizes a multirule based diagnostic (MRD) approach using postprocessing of the model simulations for fog predictions. The empiricism involved in this approach is mainly to bridge the gap between mesoscale and microscale variables, which are related to the mechanism of fog formation. Fog occurrence is a common phenomenon during the winter season over Delhi, India, with the passage of western disturbances across the northwestern part of the country accompanied by a significant amount of moisture. This study implements the above-cited approach for the prediction of fog occurrence and its onset time over Delhi. For this purpose, a high resolution weather research and forecasting (WRF) model is used for fog simulations. The study involves model validation and postprocessing of the model simulations for the MRD approach and its subsequent application to fog predictions. Through this approach, the model identified foggy and nonfoggy days successfully 94% of the time. Further, the onset of fog events is well captured within an accuracy of 30–90 minutes. This study demonstrates that the multirule based postprocessing approach is a useful and highly promising tool in improving fog predictions.
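
    An illustrative multi-rule diagnostic applied to post-processed model output; the variable names and thresholds below are assumptions, not the calibrated rules of the MRD scheme.

```python
import pandas as pd

# Hypothetical hourly WRF-derived surface fields; thresholds are illustrative only.
wrf = pd.DataFrame({
    "rh2m_pct":      [96, 98, 99, 88, 97],          # 2-m relative humidity
    "wind10m_ms":    [1.2, 0.8, 1.5, 3.5, 0.9],     # 10-m wind speed
    "t2m_minus_td":  [0.4, 0.2, 0.1, 2.5, 0.3],     # dew-point depression (K)
    "cloud_low_pct": [10, 5, 0, 60, 8],             # low-cloud cover
})

def fog_rule(row):
    """Multi-rule diagnostic: every rule must hold for a 'fog likely' flag."""
    return (row.rh2m_pct >= 95 and
            row.wind10m_ms <= 2.0 and
            row.t2m_minus_td <= 1.0 and
            row.cloud_low_pct <= 25)

wrf["fog_likely"] = wrf.apply(fog_rule, axis=1)
print(wrf)
```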

  9. Addressing issues associated with evaluating prediction models for survival endpoints based on the concordance statistic.

    Science.gov (United States)

    Wang, Ming; Long, Qi

    2016-09-01

    Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations.

  10. Improvement of the Blast Furnace Viscosity Prediction Model Based on Discrete Points Data

    Science.gov (United States)

    Guo, Hongwei; Zhu, Mengyi; Li, Xinyu; Guo, Jian; Du, Shen; Zhang, Jianliang

    2015-02-01

    Viscosity is considered to be a significant indicator of the metallurgical property of blast furnace slag. An improved model for viscosity prediction based on the Chou model was presented in this article. The updated model has optimized the selection strategy of distance algorithm and negative weights at the reference points. Therefore, the extensionality prediction disadvantage in the original model was ameliorated by this approach. The model prediction was compared with viscosity data of slags of compositions typical to BF operations obtained from a domestic steel plant. The results show that the approach can predict the viscosity with average error of 9.23 pct and mean standard deviation of 0.046 Pa s.

  11. Capacity Prediction Model Based on Limited Priority Gap-Acceptance Theory at Multilane Roundabouts

    Directory of Open Access Journals (Sweden)

    Zhaowei Qu

    2014-01-01

    Full Text Available Capacity is an important design parameter for roundabouts, and it is the premise for computing their delay and queue length. Roundabout capacity has been studied for decades, and the empirical regression model and the gap-acceptance model are the two main methods used to predict it. Based on gap-acceptance theory, and considering the effect of limited priority, especially the relationship between the limited priority factor and the critical gap, a modified model was built to predict roundabout capacity. We then compare the results of Raff's method and the maximum likelihood estimation (MLE) method, and the MLE method was used to estimate the critical gaps. Finally, the capacities predicted by the different models were compared with the capacities observed in field surveys, which verifies the performance of the proposed model.
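
    The baseline gap-acceptance capacity expression that such models start from, written as a small helper; the critical gap and follow-up time are illustrative, and the paper's limited-priority correction is only noted, not reproduced.

```python
import math

def entry_capacity_vph(q_circ_vph, t_c=4.1, t_f=2.9):
    """Classic gap-acceptance entry capacity (Harders/HCM form), in veh/h:
        C = q * exp(-q * t_c) / (1 - exp(-q * t_f)),  with q in veh/s.
    The record modifies this baseline with a limited-priority factor tied to the
    critical gap; that correction is not reproduced here."""
    q = q_circ_vph / 3600.0
    if q == 0.0:
        return 3600.0 / t_f                      # only the follow-up time limits entry
    return 3600.0 * q * math.exp(-q * t_c) / (1.0 - math.exp(-q * t_f))

# Capacity of one entry lane for a range of circulating flows (t_c and t_f are
# illustrative; the record estimates critical gaps with maximum likelihood).
for q_circ in (300, 600, 900, 1200):
    print(q_circ, "veh/h circulating ->", round(entry_capacity_vph(q_circ)), "veh/h entry")
```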

  12. Microcellular propagation prediction model based on an improved ray tracing algorithm.

    Science.gov (United States)

    Liu, Z-Y; Guo, L-X; Fan, T-Q

    2013-11-01

    Two-dimensional (2D)/two-and-one-half-dimensional ray tracing (RT) algorithms for the use of the uniform theory of diffraction and geometrical optics are widely used for channel prediction in urban microcellular environments because of their high efficiency and reliable prediction accuracy. In this study, an improved RT algorithm based on the "orientation face set" concept and on the improved 2D polar sweep algorithm is proposed. The goal is to accelerate point-to-point prediction, thereby making RT prediction attractive and convenient. In addition, the use of threshold control of each ray path and the handling of visible grid points for reflection and diffraction sources are adopted, resulting in an improved efficiency of coverage prediction over large areas. Measured results and computed predictions are also compared for urban scenarios. The results indicate that the proposed prediction model works well and is a useful tool for microcellular communication applications.

  13. Traffic chaos and its prediction based on a nonlinear car-following model

    Institute of Scientific and Technical Information of China (English)

    Hui FU; Jianmin XU; Lunhui XU

    2005-01-01

    This paper discusses the dynamic behavior and its prediction for a simulated traffic flow based on the nonlinear response of a vehicle to the leading car's movement in a single lane. Traffic chaos is a promising field, and chaos theory has been applied to identify and predict its chaotic movement. A simulated traffic flow is generated using a car-following model (GM model), and the distance between two cars is investigated for its dynamic properties. A positive Lyapunov exponent confirms the existence of chaotic behavior in the GM model. A new algorithm using an RBF NN (radial basis function neural network) is proposed to predict this traffic chaos. The experiment shows that the chaotic degree and predictable degree are determined by the first Lyapunov exponent. The algorithm proposed in this paper can be generalized to recognize and predict the chaos of short-time traffic flow series.
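
    A sketch of the simplest GM car-following rule driven by a prescribed leader speed, producing the spacing series that chaos diagnostics (e.g., the largest Lyapunov exponent) would be computed on; the parameters and the leader profile are assumptions.

```python
import numpy as np

def gm_car_following(T=25.0, dt=0.1, tau=1.0, alpha=0.6):
    """Simplest GM (Gazis-Herman-Rothery) car-following rule:
        a_follow(t + tau) = alpha * [v_lead(t) - v_follow(t)],
    integrated with explicit Euler.  The record uses the general GM form and then
    examines the lead-follow spacing series for chaos."""
    n = int(T / dt)
    delay = int(tau / dt)
    x_l, v_l = np.zeros(n), np.zeros(n)
    x_f, v_f = np.full(n, -20.0), np.zeros(n)
    for k in range(n - 1):
        v_l[k + 1] = 12.0 + 3.0 * np.sin(0.5 * k * dt)      # prescribed leader speed
        x_l[k + 1] = x_l[k] + v_l[k] * dt
        j = max(k - delay, 0)                                # delayed stimulus
        a = alpha * (v_l[j] - v_f[j])
        v_f[k + 1] = max(v_f[k] + a * dt, 0.0)
        x_f[k + 1] = x_f[k] + v_f[k] * dt
    return x_l - x_f                                         # spacing series to analyse

spacing = gm_car_following()
print(spacing[::50].round(2))
```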

  14. Accuracy of depolarization and delay spread predictions using advanced ray-based modeling in indoor scenarios

    Directory of Open Access Journals (Sweden)

    Mani Francesco

    2011-01-01

    Full Text Available Abstract This article investigates the prediction accuracy of an advanced deterministic propagation model in terms of channel depolarization and frequency selectivity for indoor wireless propagation. In addition to specular reflection and diffraction, the developed ray tracing tool considers penetration through dielectric blocks and/or diffuse scattering mechanisms. The sensitivity and prediction accuracy analysis is based on two measurement campaigns carried out in a warehouse and an office building. It is shown that the implementation of diffuse scattering into RT significantly increases the accuracy of the cross-polar discrimination prediction, whereas the delay-spread prediction is only marginally improved.

  15. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    Science.gov (United States)

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723

  16. Understanding uncertainties in model-based predictions of Aedes aegypti population dynamics.

    Directory of Open Access Journals (Sweden)

    Chonggang Xu

    2010-09-01

    Full Text Available Aedes aegypti is one of the most important mosquito vectors of human disease. The development of spatial models for Ae. aegypti provides a promising start toward model-guided vector control and risk assessment, but this will only be possible if models make reliable predictions. The reliability of model predictions is affected by specific sources of uncertainty in the model. This study quantifies uncertainties in the predicted mosquito population dynamics at the community level (a cluster of 612 houses) and the individual-house level based on Skeeter Buster, a spatial model of Ae. aegypti, for the city of Iquitos, Peru. The study considers two types of uncertainty: 1) uncertainty in the estimates of 67 parameters that describe mosquito biology and life history, and 2) uncertainty due to environmental and demographic stochasticity. Our results show that for pupal density and for female adult density at the community level, respectively, the 95% prediction confidence interval ranges from 1000 to 3000 and from 700 to 5,000 individuals. The two parameters contributing most to the uncertainties in predicted population densities at both individual-house and community levels are the female adult survival rate and a coefficient determining weight loss due to energy used in metabolism at the larval stage (i.e. metabolic weight loss). Compared to parametric uncertainty, stochastic uncertainty is relatively low for population density predictions at the community level (less than 5% of the overall uncertainty) but is substantially higher for predictions at the individual-house level (larger than 40% of the overall uncertainty). Uncertainty in mosquito spatial dispersal has little effect on population density predictions at the community level but is important for the prediction of spatial clustering at the individual-house level. This is the first systematic uncertainty analysis of a detailed Ae. aegypti population dynamics model and provides an approach for

  17. Understanding uncertainties in model-based predictions of Aedes aegypti population dynamics.

    Science.gov (United States)

    Xu, Chonggang; Legros, Mathieu; Gould, Fred; Lloyd, Alun L

    2010-09-28

    Aedes aegypti is one of the most important mosquito vectors of human disease. The development of spatial models for Ae. aegypti provides a promising start toward model-guided vector control and risk assessment, but this will only be possible if models make reliable predictions. The reliability of model predictions is affected by specific sources of uncertainty in the model. This study quantifies uncertainties in the predicted mosquito population dynamics at the community level (a cluster of 612 houses) and the individual-house level based on Skeeter Buster, a spatial model of Ae. aegypti, for the city of Iquitos, Peru. The study considers two types of uncertainty: 1) uncertainty in the estimates of 67 parameters that describe mosquito biology and life history, and 2) uncertainty due to environmental and demographic stochasticity. Our results show that for pupal density and for female adult density at the community level, respectively, the 95% prediction confidence interval ranges from 1000 to 3000 and from 700 to 5,000 individuals. The two parameters contributing most to the uncertainties in predicted population densities at both individual-house and community levels are the female adult survival rate and a coefficient determining weight loss due to energy used in metabolism at the larval stage (i.e. metabolic weight loss). Compared to parametric uncertainty, stochastic uncertainty is relatively low for population density predictions at the community level (less than 5% of the overall uncertainty) but is substantially higher for predictions at the individual-house level (larger than 40% of the overall uncertainty). Uncertainty in mosquito spatial dispersal has little effect on population density predictions at the community level but is important for the prediction of spatial clustering at the individual-house level. This is the first systematic uncertainty analysis of a detailed Ae. aegypti population dynamics model and provides an approach for identifying those

  18. Small-time scale network traffic prediction based on a local support vector machine regression model

    Institute of Scientific and Technical Information of China (English)

    Meng Qing-Fang; Chen Yue-Hui; Peng Yu-Hua

    2009-01-01

    In this paper we apply the nonlinear time series analysis method to small-time scale traffic measurement data. The prediction-based method is used to determine the embedding dimension of the traffic data. Based on the reconstructed phase space, the local support vector machine prediction method is used to predict the traffic measurement data, and the BIC-based neighbouring point selection method is used to choose the number of the nearest neighbouring points for the local support vector machine regression model. The experimental results show that the local support vector machine prediction method whose neighbouring points are optimized can effectively predict the small-time scale traffic measurement data and can reproduce the statistical features of real traffic measurements.
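
    A sketch of the phase-space reconstruction plus local SVR idea: delay-embed the series, pick the nearest neighbours of the current state, and fit an SVR only on them; the traffic trace, embedding settings, and SVR parameters are assumptions (the BIC-based neighbour selection of the record is replaced here by a fixed k).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVR

def local_svr_forecast(series, m=4, tau=1, k=20):
    """Time-delay embedding (dimension m, delay tau), then an SVR fitted only on
    the k nearest neighbours of the current reconstructed state."""
    # Embedded vectors X_t = [x_t, x_{t-tau}, ..., x_{t-(m-1)tau}] and targets x_{t+1}.
    idx = np.arange((m - 1) * tau, len(series) - 1)
    X = np.array([[series[i - j * tau] for j in range(m)] for i in idx])
    y = series[idx + 1]
    query = np.array([[series[-1 - j * tau] for j in range(m)]])
    nn = NearestNeighbors(n_neighbors=min(k, len(X))).fit(X)
    neighbours = nn.kneighbors(query, return_distance=False)[0]
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[neighbours], y[neighbours])
    return model.predict(query)[0]

# Hypothetical small-time-scale traffic trace (packets per interval).
rng = np.random.default_rng(5)
t = np.arange(600)
traffic = 100 + 20 * np.sin(0.15 * t) + rng.normal(scale=3.0, size=600)
print(local_svr_forecast(traffic, m=4, tau=2, k=30))
```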

  19. Evaluating effects of normobaric oxygen therapy in acute stroke with MRI-based predictive models

    Directory of Open Access Journals (Sweden)

    Wu Ona

    2012-03-01

    Full Text Available Abstract Background Voxel-based algorithms using acute multiparametric-MRI data have been shown to accurately predict tissue outcome after stroke. We explored the potential of MRI-based predictive algorithms to objectively assess the effects of normobaric oxygen therapy (NBO), an investigational stroke treatment, using data from a pilot study of NBO in acute stroke. Methods The pilot study of NBO enrolled 11 patients randomized to NBO administered for 8 hours, and 8 Control patients who received room air. Serial MRIs were obtained at admission, during gas therapy, post-therapy, and pre-discharge. Diffusion/perfusion MRI data acquired at admission (pre-therapy) were used in generalized linear models to predict the risk of lesion growth at subsequent time points for both treatment scenarios: NBO or Control. Results Lesion volume sizes 'during NBO therapy' predicted by Control-models were significantly larger (P = 0.007) than those predicted by NBO models, suggesting that ischemic lesion growth is attenuated during NBO treatment. No significant difference was found between the predicted lesion volumes at later time-points. NBO-treated patients, despite showing larger lesion volumes on Control-models than NBO-models, tended to have reduced lesion growth. Conclusions This study shows that NBO has therapeutic potential in acute ischemic stroke, and demonstrates the feasibility of using MRI-based algorithms to evaluate novel treatments in early-phase clinical trials.

  20. A Thermodynamic Approach to Predict Formation Enthalpies of Ternary Systems Based on Miedema's Model

    Science.gov (United States)

    Mousavi, Mahbubeh Sadat; Abbasi, Roozbeh; Kashani-Bozorg, Seyed Farshid

    2016-07-01

    A novel modification to the thermodynamic semi-empirical Miedema's model has been made in order to provide more precise estimations of formation enthalpy in ternary alloys. The original Miedema's model was modified for ternary systems based on surface concentration function revisions. The results predicted by the present model were found to be in excellent agreement with the available experimental data of over 150 ternary intermetallic compounds. The novel proposed model is capable of predicting formation enthalpies of ternary intermetallics with small discrepancies of ≤20 kJ/mol as well as providing reliable enthalpy variations.

  1. Predicting Modeling Method of Ship Radiated Noise Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Guohui Li

    2016-01-01

    Full Text Available Because the forming mechanism of underwater acoustic signals is complex, it is difficult to establish an accurate predicting model. In this paper, we propose a nonlinear predictive modeling method for ship radiated noise based on a genetic algorithm. Three types of ship radiated noise are taken as real underwater acoustic signals. First of all, a basic model framework is chosen. Secondly, each candidate model is encoded genetically. Thirdly, a model evaluation standard is established. Fourthly, the genetic operations such as crossover, reproduction, and mutation are designed. Finally, a prediction model of the real underwater acoustic signal is established by the genetic algorithm. By calculating the root mean square error and signal error ratio of the underwater acoustic signal predicting model, satisfactory results are obtained. The results show that the proposed method can establish an accurate predicting model with high prediction accuracy and may play an important role in the further processing of underwater acoustic signals, such as noise reduction, feature extraction, and classification.

  2. PERPEST model, a case-based reasoning approach to predict ecological risks of pesticides

    NARCIS (Netherlands)

    Brink, van den P.J.; Roelsma, J.; Nes, van E.H.; Scheffer, M.; Brock, T.C.M.

    2002-01-01

    The present paper discusses PERPEST, a model that uses case-based reasoning to predict the effects of a particular concentration of a pesticide on a defined aquatic ecosystem, based on published information about the effects of pesticides on the structure and function of aquatic ecosystems as observed.

  3. Intrusion Detection for Wireless Sensor Network Based on Traffic Prediction Model

    Science.gov (United States)

    Zhijie, Han; Ruchuang, Wang

    In this paper, the authors first propose an efficient traffic prediction algorithm for sensor nodes which exploits the Markov model. Based on this algorithm, a distributed anomaly detection scheme, TPID (Traffic Prediction based Intrusion Detection), is designed to detect attacks that have a strong influence on packet traffic, such as selective forwarding attacks and DoS attacks. In TPID, each node acts independently when predicting the traffic and detecting an anomaly. Neither special hardware nor node cooperation is needed. The scheme is evaluated and compared with other methods in experiments. Results show that the proposed scheme obtains a high detection ratio with low computation and communication cost.
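
    A minimal sketch of the Markov-model idea behind such a scheme is given below: quantize per-interval packet counts into discrete levels, estimate a first-order transition matrix from normal traffic, and raise an alarm when the observed transitions in a window become improbable under that matrix. The data, the number of levels, and the threshold rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fit_markov(states, n_levels):
    """Estimate a first-order transition matrix from a discrete traffic-level sequence."""
    counts = np.ones((n_levels, n_levels))          # Laplace smoothing
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def anomaly_score(states, P):
    """Average negative log-probability of the observed transitions under the model."""
    probs = [P[a, b] for a, b in zip(states[:-1], states[1:])]
    return -np.mean(np.log(probs))

rng = np.random.default_rng(1)
n_levels = 4
# Quantized packet counts per interval for one sensor node (illustrative data).
normal = rng.choice(n_levels, size=500, p=[0.5, 0.3, 0.15, 0.05])
P = fit_markov(normal, n_levels)

window = rng.choice(n_levels, size=50, p=[0.5, 0.3, 0.15, 0.05])   # normal window
attack = rng.choice(n_levels, size=50, p=[0.05, 0.1, 0.25, 0.6])   # DoS-like traffic surge

THRESHOLD = anomaly_score(normal, P) * 1.5   # simple data-driven threshold (assumption)
for name, w in [("normal", window), ("attack", attack)]:
    s = anomaly_score(w, P)
    print(f"{name}: score={s:.2f}, alarm={s > THRESHOLD}")
```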

  4. A Comprehensive Propagation Prediction Model Comprising Microfacet Based Scattering and Probability Based Coverage Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    A. S. M. Zahid Kausar

    2014-01-01

    Full Text Available Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving optimum coverage, which is a key part of designing a wireless network. In this paper, an accelerated technique of three-dimensional ray tracing is presented, where rough surface scattering is included to make the ray tracing technique more accurate. Here, the rough surface scattering is represented by microfacets, which makes it possible to compute the scattered field in all possible directions. New optimization techniques, dual quadrant skipping (DQS) and closest object finder (COF), are implemented for fast characterization of wireless communications and for making the ray tracing technique more efficient. A probability based coverage optimization algorithm is then combined with the ray tracing technique to form a compact solution for indoor propagation prediction. The proposed technique decreases the ray tracing time by omitting unnecessary objects using the DQS technique and by decreasing the ray-object intersection time using the COF technique. The coverage optimization algorithm is based on probability theory and finds the minimum number of transmitters and their corresponding positions required to achieve optimal indoor wireless coverage. Both the space and time complexities of the proposed algorithm improve on those of existing algorithms. For verification of the proposed ray tracing technique and coverage algorithm, detailed simulation results for different scattering factors, different antenna types, and different operating frequencies are presented. Furthermore, the proposed technique is verified by experimental results.

  5. Swarm Intelligence-Based Hybrid Models for Short-Term Power Load Prediction

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2014-01-01

    Full Text Available Swarm intelligence (SI) is widely and successfully applied in the engineering field to solve practical optimization problems, and various hybrid models based on SI algorithms and statistical models have been developed to further improve predictive ability. In this paper, hybrid intelligent forecasting models based on the cuckoo search (CS) as well as the singular spectrum analysis (SSA), time series, and machine learning methods are proposed to conduct short-term power load prediction. The forecasting performance of the proposed models is augmented by a rolling multistep strategy over the prediction horizon. The test results demonstrate the effectiveness of the SSA and CS in tuning the seasonal autoregressive integrated moving average (SARIMA) and support vector regression (SVR) models for load forecasting, which indicates that both the SSA-based data denoising and the SI-based intelligent optimization strategy can effectively improve the model’s predictive performance. Additionally, the proposed CS-SSA-SARIMA and CS-SSA-SVR models provide very impressive forecasting results, demonstrating their strong robustness and universal forecasting capacity for short-term power load prediction 24 hours in advance.

  6. Neural-networks-based feedback linearization versus model predictive control of continuous alcoholic fermentation process

    Energy Technology Data Exchange (ETDEWEB)

    Mjalli, F.S.; Al-Asheh, S. [Chemical Engineering Department, Qatar University, Doha (Qatar)

    2005-10-01

    In this work, advanced nonlinear neural-network-based control system design algorithms are adopted to control a mechanistic model of an ethanol fermentation process. The process model equations for such systems are highly nonlinear. A neural network strategy has been implemented to capture the dynamics of the mechanistic model of the fermentation process. The resulting neural network has been validated against the mechanistic model. Two neural-network-based nonlinear control strategies have then been adopted using the identified model. The performance of the feedback linearization technique was compared to neural network model predictive control in terms of stability and set point tracking capabilities. Under servo conditions, the feedback linearization algorithm gave comparable tracking and stability. The feedback linearization controller achieved the control target faster than the model predictive one, but with vigorous and sudden controller moves. (Abstract Copyright [2005], Wiley Periodicals, Inc.)

  7. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    Science.gov (United States)

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    Background Physicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions. Objective The aim is to summarize and review published studies describing computer-based approaches for predicting patients’ future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research. Methods The method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts then analyzing full-texts, and (2) documenting by extracting publication details and information on context, predictors, missing data, modeling algorithm, outcome, and evaluation methods into a matrix table, synthesizing data, and reporting results. Results After duplicate removal, 1339 articles were screened in abstracts and titles and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods. Conclusions Interest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing. In addition to raw/coded health

  8. Prediction of altimetric sea level anomalies using time series models based on spatial correlation

    Science.gov (United States)

    Miziński, Bartłomiej; Niedzielski, Tomasz

    2014-05-01

    Sea level anomaly (SLA) time series, which are time-varying gridded data, can be modelled and predicted using time series methods. This approach has been shown to provide accurate forecasts within the Prognocean system, a novel infrastructure for anticipating sea level change designed and built at the University of Wrocław (Poland), which utilizes real-time SLA data from Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO). The system runs a few models concurrently, and our ocean prediction experiment includes both uni- and multivariate time series methods. The univariate ones are: extrapolation of a polynomial-harmonic model (PH), extrapolation of a polynomial-harmonic model and autoregressive prediction (PH+AR), and extrapolation of a polynomial-harmonic model and self-exciting threshold autoregressive prediction (PH+SETAR). The following multivariate methods are used: extrapolation of a polynomial-harmonic model and vector autoregressive prediction (PH+VAR), and extrapolation of a polynomial-harmonic model and generalized space-time autoregressive prediction (PH+GSTAR). Because the aforementioned models and the corresponding forecasts are computed in real time, independently and in the same computational setting, we can compare the accuracies offered by the models. The objective of this work is to verify the hypothesis that the multivariate prediction techniques, which make use of cross-correlation and spatial correlation, perform better than the univariate ones. The analysis is based on the daily-fitted and updated time series models predicting the SLA data (lead time of two weeks) over several months when El Niño/Southern Oscillation (ENSO) was in its neutral state.
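
    The sketch below illustrates the PH+AR combination for a single grid cell: a polynomial-harmonic (trend plus annual and semiannual cycles) model is fitted by least squares, an autoregressive model is fitted to its residuals, and the two are added to produce a two-week-ahead forecast. The synthetic series, the harmonic set, and the AR order are assumptions; the real system works on gridded AVISO data.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1500)                     # days
# Synthetic SLA series for one grid cell: trend + annual/semiannual cycles + red noise.
noise = np.zeros(t.size)
for i in range(1, t.size):
    noise[i] = 0.8 * noise[i - 1] + 0.5 * rng.standard_normal()
sla = 0.002 * t + 3 * np.sin(2 * np.pi * t / 365.25) + np.cos(4 * np.pi * t / 365.25) + noise

def ph_design(t):
    """Polynomial (linear trend) + harmonic (annual, semiannual) regressors."""
    w = 2 * np.pi / 365.25
    return np.column_stack([np.ones_like(t, dtype=float), t,
                            np.sin(w * t), np.cos(w * t),
                            np.sin(2 * w * t), np.cos(2 * w * t)])

split = 1400
X = ph_design(t[:split])
beta, *_ = np.linalg.lstsq(X, sla[:split], rcond=None)
resid = sla[:split] - X @ beta

p = 5                                   # AR order for the residuals (assumption)
A = np.column_stack([resid[p - k:-k] for k in range(1, p + 1)])
phi, *_ = np.linalg.lstsq(A, resid[p:], rcond=None)

# Recursive 14-day-ahead forecast: PH extrapolation + AR prediction of the residuals.
hist = list(resid[-p:])
forecast = []
for _ in range(14):
    r = float(np.dot(phi, hist[::-1][:p]))
    hist.append(r)
    forecast.append(r)
pred = ph_design(t[split:split + 14]) @ beta + np.array(forecast)
rmse = np.sqrt(np.mean((pred - sla[split:split + 14]) ** 2))
print("14-day lead RMSE:", round(rmse, 3))
```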

  9. Antibody structure determination using a combination of homology modeling, energy-based refinement, and loop prediction

    Science.gov (United States)

    Zhu, Kai; Day, Tyler; Warshaviak, Dora; Murrett, Colleen; Friesner, Richard; Pearlman, David

    2017-01-01

    We present the blinded prediction results in the Second Antibody Modeling Assessment (AMA-II) using a fully automatic antibody structure prediction method implemented in the programs BioLuminate and Prime. We have developed a novel knowledge based approach to model the CDR loops, using a combination of sequence similarity, geometry matching, and the clustering of database structures. The homology models are further optimized with a physics-based energy function (VSGB2.0), which improves the model quality significantly. H3 loop modeling remains the most challenging task. Our ab initio loop prediction performs well for the H3 loop in the crystal structure context, and allows improved results when refining the H3 loops in the context of homology models. For the 10 human and mouse derived antibodies in this assessment, the average RMSDs for the homology model Fv and framework regions are 1.19 Å and 0.74 Å, respectively. The average RMSDs for five non-H3 CDR loops range from 0.61 Å to 1.05 Å, and the H3 loop average RMSD is 2.91 Å using our knowledge-based loop prediction approach. The ab initio H3 loop predictions yield an average RMSD of 1.28 Å when performed in the context of the crystal structure and 2.67 Å in the context of the homology modeled structure. Notably, our method for predicting the H3 loop in the crystal structure environment ranked first among the seven participating groups in AMA-II, and our method made the best prediction among all participants for seven of the ten targets. PMID:24619874

  10. Antibody structure determination using a combination of homology modeling, energy-based refinement, and loop prediction.

    Science.gov (United States)

    Zhu, Kai; Day, Tyler; Warshaviak, Dora; Murrett, Colleen; Friesner, Richard; Pearlman, David

    2014-08-01

    We present the blinded prediction results in the Second Antibody Modeling Assessment (AMA-II) using a fully automatic antibody structure prediction method implemented in the programs BioLuminate and Prime. We have developed a novel knowledge based approach to model the CDR loops, using a combination of sequence similarity, geometry matching, and the clustering of database structures. The homology models are further optimized with a physics-based energy function (VSGB2.0), which improves the model quality significantly. H3 loop modeling remains the most challenging task. Our ab initio loop prediction performs well for the H3 loop in the crystal structure context, and allows improved results when refining the H3 loops in the context of homology models. For the 10 human and mouse derived antibodies in this assessment, the average RMSDs for the homology model Fv and framework regions are 1.19 Å and 0.74 Å, respectively. The average RMSDs for five non-H3 CDR loops range from 0.61 Å to 1.05 Å, and the H3 loop average RMSD is 2.91 Å using our knowledge-based loop prediction approach. The ab initio H3 loop predictions yield an average RMSD of 1.28 Å when performed in the context of the crystal structure and 2.67 Å in the context of the homology modeled structure. Notably, our method for predicting the H3 loop in the crystal structure environment ranked first among the seven participating groups in AMA-II, and our method made the best prediction among all participants for seven of the ten targets. © 2014 Wiley Periodicals, Inc.

  11. Data Analytics Based Dual-Optimized Adaptive Model Predictive Control for the Power Plant Boiler

    Directory of Open Access Journals (Sweden)

    Zhenhao Tang

    2017-01-01

    Full Text Available To control the furnace temperature of a power plant boiler precisely, a dual-optimized adaptive model predictive control (DoAMPC) method is designed based on data analytics. In the proposed DoAMPC, an accurate predictive model is constructed adaptively by a hybrid algorithm combining the least squares support vector machine and the differential evolution method. Then, an optimization problem is constructed based on the predictive model and many constraint conditions. To control the boiler furnace temperature, the differential evolution method is utilized to determine the control variables by solving the optimization problem. The proposed method can adapt to time-varying situations by updating the sample data. The experimental results based on practical data illustrate that the DoAMPC can control the boiler furnace temperature with errors of less than 1.5%, which meets the requirements of the real production process.

  12. Floating Car Data Based Nonparametric Regression Model for Short-Term Travel Speed Prediction

    Institute of Scientific and Technical Information of China (English)

    WENG Jian-cheng; HU Zhong-wei; YU Quan; REN Fu-tian

    2007-01-01

    A K-nearest neighbor (K-NN) based nonparametric regression model was proposed to predict travel speed for Beijing expressways. Using the historical traffic data collected from detectors on Beijing expressways, a specially designed database was developed through data filtering, wavelet analysis, and clustering. The relativity-based weighted Euclidean distance was used as the distance metric to identify the K groups of nearest data series. Then, a K-NN nonparametric regression model was built to predict the average travel speeds up to 6 min into the future. Several randomly selected travel speed data series, collected from the floating car data (FCD) system, were used to validate the model. The results indicate that, using the FCD, the model can predict average travel speeds with an accuracy above 90%, and hence is feasible and effective.
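
    The following sketch shows the core of such a K-NN nonparametric regression step: a weighted Euclidean distance compares the current speed pattern with a historical database, and the forecast is a distance-weighted average of the speeds that followed the K most similar patterns. The database, the feature weights, and the neighbour weighting are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def knn_speed_forecast(history_db, weights, query, k=10):
    """Weighted-Euclidean K-NN regression: find the K most similar historical
    speed sequences and average their subsequent values as the forecast."""
    X, y = history_db                        # X: past speed patterns, y: speed 6 min later
    d = np.sqrt(((X - query) ** 2 * weights).sum(axis=1))
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                # closer neighbours get more weight
    return float(np.sum(w * y[idx]) / np.sum(w))

rng = np.random.default_rng(3)
# Illustrative historical database: 5000 patterns of the last 6 speed readings (km/h)
# and the observed speed 6 minutes later.
X = 60 + 15 * rng.standard_normal((5000, 6)).cumsum(axis=1) * 0.2
y = X[:, -1] + 2.0 * rng.standard_normal(5000)

weights = np.array([0.05, 0.1, 0.15, 0.2, 0.25, 0.25])   # recent readings weighted more (assumption)
query = X[0] + rng.standard_normal(6)
print("predicted speed (km/h):", round(knn_speed_forecast((X, y), weights, query), 1))
```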

  13. A hybrid deep neural network and physically based distributed model for river stage prediction

    Science.gov (United States)

    hitokoto, Masayuki; sakuraba, Masaaki

    2016-04-01

    We developed a real-time river stage prediction model using a hybrid of a deep neural network and a physically based distributed model. As the basic model, a 4-layer feed-forward artificial neural network (ANN) was used. As the network training method, the deep learning technique was applied. To optimize the network weights, the stochastic gradient descent method based on back propagation was used, with the denoising autoencoder as a pre-training method. The inputs of the ANN model are the hourly change of water level and hourly rainfall; the output is the water level of the downstream station. In general, desirable inputs of an ANN have a strong correlation with the output. In conceptual hydrological models such as the tank model and the storage-function model, river discharge is governed by the catchment storage. Therefore, the change of the catchment storage, rainfall minus downstream discharge, can be a potent input candidate of the ANN model instead of rainfall. From this point of view, the hybrid deep neural network and physically based distributed model was developed. The prediction procedure of the hybrid model is as follows: first, the downstream discharge is calculated by the distributed model; then the hourly change of catchment storage is estimated from rainfall and the calculated discharge as the input of the ANN model; finally, the ANN model is evaluated. In the training phase, the hourly change of catchment storage can be calculated from the observed rainfall and discharge data. The developed model was applied to a catchment of the OOYODO River, one of the first-grade rivers in Japan. The modeled catchment is 695 square km. For the training data, 5 water level gauging stations and 14 rain-gauge stations in the catchment were used. The 24 largest flood events during 2005-2014 were selected for training. Predictions were made up to 6 hours ahead, and a separate model was developed for each prediction time. To set the proper learning parameters and network

  14. Model-free prediction and regression a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  15. On the comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid

    Science.gov (United States)

    Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.

    2017-03-01

    In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely multi-scenario, tree-based, and chance-constrained model predictive control, is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as the main equipment. The experimental results show significant differences in the behavior of the plant components, mainly in terms of energy use, for each implemented technique. Effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria for selecting an appropriate stochastic predictive controller.

  16. Quality Prediction and Control of Reducing Pipe Based on EOS-ELM-RPLS Mathematics Modeling Method

    Directory of Open Access Journals (Sweden)

    Dong Xiao

    2014-01-01

    Full Text Available The inspection of inhomogeneous transverse and longitudinal wall thickness, which determines the quality of reducing pipe during the production of seamless steel reducing pipe, lags behind production, and its mechanism model is difficult to establish. To address these problems, we propose a quality prediction model for reducing pipe based on the EOS-ELM-RPLS algorithm, which takes into account the time-varying, nonlinear, rapidly intermittent, and echelon-distributed characteristics of the production data. Key steps such as the analysis of the data time interval, the solving of the mean value, the establishment of the regression model, and online model prediction are introduced, and the established prediction model is used in the quality prediction and iteration control of reducing pipe. Experiment and simulation show that the prediction and iteration control method based on the EOS-ELM-RPLS model can effectively improve the quality of steel reducing pipe; moreover, its maintenance cost is low and it offers good real-time performance, reliability, and accuracy.

  17. Dynamic Network Traffic Flow Prediction Model based on Modified Quantum-Behaved Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Hongying Jin

    2013-10-01

    Full Text Available This paper aims at effectively predicting dynamic network traffic flow based on a quantum-behaved particle swarm optimization algorithm. Firstly, the dynamic network traffic flow prediction problem is analyzed through a formal description. Secondly, the structure of the network traffic flow prediction model is given: users start the traffic flow prediction process from a computer, and the data collection module collects and returns the data through the destination device. Thirdly, the dynamic network traffic flow prediction model is implemented based on a BP neural network. In particular, the BP neural network is trained by a modified quantum-behaved particle swarm optimization (QPSO) algorithm, in which chaotic signals based on the logistic map are used and the fitness function of each particle is evaluated with a set of optimal parameters. Afterwards, based on the above process, the dynamic network traffic flow prediction model is illustrated. Finally, a series of experiments is conducted for performance evaluation, and analyses of the experimental results are also given.
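
    A compact sketch of a chaos-initialized QPSO iteration is given below, minimizing a stand-in fitness function in place of the BP-network training error. The logistic-map initialization, the mean-best attractor update, and the contraction-expansion coefficient follow the usual QPSO formulation; all parameter values and the fitness function are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):                       # stand-in fitness for the BP-network training error
    return float(np.sum(x ** 2))

DIM, POP, ITERS, BETA = 10, 20, 200, 0.75

# Chaotic initialization: iterate the logistic map to spread particles over [-5, 5].
chaos = rng.random((POP, DIM))
for _ in range(50):
    chaos = 4.0 * chaos * (1.0 - chaos)           # logistic map with r = 4
positions = chaos * 10.0 - 5.0

pbest = positions.copy()
pbest_val = np.array([sphere(x) for x in pbest])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(ITERS):
    mbest = pbest.mean(axis=0)                    # mean of the personal bests
    phi = rng.random((POP, DIM))
    attract = phi * pbest + (1.0 - phi) * gbest   # local attractor per particle
    u = rng.random((POP, DIM))
    sign = np.where(rng.random((POP, DIM)) < 0.5, 1.0, -1.0)
    positions = attract + sign * BETA * np.abs(mbest - positions) * np.log(1.0 / u)
    vals = np.array([sphere(x) for x in positions])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = positions[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best fitness:", round(float(np.min(pbest_val)), 6))
```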

  18. A Physically Based Theoretical Model of Spore Deposition for Predicting Spread of Plant Diseases.

    Science.gov (United States)

    Isard, Scott A; Chamecki, Marcelo

    2016-03-01

    A physically based theory for predicting spore deposition downwind from an area source of inoculum is presented. The modeling framework is based on theories of turbulence dispersion in the atmospheric boundary layer and applies only to spores that escape from plant canopies. A "disease resistance" coefficient is introduced to convert the theoretical spore deposition model into a simple tool for predicting disease spread at the field scale. Results from the model agree well with published measurements of Uromyces phaseoli spore deposition and measurements of wheat leaf rust disease severity. The theoretical model has the advantage over empirical models in that it can be used to assess the influence of source distribution and geometry, spore characteristics, and meteorological conditions on spore deposition and disease spread. The modeling framework is refined to predict the detailed two-dimensional spatial pattern of disease spread from an infection focus. Accounting for the time variations of wind speed and direction in the refined modeling procedure improves predictions, especially near the inoculum source, and enables application of the theoretical modeling framework to field experiment design.

  19. A Physically Based Analytical Model to Predict Quantized Eigen Energies and Wave Functions Incorporating Penetration Effect

    CERN Document Server

    Chowdhury, Nadim; Azim, Zubair Al; Alam, Md Hasibul; Niaz, Iftikhar Ahmad; Khosru, Quazi D M

    2014-01-01

    We propose a physically based analytical compact model to calculate eigenenergies and wave functions that incorporates the penetration effect. The model is applicable to quantum well structures that frequently appear in modern nano-scale devices, and it is equally applicable to both silicon and III-V devices. Unlike other models already available in the literature, our model can accurately predict all the eigenenergies without the inclusion of any fitting parameters. The validity of our model has been checked against numerical simulations, and the results show significantly better agreement compared to the available methods.
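
    For context, the sketch below computes bound-state eigenenergies of a finite square well by diagonalizing a finite-difference Hamiltonian whose domain extends into the barriers, so wave-function penetration is captured numerically. This is the kind of numerical reference such an analytical compact model would be validated against, not the paper's model itself; the material parameters and geometry are assumptions.

```python
import numpy as np

# Finite-difference reference solution for a finite square quantum well.
hbar = 1.054571817e-34       # J*s
m0 = 9.1093837015e-31        # electron rest mass, kg
m_eff = 0.067 * m0           # GaAs-like effective mass (assumption)
eV = 1.602176634e-19

L_well = 10e-9               # 10 nm well (assumption)
V0 = 0.3 * eV                # 0.3 eV barriers (assumption)
L_total = 40e-9              # box includes barriers so states can penetrate into them
N = 1200
x = np.linspace(-L_total / 2, L_total / 2, N)
dx = x[1] - x[0]

V = np.where(np.abs(x) <= L_well / 2, 0.0, V0)

# Tridiagonal Hamiltonian H = -(hbar^2 / 2m) d^2/dx^2 + V(x)
kin = hbar ** 2 / (2 * m_eff * dx ** 2)
H = (np.diag(2 * kin + V)
     - np.diag(np.full(N - 1, kin), 1)
     - np.diag(np.full(N - 1, kin), -1))

energies, states = np.linalg.eigh(H)
bound = energies[energies < V0][:3]
print("lowest bound-state energies (eV):", np.round(bound / eV, 4))
```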

  20. Composition-Based Prediction of Temperature-Dependent Thermophysical Food Properties: Reevaluating Component Groups and Prediction Models.

    Science.gov (United States)

    Phinney, David Martin; Frelka, John C; Heldman, Dennis Ray

    2017-01-01

    Prediction of temperature-dependent thermophysical properties (thermal conductivity, density, specific heat, and thermal diffusivity) is an important component of process design for food manufacturing. Current models for prediction of thermophysical properties of foods are based on the composition, specifically, fat, carbohydrate, protein, fiber, water, and ash contents, all of which change with temperature. The objectives of this investigation were to reevaluate and improve the prediction expressions for thermophysical properties. Previously published data were analyzed over the temperature range from 10 to 150 °C. These data were analyzed to create a series of relationships between the thermophysical properties and temperature for each food component, as well as to identify the dependence of the thermophysical properties on more specific structural properties of the fats, carbohydrates, and proteins. Results from this investigation revealed that the relationships between the thermophysical properties of the major constituents of foods and temperature can be statistically described by linear expressions, in contrast to the current polynomial models. Links between variability in thermophysical properties and structural properties were observed. Relationships for several thermophysical properties based on more specific constituents have been identified. Distinctions between simple sugars (fructose, glucose, and lactose) and complex carbohydrates (starch, pectin, and cellulose) have been proposed. The relationships between the thermophysical properties and proteins revealed a potential correlation with the molecular weight of the protein. Relating variability in constituent thermophysical properties to structural properties--such as molecular mass--could significantly improve composition-based prediction models and, consequently, the effectiveness of process design. © 2016 Institute of Food Technologists®.

  1. CONTROL OF NONLINEAR PROCESS USING NEURAL NETWORK BASED MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Dr.A.TRIVEDI

    2011-04-01

    Full Text Available This paper presents a Neural Network based Model Predictive Control (NNMPC) strategy to control a nonlinear process. A Multilayer Perceptron (MLP) neural network is chosen to represent a Nonlinear Auto Regressive with eXogenous signal (NARX) model of the nonlinear system. The NARX dynamic model is based on a feed-forward architecture and offers good approximation capabilities along with robustness and accuracy. Based on the identified neural model, a generalized predictive control (GPC) algorithm is implemented to control the composition in a continuous stirred tank reactor (CSTR), whose parameters are optimally determined by solving a quadratic performance index using the well-known Levenberg-Marquardt and quasi-Newton algorithms. The NNMPC is tuned by selecting a few horizon parameters and a weighting factor. The tracking performance of the NNMPC is tested using reference signals of different amplitudes on the CSTR application. The robustness and performance are also tested in the presence of disturbances on a random reference signal.
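
    The sketch below shows a NARX-style identification step of the kind described: an MLP is trained to map lagged outputs and inputs to the next output of a simple nonlinear plant that stands in for the mechanistic fermentation model. The plant equation, the regressor choice, and the network size are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

def plant(y, u):
    """Illustrative nonlinear first-order plant standing in for the mechanistic CSTR model."""
    return 0.8 * y + 0.2 * u / (1.0 + y ** 2) + 0.01 * rng.standard_normal()

# Excite the plant with a random staircase input and record input/output data.
N = 3000
u = np.repeat(rng.uniform(-2, 2, N // 10), 10)
y = np.zeros(N)
for k in range(N - 1):
    y[k + 1] = plant(y[k], u[k])

# NARX regressors: y(k), y(k-1), u(k), u(k-1)  ->  target y(k+1)
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
t = y[2:]

narx = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
narx.fit(X[:2500], t[:2500])
print("validation R^2:", round(narx.score(X[2500:], t[2500:]), 3))
```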

  2. The method of soft sensor modeling for fly ash carbon content based on ARMA deviation prediction

    Science.gov (United States)

    Yang, Xiu; Yang, Wei

    2017-03-01

    The carbon content of fly ash is an important parameter in the boiler combustion process. To address the existing problems of fly ash detection, a soft sensing model was established based on PSO-SVM, and a deviation correction method based on an ARMA model was put forward on this basis; the soft sensing model was calibrated with values obtained by off-line analysis at intervals. A 600 MW supercritical sliding-pressure boiler was taken as the research object, the auxiliary variables were selected, and the data collected by the DCS were used for simulation. The results show that the PSO-SVM-based prediction model for the carbon content of fly ash fits the data well, and introducing the correction module helps improve the prediction accuracy.

  3. Predictive-model-based dynamic coordination control strategy for power-split hybrid electric bus

    Science.gov (United States)

    Zeng, Xiaohua; Yang, Nannan; Wang, Junnian; Song, Dafeng; Zhang, Nong; Shang, Mingli; Liu, Jianxin

    2015-08-01

    Parameter-matching methods and optimal control strategies of the top-selling hybrid electric vehicle (HEV), namely the power-split HEV, are widely studied. In particular, extant research on control strategy focuses on the steady-state energy management strategy to obtain better fuel economy. However, given that multi-power sources are highly coupled in power-split HEVs and influence one another during mode shifting, conducting research on dynamic coordination control strategy (DCCS) to achieve riding comfort is also important. This paper proposes a predictive-model-based DCCS. First, the dynamic model of the objective power-split HEV is built and the mode shifting process is analyzed based on the developed model to determine the cause of the system shock generated. An engine torque estimation algorithm is then designed according to the principle of the nonlinear observer, and the prediction model of the degree of shock is established based on the theory of model predictive control. Finally, the DCCS with adaptation for a complex driving cycle is realized by combining the feedback control and the predictive model. The presented DCCS is validated on the co-simulation platform of AMESim and Simulink. Results show that the shock during mode shifting is well controlled, thereby improving riding comfort.

  4. Locally linear neurofuzzy modeling and prediction of geomagnetic disturbances based on solar wind conditions

    Science.gov (United States)

    Sharifie, Javad; Lucas, Caro; Araabi, Babak N.

    2006-06-01

    The disturbance storm time (Dst) index is nonlinearly related to solar wind data. In this paper, past Dst values, the Dst derivative, past values of the southward interplanetary magnetic field, and the square root of dynamic pressure are used as inputs for modeling and prediction of the Dst index, especially during extreme events. The geoeffective solar wind parameters are selected depending on the physical background of the geomagnetic storm process and on physical models. A locally linear neurofuzzy model with a progressive tree construction learning algorithm is applied as a powerful tool for nonlinear modeling of the Dst index on the basis of its past values and solar wind parameters. The results for modeling and prediction of several intense storms show that the geomagnetic disturbance Dst index based on geoeffective parameters follows a nonlinear model that could be considered a nonlinear extension of empirical linear physical models. The method is applied for prediction of some geomagnetic storms. The results show that, using the proposed method, the predicted values of several extreme storms are highly correlated with observed values. In addition, prediction of the main phase of many storms shows a good match with observed data, which makes the approach suitable for alerting vulnerable industries to solar storms.

  5. Fractional Diffusion Based Modelling and Prediction of Human Brain Response to External Stimuli

    Directory of Open Access Journals (Sweden)

    Hamidreza Namazi

    2015-01-01

    Full Text Available Human brain response is the result of the overall ability of the brain to analyze different internal and external stimuli and thus make proper decisions. During the last decades scientists have discovered more about this phenomenon and proposed models based on computational, biological, or neuropsychological methods. Despite advances in studies related to this area of brain research, few efforts have been devoted to the mathematical modeling of the human brain response to external stimuli. This research is devoted to the modeling and prediction of the human EEG signal, as an indicator for monitoring overall human brain activity, upon receiving external stimuli, based on fractional diffusion equations. The results of this modeling show very good agreement with the real human EEG signal, and thus this model can be used for many types of applications such as prediction of seizure onset in patients with epilepsy.

  6. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    Science.gov (United States)

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabin, the operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout scheme. Through joint angles to describe operating posture of upper limb, the joint angles are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is operating comfort score. The Chinese virtual human body model is built by CATIA software, which will be used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training sample and validation sample, GEP algorithm is used to obtain the best fitting function between the joint angles and the operating comfort; then, operating comfort can be predicted quantitatively. The operating comfort prediction result of human-machine interface layout of driller control room shows that operating comfort prediction model based on GEP is fast and efficient, it has good prediction effect, and it can improve the design efficiency.

  7. Model-based prediction of monoclonal antibody retention in ion-exchange chromatography.

    Science.gov (United States)

    Guélat, Bertrand; Delegrange, Lydia; Valax, Pascal; Morbidelli, Massimo

    2013-07-12

    In order to support a model-based process design in ion-exchange chromatography, an adsorption equilibrium model was adapted to predict the protein retention behavior from the amino acid sequence and from structural information on the resin. It is based on the computation of protein-resin interactions with a colloidal model and accounts for the contribution of each ionizable amino acid to the protein charge. As a verification of the protein charge model, the experimental titration curve of a monoclonal antibody was compared to its predicted net charge. Using this protein charge model in the computation of the protein-resin interactions, it is possible to predict the adsorption equilibrium constant (i.e. retention factor or Henry constant) with an explicit pH and salt dependence. The application of the model-based predictions for an in silico screening of the protein retention on various stationary phases or, alternatively, for the comparison of various monoclonal antibodies on a given cation-exchanger was demonstrated. Furthermore, considering the structural differences between charge variants of a monoclonal antibody, it was possible to predict their individual retention times. The selectivity between the side variants and the main isoform of the monoclonal antibody were computed. The comparison with the experimental data showed that the model was reliable with respect to the identification of the operating conditions maximizing the selectivity, i.e. the most promising conditions for a monoclonal antibody variant separation. Such predictions can be useful in reducing the experimental effort to identify the parameter space.

  8. ProMT: effective human promoter prediction using Markov chain model based on DNA structural properties.

    Science.gov (United States)

    Xiong, Dapeng; Liu, Rongjie; Xiao, Fen; Gao, Xieping

    2014-12-01

    The core promoters play significant and extensive roles in the initiation and regulation of DNA transcription. The identification of core promoters remains one of the most challenging problems. Due to the diverse nature of core promoters, the results obtained through existing computational approaches are not satisfactory. None of them considered the potential influence on predictive performance resulting from interference between neighboring TSSs in TSS clusters. In this paper, we take this factor into account and propose an approach to locate potential TSS clusters according to the correlation between regional DNA profiles and TSS clusters. On this basis, we further present a novel computational approach (ProMT) for promoter prediction using a Markov chain model and predicted TSS clusters based on structural properties of DNA. Extensive experiments demonstrate that ProMT can significantly improve predictive performance. Therefore, considering interference between neighboring TSSs is essential for a wider range of promoter prediction.
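
    A minimal sketch of the Markov-chain scoring idea is shown below: one first-order chain is trained on promoter-like sequences and one on background sequences, and a candidate sequence is scored by the log-likelihood ratio of its observed base transitions. The training sequences here are synthetic, and the real ProMT method additionally uses DNA structural profiles and TSS clustering, which this sketch does not attempt.

```python
import numpy as np

BASES = "ACGT"
IDX = {b: i for i, b in enumerate(BASES)}

def train_markov(seqs):
    """First-order Markov chain: P(next base | current base), Laplace-smoothed."""
    counts = np.ones((4, 4))
    for s in seqs:
        for a, b in zip(s[:-1], s[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_odds(seq, P_promoter, P_background):
    """Log-likelihood ratio of a sequence under the promoter vs background chain."""
    score = 0.0
    for a, b in zip(seq[:-1], seq[1:]):
        score += np.log(P_promoter[IDX[a], IDX[b]] / P_background[IDX[a], IDX[b]])
    return score

rng = np.random.default_rng(6)
# Illustrative training data: GC-rich "promoter-like" vs uniform background sequences.
promoters = ["".join(rng.choice(list(BASES), 200, p=[0.2, 0.3, 0.3, 0.2])) for _ in range(50)]
background = ["".join(rng.choice(list(BASES), 200)) for _ in range(50)]

P_pro = train_markov(promoters)
P_bg = train_markov(background)

test = "".join(rng.choice(list(BASES), 200, p=[0.2, 0.3, 0.3, 0.2]))
print("log-odds score (positive suggests promoter-like):", round(log_odds(test, P_pro, P_bg), 2))
```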

  9. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    Full Text Available Dynamic deployment of virtual machines is one of the current research focuses in cloud computing. Traditional methods mainly act after service performance has already degraded, so they usually lag. To solve this problem, a new prediction model based on CPU utilization is constructed in this paper. The predictions of CPU utilization provide a reference for the VM dynamic deployment process, which can then complete deployment before service performance degrades. This method not only ensures the quality of service but also improves server performance and resource utilization. The new CPU utilization prediction method based on the ARIMA-BP neural network mainly consists of four parts: preprocessing the collected data, building the ARIMA-BP neural network predictive model, correcting the nonlinear residuals of the time series with the BP prediction algorithm, and obtaining the prediction results by analyzing the above data comprehensively.
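
    The hybrid idea, a linear time series stage whose nonlinear residuals are corrected by a BP network, can be sketched as below. For brevity a plain least-squares AR fit stands in for the ARIMA stage, and an MLP regressor plays the role of the BP network; the synthetic CPU-utilization series and all orders and sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Synthetic CPU-utilization series: daily cycle + nonlinear bursts + noise (illustrative).
t = np.arange(2000)
cpu = (40 + 20 * np.sin(2 * np.pi * t / 288)
       + 10 * np.tanh(np.sin(2 * np.pi * t / 97)) + 3 * rng.standard_normal(t.size))

p = 6                                          # linear AR order (stand-in for the ARIMA stage)
X_lin = np.column_stack([cpu[p - k:-k] for k in range(1, p + 1)])
y = cpu[p:]
split = 1500
coef, *_ = np.linalg.lstsq(X_lin[:split], y[:split], rcond=None)
linear_pred = X_lin @ coef
resid = y - linear_pred                        # nonlinear residuals left by the linear stage

# BP (MLP) network models the residuals from the same lagged inputs.
bp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
bp.fit(X_lin[:split], resid[:split])

hybrid = linear_pred + bp.predict(X_lin)
rmse_lin = np.sqrt(np.mean((y[split:] - linear_pred[split:]) ** 2))
rmse_hyb = np.sqrt(np.mean((y[split:] - hybrid[split:]) ** 2))
print(f"linear-only RMSE: {rmse_lin:.2f}, hybrid RMSE: {rmse_hyb:.2f}")
```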

  10. Application of uncertainty reasoning based on cloud model in time series prediction

    Institute of Scientific and Technical Information of China (English)

    张锦春; 胡谷雨

    2003-01-01

    Time series prediction has been successfully used in several application areas, such as meteorological forecasting, market prediction, network traffic forecasting, etc., and a number of techniques have been developed for modeling and predicting time series. In the traditional exponential smoothing method, a fixed weight is assigned to data history, and the trend changes of time series are ignored. In this paper, an uncertainty reasoning method, based on cloud model, is employed in time series prediction, which uses cloud logic controller to adjust the smoothing coefficient of the simple exponential smoothing method dynamically to fit the current trend of the time series. The validity of this solution was proved by experiments on various data sets.

  12. Fast Fourier Transform-based Support Vector Machine for Subcellular Localization Prediction Using Different Substitution Models

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    There are approximately 10^9 proteins in a cell. A hotspot in bioinformatics is how to identify a protein's subcellular localization, if its sequence is known. In this paper, a method using fast Fourier transform-based support vector machine is developed to predict the subcellular localization of proteins from their physicochemical properties and structural parameters. The prediction accuracies reached 83% in prokaryotic organisms and 84% in eukaryotic organisms with the substitution model of the c-p-v matrix (c, composition; p, polarity; and v, molecular volume). The overall prediction accuracy was also evaluated using the "leave-one-out" jackknife procedure. The influence of the substitution model on prediction accuracy has also been discussed in the work. The source code of the new program is available on request from the authors.
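
    The sketch below illustrates the general FFT-plus-SVM recipe: each sequence is mapped to a numeric property series, the magnitudes of its first Fourier coefficients give a fixed-length feature vector, and an SVM classifies the result. The property values, the toy sequences, and the two-class setup are illustrative assumptions and do not reproduce the paper's c-p-v substitution model.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(8)
AMINO = "ACDEFGHIKLMNPQRSTVWY"
# Illustrative per-residue property values (stand-in for a real substitution model).
PROP = {a: v for a, v in zip(AMINO, np.linspace(-1.0, 1.0, len(AMINO)))}

def fft_features(seq, n_coeff=16):
    """Encode a variable-length sequence as the magnitudes of its first FFT coefficients."""
    series = np.array([PROP[a] for a in seq])
    spectrum = np.abs(np.fft.rfft(series, n=256))   # pad/truncate to a common length
    return spectrum[:n_coeff] / len(series)

def fake_sequence(label, length=300):
    """Toy sequences whose residue composition depends on an (illustrative) location label."""
    bias = 0.3 if label == 1 else -0.3
    p = np.exp(bias * np.linspace(-1, 1, len(AMINO)))
    return "".join(rng.choice(list(AMINO), length, p=p / p.sum()))

labels = rng.integers(0, 2, 200)
X = np.array([fft_features(fake_sequence(l)) for l in labels])

clf = SVC(kernel="rbf", C=10.0).fit(X[:150], labels[:150])
print("holdout accuracy:", clf.score(X[150:], labels[150:]))
```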

  13. Predicting Model for Complex Production Process Based on Dynamic Neural Network

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on the comparison of several methods of time series predicting, this paper points out that it is necessary to use dynamic neural network in modeling of complex production process. Because self-feedback and mutual-feedback are adopted among nodes at the same layer in Elman network, it has stronger ability of dynamic approximation, and can describe any non-linear dynamic system. After the structure and mathematical description being given, dynamic back-propagation (BP) algorithm of training weights of Elman neural network is deduced. At last, the network is used to predict ash content of black amber in jigging production process. The results show that this neural network is powerful in predicting and suitable for modeling, predicting, and controlling of complex production process.

  14. Prediction Model of Weekly Retail Price for Eggs Based on Chaotic Neural Network

    Institute of Scientific and Technical Information of China (English)

    LI Zhe-min; CUI Li-guo; XU Shi-wei; WENG Ling-yun; DONG Xiao-xia; LI Gan-qiong; YU Hai-peng

    2013-01-01

    This paper establishes a short-term prediction model of weekly retail prices for eggs based on chaotic neural network with the weekly retail prices of eggs from January 2008 to December 2012 in China. In the process of determining the structure of the chaotic neural network, the number of input layer nodes of the network is calculated by reconstructing phase space and computing its saturated embedding dimension, and then the number of hidden layer nodes is estimated by trial and error. Finally, this model is applied to predict the retail prices of eggs and compared with ARIMA. The result shows that the chaotic neural network has better nonlinear fitting ability and higher precision in the prediction of weekly retail price of eggs. The empirical result also shows that the chaotic neural network can be widely used in the field of short-term prediction of agricultural prices.

  15. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    Science.gov (United States)

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

  16. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Science.gov (United States)

    Jia, Lei; Yarlagadda, Ramya; Reed, Charles C

    2015-01-01

    Thermostability issue of protein point mutations is a common occurrence in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding decision making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants' experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and melting temperature change (dTm) were obtained from this database. Folding free energy change calculation from Rosetta, structural information of the point mutations as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression are used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods were discussed. Rosetta calculated folding free energy change ranked as the most influential features in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.

  17. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Directory of Open Access Journals (Sweden)

    Lei Jia

    Full Text Available Thermostability issue of protein point mutations is a common occurrence in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding decision making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants' experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and melting temperature change (dTm), were obtained from this database. Folding free energy change calculation from Rosetta, structural information of the point mutations as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression are used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods were discussed. Rosetta calculated folding free energy change ranked as the most influential features in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.

  18. Modeling and Simulation of Time Series Prediction Based on Dynamic Neural Network

    Institute of Scientific and Technical Information of China (English)

    王雪松; 程玉虎; 彭光正

    2004-01-01

    Modeling and simulation of time series prediction based on dynamic neural network (NN) are studied. Prediction model for non-linear and time-varying system is proposed based on dynamic Jordan NN. Aiming at the intrinsic defects of back-propagation (BP) algorithm that cannot update network weights incrementally, a hybrid algorithm combining the temporal difference (TD) method with BP algorithm to train Jordan NN is put forward. The proposed method is applied to predict the ash content of clean coal in jigging production real-time and multi-step. A practical example is also given and its application results indicate that the method has better performance than others and also offers a beneficial reference to the prediction of nonlinear time series.

  19. Model Based Predictive Control of Thermal Comfort for Integrated Building System

    Science.gov (United States)

    Georgiev, Tz.; Jonkov, T.; Yonchev, E.; Tsankov, D.

    2011-12-01

    This article deals with the indoor thermal control problem in HVAC (heating, ventilation and air conditioning) systems. Important outdoor and indoor variables in these systems are: air temperature, global and diffuse radiation, wind speed and direction, temperature, relative humidity, mean radiant temperature, and so on. The aim of this article is to optimise thermal comfort in an integrated building system by means of model based predictive control (MBPC) algorithms. The control law is given by a quadratic programming problem, and the obtained control action is applied to the process. The derived models and model based predictive control algorithms are investigated using real-life data. All computations are carried out in the MATLAB environment. Further research will focus on the synthesis of robust energy-saving control algorithms.
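
    A toy receding-horizon controller in the same spirit is sketched below for a one-zone thermal model: at each step the heating input sequence minimizing a quadratic tracking-plus-effort cost over the horizon is computed, and only the first move is applied. For brevity the quadratic program is solved without constraints by least squares and the move is clipped afterwards; the model parameters, horizon, and bounds are assumptions.

```python
import numpy as np

# One-zone thermal model: T[k+1] = a*T[k] + b*u[k] + c*T_out[k]   (illustrative parameters)
a, b, c = 0.92, 0.6, 0.08
H, lam = 12, 0.05            # prediction horizon (hours) and input weight
T_ref = 21.0
u_min, u_max = 0.0, 5.0      # heating power bounds (kW)

def mpc_move(T0, T_out_forecast):
    """Receding-horizon move: minimise sum (T - T_ref)^2 + lam*u^2 over the horizon.
    Solved as an unconstrained least-squares problem, then clipped to the input bounds
    (a full constrained formulation would hand the same QP to a proper solver)."""
    # Stacked prediction T = F*T0 + G*u + d, built row by row.
    F = np.array([a ** (k + 1) for k in range(H)])
    G = np.zeros((H, H))
    d = np.zeros(H)
    for k in range(H):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
            d[k] += a ** (k - j) * c * T_out_forecast[j]
    A = np.vstack([G, np.sqrt(lam) * np.eye(H)])
    rhs = np.concatenate([T_ref - F * T0 - d, np.zeros(H)])
    u, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return float(np.clip(u[0], u_min, u_max))

# Closed-loop simulation over a cold day (illustrative outdoor temperatures).
T, log = 17.0, []
T_out = 5 + 3 * np.sin(np.linspace(0, 2 * np.pi, 48))
for k in range(36):
    u0 = mpc_move(T, T_out[k:k + H])
    T = a * T + b * u0 + c * T_out[k]
    log.append((round(T, 2), round(u0, 2)))
print(log[:6])
```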

  20. Linear Model-Based Predictive Control of the LHC 1.8 K Cryogenic Loop

    CERN Document Server

    Blanco-Viñuela, E; De Prada-Moraga, C

    1999-01-01

    The LHC accelerator will employ 1800 superconducting magnets (for guidance and focusing of the particle beams) in a pressurized superfluid helium bath at 1.9 K. This temperature is a severely constrained control parameter in order to avoid the transition from the superconducting to the normal state. Cryogenic processes are difficult to regulate due to their highly non-linear physical parameters (heat capacity, thermal conductance, etc.) and undesirable peculiarities like non self-regulating process, inverse response and variable dead time. To reduce the requirements on either temperature sensor or cryogenic system performance, various control strategies have been investigated on a reduced-scale LHC prototype built at CERN (String Test). Model Based Predictive Control (MBPC) is a regulation algorithm based on the explicit use of a process model to forecast the plant output over a certain prediction horizon. This predicted controlled variable is used in an on-line optimization procedure that minimizes an approp...

  1. Composite control for raymond mill based on model predictive control and disturbance observer

    Directory of Open Access Journals (Sweden)

    Dan Niu

    2016-03-01

    Full Text Available In the raymond mill grinding process, precise control of the operating load is vital for high product quality. However, strong external disturbances, such as variations of ore size and ore hardness, usually cause great performance degradation, and it is not easy to keep the current of the raymond mill constant. Several control strategies have been proposed; however, most of them (such as proportional–integral–derivative and model predictive control) reject disturbances only through feedback regulation, which may lead to poor control performance in the presence of strong disturbances. For improving disturbance rejection, a control method based on model predictive control and a disturbance observer is put forward in this article. The scheme employs the disturbance observer as feedforward compensation and the model predictive control controller as feedback regulation. The test results illustrate that, compared with the model predictive control method, the proposed disturbance observer–model predictive control method achieves significantly better disturbance rejection, such as shorter settling time and smaller peak overshoot under strong disturbances.

  2. Component-based model to predict aerodynamic noise from high-speed train pantographs

    Science.gov (United States)

    Latorre Iglesias, E.; Thompson, D. J.; Smith, M. G.

    2017-04-01

    At typical speeds of modern high-speed trains the aerodynamic noise produced by the airflow over the pantograph is a significant source of noise. Although numerical models can be used to predict this they are still very computationally intensive. A semi-empirical component-based prediction model is proposed to predict the aerodynamic noise from train pantographs. The pantograph is approximated as an assembly of cylinders and bars with particular cross-sections. An empirical database is used to obtain the coefficients of the model to account for various factors: incident flow speed, diameter, cross-sectional shape, yaw angle, rounded edges, length-to-width ratio, incoming turbulence and directivity. The overall noise from the pantograph is obtained as the incoherent sum of the predicted noise from the different pantograph struts. The model is validated using available wind tunnel noise measurements of two full-size pantographs. The results show the potential of the semi-empirical model to be used as a rapid tool to predict aerodynamic noise from train pantographs.

  3. Spatiotemporal Modeling of Urban Growth Predictions Based on Driving Force Factors in Five Saudi Arabian Cities

    Directory of Open Access Journals (Sweden)

    Abdullah F. Alqurashi

    2016-08-01

    Full Text Available This paper investigates the effect of four driving forces, including elevation, slope, distance to drainage and distance to major roads, on urban expansion in five Saudi Arabian cities: Riyadh, Jeddah, Makkah, Al-Taif and Eastern Area. The prediction of urban probabilities in the selected cities based on the four driving forces is generated using a logistic regression model for two time periods of urban change, 1985 and 2014. The validation of the model was tested using two approaches. The first approach was a quantitative analysis using the Relative Operating Characteristic (ROC) method. The second approach was a qualitative analysis in which the probable urban growth maps based on urban changes in 1985 are used to test the ability of the model to predict probable urban growth after 2014, by comparing the probable maps of 1985 and the actual urban growth of 2014. The results indicate that the prediction model of 2014 provides a reliable and consistent prediction based on the performance of 1985. The analysis of driving forces shows variable effects over time. Variables such as elevation, slope and road distance had significant effects on the selected cities. However, distance to major roads was the factor with the most impact in determining the urban form in all five cities in both 1985 and 2014.
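
    A minimal sketch of the logistic-regression/ROC workflow described here is given below, with synthetic stand-ins for the four driving-force variables; the data, coefficients and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

      # Sketch of the logistic-regression/ROC workflow with synthetic stand-ins for the four
      # driving forces (elevation, slope, distance to drainage, distance to major roads).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 5000
      X = np.column_stack([
          rng.normal(300, 100, n),    # elevation (m)
          rng.exponential(5, n),      # slope (degrees)
          rng.exponential(800, n),    # distance to drainage (m)
          rng.exponential(1500, n),   # distance to major roads (m)
      ])
      # Synthetic "urban change" label: growth more likely near roads and on gentle slopes
      logit = 1.0 - 0.002 * X[:, 3] - 0.1 * X[:, 1]
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      model = LogisticRegression(max_iter=1000).fit(X, y)
      probability_map = model.predict_proba(X)[:, 1]          # urban growth probabilities
      print("ROC AUC:", round(roc_auc_score(y, probability_map), 3))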

  4. Protein-binding site prediction based on three-dimensional protein modeling.

    Science.gov (United States)

    Oh, Mina; Joo, Keehyoung; Lee, Jooyoung

    2009-01-01

    Structural information of a protein can guide one to understand the function of the protein, and ligand binding is one of the major biochemical functions of proteins. We have applied a two-stage template-based ligand binding site prediction method to CASP8 targets and achieved high quality results with accuracy/coverage = 70/80 (LEE). First, templates are used for protein structure modeling and then for binding site prediction by structural clustering of ligand-containing templates to the predicted protein model. Remarkably, the results are only a few percent worse than those one can obtain from native structures, which were available only after the prediction. Prediction was performed without knowing identity of ligands, and consequently, in many cases the ligand molecules used for prediction were different from the actual ligands, and yet we find that the prediction was quite successful. The current approach can be easily combined with experiments to investigate protein activities in a systematic way. Copyright 2009 Wiley-Liss, Inc.

  5. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    Directory of Open Access Journals (Sweden)

    Su Yang

    Full Text Available Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, the approach has good scalability and is applicable to large-scale networks.
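
    The variable-selection idea, a sparse model that decides which sensors' most recent flows matter for a given target sensor, can be sketched with an L1-penalised regression; the synthetic flows and the choice of scikit-learn's Lasso are assumptions for illustration, not the paper's sparse-representation method.

      # Sketch: sparse (L1) regression that selects which sensors' most recent flows are
      # relevant to a target sensor, mirroring the first-order, city-wide context idea.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)
      n_steps, n_sensors = 500, 40
      flows = rng.random((n_steps, n_sensors))
      # Target sensor 0 depends (synthetically) on sensors 3 and 17 one step earlier
      flows[1:, 0] = 0.6 * flows[:-1, 3] + 0.3 * flows[:-1, 17] + 0.05 * rng.random(n_steps - 1)

      X = flows[:-1, :]          # all sensors at time t-1 (first-order context)
      y = flows[1:, 0]           # target sensor at time t

      lasso = Lasso(alpha=0.01).fit(X, y)
      selected = np.flatnonzero(lasso.coef_)
      print("sensors retained as spatial context:", selected)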

  6. Characteristics-based model predictive control of a catalytic flow reversal reactor

    Energy Technology Data Exchange (ETDEWEB)

    Fuxman, A.M.; Forbes, J.F.; Hayes, R.E. [Alberta Univ., Edmonton, AB (Canada). Dept. of Chemical and Materials Engineering

    2007-08-15

    A model-based controller for a catalytic flow reversal reactor (CFRR) was presented. The characteristics-based model predictive control (CBMPC) was used to provide greater accuracy in the prediction of process output variables as well as to ensure the maintenance of safe operating temperatures. Performance of the CBMPC was simulated in order to evaluate combustion of lean methane streams for the reduction of greenhouse gas (GHG) emissions. Dynamics of the CFRR were described using partial differential equations (PDEs) derived from mass and energy balances. The PDEs were then transformed into an equivalent lumped parameter model, which was in turn used to design the non-linear predictive controller. The prediction horizon was divided into Hp intervals during each half cycle. A constrained quadratic program was then solved to obtain an optimal input sequence. The strategy was then evaluated by applying it to a simple CFRR plant, as well as a more complex plant modelled by a dynamical-dimensional heterogeneous model that incorporated the effect of a large insulation layer needed to reduce heat loss from the reactor. Results of the simulations suggested that mass extraction in a CBMPC scheme can be used to maintain safe operating conditions. It was concluded that the strategy provided good control performance for regulation and set point tracking in the presence of inlet disturbances and other changes in operating conditions. 18 refs., 1 tab., 10 figs.

  7. Quantifying uncertainties in streamflow predictions through signature based inference of hydrological model parameters

    Science.gov (United States)

    Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo

    2016-04-01

    The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature-based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature-based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature-based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature-based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood
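
    As a small illustration of the signature idea, the sketch below computes a flow duration curve and a simple distance between observed and simulated FDCs, the kind of quantity a signature-based likelihood can be built on; the synthetic flows and the RMSE distance are assumptions for illustration, not the paper's likelihood formulation.

      # Sketch: a flow duration curve (FDC) signature and a simple observed-vs-simulated distance.
      import numpy as np

      def flow_duration_curve(q, n_points=20):
          """Return flows at evenly spaced exceedance probabilities."""
          probs = np.linspace(0.01, 0.99, n_points)
          return np.quantile(q, 1.0 - probs), probs

      rng = np.random.default_rng(2)
      q_obs = rng.lognormal(mean=1.0, sigma=0.8, size=3650)   # synthetic daily flows
      q_sim = rng.lognormal(mean=1.1, sigma=0.7, size=3650)

      fdc_obs, probs = flow_duration_curve(q_obs)
      fdc_sim, _ = flow_duration_curve(q_sim)
      rmse = float(np.sqrt(np.mean((fdc_obs - fdc_sim) ** 2)))
      print("RMSE between FDC signatures:", round(rmse, 3))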

  8. Prediction of Coal Consumption in China Based on the Partial Linear Model

    Institute of Scientific and Technical Information of China (English)

    Ying XIE; Chunxiang ZHAO

    2015-01-01

    China is one of the few countries using coal as its main energy source and is the world's second largest coal consumer, so research on coal consumption is necessary. At present, the prediction of coal consumption is mainly based on time series analysis of price and rarely considers the influence of other factors. In this paper, on the basis of demand theory, we establish multiple impact indicators and use principal component analysis together with a partial linear model for multiple factors to build a coal consumption model. Using this model to forecast coal consumption in 2011, we find that the predicted value is close to the actual value, indicating that the model performs well.
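
    The principal-component-plus-regression step can be sketched as follows; the synthetic indicators and the use of scikit-learn are illustrative assumptions and do not reproduce the paper's partial linear model.

      # Sketch of principal-component regression: PCA to decorrelate the demand indicators,
      # then a linear fit on the leading components. Data are synthetic placeholders.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(3)
      n_years, n_indicators = 30, 6
      X = rng.normal(size=(n_years, n_indicators))
      X[:, 1] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=n_years)    # deliberately collinear
      coal = X @ np.array([2.0, 1.5, 0.5, 0.0, 0.3, 0.1]) + rng.normal(scale=0.2, size=n_years)

      pca = PCA(n_components=3).fit(X)
      scores = pca.transform(X)                    # principal components as new predictors
      model = LinearRegression().fit(scores, coal)
      print("explained variance of kept PCs:", np.round(pca.explained_variance_ratio_, 2))
      print("in-sample R^2:", round(model.score(scores, coal), 3))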

  9. Knowledge-based artificial neural network model to predict the properties of alpha+ beta titanium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Banu, P. S. Noori; Rani, S. Devaki [Dept. of Metallurgical Engineering, Jawaharlal Nehru Technological University, Hyderabad (India)

    2016-08-15

    In view of emerging applications of alpha+beta titanium alloys in aerospace and defense, we have aimed to develop a Back propagation neural network (BPNN) model capable of predicting the properties of these alloys as functions of alloy composition and/or thermomechanical processing parameters. The optimized BPNN model architecture was based on the sigmoid transfer function and has one hidden layer with ten nodes. The BPNN model showed excellent predictability of five properties: Tensile strength (r: 0.96), yield strength (r: 0.93), beta transus (r: 0.96), specific heat capacity (r: 1.00) and density (r: 0.99). The developed BPNN model was in agreement with the experimental data in demonstrating the individual effects of alloying elements in modulating the above properties. This model can serve as the platform for the design and development of new alpha+beta titanium alloys in order to attain desired strength, density and specific heat capacity.

  10. A Validation of Subchannel Based CHF Prediction Model for Rod Bundles

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae-Hyun; Kim, Seong-Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    A large CHF database was procured from various sources, including square and non-square lattice test bundles. CHF prediction accuracy was evaluated for various models including the CHF lookup table method, empirical correlations, and phenomenological DNB models. The parametric effect of mass velocity and the unheated wall was investigated from the experimental results and incorporated into the development of a local parameter CHF correlation applicable to APWR conditions. According to the CHF design criterion, CHF should not occur at the hottest rod in the reactor core during normal operation and anticipated operational occurrences with at least a 95% probability at a 95% confidence level. This is accomplished by assuring that the minimum DNBR (Departure from Nucleate Boiling Ratio) in the reactor core is greater than the limit DNBR, which accounts for the accuracy of the CHF prediction model. The limit DNBR can be determined from the inverse of the lower tolerance limit of M/P, which is evaluated from the measured-to-predicted CHF ratios for the relevant CHF database. It is important to evaluate the adequacy of the CHF prediction model for application to actual reactor core conditions. Validation of the CHF prediction model provides the degree of accuracy inferred from the comparison of solution and data. To achieve the required accuracy for the CHF prediction model, it may be necessary to calibrate the model parameters by employing the validation results. If the accuracy of the model is acceptable, then it is applied to the real complex system with the inferred accuracy of the model. In the conventional approach, the accuracy of the CHF prediction model was evaluated from the M/P statistics for the relevant CHF database, obtained by comparing the nominal values of the predicted and measured CHFs. The experimental uncertainty of the CHF data was not considered in this approach to determine the limit DNBR. When a subchannel based CHF prediction model

  11. Reliability estimation and remaining useful lifetime prediction for bearing based on proportional hazard model

    Institute of Scientific and Technical Information of China (English)

    王鹭; 张利; 王学芝

    2015-01-01

    As the bearing is the central component of a rotating machine, performance reliability assessment and remaining useful lifetime prediction of bearings are of crucial importance in condition-based maintenance to reduce the maintenance cost and improve reliability. A prognostic algorithm to assess the reliability and forecast the remaining useful lifetime (RUL) of bearings was proposed, consisting of three phases. Online vibration and temperature signals of bearings in the normal state were measured during the manufacturing process, and the most useful time-dependent features of the vibration signals were extracted based on correlation analysis (feature selection step). Time series analysis based on a neural network, as an identification model, was used to predict the features of the bearing vibration signals at any horizon (feature prediction step). Furthermore, according to the features, a degradation factor was defined. A proportional hazard model was generated to estimate the survival function and forecast the RUL of the bearing (RUL prediction step). The positive results show the plausibility and effectiveness of the proposed approach, which can facilitate bearing reliability estimation and RUL prediction.

  12. Interpolation-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    CERN Document Server

    Mudunuru, M K; Harp, D R; Guthrie, G D; Viswanathan, H S

    2016-01-01

    The goal of this paper is to assess the utility of Reduced-Order Models (ROMs) developed from 3D physics-based models for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on Latin Hypercube Sampling (LHS) of model inputs drawn from uniform probability distributions. Key sensitive parameters are identified from these simulations, which are fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. The inputs for ROMs are based on these key sensitive parameters. The ROMs are then used to evaluate the influence of subsurface attributes on thermal power production curves. The resulting ROMs are compared with field-data and the detailed physics-based numerical simulations. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production cu...

  13. Dynamical-statistical seasonal prediction for western North Pacific typhoons based on APCC multi-models

    Science.gov (United States)

    Kim, Ok-Yeon; Kim, Hye-Mi; Lee, Myong-In; Min, Young-Mi

    2017-01-01

    This study aims at predicting the seasonal number of typhoons (TY) over the western North Pacific with an Asia-Pacific Climate Center (APCC) multi-model ensemble (MME)-based dynamical-statistical hybrid model. The hybrid model uses the statistical relationship between the number of TY during the typhoon season (July-October) and the large-scale key predictors forecasted by the APCC MME for the same season. The cross validation result from the MME hybrid model demonstrates high prediction skill, with a correlation of 0.67 between the hindcasts and observations for 1982-2008. The cross validation from the hybrid model with individual models participating in the MME indicates that there is no single model which consistently outperforms the other models in predicting typhoon number. Although the forecast skill of the MME is not always the highest compared to that of each individual model, the MME presents rather high averaged correlations and a small variance of correlations. Given the large set of ensemble members from multiple models, a relative operating characteristic score reveals an 82 % (above-normal) and 78 % (below-normal) improvement for the probabilistic prediction of the number of TY. This implies that there is an 82 % (78 %) probability that the forecasts can successfully discriminate above-normal (below-normal) years from other years. The hybrid model forecast for the past 7 years (2002-2008) is more skillful than the forecast from the Tropical Storm Risk consortium. Using the large set of ensemble members from multiple models, the APCC MME could provide useful deterministic and probabilistic seasonal typhoon forecasts to end-users, in particular the residents of tropical cyclone-prone areas in the Asia-Pacific region.
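
    A minimal sketch of the hybrid statistical step, a regression of seasonal typhoon counts on forecast predictors scored by leave-one-out cross-validation, is shown below; the predictors and counts are synthetic placeholders rather than APCC MME data.

      # Sketch: regress seasonal typhoon counts on forecast predictors and score the model
      # with leave-one-out cross-validation. All data are synthetic placeholders.
      import numpy as np

      rng = np.random.default_rng(4)
      n_years, n_predictors = 27, 3
      predictors = rng.normal(size=(n_years, n_predictors))   # e.g. forecast SST/circulation indices
      counts = 25 + 3 * predictors[:, 0] - 2 * predictors[:, 1] + rng.normal(scale=1.5, size=n_years)

      loo_pred = np.empty(n_years)
      for i in range(n_years):
          train = np.delete(np.arange(n_years), i)
          A = np.column_stack([np.ones(len(train)), predictors[train]])
          coef, *_ = np.linalg.lstsq(A, counts[train], rcond=None)
          loo_pred[i] = np.concatenate(([1.0], predictors[i])) @ coef

      print("LOO correlation:", round(float(np.corrcoef(loo_pred, counts)[0, 1]), 2))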

  14. Passenger Flow Prediction of Subway Transfer Stations Based on Nonparametric Regression Model

    Directory of Open Access Journals (Sweden)

    Yujuan Sun

    2014-01-01

    Full Text Available Passenger flow is increasing dramatically with the completion of subway network systems in big cities of China. As convergence nodes of subway lines, transfer stations have to handle more passengers owing to the large transfer demand among different lines, so transfer facilities face great pressure such as pedestrian congestion or other abnormal situations. In order to avoid pedestrian congestion, or to warn management before it occurs, it is necessary to predict the transfer passenger flow. Thus, based on nonparametric regression theory, a transfer passenger flow prediction model was proposed. In order to test and illustrate the prediction model, data on transfer passenger flow for one month at the XIDAN transfer station were used to calibrate and validate the model. By comparison with a Kalman filter model and a support vector machine regression model, the results show that the nonparametric regression model has the advantages of high accuracy and strong transferability, and could predict transfer passenger flow accurately for different intervals.
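
    A minimal sketch of nonparametric (k-nearest-neighbour) regression for this kind of forecast is given below: the prediction is the average outcome of the k most similar historical flow patterns. The synthetic flow series, lag length and k are assumptions for illustration.

      # Sketch of nonparametric (k-NN) regression for transfer passenger flow forecasting.
      import numpy as np

      def knn_forecast(history, current_state, k=5, lag=3):
          """history: 1-D array of past flows; current_state: the last `lag` observations."""
          states, nxt = [], []
          for t in range(lag, len(history)):
              states.append(history[t - lag:t])
              nxt.append(history[t])
          states, nxt = np.array(states), np.array(nxt)
          dist = np.linalg.norm(states - current_state, axis=1)   # similarity to past patterns
          nearest = np.argsort(dist)[:k]
          return nxt[nearest].mean()                              # average of the k outcomes

      rng = np.random.default_rng(5)
      flow = 500 + 200 * np.sin(np.arange(600) * 2 * np.pi / 96) + rng.normal(0, 20, 600)
      print("next-interval forecast:", round(knn_forecast(flow, flow[-3:]), 1))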

  15. A novel prediction method about single components of analog circuits based on complex field modeling.

    Science.gov (United States)

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Few studies pay attention to prediction for analog circuits. The few existing methods lack a link to circuit analysis when extracting and calculating features, so that fault indicator (FI) calculation often lacks rationality, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, with an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and degeneration of single components in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components of analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components of analog circuits. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments.

  16. A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling

    Directory of Open Access Journals (Sweden)

    Jingyu Zhou

    2014-01-01

    Full Text Available Few studies pay attention to prediction for analog circuits. The few existing methods lack a link to circuit analysis when extracting and calculating features, so that fault indicator (FI) calculation often lacks rationality, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, with an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and degeneration of single components in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components of analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components of analog circuits. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments.

  17. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    Science.gov (United States)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that ANN point prediction lacks reliability since the uncertainty of predictions is not quantified, which limits its use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain an optimal set of weights and biases, and during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the 2nd stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) the maximum number of measured data points falling within the estimated prediction interval and (iii) the minimum width of the prediction interval. The method is illustrated using a real world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived
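
    The two evaluation quantities reported here, interval width and the percentage of observations covered, can be computed from any prediction ensemble as in the sketch below; the synthetic flows and ensemble are placeholders, not the study's GA-trained ANN.

      # Sketch: evaluating an ensemble-based prediction interval by the two criteria the study
      # optimises against, interval width and the percentage of observations falling inside it.
      import numpy as np

      rng = np.random.default_rng(6)
      observed = rng.gamma(shape=2.0, scale=50.0, size=200)              # synthetic flows
      ensemble = observed + rng.normal(0, 15, size=(30, 200))            # 30-member ensemble

      lower = np.percentile(ensemble, 2.5, axis=0)
      upper = np.percentile(ensemble, 97.5, axis=0)

      coverage = np.mean((observed >= lower) & (observed <= upper)) * 100
      avg_width = np.mean(upper - lower)
      print(f"coverage: {coverage:.1f}%  average interval width: {avg_width:.1f} m^3/s")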

  18. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  19. Gas Emission Prediction Model of Coal Mine Based on CSBP Algorithm

    Directory of Open Access Journals (Sweden)

    Xiong Yan

    2016-01-01

    Full Text Available In view of the nonlinear characteristics of gas emission in a coal working face, a prediction method is proposed based on a cuckoo-search-optimized BP neural network (CSBP). In the CSBP algorithm, the cuckoo search is adopted to optimize the weight and threshold parameters of the BP network and obtain globally optimal solutions. Furthermore, the twelve main factors affecting gas emission in the coal working face are taken as the input vector of the CSBP algorithm, the gas emission is taken as the output vector, and the prediction model of the BP neural network with optimal parameters is then established. The results show that the CSBP algorithm has better generalization ability and higher prediction accuracy, and can be utilized effectively in the prediction of coal mine gas emission.

  20. Comprehensible Predictive Modeling Using Regularized Logistic Regression and Comorbidity Based Features.

    Directory of Open Access Journals (Sweden)

    Gregor Stiglic

    Full Text Available Different studies have demonstrated the importance of comorbidities to better understand the origin and evolution of medical complications. This study focuses on improvement of the predictive model interpretability based on simple logical features representing comorbidities. We use group lasso based feature interaction discovery followed by a post-processing step, where simple logic terms are added. In the final step, we reduce the feature set by applying lasso logistic regression to obtain a compact set of non-zero coefficients that represent a more comprehensible predictive model. The effectiveness of the proposed approach was demonstrated on a pediatric hospital discharge dataset that was used to build a readmission risk estimation model. The evaluation of the proposed method demonstrates a reduction of the initial set of features in a regression model by 72%, with a slight improvement in the Area Under the ROC Curve metric from 0.763 (95% CI: 0.755-0.771) to 0.769 (95% CI: 0.761-0.777). Additionally, our results show improvement in comprehensibility of the final predictive model using simple comorbidity based terms for logistic regression.
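
    A minimal sketch of the final modelling step, L1-regularised logistic regression on simple logical comorbidity features, is shown below; the synthetic diagnoses and scikit-learn settings are assumptions for illustration, not the study's pipeline.

      # Sketch: a simple logical (AND) comorbidity feature plus L1-regularised logistic
      # regression, which yields a compact, more interpretable coefficient set.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)
      n = 2000
      asthma = rng.integers(0, 2, n)
      diabetes = rng.integers(0, 2, n)
      seizure = rng.integers(0, 2, n)
      both = asthma & diabetes                       # simple logical comorbidity term
      logit = -2.0 + 1.5 * both + 0.4 * seizure
      readmitted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      X = np.column_stack([asthma, diabetes, seizure, both])
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, readmitted)
      print("coefficients (zeros are dropped features):", np.round(clf.coef_[0], 2))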

  1. Research on Short-Term Wind Power Prediction Based on Combined Forecasting Models

    Directory of Open Access Journals (Sweden)

    Zhang Chi

    2016-01-01

    Full Text Available Short-term wind power forecasting is crucial for the power grid since the energy generated by a wind farm fluctuates frequently. In this paper, a physical forecasting model based on NWP and a statistical forecasting model based on a BP neural network with an optimized initial value are presented. In order to make full use of the advantages of the presented models and overcome their individual limitations, an equal weight model and a minimum variance model are established for wind power prediction. Simulation results show that the combination forecasting model is more precise than either single forecasting model, and the minimum variance combination model can dynamically adjust the weight of each single method, restraining the forecasting error further.
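
    The two combination rules mentioned here can be sketched directly: equal weighting versus minimum-variance weights computed from the error covariance of the two forecasts. The synthetic forecast errors below are placeholders standing in for the physical (NWP) and statistical (BP) models.

      # Sketch of the two combination rules: equal weights versus minimum-variance weights.
      import numpy as np

      rng = np.random.default_rng(8)
      actual = 50 + 10 * rng.random(200)
      err_nwp = rng.normal(0, 4.0, 200)              # synthetic forecast errors
      err_bp = rng.normal(0, 2.5, 200)
      forecasts = np.vstack([actual + err_nwp, actual + err_bp])

      # Minimum-variance weights: w = S^{-1} 1 / (1^T S^{-1} 1), with S the error covariance
      S = np.cov(forecasts - actual)
      ones = np.ones(2)
      w = np.linalg.solve(S, ones) / (ones @ np.linalg.solve(S, ones))

      combined_eq = forecasts.mean(axis=0)
      combined_mv = w @ forecasts
      for name, f in [("equal weight", combined_eq), ("min variance", combined_mv)]:
          print(name, "RMSE:", round(float(np.sqrt(np.mean((f - actual) ** 2))), 2))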

  2. Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models

    Directory of Open Access Journals (Sweden)

    Yang beibei Ji

    2014-01-01

    Full Text Available Accurate prediction of incident duration is not only important information for a Traffic Incident Management System, but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads’ STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best fitting distributions are identified for both clearance and arrival time for 3 types of incident: crash, stationary vehicle, and hazard. The results show that Gamma, Log-logistic, and Weibull are the best fit for crash, stationary vehicle, and hazard incidents, respectively. The main impact factors are given for crash clearance time and arrival time. The quantitative influences for crash and hazard incidents are presented for both clearance and arrival. The model accuracy is analyzed at the end.
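
    A minimal sketch of the best-fit-distribution step is given below: candidate duration distributions are fitted to clearance times and compared by AIC. The synthetic durations and the use of SciPy's gamma, log-logistic (fisk) and Weibull families are assumptions for illustration, not the study's fitting procedure.

      # Sketch: fit candidate duration distributions to incident clearance times, compare by AIC.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)
      clearance_min = rng.weibull(1.4, size=500) * 45        # synthetic clearance times (minutes)

      candidates = {"gamma": stats.gamma, "log-logistic": stats.fisk, "weibull": stats.weibull_min}
      for name, dist in candidates.items():
          params = dist.fit(clearance_min, floc=0)           # fix the location at zero
          loglik = np.sum(dist.logpdf(clearance_min, *params))
          aic = 2 * len(params) - 2 * loglik
          print(f"{name:>12s}: AIC = {aic:.1f}")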

  3. A QoS-Satisfied Prediction Model for Cloud-Service Composition Based on a Hidden Markov Model

    OpenAIRE

    Qingtao Wu; Mingchuan Zhang; Ruijuan Zheng; Ying Lou; Wangyang Wei

    2013-01-01

    Various significant issues in cloud computing, such as service provision, service matching, and service assessment, have attracted researchers’ attention recently. Quality of service (QoS) plays an increasingly important role in the provision of cloud-based services, by aiming for the seamless and dynamic integration of cloud-service components. In this paper, we focus on QoS-satisfied predictions about the composition of cloud-service components and present a QoS-satisfied prediction model b...

  4. Automated soil resources mapping based on decision tree and Bayesian predictive modeling

    Institute of Scientific and Technical Information of China (English)

    周斌; 张新刚; 王人潮

    2004-01-01

    This article presents two approaches for automated building of knowledge bases for soil resources mapping. These methods used decision tree and Bayesian predictive modeling, respectively, to generate knowledge from training data. With these methods, building a knowledge base for automated soil mapping is easier than using the conventional knowledge acquisition approach. The knowledge bases built by these two methods were used by the knowledge classifier for soil type classification of the Longyou area, Zhejiang Province, China, using TM bi-temporal imageries and GIS data. To evaluate the performance of the resultant knowledge bases, the classification results were compared to an existing soil map based on field survey. The accuracy assessment and analysis of the resultant soil maps suggested that the knowledge bases built by these two methods were of good quality for mapping the distribution of soil classes over the study area.

  5. Automated soil resources mapping based on decision tree and Bayesian predictive modeling

    Institute of Scientific and Technical Information of China (English)

    周斌; 张新刚; 王人潮

    2004-01-01

    This article presents two approaches for automated building of knowledge bases for soil resources mapping. These methods used decision tree and Bayesian predictive modeling, respectively, to generate knowledge from training data. With these methods, building a knowledge base for automated soil mapping is easier than using the conventional knowledge acquisition approach. The knowledge bases built by these two methods were used by the knowledge classifier for soil type classification of the Longyou area, Zhejiang Province, China, using TM bi-temporal imageries and GIS data. To evaluate the performance of the resultant knowledge bases, the classification results were compared to an existing soil map based on field survey. The accuracy assessment and analysis of the resultant soil maps suggested that the knowledge bases built by these two methods were of good quality for mapping the distribution of soil classes over the study area.

  6. Evaluation of remote-sensing-based rainfall products through predictive capability in hydrological runoff modelling

    DEFF Research Database (Denmark)

    Stisen, Simon; Sandholt, Inge

    2010-01-01

    The emergence of regional and global satellite-based rainfall products with high spatial and temporal resolution has opened up new large-scale hydrological applications in data-sparse or ungauged catchments. Particularly, distributed hydrological models can benefit from the good spatial coverage...... and distributed nature of satellite-based rainfall estimates (SRFE). In this study, five SRFEs with temporal resolution of 24 h and spatial resolution between 8 and 27 km have been evaluated through their predictive capability in a distributed hydrological model of the Senegal River basin in West Africa. The main...

  7. Ligand and structure-based classification models for Prediction of P-glycoprotein inhibitors

    DEFF Research Database (Denmark)

    Klepsch, Freya; Poongavanam, Vasanthanathan; Ecker, Gerhard Franz

    2014-01-01

    obtained by docking into a homology model of P-gp, to supervised machine learning methods, such as Kappa nearest neighbor, support vector machine (SVM), random forest and binary QSAR, by using a large, structurally diverse data set. In addition, the applicability domain of the models was assessed using...... an algorithm based on Euclidean distance. Results show that random forest and SVM performed best for classification of P-gp inhibitors and non-inhibitors, correctly predicting 73/75 % of the external test set compounds. Classification based on the docking experiments using the scoring function Chem...

  8. Imprecise Computation Based Real-time Fault Tolerant Implementation for Model Predictive Control

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Model predictive control (MPC) cannot readily be deployed in real-time control systems because its computation time is not well defined. A real-time fault-tolerant implementation algorithm based on imprecise computation is proposed for MPC, according to the solving process of the quadratic programming (QP) problem. In this algorithm, system stability is guaranteed even when computation resources are not sufficient to finish the optimization completely. By this kind of graceful degradation, the behavior of real-time control systems remains predictable and determinate. The algorithm is demonstrated by experiments on a servomotor, and the simulation results show its effectiveness.

  9. A network security situation prediction model based on wavelet neural network with optimized parameters

    Directory of Open Access Journals (Sweden)

    Haibo Zhang

    2016-08-01

    Full Text Available Security incidents in networks are sudden and uncertain, so it is very hard to precisely predict the network security situation by traditional methods. In order to improve the prediction accuracy of the network security situation, we build a network security situation prediction model based on a Wavelet Neural Network (WNN) with parameters optimized by an Improved Niche Genetic Algorithm (INGA). The proposed model adopts a WNN, which has strong nonlinear ability and fault-tolerance performance. The parameters of the WNN are optimized through an adaptive genetic algorithm (GA) so that the WNN searches more effectively. Considering that the adaptive GA converges slowly and easily suffers from premature convergence, we introduce a novel niche technique with a dynamic fuzzy clustering and elimination mechanism to solve the premature convergence of the GA. Our simulation results show that the proposed INGA-WNN prediction model is more reliable and effective, and achieves faster convergence and higher prediction accuracy than the Genetic Algorithm-Wavelet Neural Network (GA-WNN), Genetic Algorithm-Back Propagation Neural Network (GA-BPNN) and WNN.

  10. Model-based cap thickness and peak cap stress prediction for carotid MRI.

    Science.gov (United States)

    Kok, Annette M; van der Lugt, Aad; Verhagen, Hence J M; van der Steen, Antonius F W; Wentzel, Jolanda J; Gijsen, Frank J H

    2017-07-26

    A rupture-prone carotid plaque can potentially be identified by calculating the peak cap stress (PCS). For these calculations, plaque geometry from MRI is often used. Unfortunately, MRI is hampered by a low resolution, leading to an overestimation of cap thickness and an underestimation of PCS. We developed a model to reconstruct the cap based on plaque geometry to better predict cap thickness and PCS. We used histologically stained plaques from 34 patients. These plaques were segmented and served as the ground truth. Sections of these plaques contained 93 necrotic cores. Caps below the MRI resolution (n=31) were digitally removed and reconstructed according to the geometry-based model. Cap thickness and PCS were determined for the ground truth, readers, and reconstructed geometries. Cap thickness was 0.07 mm for the ground truth, 0.23 mm for the readers, and 0.12 mm for the reconstructed geometries. The model predicts cap thickness significantly better than the readers. PCS was 464 kPa for the ground truth, 262 kPa for the readers and 384 kPa for the reconstructed geometries. The model did not predict the PCS significantly better than the readers. The geometry-based model provided a significant improvement for cap thickness estimation and can potentially help in rupture-risk prediction, solely based on cap thickness. Estimation of PCS did not improve, probably due to the complex shape of the plaques. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  11. AN APPLICATION OF HYBRID CLUSTERING AND NEURAL BASED PREDICTION MODELLING FOR DELINEATION OF MANAGEMENT ZONES

    Directory of Open Access Journals (Sweden)

    Babankumar S. Bansod

    2011-02-01

    Full Text Available Starting from descriptive data on crop yield and various other properties, the aim of this study is to reveal trends in soil behaviour, such as crop yield. The study has been carried out by developing a web application that uses a well-known technique, cluster analysis. The cluster analysis revealed linkages between soil classes for the same field as well as between different fields, which can be partly assigned to crop rotation and the determination of variable soil input rates. A hybrid clustering algorithm has been developed taking into account the traits of two clustering technologies: (i) hierarchical clustering and (ii) K-means clustering. This hybrid clustering algorithm is applied to sensor-gathered data about soil and analysed, resulting in the formation of well delineated management zones based on various properties of soil, such as ECa, crop yield, etc. One of the purposes of the study was to identify the main factors affecting crop yield, and the results obtained were validated against existing techniques. To accomplish this purpose, geo-referenced soil information has been examined. Also, based on these data, a statistical method has been used to classify and characterize the soil behaviour. This is done using a prediction model, developed to predict the unknown behaviour of clusters based on the known behaviour of other clusters. In predictive modeling, data are collected for the relevant predictors, a statistical model is formulated, predictions are made and the model can be validated (or revised) as additional data become available. The model used in the web application has been formed taking into account a neural network based minimum Hamming distance criterion.
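
    A minimal sketch of the hybrid clustering idea, hierarchical clustering proposing initial centroids that K-means then refines into zones, is shown below; the synthetic ECa/yield data and the SciPy/scikit-learn calls are assumptions for illustration, not the web application's implementation.

      # Sketch: hierarchical clustering seeds K-means, which delineates management zones.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(10)
      # columns: apparent electrical conductivity (ECa), crop yield
      soil = np.vstack([rng.normal([10, 3.0], 0.5, (100, 2)),
                        rng.normal([25, 4.5], 0.5, (100, 2)),
                        rng.normal([40, 2.0], 0.5, (100, 2))])

      labels_h = fcluster(linkage(soil, method="ward"), t=3, criterion="maxclust")
      init_centroids = np.array([soil[labels_h == c].mean(axis=0) for c in np.unique(labels_h)])

      zones = KMeans(n_clusters=3, init=init_centroids, n_init=1).fit_predict(soil)
      print("zone sizes:", np.bincount(zones))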

  12. Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation

    Science.gov (United States)

    Ekin Aydin, Boran; Rutten, Martine

    2016-04-01

    Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, due to the water loss in open water systems by seepage, leakage and evaporation, a mismatch between the model and the real system will be created. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of the water level for open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydro-dynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we also used a well-known disturbance-modeling offset-free control scheme for the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance-modeling offset-free control scheme.
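
    The offset-removal mechanism can be illustrated on a toy storage model: an unmeasured loss is estimated from recent model-measurement residuals over a short horizon and fed back into the prediction. The reach parameters and the one-step controller below are assumptions for illustration, not the UPC-PAC canal model or the paper's MHE formulation.

      # Sketch: offset-free control of a toy canal reach; an unmodelled seepage outflow is
      # estimated from recent residuals and included in the prediction used by the controller.
      import numpy as np

      dt, area = 60.0, 2000.0            # assumed time step [s] and storage area [m^2]
      target = 2.0                       # water-level set point [m]
      level, q_in = 2.0, 0.0
      residuals = []

      for k in range(200):
          seepage = 0.5                  # unknown outflow [m^3/s], absent from the nominal model
          level_meas = level + dt / area * (q_in - seepage)

          # Moving-horizon estimate of the unmodelled disturbance (average of recent residuals)
          predicted = level + dt / area * q_in
          residuals.append((level_meas - predicted) * area / dt)
          d_hat = np.mean(residuals[-10:])

          # One-step predictive control with the disturbance estimate inside the model
          q_in = (target - level_meas) * area / dt - d_hat
          level = level_meas

      print("final level error [m]:", round(abs(level - target), 4))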

  13. Prediction of anisotropic behavior of nano/micro composite based on damage mechanics with cell modeling.

    Science.gov (United States)

    Lee, Dock-Jin; Kim, Young-Jin; Kim, Moon-Ki; Choi, Jae-Boong; Chang, Yoon-Suk; Liu, Wing Kam

    2011-01-01

    New advanced composite materials have recently been of great interest. In particular, many researchers have studied nano/micro composites based on a matrix filled with nano-particles, nano-tubes, nano-wires and so forth, which have outstanding thermal, electrical, optical, chemical and mechanical properties. Therefore, the need for a numerical approach for the design and development of these advanced materials has been recognized. In this paper, finite element analysis based on multi-resolution continuum theory is carried out to predict the anisotropic behavior of nano/micro composites based on damage mechanics with a cell modeling. The cell modeling systematically evaluates constitutive relationships from the microstructure of the composite material. Effects of plastic anisotropy on deformation behavior and damage evolution of the nano/micro composite are investigated by using Hill's 48 yield function and also compared with those obtained from the Gurson-Tvergaard-Needleman isotropic damage model based on the von Mises yield function.

  14. Forming limit prediction of powder forging process by the energy-based elastoplastic damage model

    Science.gov (United States)

    Yeh, Hung-Yang; Cheng, Jung-Ho; Huang, Cheng-Chao

    2004-06-01

    An energy-based elastoplastic damage model is developed and then applied to predict the deformation and fracture initiation in powder forging processes. The fracture mechanism is investigated by the newly proposed damage model, which is based on the plastic energy dissipation. The developed formulations are implemented into finite element program ABAQUS in order to simulate the complex loading conditions. The forming limits of sintered porous metals under various operational conditions are explored by comparing the relevant experiments with the finite element analyses. The sintered iron-powder preforms of various initial relative densities (RDs) and aspect ratios are compressed until crack initiates. The deformation level of the bulged billets at fracture stroke obtained from compressive fracture tests is utilized to validate the finite element model and then the forming limit diagrams are constructed with the validated model. This model is further verified by the gear blank forging. The fracture site and corresponding deformation level are predicted by the finite element simulations. Meanwhile, the gear forging experiment is performed on the sintered preforms. The predicted results agree well with the experimental observations.

  15. Regression-based air temperature spatial prediction models: an example from Poland

    Directory of Open Access Journals (Sweden)

    Mariusz Szymanowski

    2013-10-01

    Full Text Available A Geographically Weighted Regression - Kriging (GWRK) algorithm, based on local Geographically Weighted Regression (GWR), is applied for spatial prediction of air temperature in Poland. Hengl's decision tree for selecting a suitable prediction model is extended for varying spatial relationships between the air temperature and environmental predictors, with an assumption of existing environmental dependence of the analyzed temperature variables. The procedure includes the potential choice of a local GWR instead of the global Multiple Linear Regression (MLR) method for modeling the deterministic part of spatial variation, which is usual in the standard regression (residual) kriging model (MLRK). The analysis encompassed: testing for environmental correlation, selecting an appropriate regression model, testing for spatial autocorrelation of the residual component, and validating the prediction accuracy. The proposed approach was performed for 69 air temperature cases, with time aggregation ranging from daily to annual average air temperatures. The results show that, irrespective of the level of data aggregation, the spatial distribution of temperature is better fitted by local models, which is the reason for choosing GWR instead of MLR for all variables analyzed. Additionally, in most cases (78%) there is spatial autocorrelation in the residuals of the deterministic part, which suggests that the GWR model should be extended by ordinary kriging of residuals to the GWRK form. The decision tree used in this paper can be considered universal as it encompasses either spatially varying relationships of modeled and explanatory variables or a random process that can be modeled by a stochastic extension of the regression model (residual kriging). Moreover, for all cases analyzed, the selection of a method based on the local regression model (GWRK or GWR) does not depend on the data aggregation level, showing the potential versatility of the technique.

  16. General Model to Predict Power Flow Transmitted into Laminated Beam Bases in Flexible Isolation Systems

    Institute of Scientific and Technical Information of China (English)

    NIU Junchuan; GE Peiqi; HOU Cuirong; LIM C W; SONG Kongjie

    2009-01-01

    For estimating the vibration transmission accurately and performing vibration control efficiently in isolation systems, a novel general model is presented to predict the power flow transmitted into complicated flexible bases of laminated beams. In the model, the laminated beam bases are simulated by first-order shear deformation laminated plate theory, which is relatively simple and economical but accurate in predicting the vibration solutions of flexible isolation systems with laminated beam bases, in comparison with classical laminated beam theories and higher order theories. On the basis of the presented model, a substructure technique and the variational principle are employed to obtain the governing equation of the isolation system and the power flow solution. Then, the vibration characteristics of flexible isolation systems with laminated bases are investigated. Several numerical examples are given to show the validity and efficiency of the presented model. It is concluded that the presented model is an extension of the classical one and can obtain more accurate power flow solutions.

  17. Learning-based Nonlinear Model Predictive Control to Improve Vision-based Mobile Robot Path Tracking

    Science.gov (United States)

    2015-07-01

    [No abstract available; the record contains raw text extracted from the report.] The extracted text defines the tracking cost function as J(u) = (x_d - x)^T Q_x (x_d - x) + u^T R u, where Q_x is a positive semi-definite weighting matrix of dimension K n_x x K n_x, R and u are as in equation (3) of the report, x_d = (x_{d,k+1}, ..., x_{d,k+K}) is the sequence of desired states, x = (x_{k+1}, ..., x_{k+K}) is the sequence of predicted states, and K is the given prediction horizon. A figure caption (Figure 5) defines the robot velocities, v_k and w_k, and the pose variables x_k, y_k and theta_k.
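
    For reference, the quadratic tracking cost reconstructed above can be evaluated as in the sketch below; the horizon, dimensions and weighting matrices are placeholder values, not the report's tuning.

      # Minimal sketch: evaluating J(u) = (x_d - x)^T Q_x (x_d - x) + u^T R u over a K-step horizon.
      import numpy as np

      K, nx, nu = 3, 3, 2                     # horizon, state and input dimensions (assumed)
      x_pred = np.zeros(K * nx)               # stacked predicted states (x_{k+1}, ..., x_{k+K})
      x_des = np.ones(K * nx)                 # stacked desired states
      u = 0.1 * np.ones(K * nu)               # stacked inputs

      Qx = np.eye(K * nx)                     # positive semi-definite state weight
      R = 0.01 * np.eye(K * nu)               # input weight

      e = x_des - x_pred
      J = e @ Qx @ e + u @ R @ u
      print("tracking cost J(u):", round(float(J), 4))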

  18. Formal modeling of Gene Ontology annotation predictions based on factor graphs

    Science.gov (United States)

    Spetale, Flavio; Murillo, Javier; Tapia, Elizabeth; Arce, Débora; Ponce, Sergio; Bulacio, Pilar

    2016-04-01

    Gene Ontology (GO) is a hierarchical vocabulary for gene product annotation. Its synergy with machine learning classification methods has been widely used for the prediction of protein functions. Current classification methods rely on heuristic solutions to check the consistency with some aspects of the underlying GO structure. In this work we formalize the GO is-a relationship through predicate logic. Moreover, an ontology model based on Forney Factor Graph (FFG) is shown on a general fragment of Cellular Component GO.

  19. Fuzzy Shape Control Based on Elman Dynamic Recursion Network Prediction Model

    Institute of Scientific and Technical Information of China (English)

    JIA Chun-yu; LIU Hong-min

    2006-01-01

    In the strip rolling process, shape control system possesses the characteristics of nonlinearity, strong coupling, time delay and time variation. Based on self-adapting Elman dynamic recursion network prediction model, the fuzzy control method was used to control the shape on four-high cold mill. The simulation results showed that the system can be applied to real time on line control of the shape.

  20. Yield loss prediction models based on early estimation of weed pressure

    DEFF Research Database (Denmark)

    Asif, Ali; Streibig, Jens Carl; Andreasen, Christian

    2013-01-01

    Weed control thresholds have been used to reduce costs and avoid unacceptable yield loss. Estimation of weed infestation has often been based on counts of weed plants per unit area or measurement of their relative leaf area index. Various linear, hyperbolic, and sigmoidal regression models have...... been proposed to predict yield loss, relative to yield in weed free environment from early measurements of weed infestation. The models are integrated in some weed management advisory systems. Generally, the recommendations from the advisory systems are applied to the whole field, but weed control...... time of weeds relative to crop. The aim of the review is to analyze various approaches to estimate infestation of weeds and the literature about yield loss prediction for multispecies. We discuss limitations of regression models and possible modifications to include the influential factors related...

  1. Using hybrid models to predict blood pressure reactivity to unsupported back based on anthropometric characteristics

    Institute of Scientific and Technical Information of China (English)

    Gurmanik KAUR‡; Ajat Shatru ARORA; Vijender Kumar JAIN

    2015-01-01

    Accurate blood pressure (BP) measurement is essential in epidemiological studies, screening programmes, and research studies as well as in clinical practice for the early detection and prevention of high BP-related risks such as coronary heart disease, stroke, and kidney failure. Posture of the participant plays a vital role in accurate measurement of BP. Guidelines on measurement of BP contain recommendations on the position of the back of the participants by advising that they should sit with supported back to avoid spuriously high readings. In this work, principal component analysis (PCA) is fused with forward stepwise regression (SWR), artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), and the least squares support vector machine (LS-SVM) model for the prediction of BP reactivity to an unsupported back in normotensive and hypertensive participants. PCA is used to remove multi-collinearity among anthropometric predictor variables and to select a subset of components, termed ‘principal components’ (PCs), from the original dataset. The selected PCs are fed into the proposed models for modeling and testing. The evaluation of the performance of the constructed models, using appropriate statistical indices, shows clearly that a PCA-based LS-SVM (PCA-LS-SVM) model is a promising approach for the prediction of BP reactivity in comparison to others. This assessment demonstrates the importance and advantages posed by hybrid models for the prediction of variables in biomedical research studies.

  2. Combined Prediction Model of Death Toll for Road Traffic Accidents Based on Independent and Dependent Variables

    Science.gov (United States)

    Zhong-xiang, Feng; Shi-sheng, Lu; Wei-hua, Zhang; Nan-nan, Zhang

    2014-01-01

    In order to build a combined model which can meet the variation rule of death toll data for road traffic accidents and can reflect the influence of multiple factors on traffic accidents and improve prediction accuracy for accidents, the Verhulst model was built based on the number of death tolls for road traffic accidents in China from 2002 to 2011; and car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors to build the death toll multivariate linear regression model. Then the two models were combined to be a combined prediction model which has weight coefficient. Shapley value method was applied to calculate the weight coefficient by assessing contributions. Finally, the combined model was used to recalculate the number of death tolls from 2002 to 2011, and the combined model was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data characteristics but also quantify the degree of influence to the death toll by each influencing factor and had high accuracy as well as strong practicability. PMID:25610454

  3. Combined Prediction Model of Death Toll for Road Traffic Accidents Based on Independent and Dependent Variables

    Directory of Open Access Journals (Sweden)

    Feng Zhong-xiang

    2014-01-01

    Full Text Available In order to build a combined model which can meet the variation rule of death toll data for road traffic accidents and can reflect the influence of multiple factors on traffic accidents and improve prediction accuracy for accidents, the Verhulst model was built based on the number of death tolls for road traffic accidents in China from 2002 to 2011; and car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors to build the death toll multivariate linear regression model. Then the two models were combined to be a combined prediction model which has weight coefficient. Shapley value method was applied to calculate the weight coefficient by assessing contributions. Finally, the combined model was used to recalculate the number of death tolls from 2002 to 2011, and the combined model was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data characteristics but also quantify the degree of influence to the death toll by each influencing factor and had high accuracy as well as strong practicability.

  4. An IL28B genotype-based clinical prediction model for treatment of chronic hepatitis C.

    Directory of Open Access Journals (Sweden)

    Thomas R O'Brien

    Full Text Available BACKGROUND: Genetic variation in IL28B and other factors is associated with sustained virological response (SVR) after pegylated-interferon/ribavirin treatment for chronic hepatitis C (CHC). Using data from the HALT-C Trial, we developed a model to predict a patient's probability of SVR based on IL28B genotype and clinical variables. METHODS: HALT-C enrolled patients with advanced CHC who had failed previous interferon-based treatment. Subjects were re-treated with pegylated-interferon/ribavirin during the trial lead-in. We used step-wise logistic regression to calculate adjusted odds ratios (aOR) and create the predictive model. Leave-one-out cross-validation was used to predict a priori probabilities of SVR and determine the area under the receiver operator characteristics curve (AUC). RESULTS: Among 646 HCV genotype 1-infected European American patients, 14.2% achieved SVR. IL28B rs12979860-CC genotype was the strongest predictor of SVR (aOR, 7.56). Subjects with a predicted probability of SVR above 10% (43.3% of subjects) had an SVR rate of 27.9% and accounted for 84.8% of subjects actually achieving SVR. To verify that consideration of both IL28B genotype and clinical variables is required for treatment decisions, we calculated AUC values from published data for the IDEAL Study. CONCLUSION: A clinical prediction model based on IL28B genotype and clinical variables can yield useful individualized predictions of the probability of treatment success that could increase SVR rates and decrease the frequency of futile treatment among patients with CHC.

  5. Accurate Mobility Modeling and Location Prediction Based on Pattern Analysis of Handover Series in Mobile Networks

    Directory of Open Access Journals (Sweden)

    Péter Fülöp

    2009-01-01

    Full Text Available The efficient dimensioning of cellular wireless access networks depends highly on the accuracy of the underlying mathematical models of user distribution and traffic estimation. Mobility prediction is also considered an effective method contributing to the accuracy of IP multicast based multimedia transmissions and ad hoc routing algorithms. In this paper we focus on the tradeoff between the accuracy and the complexity of the mathematical models used to describe user movements in the network. We propose mobility model extensions that utilize the user's movement history, thus providing more accurate results than other widely used models in the literature. The new models are applicable in real-life scenarios because they rely on additional information effectively available in cellular networks (e.g., handover history). The complexity of the proposed models is analyzed, and the accuracy is justified by means of simulation.

  6. Autopilot Design Method for the Blended Missile Based on Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Baoqing Yang

    2015-01-01

    Full Text Available This paper develops a novel autopilot design method for blended missiles with aerodynamic control surfaces and lateral jets. Firstly, the nonlinear model of blended missiles is reduced to a piecewise affine (PWA) model according to the aerodynamic properties. Secondly, based on the equivalence between the PWA model and the mixed logical dynamical (MLD) model, the MLD model of blended missiles is proposed, taking into account the on-off constraints of lateral pulse jets. Thirdly, a hybrid model predictive control (MPC) method is employed to design the autopilot. Finally, simulation results under different conditions are presented to show the effectiveness of the proposed method, which demonstrate that control allocation between aerodynamic control surfaces and lateral jets is realized by adjusting the weighting matrix in an index function.

  7. Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model

    Science.gov (United States)

    Morin, Mario A.; Ficarazzo, Francesco

    2006-04-01

    Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of the size distribution of rock fragments have been developed. In this study, a blast fragmentation Monte Carlo-based simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of explosives and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
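
    The Monte Carlo idea in this record can be sketched in a few lines. The snippet below is an illustrative Python example under assumed input distributions (not the simulator described in the paper): uncertain Kuz-Ram inputs are sampled, the Kuznetsov equation gives a mean fragment size per trial, and a Rosin-Rammler distribution yields the fraction passing a chosen screen size.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # Monte Carlo trials

# Uncertain blast-design inputs (illustrative distributions, not site data)
A   = rng.normal(7.0, 0.7, N)      # rock factor
K   = rng.normal(0.55, 0.05, N)    # powder factor, kg/m^3
Q   = rng.normal(90.0, 5.0, N)     # explosive mass per hole, kg
RWS = rng.normal(100.0, 3.0, N)    # relative weight strength of explosive
n   = rng.normal(1.4, 0.1, N)      # Rosin-Rammler uniformity index

# Kuznetsov equation for the mean fragment size (cm)
Xm = A * K**(-0.8) * Q**(1 / 6) * (115.0 / RWS)**(19 / 20)
X50 = Xm  # treat the mean size as the 50 % passing size for this sketch

# Fraction passing a 30 cm screen under the Rosin-Rammler distribution
x = 30.0
passing = 1.0 - np.exp(-0.693 * (x / X50)**n)

print("median passing @30 cm :", np.median(passing))
print("5th-95th percentile   :", np.percentile(passing, [5, 95]))
```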

  8. Seasonal drought ensemble predictions based on multiple climate models in the upper Han River Basin, China

    Science.gov (United States)

    Ma, Feng; Ye, Aizhong; Duan, Qingyun

    2017-03-01

    An experimental seasonal drought forecasting system is developed based on 29-year (1982-2010) seasonal meteorological hindcasts generated by the climate models from the North American Multi-Model Ensemble (NMME) project. This system made use of a bias correction and spatial downscaling method, and a distributed time-variant gain model (DTVGM) hydrologic model. DTVGM was calibrated using observed daily hydrological data and its streamflow simulations achieved Nash-Sutcliffe efficiency values of 0.727 and 0.724 during calibration (1978-1995) and validation (1996-2005) periods, respectively, at the Danjiangkou reservoir station. The experimental seasonal drought forecasting system (known as NMME-DTVGM) is used to generate seasonal drought forecasts. The forecasts were evaluated against the reference forecasts (i.e., persistence forecast and climatological forecast). The NMME-DTVGM drought forecasts have higher detectability and accuracy and lower false alarm rate than the reference forecasts at different lead times (from 1 to 4 months) during the cold-dry season. No apparent advantage is shown in drought predictions during spring and summer seasons because of a long memory of the initial conditions in spring and a lower predictive skill for precipitation in summer. Overall, the NMME-based seasonal drought forecasting system has meaningful skill in predicting drought several months in advance, which can provide critical information for drought preparedness and response planning as well as the sustainable practice of water resource conservation over the basin.
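
    The Nash-Sutcliffe efficiency quoted for the DTVGM calibration is a standard skill score; a small sketch of how it is computed (the streamflow numbers below are placeholders, not Danjiangkou data):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical daily streamflow (m^3/s) at a reservoir station
observed  = np.array([120.0, 135.0, 150.0, 110.0, 95.0, 102.0])
simulated = np.array([118.0, 140.0, 145.0, 115.0, 98.0, 100.0])
print(nash_sutcliffe(observed, simulated))   # values near 1 indicate a good fit
```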

  9. Prediction of TF target sites based on atomistic models of protein-DNA complexes

    Directory of Open Access Journals (Sweden)

    Collado-Vides Julio

    2008-10-01

    Full Text Available Abstract Background The specific recognition of genomic cis-regulatory elements by transcription factors (TFs) plays an essential role in the regulation of coordinated gene expression. Studying the mechanisms determining binding specificity in protein-DNA interactions is thus an important goal. Most current approaches for modeling TF specific recognition rely on the knowledge of large sets of cognate target sites and consider only the information contained in their primary sequence. Results Here we describe a structure-based methodology for predicting sequence motifs starting from the coordinates of a TF-DNA complex. Our algorithm combines information regarding the direct and indirect readout of DNA into an atomistic statistical model, which is used to estimate the interaction potential. We first measure the ability of our method to correctly estimate the binding specificities of eight prokaryotic and eukaryotic TFs that belong to different structural superfamilies. Secondly, the method is applied to two homology models, finding that sampling of interface side-chain rotamers remarkably improves the results. Thirdly, the algorithm is compared with a reference structural method based on contact counts, obtaining comparable predictions for the experimental complexes and more accurate sequence motifs for the homology models. Conclusion Our results demonstrate that atomic-detail structural information can be feasibly used to predict TF binding sites. The computational method presented here is universal and might be applied to other systems involving protein-DNA recognition.

  10. Physiologically Based Pharmacokinetic Modeling Framework for Quantitative Prediction of an Herb–Drug Interaction

    Science.gov (United States)

    Brantley, S J; Gufford, B T; Dua, R; Fediuk, D J; Graf, T N; Scarlett, Y V; Frederick, K S; Fisher, M B; Oberlies, N H; Paine, M F

    2014-01-01

    Herb–drug interaction predictions remain challenging. Physiologically based pharmacokinetic (PBPK) modeling was used to improve prediction accuracy of potential herb–drug interactions using the semipurified milk thistle preparation, silibinin, as an exemplar herbal product. Interactions between silibinin constituents and the probe substrates warfarin (CYP2C9) and midazolam (CYP3A) were simulated. A low silibinin dose (160 mg/day × 14 days) was predicted to increase midazolam area under the curve (AUC) by 1%, which was corroborated with external data; a higher dose (1,650 mg/day × 7 days) was predicted to increase midazolam and (S)-warfarin AUC by 5% and 4%, respectively. A proof-of-concept clinical study confirmed minimal interaction between high-dose silibinin and both midazolam and (S)-warfarin (9 and 13% increase in AUC, respectively). Unexpectedly, (R)-warfarin AUC decreased (by 15%), but this is unlikely to be clinically important. Application of this PBPK modeling framework to other herb–drug interactions could facilitate development of guidelines for quantitative prediction of clinically relevant interactions. PMID:24670388

  11. A QoS-Satisfied Prediction Model for Cloud-Service Composition Based on a Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Qingtao Wu

    2013-01-01

    Full Text Available Various significant issues in cloud computing, such as service provision, service matching, and service assessment, have attracted researchers’ attention recently. Quality of service (QoS) plays an increasingly important role in the provision of cloud-based services, by aiming for the seamless and dynamic integration of cloud-service components. In this paper, we focus on QoS-satisfied predictions about the composition of cloud-service components and present a QoS-satisfied prediction model based on a hidden Markov model. In providing a cloud-based service for a user, if the user’s QoS cannot be satisfied by a single cloud-service component, component composition should be considered, where its QoS-satisfied capability needs to be proactively predicted to be able to guarantee the user’s QoS. We discuss the proposed model in detail and prove some aspects of the model. Simulation results show that our model can achieve high prediction accuracies.
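
    As a rough illustration of a hidden-Markov-model treatment of QoS satisfaction (the states, observation alphabet and probabilities below are invented for the example, not taken from the paper), the forward algorithm can track the posterior probability that a composed service is currently QoS-satisfied and project it one step ahead:

```python
import numpy as np

# Hypothetical 2-state HMM: hidden states = {QoS-satisfied, QoS-violated},
# observations = discretised response-time levels {low, medium, high}
A = np.array([[0.85, 0.15],     # state transition probabilities
              [0.30, 0.70]])
B = np.array([[0.6, 0.3, 0.1],  # emission probabilities per state
              [0.1, 0.3, 0.6]])
pi = np.array([0.7, 0.3])       # initial state distribution

def forward(obs):
    """Forward algorithm: posterior over the hidden state after the last observation."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha / alpha.sum()

# Observed response-time levels of a composed service over five invocations
history = [0, 0, 1, 2, 2]
posterior = forward(history)
print("P(QoS satisfied at next step):", (posterior @ A)[0])
```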

  12. AN AIR POLLUTION PREDICTION TECHNIQUE FOR URBAN DISTRICTS BASED ON MESO-SCALE NUMERICAL MODEL

    Institute of Scientific and Technical Information of China (English)

    YAN Jing-hua; XU Jian-ping

    2005-01-01

    Taking Shenzhen city as an example, the statistical and physical relationships between pollutant density and various atmospheric parameters are analyzed in detail, and a space-partitioned city air pollution potential prediction scheme is established on this basis. The scheme quantitatively considers more than ten factors at the surface and in the planetary boundary layer (PBL), especially the effects of the anisotropy of the geographical environment, and treats wind direction as an independent impact factor. The scheme uses separate prediction equations for different pollutants according to their different dilution properties, and also accounts for possible differences in dilution behavior between districts of the city under the same atmospheric conditions, issuing predictions separately for each district. Finally, temporally and spatially high-resolution predictions of the atmospheric factors are made with a high-resolution numerical model, and the space-partitioned, time-varying city pollution potential predictions are then produced. The scheme is objective and quantitative, with clear physical meaning, so it is suitable for making high-resolution air pollution predictions.

  13. A rule based fuzzy model for the prediction of petrophysical rock parameters

    Energy Technology Data Exchange (ETDEWEB)

    Finol, J.; Jing, X.D. [T.H. Huxley School of Environment, Earth Sciences and Engineering, Imperial College, Prince Consort Road, SW7 2BP London (United Kingdom); Ke Guo, Y. [Fujitsu Parallel Computing Centre, Department of Computing, Imperial College, SW7 2BZ London (United Kingdom)

    2001-04-01

    A new approach for the prediction of petrophysical rock parameters based on a rule-based fuzzy model is presented. The rule-based fuzzy model corresponds to the Takagi-Sugeno-Kang method of fuzzy reasoning proposed by Sugeno and his co-authors. This fuzzy model is defined by a set of fuzzy implications with linear consequent parts, each of which establishes a local linear input-output relationship between the variables of the model. In this approach, a fuzzy clustering algorithm is combined with the least-square approximation method to identify the structure and parameters of the fuzzy model from sets of numerical data. To verify the effectiveness of the proposed fuzzy modeling method, two examples are developed using core and electrical log data from three oil wells in Ceuta Field, Lake Maracaibo Basin. The numerical results of the fuzzy modelling method are compared with the results of a conventional linear regression model. It is shown that the fuzzy modeling approach is not only more accurate than the conventional regression approach but also provides some qualitative information about the underlying complexities of the porous system.

  14. Genetic Modeling of GIS-Based Cell Clusters and Its Application in Mineral Resources Prediction

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper presents a synthetic analysis method for multi-sourced geological data from geographic information systems (GIS). In previous practice of mineral resources prediction, the usual methodology has been statistical analysis of cells delimited on the basis of random sampling. That can lead to insufficient utilization of local spatial information, because a cell is treated as a point without internal structure. We instead take "cell clusters", i.e., spatial associations of cells, as the basic statistical units, so the spatial configuration information of geological variables is more readily detected and utilized, and the accuracy and reliability of prediction are improved. We build a linear multi-discriminating model for the clusters via a genetic algorithm. Both the correct-classification rates and the within-class versus between-class distance ratios are considered in forming the evolutionary fitness values of the population. An application of the method to gold mineral resources prediction in east Xinjiang, China is presented.

  15. Advanced Emergency Braking Control Based on a Nonlinear Model Predictive Algorithm for Intelligent Vehicles

    Directory of Open Access Journals (Sweden)

    Ronghui Zhang

    2017-05-01

    Full Text Available Focusing on safety and comfort, with the overall aim of comprehensively improving a vision-based intelligent vehicle, a novel Advanced Emergency Braking System (AEBS) is proposed based on a nonlinear model predictive algorithm. Considering the nonlinearities of vehicle dynamics, a vision-based longitudinal vehicle dynamics model is established. On account of the nonlinear coupling characteristics of the driver, surroundings, and vehicle itself, a hierarchical control structure is proposed to decouple and coordinate the system. To avoid or reduce the collision risk between the intelligent vehicle and collision objects, a coordinated cost function of tracking safety, comfort, and fuel economy is formulated. Based on the terminal constraints of stable tracking, a multi-objective optimization controller is proposed using the theory of nonlinear model predictive control. To quickly and precisely track the control target in finite time, an electronic brake controller for the AEBS is designed based on the Nonsingular Fast Terminal Sliding Mode (NFTSM) control theory. To validate the performance and advantages of the proposed algorithm, simulations are implemented. According to the simulation results, the proposed algorithm has better integrated performance in reducing the collision risk and improving the driving comfort and fuel economy of the smart car compared with the existing single AEBS.

  16. Multivariate Autoregressive Model Based Heart Motion Prediction Approach for Beating Heart Surgery

    Directory of Open Access Journals (Sweden)

    Fan Liang

    2013-02-01

    Full Text Available A robotic tool can enable a surgeon to conduct off-pump coronary artery bypass graft surgery on a beating heart. The robotic tool actively alleviates the relative motion between the point of interest (POI) on the heart surface and the surgical tool and allows the surgeon to operate as if the heart were stationary. Since the beating heart's motion is of relatively high bandwidth, with nonlinear and nonstationary characteristics, it is difficult to follow. Thus, precise beating heart motion prediction is necessary for the tracking control procedure during the surgery. In the research presented here, we first observe that the electrocardiography (ECG) signal contains causal phase information on heart motion and nonstationary heart rate dynamic variations. Then, we investigate the relationship between the ECG signal and beating heart motion using Granger causality analysis, which indicates the feasibility of improved prediction of heart motion. Next, we propose a nonlinear time-varying multivariate vector autoregressive (MVAR) model based adaptive prediction method. In this model, the significant correlation between ECG and heart motion enables the improvement of the prediction of sharp changes in heart motion and the approximation of the motion with sufficient detail. Dual Kalman filters (DKF) estimate the states and parameters of the model, respectively. Last, we evaluate the proposed algorithm through comparative experiments using two sets of collected in vivo data.

  17. Hyperspectral-based predictive modelling of grapevine water status in the Portuguese Douro wine region

    Science.gov (United States)

    Pôças, Isabel; Gonçalves, João; Costa, Patrícia Malva; Gonçalves, Igor; Pereira, Luís S.; Cunha, Mario

    2017-06-01

    In this study, hyperspectral reflectance (HySR) data derived from a handheld spectroradiometer were used to assess the water status of three grapevine cultivars in two sub-regions of the Douro wine region during two consecutive years. A large set of potential predictors derived from the HySR data were considered for modelling/predicting the predawn leaf water potential (Ψpd) through different statistical and machine learning techniques. Three HySR vegetation indices were selected as final predictors for the computation of the models, and the in-season time trend was removed from the data by using a time predictor. The vegetation indices selected were the Normalized Reflectance Index for the wavelengths 554 nm and 561 nm (NRI554;561), the water index (WI) for the wavelengths 900 nm and 970 nm, and the D1 index, which is associated with the rate of reflectance increase at the wavelengths of 706 nm and 730 nm. These vegetation indices covered the green, red edge and near infrared domains of the electromagnetic spectrum. A large set of state-of-the-art statistical and machine-learning modelling techniques were tested. Predictive modelling techniques based on the generalized boosted model (GBM), bagged multivariate adaptive regression splines (B-MARS), the generalized additive model (GAM), and Bayesian regularized neural networks (BRNN) showed the best performance for predicting Ψpd, with an average determination coefficient (R2) ranging between 0.78 and 0.80 and RMSE varying between 0.11 and 0.12 MPa. When cultivar Touriga Nacional was used for training the models and the cultivars Touriga Franca and Tinta Barroca for testing (independent validation), the models' performance was good, particularly for GBM (R2 = 0.85; RMSE = 0.09 MPa). Additionally, the comparison of observed and predicted Ψpd showed an equitable dispersion of data from the various cultivars. The results achieved show a good potential of these predictive models based on vegetation indices to support

  18. Analysis and Prediction of Rural Residents’ Living Consumption Growth in Sichuan Province Based on Markov Prediction and ARMA Model

    Institute of Scientific and Technical Information of China (English)

    LU Xiao-li

    2012-01-01

    I select 32 samples concerning per capita living consumption of rural residents in Sichuan Province during the period 1978-2009. First, using Markov prediction method, the growth rate of living consumption level in the future is predicted to largely range from 10% to 20%. Then, in order to improve the prediction accuracy, time variable t is added into the traditional ARMA model for modeling and prediction. The prediction results show that the average relative error rate is 1.56%, and the absolute value of relative error during the period 2006-2009 is less than 0.5%. Finally, I compare the prediction results during the period 2010-2012 by Markov prediction method and ARMA model, respectively, indicating that the two are consistent in terms of growth rate of living consumption, and the prediction results are reliable. The results show that under the similar policies, rural residents’ consumer demand in Sichuan Province will continue to grow in the short term, so it is necessary to further expand the consumer market.

  19. Economic Model Predictive Control for Hot Water Based Heating Systems in Smart Buildings

    DEFF Research Database (Denmark)

    Awadelrahman, M. A. Ahmed; Zong, Yi; Li, Hongwei

    2017-01-01

    This paper presents a study to optimize the heating energy costs in a residential building with varying electricity price signals based on an Economic Model Predictive Controller (EMPC). The investigated heating system consists of an air source heat pump (ASHP) incorporated with a hot water tank...... as active Thermal Energy Storage (TES), where two optimization problems are integrated together to optimize both the ASHP electricity consumption and the building heating consumption utilizing a heat dynamic model of the building. The results show that the proposed EMPC can save the energy cost by load...

  20. Predicting speech intelligibility in adverse conditions: evaluation of the speech-based envelope power spectrum model

    DEFF Research Database (Denmark)

    2011-01-01

    The speech-based envelope power spectrum model (sEPSM) [Jørgensen and Dau (2011). J. Acoust. Soc. Am., 130 (3), 1475–1487] estimates the envelope signal-to-noise ratio (SNRenv) of distorted speech and accurately describes the speech recognition thresholds (SRT) for normal-hearing listeners...... conditions by comparing predictions to measured data from [Kjems et al. (2009). J. Acoust. Soc. Am. 126 (3), 1415-1426] where speech is mixed with four different interferers, including speech-shaped noise, bottle noise, car noise, and cafe noise. The model accounts well for the differences in intelligibility...

  1. A drifting trajectory prediction model based on object shape and stochastic motion features

    Institute of Scientific and Technical Information of China (English)

    王胜正; 聂皓冰; 施朝健

    2014-01-01

    There is a huge demand to develop a method for marine search and rescue (SAR) operators to automatically predict the most probable search area of a drifting object. This paper presents a novel drifting prediction model to improve the accuracy of the drifting trajectory computation of sea-surface objects. First, a new drifting kinetic model based on the geometric characteristics of the objects is proposed that involves the effects of the object shape and stochastic motion features in addition to the traditional factors of wind and currents. Then, a computer simulation-based method is employed to analyze the stochastic motion features of the drifting objects, which is applied to estimate the uncertainty parameters of the stochastic factors of the drifting objects. Finally, the accuracy of the model is evaluated by comparison with flume experiment results. It is shown that the proposed method can be used for objects of various shapes in drifting trajectory prediction and in maritime search and rescue decision-making systems.

  2. Factors influencing protein tyrosine nitration – structure-based predictive models

    Science.gov (United States)

    Bayden, Alexander S.; Yakovlev, Vasily A.; Graves, Paul R.; Mikkelsen, Ross B.; Kellogg, Glen E.

    2010-01-01

    Models for exploring tyrosine nitration in proteins have been created based on 3D structural features of 20 proteins for which high resolution X-ray crystallographic or NMR data are available and for which nitration of 35 total tyrosines has been experimentally proven under oxidative stress. Factors suggested in previous work to enhance nitration were examined with quantitative structural descriptors. The role of neighboring acidic and basic residues is complex: for the majority of tyrosines that are nitrated, the distance to the heteroatom of the closest charged sidechain corresponds to the distance needed for suspected nitrating species to form hydrogen bond bridges between the tyrosine and that charged amino acid. This suggests that such bridges play a very important role in tyrosine nitration. Nitration is generally hindered for tyrosines that are buried and for those tyrosines where there is insufficient space for the nitro group. For in vitro nitration, closed environments with nearby heteroatoms or unsaturated centers that can stabilize radicals are somewhat favored. Four quantitative structure-based models, depending on the conditions of nitration, have been developed for predicting site-specific tyrosine nitration. The best model, relevant for both in vitro and in vivo cases, predicts 30 of 35 tyrosine nitrations (positive predictive value) and has a sensitivity of 60/71 (11 false positives). PMID:21172423
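
    For readers unfamiliar with the reported figures, sensitivity and positive predictive value follow directly from the confusion counts; a trivial sketch with illustrative numbers (not the paper's exact confusion matrix):

```python
def sensitivity_ppv(tp, fp, fn):
    """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# Illustrative counts only
tp, fp, fn = 60, 11, 11
sens, ppv = sensitivity_ppv(tp, fp, fn)
print(f"sensitivity = {sens:.2f}, PPV = {ppv:.2f}")
```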

  3. Tuning SISO offset-free Model Predictive Control based on ARX models

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2012-01-01

    present MPC for SISO systems based on ARX models combined with the first order filter. We derive expressions for the closed-loop variance of the unconstrained MPC based on a state space representation in innovation form and use these expressions to develop a tuning procedure for the regulator. We...... establish formal equivalence between GPC and state space based off-set free MPC. By simulation we demonstrate this procedure for a third order system. The offset-free ARX MPC demonstrates satisfactory set point tracking and rejection of an unmeasured step disturbance for a simulated furnace with a long time...

  4. Antenna pointing system for satellite tracking based on Kalman filtering and model predictive control techniques

    Science.gov (United States)

    Souza, André L. G.; Ishihara, João Y.; Ferreira, Henrique C.; Borges, Renato A.; Borges, Geovany A.

    2016-12-01

    The present work proposes a new approach for an antenna pointing system for satellite tracking. Such a system uses the received signal to estimate the beam pointing deviation and then adjusts the antenna pointing. The present work has two contributions. First, the estimation is performed by a Kalman filter based conical scan technique. This technique uses the Kalman filter avoiding the batch estimator and applies a mathematical manipulation avoiding the linearization approximations. Secondly, a control technique based on the model predictive control together with an explicit state feedback solution are obtained in order to reduce the computational burden. Numerical examples illustrate the results.

  5. Multivariate Radiological-Based Models for the Prediction of Future Knee Pain: Data from the OAI

    Directory of Open Access Journals (Sweden)

    Jorge I. Galván-Tejada

    2015-01-01

    Full Text Available In this work, the potential of X-ray based multivariate prognostic models to predict the onset of chronic knee pain is presented. Using quantitative X-ray image assessments of joint space width (JSW) and paired semiquantitative central X-ray scores from the Osteoarthritis Initiative (OAI), a case-control study is presented. The pain assessments of the right knee at the baseline and 60-month visits were used to screen for case/control subjects. Scores were analyzed at the time of pain incidence (T-0), the year prior to incidence (T-1), and two years before pain incidence (T-2). Multivariate models were created with a cross-validated elastic-net regularized generalized linear model feature-selection tool. Univariate differences between cases and controls were reported by AUC, C-statistics, and odds ratios. Univariate analysis indicated that medial osteophytes were significantly more prevalent in cases than controls: C-stat 0.62, 0.62, and 0.61 at T-0, T-1, and T-2, respectively. The multivariate JSW models significantly predicted pain: AUC = 0.695, 0.623, and 0.620 at T-0, T-1, and T-2, respectively. Semiquantitative multivariate models predicted pain with C-stat = 0.671, 0.648, and 0.645 at T-0, T-1, and T-2, respectively. Multivariate models derived from plain X-ray radiography assessments may be used to predict subjects that are at risk of developing knee pain.
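
    A compact sketch of an elastic-net regularized feature-selection step of the kind described here, using scikit-learn with synthetic stand-in data (the feature matrix, outcome and hyper-parameters are assumptions for illustration, not the OAI variables):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 20))   # e.g. JSW measures and semiquantitative scores
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=400) > 0).astype(int)  # 1 = incident knee pain

# Elastic-net regularised logistic regression; non-zero coefficients act as selected features
enet = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, C=0.5, max_iter=5000)
auc = cross_val_score(enet, X, y, cv=5, scoring="roc_auc").mean()

enet.fit(X, y)
selected = np.flatnonzero(enet.coef_[0])
print("cross-validated AUC     :", round(auc, 3))
print("selected feature indices:", selected)
```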

  6. A Bayesian network model for predicting type 2 diabetes risk based on electronic health records

    Science.gov (United States)

    Xie, Jiang; Liu, Yan; Zeng, Xu; Zhang, Wu; Mei, Zhen

    2017-07-01

    An extensive, in-depth study of diabetes risk factors (DBRF) is of crucial importance to prevent (or reduce) the chance of suffering from type 2 diabetes (T2D). Accumulation of electronic health records (EHRs) makes it possible to build nonlinear relationships between risk factors and diabetes. However, current DBRF research mainly focuses on qualitative analyses, and the inconsistency of physical examination items means that risk factors are likely to be missed, which motivates us to study novel machine learning approaches for risk model development. In this paper, we use Bayesian networks (BNs) to analyze the relationship between physical examination information and T2D, and to quantify the link between risk factors and T2D. Furthermore, with the quantitative analyses of DBRF, we adopt EHRs and propose a machine learning approach based on BNs to predict the risk of T2D. The experiments demonstrate that our approach can lead to better predictive performance than the classical risk model.

  7. LMI-Based Generation of Feedback Laws for a Robust Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, Behcet; Carson, John M., III

    2007-01-01

    This technical note provides a mathematical proof of Corollary 1 from the paper 'A Nonlinear Model Predictive Control Algorithm with Proven Robustness and Resolvability' that appeared in the 2006 Proceedings of the American Control Conference. The proof was omitted for brevity in that publication. The paper was based on algorithms developed for the FY2005 R&TD (Research and Technology Development) project for Small-body Guidance, Navigation, and Control [2]. The framework established by the Corollary is for a robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. Additional details of the framework are available in the publication.

  8. An Efficient Constrained Model Predictive Control Algorithm Based on Approximate Computation

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The on-line computational burden related to model predictive control (MPC) of large-scale constrained systems hampers its real-time application and limits it to slow dynamic processes with a moderate number of inputs. To avoid this, an efficient and fast algorithm based on aggregation optimization is proposed in this paper. It optimizes only the current control action at time instant k, while the other future control moves in the optimization horizon are approximated off-line by a linear feedback control sequence, so the on-line optimization can be converted into a low-dimensional quadratic programming problem. Input constraints can be handled well in this scheme. Comparable performance is achieved with the existing standard model predictive control algorithm. Simulation results demonstrate its effectiveness.
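
    The aggregation idea, optimizing only the current move while approximating future moves with a fixed feedback law, reduces the on-line problem to a one-dimensional constrained search for a SISO plant. A minimal sketch under an assumed toy plant and feedback gain (not the paper's system or algorithm):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simple SISO state-space plant (assumed for illustration)
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
K = np.array([[0.5, 1.2]])      # off-line feedback law approximating future moves

Q, R = 1.0, 0.1                 # output and input weights
N = 20                          # prediction horizon
u_min, u_max = -1.0, 1.0        # input constraints

def cost(u0, x0, r):
    """Predicted cost when only u(k)=u0 is free and u(k+i) = -K x(k+i) afterwards."""
    x, J = x0.copy(), 0.0
    for i in range(N):
        u = u0 if i == 0 else (-K @ x).item()
        u = float(np.clip(u, u_min, u_max))
        x = A @ x + B.flatten() * u
        J += Q * (C @ x - r).item() ** 2 + R * u ** 2
    return J

def mpc_step(x0, r=1.0):
    """One-dimensional constrained optimisation replaces the full QP."""
    res = minimize_scalar(cost, args=(x0, r), bounds=(u_min, u_max), method="bounded")
    return res.x

x = np.array([0.0, 0.0])
print("first control move:", mpc_step(x))
```

    Because only u(k) is free, the on-line search stays one-dimensional regardless of the horizon length, which is where the computational saving comes from.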

  9. Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.

    Science.gov (United States)

    Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei

    2016-02-01

    A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.

  10. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-09-27

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017. Published by Elsevier B.V.
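
    The Ogata-Banks solution adopted as the physically-based data model has a closed form that is straightforward to evaluate; a sketch with assumed transport parameters (not values from the EIT site):

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, C0=1.0):
    """1-D advection-dispersion solution for a continuous source at x = 0 (Ogata-Banks).

    x : distance from the source [m], t : time [s]
    v : average linear velocity [m/s], D : dispersion coefficient [m^2/s]
    """
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * C0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

# Hypothetical parameters, loosely in the spirit of a CO2 monitoring well
t = np.linspace(1.0, 3600.0 * 24 * 30, 200)        # one month of observations
conc = ogata_banks(x=5.0, t=t, v=1e-5, D=1e-4)     # breakthrough curve at 5 m
```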

  11. A prediction model of radiation-induced necrosis for intracranial radiosurgery based on target volume.

    Science.gov (United States)

    Zhao, Bo; Wen, Ning; Chetty, Indrin J; Huang, Yimei; Brown, Stephen L; Snyder, Karen C; Siddiqui, Farzan; Movsas, Benjamin; Siddiqui, M Salim

    2017-08-01

    This study aims to extend the observation that the 12 Gy radiosurgical volume (V12Gy) correlates with the incidence of radiation necrosis in patients with intracranial tumors treated with radiosurgery by using target volume to predict V12Gy. V12Gy based on the target volume was used to predict the radiation necrosis probability (P) directly. Also investigated was the reduction in radiation necrosis rates (ΔP) as a result of optimizing the prescription isodose lines for linac-based SRS. Twenty concentric spherical targets and 22 patients with brain tumors were retrospectively studied. For each case, a standard clinical plan and an optimized plan with prescription isodose lines based on gradient index were created. V12Gy was extracted from both plans to analyze the correlation between V12Gy and target volume. The necrosis probability P as a function of V12Gy was evaluated. To account for variation in prescription, the relation between V12Gy and prescription was also investigated. A prediction model for radiation-induced necrosis was presented based on the retrospective study. The model directly relates the typical prescribed dose and the target volume to the radionecrosis probability; V12Gy increased linearly with the target volume (R2 > 0.99). The linear correlation was then integrated into a logistic model to predict P directly from the target volume. The change in V12Gy as a function of prescription was modeled using a single parameter, s (=-1.15). Relatively large ΔP was observed for target volumes between 7 and 28 cm3, with the maximum reduction (8-9%) occurring at approximately 18 cm3. Based on the model results, optimizing the prescription isodose line for target volumes between 7 and 28 cm3 results in a significant reduction in necrosis probability. V12Gy based on the target volume could provide clinicians a predictor of radiation necrosis at the contouring stage, thus facilitating treatment decisions. © 2017 American Association of
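
    The two-stage structure of the model, a linear map from target volume to V12Gy feeding a logistic dose-response, can be sketched as follows; the coefficients are placeholders chosen for illustration and are not the fitted values from this study:

```python
import numpy as np

# Illustrative coefficients only; the paper fits these from its own plan data
a, b = 1.7, 2.5           # V12Gy ~ a * target_volume + b   (cm^3)
beta0, beta1 = -3.0, 0.1  # logistic model: logit(P) = beta0 + beta1 * V12Gy

def necrosis_probability(target_volume_cc):
    """Predict necrosis probability directly from the target volume."""
    v12 = a * target_volume_cc + b                      # predicted 12 Gy volume
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * v12)))

for vol in (5.0, 18.0, 28.0):
    print(vol, "cm^3 ->", round(necrosis_probability(vol), 3))
```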

  12. Predictive Modeling of Tacrolimus Dose Requirement Based on High-Throughput Genetic Screening.

    Science.gov (United States)

    Damon, C; Luck, M; Toullec, L; Etienne, I; Buchler, M; Hurault de Ligny, B; Choukroun, G; Thierry, A; Vigneau, C; Moulin, B; Heng, A-E; Subra, J-F; Legendre, C; Monnot, A; Yartseva, A; Bateson, M; Laurent-Puig, P; Anglicheau, D; Beaune, P; Loriot, M A; Thervet, E; Pallet, N

    2017-04-01

    Any biochemical reaction underlying drug metabolism depends on individual gene-drug interactions and on groups of genes interacting together. Based on a high-throughput genetic approach, we sought to identify a set of covariant single-nucleotide polymorphisms predictive of interindividual tacrolimus (Tac) dose requirement variability. Tac blood concentrations (Tac C0 ) of 229 kidney transplant recipients were repeatedly monitored after transplantation over 3 mo. Given the high dimension of the genomic data in comparison to the low number of observations and the high multicolinearity among the variables (gene variants), we developed an original predictive approach that integrates an ensemble variable-selection strategy to reinforce the stability of the variable-selection process and multivariate modeling. Our predictive models explained up to 70% of total variability in Tac C0 per dose with a maximum of 44 gene variants (p-value <0.001 with a permutation test). These models included molecular networks of drug metabolism with oxidoreductase activities and the multidrug-resistant ABCC8 transporter, which was found in the most stringent model. Finally, we identified an intronic variant of the gene encoding SLC28A3, a drug transporter, as a key gene involved in Tac metabolism, and we confirmed it in an independent validation cohort. © 2016 The American Society of Transplantation and the American Society of Transplant Surgeons.

  13. Introduction of a prediction model to assigning periodontal prognosis based on survival rates.

    Science.gov (United States)

    Martinez-Canut, Pedro; Alcaraz, Jaime; Alcaraz, Jaime; Alvarez-Novoa, Pablo; Alvarez-Novoa, Carmen; Marcos, Ana; Noguerol, Blas; Noguerol, Fernando; Zabalegui, Ion

    2017-09-04

    To develop a prediction model for tooth loss due to periodontal disease (TLPD) in patients following periodontal maintenance (PM), and to assess its performance using a multicentre approach. A multilevel analysis of eleven predictors of TLPD in 500 patients following PM was carried out to calculate the probability of TLPD. This algorithm was applied to three different TLPD samples (369 teeth) gathered retrospectively by nine periodontists, associating several intervals of probability with the corresponding survival rates, based on significant differences in the mean survival rates. The reproducibility of these associations was assessed in each sample (one-way ANOVA and pair-wise comparisons with Bonferroni corrections). The model presented high specificity and moderate sensitivity, with optimal calibration and discrimination measurements. Seven intervals of probability were associated with seven survival rates, and these associations contained close to 80% of the cases: the probability predicted the survival rate at this percentage. The model performed well in the three samples, since the mean survival rates of each association were significantly different within each sample, while no significant differences between the samples were found in pair-wise comparisons of means. This model might be useful for predicting survival rates in different TLPD samples. This article is protected by copyright. All rights reserved.

  14. Constrained generalized predictive control of battery charging process based on a coupled thermoelectric model

    Science.gov (United States)

    Liu, Kailong; Li, Kang; Zhang, Cheng

    2017-04-01

    Battery temperature is a primary factor affecting battery performance, and suitable battery temperature control, in particular internal temperature control, can not only guarantee battery safety but also improve its efficiency. This is however challenging, as current controller designs for battery charging have no mechanisms to incorporate such information. This paper proposes a novel battery charging control strategy which applies constrained generalized predictive control (GPC) to charge a LiFePO4 battery based on a newly developed coupled thermoelectric model. The control target primarily aims to maintain the battery cell internal temperature within a desirable range while delivering fast charging. To achieve this, the coupled thermoelectric model is firstly introduced to capture the battery behaviours, in particular SOC and internal temperature, which are not directly measurable in practice. Then a controlled auto-regressive integrated moving average (CARIMA) model, whose parameters are identified by the recursive least squares (RLS) algorithm, is developed as an online self-tuning predictive model for a GPC controller. The constrained generalized predictive controller is then developed to control the charging current. Experimental results confirm the effectiveness of the proposed control strategy. Further, the best region of heat dissipation rate and proper internal temperature set-points are also investigated and analysed.
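
    The on-line self-tuning element described here rests on recursive least squares; a small sketch of RLS identifying a first-order ARX model on synthetic charging data (the model order, forgetting factor and signals are assumptions for illustration, not the paper's CARIMA structure):

```python
import numpy as np

class RLS:
    """Recursive least squares with forgetting, for on-line ARX parameter updates."""
    def __init__(self, n_params, lam=0.98):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3
        self.lam = lam

    def update(self, phi, y):
        phi = np.asarray(phi, float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta += k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Identify y(k) = -a1*y(k-1) + b1*u(k-1) on synthetic battery-like data
rng = np.random.default_rng(0)
a1_true, b1_true = -0.95, 0.05
u = rng.uniform(0, 10, 500)     # charging current
y = np.zeros(500)               # e.g. internal temperature deviation
rls = RLS(n_params=2)
for k in range(1, 500):
    y[k] = -a1_true * y[k - 1] + b1_true * u[k - 1] + rng.normal(0, 0.01)
    theta = rls.update([-y[k - 1], u[k - 1]], y[k])
print("estimated [a1, b1]:", theta)
```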

  15. Predicting Postfire Hillslope Erosion with a Web-based Probabilistic Model

    Science.gov (United States)

    Robichaud, P. R.; Elliot, W. J.; Pierson, F. B.; Hall, D. E.; Moffet, C. A.

    2005-12-01

    Modeling erosion after major disturbances, such as wildfire, has major challenges that need to be overcome. Fire-induced changes include increased erosion due to loss of the protective litter and duff, loss of soil water storage, and in some cases, creation of water repellent soil conditions. These conditions increase the potential for flooding and sedimentation, which are of special concern to people who live and manage resources in the areas adjacent to burned areas. A web-based Erosion Risk Management Tool (ERMiT), has been developed to predict surface erosion from postfire hillslopes and to evaluate the potential effectiveness of various erosion mitigation practices. The model uses a probabilistic approach that incorporates variability in weather, soil properties, and burn severity for forests, rangeland, and chaparral hillslopes. The Water Erosion Prediction Project (WEPP) is the erosion prediction engine used in a Monte Carlo simulation mode to provide event-based erosion rate probabilities. The one-page custom interface is targeted for hydrologists and soil scientists. The interface allows users to select climate, soil texture, burn severity, and hillslope topography. For a given hillslope, the model uses a single 100-year run to obtain weather variability and then twenty 5- to 10-year runs to incorporate soil property, cover, and spatial burn severity variability. The output, in both tabular and graphical form, relates the probability of soil erosion exceeding a given amount in each of the first five years following the fire. Event statistics are provided to show the magnitude and rainfall intensity of the storms used to predict erosion rates. ERMiT also allows users to compare the effects of various mitigation treatments (mulches, seeding, and barrier treatments such as contour-felled logs or straw wattles) on the erosion rate probability. Data from rainfall simulation and concentrated flow (rill) techniques were used to parameterize ERMiT for these varied

  16. Constrained predictive control based on T-S fuzzy model for nonlinear systems

    Institute of Scientific and Technical Information of China (English)

    Su Baili; Chen Zengqiang; Yuan Zhuzhi

    2007-01-01

    A constrained generalized predictive control (GPC) algorithm based on the T-S fuzzy model is presented for the nonlinear system. First, a Takagi-Sugeno (T-S) fuzzy model based on the fuzzy cluster algorithm and the orthogonal least square method is constructed to approach the nonlinear system. Since its consequence is linear, it can divide the nonlinear system into a number of linear or nearly linear subsystems. For this T-S fuzzy model, a GPC algorithm with input constraints is presented.This strategy takes into account all the constraints of the control signal and its increment, and does not require the calculation of the Diophantine equations. So it needs only a small computer memory and the computational speed is high. The simulation results show a good performance for the nonlinear systems.

  17. A residual life prediction model based on the generalized σ -N curved surface

    Directory of Open Access Journals (Sweden)

    Zongwen AN

    2016-06-01

    Full Text Available In order to investigate the variation of the residual life of a structure under random repeated load, firstly, starting from the statistical meaning of random repeated load, the joint probability density function of maximum stress and minimum stress is derived based on the characteristics of order statistics (the maximum order statistic and the minimum order statistic); then, based on the equation of the generalized σ-N curved surface, and considering the influence of the number of load cycles on fatigue life, a relationship among minimum stress, maximum stress and residual life, that is, the σmin(n)-σmax(n)-Nr(n) curved surface model, is established; finally, the validity of the proposed model is demonstrated by a practical case. The result shows that the proposed model can reflect the influence of maximum stress and minimum stress on the residual life of a structure under random repeated load, which can provide a theoretical basis for life prediction and reliability assessment of structures.

  18. Enabling Persistent Autonomy for Underwater Gliders with Ocean Model Predictions and Terrain Based Navigation

    Directory of Open Access Journals (Sweden)

    Andrew eStuntz

    2016-04-01

    Full Text Available Effective study of ocean processes requires sampling over the duration of long (weeks to months) oscillation patterns. Such sampling requires persistent, autonomous underwater vehicles that have a similarly long deployment duration. The spatiotemporal dynamics of the ocean environment, coupled with limited communication capabilities, make navigation and localization difficult, especially in coastal regions where the majority of interesting phenomena occur. In this paper, we consider the combination of two methods for reducing navigation and localization error: a predictive approach based on ocean model predictions and a prior-information approach derived from terrain-based navigation. The motivation for this work is not only real-time state estimation, but also accurate reconstruction of the actual path that the vehicle traversed to contextualize the gathered data with respect to the science question at hand. We present an application for the practical use of priors and predictions for large-scale ocean sampling. This combined approach builds upon previous works by the authors, and accurately localizes the traversed path of an underwater glider over long-duration ocean deployments. The proposed method takes advantage of the reliable, short-term predictions of an ocean model, and the utility of priors used in terrain-based navigation over areas of significant bathymetric relief, to bound uncertainty error in dead-reckoning navigation. This method improves upon our previously published works by (1) demonstrating the utility of our terrain-based navigation method with multiple field trials, and (2) presenting a hybrid algorithm that combines both approaches to bound navigational error and uncertainty for long-term deployments of underwater vehicles. We demonstrate the approach by examining data from actual field trials with autonomous underwater gliders, and demonstrate an ability to estimate geographical location of an underwater glider to 2

  19. A STREAMLINE-BASED PREDICTIVE MODEL FOR ENHANCED-OIL-RECOVERY POTENTIALITY

    Institute of Scientific and Technical Information of China (English)

    HOU Jian; ZHANG Shun-kang; DU Qing-jun; LI Yu-bin

    2008-01-01

    A pseudo-three-dimensional model for predicting enhanced-oil-recovery potential is proposed, based on the streamline method described in this article. The potential distribution of flow through a porous medium under complicated boundary conditions is solved with the boundary element method. Furthermore, a method for tracing streamlines between injection wells and producing wells is presented. Based on these results, a numerical solution can be obtained by solving the seepage problem of the stream tube with consideration of different Enhanced Oil Recovery (EOR) methods. The advantage of the method given in this article is that it can perform dynamic calculations for well patterns of any shape, easily accommodate different physicochemical phenomena, and offer short calculation times and good stability. On the common theoretical basis of the streamline method, different models, including CO2 miscible flooding, polymer flooding, alkaline/surfactant/polymer flooding and microbial flooding, are established in this article.

  20. Predicting commuter flows in spatial networks using a radiation model based on temporal ranges

    CERN Document Server

    Ren, Yihui; Wang, Pu; González, Marta C; Toroczkai, Zoltán

    2014-01-01

    Understanding network flows such as commuter traffic in large transportation networks is an ongoing challenge due to the complex nature of the transportation infrastructure and of human mobility. Here we show a first-principles based method for traffic prediction using a cost based generalization of the radiation model for human mobility, coupled with a cost-minimizing algorithm for efficient distribution of the mobility fluxes through the network. Using US census and highway traffic data we show that traffic can efficiently and accurately be computed from a range-limited, network betweenness type calculation. The model based on travel time costs captures the lognormal distribution of the traffic and attains a high Pearson correlation coefficient (0.75) when compared to real traffic. Due to its principled nature, this method can inform many applications related to human mobility driven flows in spatial networks, ranging from transportation, through urban planning to mitigation of the effects of catastrophic e...

  1. Predicting commuter flows in spatial networks using a radiation model based on temporal ranges

    Science.gov (United States)

    Ren, Yihui; Ercsey-Ravasz, Mária; Wang, Pu; González, Marta C.; Toroczkai, Zoltán

    2014-11-01

    Understanding network flows such as commuter traffic in large transportation networks is an ongoing challenge due to the complex nature of the transportation infrastructure and human mobility. Here we show a first-principles based method for traffic prediction using a cost-based generalization of the radiation model for human mobility, coupled with a cost-minimizing algorithm for efficient distribution of the mobility fluxes through the network. Using US census and highway traffic data, we show that traffic can efficiently and accurately be computed from a range-limited, network betweenness type calculation. The model based on travel time costs captures the log-normal distribution of the traffic and attains a high Pearson correlation coefficient (0.75) when compared with real traffic. Because of its principled nature, this method can inform many applications related to human mobility driven flows in spatial networks, ranging from transportation, through urban planning to mitigation of the effects of catastrophic events.
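
    The (un-generalized) radiation model underlying this work has a simple closed form for the expected flux between two locations; a sketch with invented populations follows (the paper's generalization replaces the geometric radius with a travel-time cost when accumulating s_ij):

```python
def radiation_flux(Ti, m_i, n_j, s_ij):
    """Expected commuter flux i -> j under the radiation model.

    Ti   : total number of commuters leaving location i
    m_i  : population of the origin
    n_j  : population of the destination
    s_ij : population inside the circle of radius r_ij centred at i
           (excluding the origin and destination populations)
    """
    return Ti * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

# Illustrative numbers only
print(radiation_flux(Ti=5000, m_i=80_000, n_j=120_000, s_ij=300_000))
```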

  2. An artificial neural network prediction model of congenital heart disease based on risk factors

    Science.gov (United States)

    Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun

    2017-01-01

    Abstract An artificial neural network (ANN) model was developed to predict the risks of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls, all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to fill in a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set at the ratio of 85:15. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and to develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed on SPSS 18.0. The ANN models were developed on Matlab 7.1. The univariate logistic regression identified 15 predictors that were significantly associated with CHD, including education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around the maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetables/fruit (0.45), intake of fish/shrimp/meat/eggs (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neuron in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively. The sensitivity, specificity, and Youden index on the testing set (training set) are 0.78 (0.83), 0.90 (0

  3. A Machine Learning based Efficient Software Reusability Prediction Model for Java Based Object Oriented Software

    Directory of Open Access Journals (Sweden)

    Surbhi Maggo

    2014-01-01

    Full Text Available Software reuse refers to the development of new software systems with the likelihood of completely or partially using existing components or resources, with or without modification. Reusability is the measure of the ease with which previously acquired concepts and objects can be used in new contexts. It is a promising strategy for improvements in software quality, productivity and maintainability, as it provides for cost effective, reliable (with the consideration that prior testing and use has eliminated bugs) and accelerated (reduced time to market) development of software products. In this paper we present an efficient automation model for the identification and evaluation of reusable software components to measure the reusability levels (high, medium or low) of procedure oriented Java based (object oriented) software systems. The presented model uses a metric framework for the functional analysis of the object oriented software components that targets essential attributes of reusability analysis, also taking into consideration the Maintainability Index to account for partial reuse. Further, the machine learning algorithm LMNN is explored to establish relationships between the functional attributes. The model works at the functional level rather than at the structural level. The system is implemented as a tool in Java, and the performance of the automation tool developed is recorded using criteria like precision, recall, accuracy and error rate. The results gathered indicate that the model can be effectively used as an efficient, accurate, fast and economic model for the identification of procedure based reusable components from the existing inventory of software resources.

  4. Alignment and prediction of cis-regulatory modules based on a probabilistic model of evolution.

    Science.gov (United States)

    He, Xin; Ling, Xu; Sinha, Saurabh

    2009-03-01

    Cross-species comparison has emerged as a powerful paradigm for predicting cis-regulatory modules (CRMs) and understanding their evolution. The comparison requires reliable sequence alignment, which remains a challenging task for less conserved noncoding sequences. Furthermore, the existing models of DNA sequence evolution generally do not explicitly treat the special properties of CRM sequences. To address these limitations, we propose a model of CRM evolution that captures different modes of evolution of functional transcription factor binding sites (TFBSs) and the background sequences. A particularly novel aspect of our work is a probabilistic model of gains and losses of TFBSs, a process being recognized as an important part of regulatory sequence evolution. We present a computational framework that uses this model to solve the problems of CRM alignment and prediction. Our alignment method is similar to existing methods of statistical alignment but uses the conserved binding sites to improve alignment. Our CRM prediction method deals with the inherent uncertainties of binding site annotations and sequence alignment in a probabilistic framework. In simulated as well as real data, we demonstrate that our program is able to improve both alignment and prediction of CRM sequences over several state-of-the-art methods. Finally, we used alignments produced by our program to study binding site conservation in genome-wide binding data of key transcription factors in the Drosophila blastoderm, with two intriguing results: (i) the factor-bound sequences are under strong evolutionary constraints even if their neighboring genes are not expressed in the blastoderm and (ii) binding sites in distal bound sequences (relative to transcription start sites) tend to be more conserved than those in proximal regions. Our approach is implemented as software, EMMA (Evolutionary Model-based cis-regulatory Module Analysis), ready to be applied in a broad biological context.

  5. Alignment and prediction of cis-regulatory modules based on a probabilistic model of evolution.

    Directory of Open Access Journals (Sweden)

    Xin He

    2009-03-01

    Full Text Available Cross-species comparison has emerged as a powerful paradigm for predicting cis-regulatory modules (CRMs) and understanding their evolution. The comparison requires reliable sequence alignment, which remains a challenging task for less conserved noncoding sequences. Furthermore, the existing models of DNA sequence evolution generally do not explicitly treat the special properties of CRM sequences. To address these limitations, we propose a model of CRM evolution that captures different modes of evolution of functional transcription factor binding sites (TFBSs) and the background sequences. A particularly novel aspect of our work is a probabilistic model of gains and losses of TFBSs, a process being recognized as an important part of regulatory sequence evolution. We present a computational framework that uses this model to solve the problems of CRM alignment and prediction. Our alignment method is similar to existing methods of statistical alignment but uses the conserved binding sites to improve alignment. Our CRM prediction method deals with the inherent uncertainties of binding site annotations and sequence alignment in a probabilistic framework. In simulated as well as real data, we demonstrate that our program is able to improve both alignment and prediction of CRM sequences over several state-of-the-art methods. Finally, we used alignments produced by our program to study binding site conservation in genome-wide binding data of key transcription factors in the Drosophila blastoderm, with two intriguing results: (i) the factor-bound sequences are under strong evolutionary constraints even if their neighboring genes are not expressed in the blastoderm and (ii) binding sites in distal bound sequences (relative to transcription start sites) tend to be more conserved than those in proximal regions. Our approach is implemented as software, EMMA (Evolutionary Model-based cis-regulatory Module Analysis), ready to be applied in a broad biological context.

  6. Recent developments in predictive uncertainty assessment based on the model conditional processor approach

    Directory of Open Access Journals (Sweden)

    G. Coccia

    2011-10-01

    Full Text Available The work aims at discussing the role of predictive uncertainty in flood forecasting and flood emergency management, its relevance to improve the decision making process and the techniques to be used for its assessment.

    Real time flood forecasting requires taking into account predictive uncertainty for a number of reasons. Deterministic hydrological/hydraulic forecasts give useful information about real future events, but their predictions, as usually done in practice, cannot be taken and used as real future occurrences but rather used as pseudo-measurements of future occurrences in order to reduce the uncertainty of decision makers. Predictive Uncertainty (PU) is in fact defined as the probability of occurrence of a future value of a predictand (such as water level, discharge or water volume) conditional upon prior observations and knowledge as well as on all the information we can obtain on that specific future value from model forecasts. When dealing with commensurable quantities, as in the case of floods, PU must be quantified in terms of a probability distribution function which will be used by the emergency managers in their decision process in order to improve the quality and reliability of their decisions.

    After introducing the concept of PU, the presently available processors are introduced and discussed in terms of their benefits and limitations. In this work the Model Conditional Processor (MCP) has been extended to the possibility of using two joint Truncated Normal Distributions (TNDs), in order to improve adaptation to low and high flows.

    The paper concludes by showing the results of the application of the MCP on two case studies, the Po river in Italy and the Baron Fork river, OK, USA. In the Po river case the data provided by the Civil Protection of the Emilia Romagna region have been used to implement an operational example, where the predicted variable is the observed water level. In the Baron Fork River

  7. Development of a Dynamics-Based Statistical Prediction Model for the Changma Onset

    Science.gov (United States)

    Park, H. L.; Seo, K. H.; Son, J. H.

    2015-12-01

    The timing of the changma onset has high impacts on the Korean Peninsula, yet its seasonal prediction remains a great challenge because the changma undergoes various influences from the tropics, subtropics, and midlatitudes. In this study, a dynamics-based statistical prediction model for the changma onset is proposed. This model utilizes three predictors of slowly varying sea surface temperature anomalies (SSTAs) over the northern tropical central Pacific, the North Atlantic, and the North Pacific occurring in the preceding spring season. SSTAs associated with each predictor persist until June and have an effect on the changma onset by inducing an anticyclonic anomaly to the southeast of the Korean Peninsula earlier than the climatological changma onset date. The persisting negative SSTAs over the northern tropical central Pacific and accompanying anomalous trade winds induce enhanced convection over the far-western tropical Pacific; in turn, these induce a cyclonic anomaly over the South China Sea and an anticyclonic anomaly southeast of the Korean Peninsula. Diabatic heating and cooling tendency related to the North Atlantic dipolar SSTAs induces downstream Rossby wave propagation in the upper troposphere, developing a barotropic anticyclonic anomaly to the south of the Korean Peninsula. A westerly wind anomaly at around 45°N resulting from the developing positive SSTAs over the North Pacific directly reduces the strength of the Okhotsk high and gives rise to an anticyclonic anomaly southeast of the Korean Peninsula. With the dynamics-based statistical prediction model, it is demonstrated that the changma onset has considerable predictability of r = 0.73 for the period from 1982 to 2014.
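
    A minimal sketch of the statistical component described above: regress the changma onset date on three spring SSTA indices and score the cross-validated correlation skill reported in the abstract. The predictor and predictand arrays here are random placeholders, not the authors' data.

```python
# Sketch: multiple regression of onset date on three spring SSTA predictors.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

years = np.arange(1982, 2015)
rng = np.random.default_rng(1)
# Columns: tropical central Pacific, North Atlantic dipole, North Pacific SSTA indices
X = rng.standard_normal((years.size, 3))
onset_anom = rng.standard_normal(years.size)   # changma onset anomaly in days (placeholder)

pred = cross_val_predict(LinearRegression(), X, onset_anom, cv=LeaveOneOut())
skill = np.corrcoef(pred, onset_anom)[0, 1]    # the paper reports r = 0.73 for 1982-2014
print(skill)
```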

  8. An ANN-GA model based promoter prediction in Arabidopsis thaliana using tilling microarray data

    Science.gov (United States)

    Mishra, Hrishikesh; Singh, Nitya; Misra, Krishna; Lahiri, Tapobrata

    2011-01-01

    Identification of promoter region is an important part of gene annotation. Identification of promoters in eukaryotes is important as promoters modulate various metabolic functions and cellular stress responses. In this work, a novel approach utilizing intensity values of tilling microarray data for a model eukaryotic plant, Arabidopsis thaliana, was used to distinguish promoter regions from non-promoter regions. A feed-forward back propagation neural network model supported by a genetic algorithm was employed to predict the class of data with a window size of 41. A dataset comprising 2992 data vectors representing both promoter and non-promoter regions, chosen randomly from probe intensity vectors for the whole genome of Arabidopsis thaliana generated through the tilling microarray technique, was used. The classifier model shows prediction accuracy of 69.73% and 65.36% on the training and validation sets, respectively. Further, a concept of distance-based class membership was used to validate the reliability of the classifier, which showed promising results. The study shows the usability of microarray probe intensities to predict the promoter regions in eukaryotic genomes. PMID:21887014

  9. Improved Quality Prediction Model for Multistage Machining Process Based on Geometric Constraint Equation

    Institute of Scientific and Technical Information of China (English)

    ZHU Limin; HE Gaiyun; SONG Zhanjie

    2016-01-01

    Product variation reduction is critical to improve process efficiency and product quality, especially for multistage machining process (MMP). However, due to the variation accumulation and propagation, it becomes quite difficult to predict and reduce product variation for MMP. While the method of statistical process control can be used to control product quality, it is used mainly to monitor the process change rather than to analyze the cause of product variation. In this paper, based on a differential description of the contact kinematics of locators and part surfaces, and the geometric constraints equation defined by the locating scheme, an improved analytical variation propagation model for MMP is presented. In this model, the influence of both locator position and machining error on part quality is considered, whereas traditional models usually focus on datum error and fixture error. Coordinate transformation theory is used to reflect the generation and transmission laws of error in the establishment of the model. The concept of deviation matrix is heavily applied to establish an explicit mapping between the geometric deviation of part and the process error sources. In each machining stage, the part deviation is formulated as three separate components corresponding to three different kinds of error sources, which can be further applied to fault identification and design optimization for complicated machining process. An example part for MMP is given to validate the effectiveness of the methodology. The experimental results show that the model prediction and the actual measurement match well. This paper provides a method to predict part deviation under the influence of fixture error, datum error and machining error, and it enriches the way of quality prediction for MMP.

  10. Developing An Explanatory Prediction Model Based On Rainfall Patterns For Cholera Outbreaks In Africa

    Science.gov (United States)

    van der Merwe, M. R.; Du Preez, M.

    2012-12-01

    area) and Uganda (inland area) which is not only based on correlation study results but also on the identification of cause-effect mechanisms. This is done by following an integrative multidisciplinary approach which involves the integration of laboratory and field study results, in situ and satellite data, and modeled data. We conclude that a prediction model for early warning and intervention purposes needs to be based on the identification and understanding of cause-effect mechanisms associated with the correlation between cholera outbreaks and rainfall; be parametrized for local conditions; and be based on a driver(s) or proxy for a driver(s) which allows sufficient time for decision makers to act.

  11. Analysis of direct contact membrane distillation based on a lumped-parameter dynamic predictive model

    KAUST Repository

    Karam, Ayman M.

    2016-10-03

    Membrane distillation (MD) is an emerging technology that has a great potential for sustainable water desalination. In order to pave the way for successful commercialization of MD-based water desalination techniques, adequate and accurate dynamical models of the process are essential. This paper presents the predictive capabilities of a lumped-parameter dynamic model for direct contact membrane distillation (DCMD) and discusses the results under a wide range of steady-state and dynamic conditions. Unlike previous studies, the proposed model captures the time response of the spatial temperature distribution along the flow direction. It also directly solves for the local temperatures at the membrane interfaces, which makes it possible to accurately model and calculate local flux values along with other intrinsic variables of great influence on the process, like the temperature polarization coefficient (TPC). The proposed model is based on energy and mass conservation principles and the analogy between thermal and electrical systems. Experimental data was collected to validate the steady-state and dynamic responses of the model. The obtained results show great agreement with the experimental data. The paper discusses the results of several simulations under various conditions to optimize the DCMD process efficiency and analyze its response. This demonstrates some potential applications of the proposed model to carry out scale-up and design studies.
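
    A toy lumped-parameter sketch in the spirit of the energy balances mentioned above, not the authors' model: one feed-side and one permeate-side control volume of a DCMD cell coupled through an overall heat-transfer conductance. All symbols and numbers (UA, flow rate, capacitances, inlet temperatures) are illustrative assumptions.

```python
# Sketch: dynamic energy balances for two coupled DCMD channel nodes.
from scipy.integrate import solve_ivp

UA = 50.0                     # overall membrane/boundary-layer conductance [W/K] (assumed)
m_dot, cp = 0.02, 4180.0      # flow rate [kg/s], heat capacity [J/(kg K)] (assumed)
C_f = C_p = 2.0 * cp          # thermal capacitance of each channel [J/K] (assumed)
T_f_in, T_p_in = 60.0, 20.0   # hot feed and cold permeate inlet temperatures [degC]

def rhs(t, T):
    Tf, Tp = T
    q = UA * (Tf - Tp)                               # heat crossing the membrane
    dTf = (m_dot * cp * (T_f_in - Tf) - q) / C_f     # feed-side energy balance
    dTp = (m_dot * cp * (T_p_in - Tp) + q) / C_p     # permeate-side energy balance
    return [dTf, dTp]

sol = solve_ivp(rhs, (0.0, 600.0), [25.0, 25.0], max_step=1.0)
print(sol.y[:, -1])   # quasi-steady channel temperatures after 10 minutes
```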

  12. Long-term Failure Prediction based on an ARP Model of Global Risk Network

    Science.gov (United States)

    Lin, Xin; Moussawi, Alaa; Szymanski, Boleslaw; Korniss, Gyorgy

    Risks that threaten modern societies form an intricately interconnected network. Hence, it is important to understand how risk materializations in distinct domains influence each other. In the paper, we study the global risks network defined by World Economic Forum experts in the form of a Stochastic Block Model. We model risks as Alternating Renewal Processes with variable intensities driven by hidden values of exogenous and endogenous failure probabilities. Based on the expert assessments and historical status of each risk, we use Maximum Likelihood Evaluation to find the optimal model parameters and demonstrate that the model considering network effects significantly outperforms the others. In the talk, we discuss how the model can be used to provide quantitative means for measuring interdependencies and materialization of risks in the network. We also present recent results of long-term predictions in the form of predicted distributions of materializations over various time periods. Finally, we show how the simulation of ARPs enables us to probe the limits of the predictability of the system parameters from historical data and the ability to recover hidden variables. Supported in part by DTRA, ARL NS-CTA.

  13. A computational model for aperture control in reach-to-grasp movement based on predictive variability

    Directory of Open Access Journals (Sweden)

    Naohiro eTakemura

    2015-12-01

    Full Text Available In human reach-to-grasp movement, visual occlusion of a target object leads to a larger peak grip aperture compared to conditions where online vision is available. However, no previous computational and neural network models for reach-to-grasp movement explain the mechanism of this effect. We simulated the effect of online vision on the reach-to-grasp movement by proposing a computational control model based on the hypothesis that the grip aperture is controlled to compensate for both motor variability and sensory uncertainty. In this model, the aperture is formed to achieve a target aperture size that is sufficiently large to accommodate the actual target; it also includes a margin to ensure proper grasping despite sensory and motor variability. To this end, the model considers: (i) the variability of the grip aperture, which is predicted by the Kalman filter, and (ii) the uncertainty of the object size, which is affected by visual noise. Using this model, we simulated experiments in which the effect of the duration of visual occlusion was investigated. The simulation replicated the experimental result wherein the peak grip aperture increased when the target object was occluded, especially in the early phase of the movement. Both predicted motor variability and sensory uncertainty play important roles in the online visuomotor process responsible for grip aperture control.
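
    A schematic sketch of the idea summarized above: the planned peak aperture equals the estimated object size plus a safety margin that grows with both predicted motor variability and sensory (visual) uncertainty. The scalar Kalman-filter update and all numerical values are illustrative assumptions, not the authors' parameterization.

```python
# Sketch: aperture margin driven by motor variability plus visual uncertainty.
import numpy as np

def safety_margin(motor_var, visual_var, k=2.0):
    # margin scales with the total standard deviation of the uncertainty
    return k * np.sqrt(motor_var + visual_var)

# Scalar Kalman filter for the object-size estimate under visual noise.
size_est, P = 60.0, 100.0        # prior size estimate [mm] and its variance (assumed)
R_vision = 25.0                  # visual measurement noise variance (assumed)
motor_var = 16.0                 # predicted aperture variability [mm^2] (assumed)

for measurement in [62.0, 58.0, 61.0]:              # online visual samples
    K = P / (P + R_vision)                           # Kalman gain
    size_est = size_est + K * (measurement - size_est)
    P = (1.0 - K) * P                                # uncertainty shrinks with vision
    print(size_est + safety_margin(motor_var, P))    # planned peak aperture

# With vision occluded, P stays large, so the planned aperture stays larger,
# qualitatively reproducing the occlusion effect described in the abstract.
```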

  14. Use of different sampling schemes in machine learning-based prediction of hydrological models' uncertainty

    Science.gov (United States)

    Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann

    2013-04-01

    In recent years, a lot of attention in the hydrologic literature has been given to model parameter uncertainty analysis. The robustness estimation of uncertainty depends on the efficiency of the sampling method used to generate the best fit responses (outputs) and on ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimations of hydrological models, (2) how to use this information in machine learning predictors of model uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate uncertainty of streamflow simulation using the conceptual model HBV and the semi-distributed hydrological model SWAT. The Nzoia catchment in West Kenya is considered as the case study. The results are compared and analysed based on the shape of the posterior distribution of parameters and the uncertainty results on model outputs. The MLUE method [2] uses results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict uncertainty (quantiles of pdf) of the outputs of a hydrological model H. Inputs to these models are specially identified representative variables (past-event precipitation and flows). The trained machine learning models are then employed to predict the model output uncertainty which is specific for the new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best since there is no basis for comparison). A solution could be to form a committee of all models U and
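
    A compressed sketch of the MLUE idea outlined above: from Monte Carlo runs of a hydrological model, compute per-time-step prediction quantiles, then train a regression model to map input features (recent precipitation and flows) to those quantiles. The data, feature choices, and regressor are placeholders.

```python
# Sketch: train "model U" to predict uncertainty quantiles of "model H".
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_steps, n_mc = 500, 200
features = rng.random((n_steps, 4))            # e.g. lagged rainfall and flow (placeholder)
mc_runs = rng.random((n_mc, n_steps)) * 10.0   # streamflow from MC parameter samples (placeholder)

q05 = np.percentile(mc_runs, 5, axis=0)        # lower uncertainty bound per time step
q95 = np.percentile(mc_runs, 95, axis=0)       # upper uncertainty bound per time step

# One regressor per quantile plays the role of "model U" in the text.
lower = RandomForestRegressor(random_state=0).fit(features, q05)
upper = RandomForestRegressor(random_state=0).fit(features, q95)

new_event = rng.random((1, 4))                 # features of a new input event
print(lower.predict(new_event), upper.predict(new_event))
```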

  15. DSC modelling for predicting resilient modulus of crushed rock base as a road base material for Western Australia roads

    Institute of Scientific and Technical Information of China (English)

    KHOBKLANG Pakdee; VIMONSATIT Vanissorn; JITSANGIAM Peerapong; NIKRAZ Hamid

    2013-01-01

    In order to increase the applied efficiency of crushed rock base (CRB) in pavement structure design for Western Australia roads, the material modelling based on the experimental results was investigated, and the disturbed state concept (DSC) was used to predict the resilient modulus of CRB because of its simplicity and strong ability in capturing the elastic and inelastic responses of materials to loads. The actual deformation of DSC, at any loading state, was determined from its assumed relative intact (RI) state. The DSC equation of CRB was constructed by using a set of experimental results of resilient modulus tests, and an idealized material model, namely the linear elastic model, of relative intact (RI) part was considered. Analysis results reveal that the resilient modulus-applied stress relationships back-predicted by using the DSC modelling are consistent with the experimental results, so the DSC equation is suited for predicting the resilient modulus of CRB specimen. However, the model and the equation coming from the test results are conducted in accordance with the Austroads standard, so further investigation and validation with respect to the field behaviours of pavement structure should be performed. 7 figs, 11 refs.

  16. The k-nearest neighbour-based GMDH prediction model and its applications

    Science.gov (United States)

    Li, Qiumin; Tian, Yixiang; Zhang, Gaoxun

    2014-11-01

    This paper centres on a new GMDH (group method of data handling) algorithm based on the k-nearest neighbour (k-NN) method. Instead of the transfer function that has been used in traditional GMDH, the k-NN kernel function is adopted in the proposed GMDH to characterise relationships between the input and output variables. The proposed method combines the advantages of the k-nearest neighbour (k-NN) algorithm and GMDH algorithm, and thus improves the predictive capability of the GMDH algorithm. It has been proved that when the bandwidth of the kernel is less than a certain constant C, the predictive capability of the new model is superior to that of the traditional one. As an illustration, it is shown that the new method can accurately forecast consumer price index (CPI).
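
    A bare-bones sketch of one GMDH-style selection layer in which each pairwise partial model is a k-nearest-neighbour regressor instead of the usual polynomial transfer function, as described above. The data, the choice of k, and the external validation criterion are illustrative; the kernel-bandwidth condition is stated in the paper itself.

```python
# Sketch: k-NN partial models for every input pair, ranked by validation error.
import numpy as np
from itertools import combinations
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((200, 5))                                  # candidate input variables
y = 2 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.standard_normal(200)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

scores = {}
for i, j in combinations(range(X.shape[1]), 2):           # every pair of inputs
    model = KNeighborsRegressor(n_neighbors=7, weights="distance")
    model.fit(X_tr[:, [i, j]], y_tr)
    scores[(i, j)] = mean_squared_error(y_val, model.predict(X_val[:, [i, j]]))

# Keep the best pairwise models; their outputs would feed the next GMDH layer.
print(sorted(scores.items(), key=lambda kv: kv[1])[:3])
```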

  17. Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Man Hong

    2013-01-01

    Full Text Available To precisely track the reactor temperature in the entire working condition, the constrained Hammerstein-Wiener model describing nonlinear chemical processes such as in the continuous stirred tank reactor (CSTR) is proposed. A predictive control algorithm based on the Kalman filter for constrained Hammerstein-Wiener systems is designed. An output feedback control law regarding the linear subsystem is derived by state observation. The size of reaction heat produced and its influence on the output are evaluated by the Kalman filter. The observation and evaluation results are calculated by the multistep predictive approach. Actual control variables are computed while considering the constraints of the optimal control problem in a finite horizon through the receding horizon. The simulation example of the CSTR tester shows the effectiveness and feasibility of the proposed algorithm.

  18. Logic-based models in systems biology: a predictive and parameter-free network analysis method†

    Science.gov (United States)

    Wynn, Michelle L.; Consul, Nikita; Merajver, Sofia D.

    2012-01-01

    Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network’s dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples. PMID:23072820
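
    A tiny illustrative Boolean (logic-based) network of the kind discussed above: three hypothetical nodes with qualitative update rules and synchronous updating, requiring no kinetic parameters. The nodes and rules are invented for illustration, not taken from the paper.

```python
# Sketch: synchronous update of a three-node Boolean network.
def step(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": not c,          # C inhibits A
        "B": a,              # A activates B
        "C": a and b,        # C requires both A and B
    }

state = {"A": True, "B": False, "C": False}
for t in range(6):
    print(t, state)
    state = step(state)      # the trajectory settles into an attractor or cycle
```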

  19. Hierarchical model-based predictive control of a power plant portfolio

    DEFF Research Database (Denmark)

    Edlund, Kristian; Bendtsen, Jan Dimon; Jørgensen, John Bagterp

    2011-01-01

    control” – becomes increasingly important as the ratio of renewable energy in a power system grows. As a consequence, tomorrow's “smart grids” require highly flexible and scalable control systems compared to conventional power systems. This paper proposes a hierarchical model-based predictive control...... design for power system portfolio control, which aims specifically at meeting these demands. The design involves a two-layer hierarchical structure with clearly defined interfaces that facilitate an object-oriented implementation approach. The same hierarchical structure is reflected in the underlying

  20. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Directory of Open Access Journals (Sweden)

    S. Galelli

    2013-02-01

    Full Text Available Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modeling. In this paper we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modeling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalization property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally very efficient; and, (iii) allows to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analyzed on two real-world case studies (Marina catchment (Singapore) and Canning River (Western Australia)) representing two different morphoclimatic contexts, comparatively with other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparatively well to the best of the benchmarks (i.e. M5) in both the watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variable provided can be given a physically meaningful interpretation.
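
    A minimal sketch of the exercise described above: fit an extremely randomized tree ensemble to placeholder hydro-meteorological inputs and read off the variable importances used for ex-post interpretation. Input names, data, and the simple synthetic target are assumptions.

```python
# Sketch: Extra-Trees regression with variable-importance ranking.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
names = ["rain_t", "rain_t-1", "temp_t", "flow_t-1"]            # illustrative inputs
X = rng.random((1000, len(names)))
y = 3 * X[:, 0] + 2 * X[:, 3] + 0.1 * rng.standard_normal(1000)  # stand-in streamflow

model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(names, model.feature_importances_):
    print(name, round(imp, 3))      # relative importance of each input variable
```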

  1. Model-based Comparative Prediction of Transcription-Factor Binding Motifs in Anabolic Responses in Bone

    Institute of Scientific and Technical Information of China (English)

    Andy B. Chen; Kazunori Hamamura; Guohua Wang; Weirong Xing; Subburaman Mohan; Hiroki Yokota; Yunlong Liu

    2007-01-01

    Understanding the regulatory mechanism that controls the alteration of global gene expression patterns continues to be a challenging task in computational biology. We previously developed an ant algorithm, a biologically-inspired computational technique for microarray data, and predicted putative transcription-factor binding motifs (TFBMs) through mimicking interactive behaviors of natural ants. Here we extended the algorithm into a set of web-based software, Ant Modeler, and applied it to investigate the transcriptional mechanism underlying bone formation. Mechanical loading and administration of bone morphogenic proteins (BMPs) are two known treatments to strengthen bone. We addressed a question: Is there any TFBM that stimulates both "anabolic responses of mechanical loading" and "BMP-mediated osteogenic signaling"? Although there is no significant overlap among genes in the two responses, a comparative model-based analysis suggests that the two independent osteogenic processes employ common TFBMs, such as a stress responsive element and a motif for peroxisome proliferator-activated receptor (PPAR). The post-modeling in vitro analysis using mouse osteoblast cells supported involvements of the predicted TFBMs such as PPAR, Ikaros 3, and LMO2 in response to mechanical loading. Taken together, the results would be useful to derive a set of testable hypotheses and examine the role of specific regulators in complex transcriptional control of bone formation.

  2. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and, (iii) allows to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparatively well to the best of the benchmarks (i.e. M5) in both the watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variable provided can be given a physically meaningful interpretation.

  3. A Bayesian design space for analytical methods based on multivariate models and predictions.

    Science.gov (United States)

    Lebrun, Pierre; Boulanger, Bruno; Debrus, Benjamin; Lambert, Philippe; Hubert, Philippe

    2013-01-01

    The International Conference for Harmonization (ICH) has released regulatory guidelines for pharmaceutical development. In the document ICH Q8, the design space of a process is presented as the set of factor settings providing satisfactory results. However, ICH Q8 does not propose any practical methodology to define, derive, and compute design space. In parallel, in the last decades, it has been observed that the diversity and the quality of analytical methods have evolved exponentially, allowing substantial gains in selectivity and sensitivity. However, there is still a lack of a rationale toward the development of robust separation methods in a systematic way. Applying ICH Q8 to analytical methods provides a methodology for predicting a region of the space of factors in which results will be reliable. Combining design of experiments and Bayesian standard multivariate regression, the form of the predictive distribution of a new response vector has been identified and used, under noninformative as well as informative prior distributions of the parameters. From the responses and their predictive distribution, various critical quality attributes can be easily derived. This Bayesian framework was then extended to the multicriteria setting to estimate the predictive probability that several critical quality attributes will be jointly achieved in the future use of an analytical method. An example based on a high-performance liquid chromatography (HPLC) method is given. For this example, a constrained sampling scheme was applied to ensure the modeled responses have desirable properties.

  4. Scaling Properties of Rainfall-Induced Landslides Predicted by a Physically Based Model

    CERN Document Server

    Alvioli, M; Rossi, M

    2013-01-01

    Natural landslides exhibit scaling properties, including the frequency of the size of the landslides, and the rainfall conditions responsible for landslides. Reasons for the scaling behavior of landslides are poorly known, and only a few attempts were made to describe the empirical evidence of the self-similar scaling behavior of landslides with physically based models. We investigate the possibility of using the TRIGRS code, a consolidated, physically motivated, numerical model to describe the stability conditions of natural slopes forced by rainfall, to determine the frequency of the area of the unstable slopes and the rainfall intensity-duration (I-D) conditions that result in landslides in a region. We apply TRIGRS in a portion of the Upper Tiber River Basin, Central Italy. The spatially distributed model predicts the stability conditions of individual grid cells, given the terrain and rainfall conditions. We run TRIGRS using multiple rainfall histories, and we compare the results to empirical evidences o...

  5. Simulation of complex glazing products; from optical data measurements to model based predictive controls

    Energy Technology Data Exchange (ETDEWEB)

    Kohler, Christian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-04-01

    Complex glazing systems such as venetian blinds, fritted glass and woven shades require more detailed optical and thermal input data for their components than specular non light-redirecting glazing systems. Various methods for measuring these data sets are described in this paper. These data sets are used in multiple simulation tools to model the thermal and optical properties of complex glazing systems. The output from these tools can be used to generate simplified rating values or as an input to other simulation tools such as whole building annual energy programs, or lighting analysis tools. I also describe some of the challenges of creating a rating system for these products and which factors affect this rating. A potential future direction of simulation and building operations is model based predictive controls, where detailed computer models are run in real-time, receiving data for an actual building and providing control input to building elements such as shades.

  6. Passivity-based model predictive control for mobile vehicle motion planning

    CERN Document Server

    Tahirovic, Adnan

    2013-01-01

    Passivity-based Model Predictive Control for Mobile Vehicle Navigation represents a complete theoretical approach to the adoption of passivity-based model predictive control (MPC) for autonomous vehicle navigation in both indoor and outdoor environments. The brief also introduces analysis of the worst-case scenario that might occur during the task execution. Some of the questions answered in the text include: • how to use an MPC optimization framework for the mobile vehicle navigation approach; • how to guarantee safe task completion even in complex environments including obstacle avoidance and sideslip and rollover avoidance; and  • what to expect in the worst-case scenario in which the roughness of the terrain leads the algorithm to generate the longest possible path to the goal. The passivity-based MPC approach provides a framework in which a wide range of complex vehicles can be accommodated to obtain a safer and more realizable tool during the path-planning stage. During task execution, the optimi...

  7. PredSTP: a highly accurate SVM based model to predict sequential cystine stabilized peptides.

    Science.gov (United States)

    Islam, S M Ashiqul; Sajed, Tanvir; Kearney, Christopher Michel; Baker, Erich J

    2015-07-05

    Numerous organisms have evolved a wide range of toxic peptides for self-defense and predation. Their effective interstitial and macro-environmental use requires energetic and structural stability. One successful group of these peptides includes a tri-disulfide domain arrangement that offers toxicity and high stability. Sequential tri-disulfide connectivity variants create highly compact disulfide folds capable of withstanding a variety of environmental stresses. Their combination of toxicity and stability makes these peptides remarkably valuable for their potential as bio-insecticides, antimicrobial peptides and peptide drug candidates. However, the wide sequence variation, sources and modalities of group members impose serious limitations on our ability to rapidly identify potential members. As a result, there is a need for automated high-throughput member classification approaches that leverage their demonstrated tertiary and functional homology. We developed an SVM-based model to predict sequential tri-disulfide peptide (STP) toxins from peptide sequences. One optimized model, called PredSTP, predicted STPs from the training set with sensitivity, specificity, precision, accuracy and a Matthews correlation coefficient of 94.86%, 94.11%, 84.31%, 94.30% and 0.86, respectively, using 200-fold cross-validation. The same model outperforms existing prediction approaches in three independent out-of-sample test sets derived from PDB. PredSTP can accurately identify a wide range of cystine stabilized peptide toxins directly from sequences in a species-agnostic fashion. The ability to rapidly filter sequences for potential bioactive peptides can greatly compress the time between peptide identification and testing structural and functional properties for possible antimicrobial and insecticidal candidates. A web interface is freely available to predict STP toxins from http://crick.ecs.baylor.edu/.
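
    A schematic sketch of an SVM classifier evaluated with the statistics quoted above (sensitivity, specificity, precision, accuracy, Matthews correlation coefficient). The sequence features, labels, and fold count here are placeholders, not the PredSTP feature set or protocol.

```python
# Sketch: cross-validated SVM with the abstract's evaluation statistics.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix, matthews_corrcoef

rng = np.random.default_rng(0)
X = rng.random((400, 20))                 # per-peptide feature vectors (placeholder)
y = rng.integers(0, 2, size=400)          # 1 = STP toxin, 0 = non-STP (placeholder)

pred = cross_val_predict(SVC(kernel="rbf", C=1.0), X, y, cv=10)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("precision  ", tp / (tp + fp))
print("accuracy   ", (tp + tn) / len(y))
print("MCC        ", matthews_corrcoef(y, pred))
```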

  8. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    Energy Technology Data Exchange (ETDEWEB)

    Panebianco, Stefano; Lemaître, Jean-François; Sida, Jean-Luc [CEA Centre de Saclay, Gif-sur-Yvette (France); Dubray, Noël [CEA, DAM, DIF, Arpajon (France); Goriely, Stéphane [Institut d'Astronomie et d'Astrophysique, Université Libre de Bruxelles, Brussels (Belgium)

    2014-07-01

    Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed. (author)

  9. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    Science.gov (United States)

    Panebianco, Stefano; Dubray, Noël; Goriely, Stéphane; Hilaire, Stéphane; Lemaître, Jean-François; Sida, Jean-Luc

    2014-04-01

    Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed.

  10. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    Directory of Open Access Journals (Sweden)

    Panebianco Stefano

    2014-04-01

    Full Text Available Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed.

  11. Presentation of a model-based data mining to predict lung cancer.

    Science.gov (United States)

    Shahhoseini, Reza; Ghazvini, Ali; Esmaeilpour, Mansour; Pourtaghi, Gholamhossein; Tofighi, Shahram

    2015-01-01

    Patient data often contain very useful information that can help to resolve many problems and difficulties in different areas. This study was performed to present a model-based data mining approach to predict lung cancer in 2014. In this exploratory and modeling study, information was collected by two methods: library and field methods. All gathered variables came from a data-transfer form completed for those affected by pulmonary problems (303 records), covering 26 fields of clinical and environmental variables. The validity of the data-transfer form was established via a consensus and group-meeting method, using purposive sampling, through several meetings among members of the research group and the lung group. The methodology was based on the classification and prediction methods of data mining, as well as supervised learning with classification and regression tree algorithms, using Clementine 12 software. For clinical variables, the model's precision was high in the three parts of training, test and validation. For environmental variables, the maximum precision of the model in the training part (C&R algorithm) was 76%, in the test part (Neural Net algorithm) 61%, and in the validation part (Neural Net algorithm) 57%. For clinical variables, the C5.0, CHAID and C&R models were stable and suitable for detection of lung cancer. In addition, for environmental variables, the C&R model was stable and suitable for detection of lung cancer. Variables such as pulmonary nodules, pleural fluid effusion, diameter of pulmonary nodules, and location of pulmonary nodules are very important variables that have the greatest impact on detection of lung cancer.

  12. Enhancing the Lasso Approach for Developing a Survival Prediction Model Based on Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Shuhei Kaneko

    2015-01-01

    Full Text Available In the past decade, researchers in oncology have sought to develop survival prediction models using gene expression data. The least absolute shrinkage and selection operator (lasso) has been widely used to select genes that truly correlated with a patient’s survival. The lasso selects genes for prediction by shrinking a large number of coefficients of the candidate genes towards zero based on a tuning parameter that is often determined by a cross-validation (CV). However, this method can pass over (or fail to identify) true positive genes (i.e., it identifies false negatives) in certain instances, because the lasso tends to favor the development of a simple prediction model. Here, we attempt to monitor the identification of false negatives by developing a method for estimating the number of true positive (TP) genes for a series of values of a tuning parameter that assumes a mixture distribution for the lasso estimates. Using our developed method, we performed a simulation study to examine its precision in estimating the number of TP genes. Additionally, we applied our method to a real gene expression dataset and found that it was able to identify genes correlated with survival that a CV method was unable to detect.
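
    An illustrative sketch of the gene-selection step described above: a lasso whose tuning parameter is chosen by cross-validation, counting how many genes keep nonzero coefficients. The paper works with a survival (Cox) lasso; here a plain Gaussian lasso on a synthetic continuous outcome stands in to keep the example self-contained, and all data are simulated.

```python
# Sketch: CV-tuned lasso selecting a subset of "genes" with nonzero coefficients.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_patients, n_genes, n_true = 100, 1000, 10
X = rng.standard_normal((n_patients, n_genes))       # simulated gene expression matrix
beta = np.zeros(n_genes)
beta[:n_true] = 1.0                                   # 10 "true positive" genes by construction
y = X @ beta + rng.standard_normal(n_patients)       # stand-in for a survival risk score

fit = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(fit.coef_)
print("selected genes:", selected.size,
      "true positives among them:", np.intersect1d(selected, np.arange(n_true)).size)
```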

  13. Computational prediction of octanol-water partition coefficient based on the extended solvent-contact model.

    Science.gov (United States)

    Kim, Taeho; Park, Hwangseo

    2015-07-01

    The logarithm of the 1-octanol/water partition coefficient (LogP) is one of the most important molecular design parameters in drug discovery. Assuming that LogP can be calculated from the difference between the solvation free energy of a molecule in water and that in 1-octanol, we propose a method for predicting the molecular LogP values based on the extended solvent-contact model. To obtain the molecular solvation free energy data for the two solvents, a proper potential energy function was defined for each solvent with respect to atomic distributions and three kinds of atomic parameters. A total of 205 atomic parameters were optimized with the standard genetic algorithm using the training set consisting of 139 organic molecules with varying shapes and functional groups. The LogP values estimated with the two optimized solvation free energy functions compared reasonably well with the experimental results with the associated squared correlation coefficient and root mean square error of 0.824 and 0.697, respectively. Besides the prediction accuracy, the present method has merit in practical applications because molecular LogP values can be computed straightforwardly from the simple potential energy functions without the need to calculate various molecular descriptors. The methods for enhancing the accuracy of the present prediction model are also discussed.
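
    A direct transcription of the relation stated above: LogP follows from the difference of the two solvation free energies via log10 P = (ΔG_solv,water − ΔG_solv,octanol)/(2.303·R·T). The numerical free-energy values below are invented placeholders; in the paper they come from the optimized solvent-contact potential functions.

```python
# Sketch: LogP from the water/octanol solvation free-energy difference.
R = 8.314462618e-3      # gas constant [kJ/(mol K)]
T = 298.15              # temperature [K]

def log_p(dg_water_kj, dg_octanol_kj):
    # log10 P = (dG_solv(water) - dG_solv(octanol)) / (2.303 R T)
    return (dg_water_kj - dg_octanol_kj) / (2.303 * R * T)

# Hypothetical molecule, better solvated in octanol -> positive (lipophilic) LogP
print(log_p(-20.0, -30.0))
```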

  14. Disulfide Connectivity Prediction Based on Modelled Protein 3D Structural Information and Random Forest Regression.

    Science.gov (United States)

    Yu, Dong-Jun; Li, Yang; Hu, Jun; Yang, Xibei; Yang, Jing-Yu; Shen, Hong-Bin

    2015-01-01

    Disulfide connectivity is an important protein structural characteristic. Accurately predicting disulfide connectivity solely from protein sequence helps to improve the intrinsic understanding of protein structure and function, especially in the post-genome era, where a large volume of sequenced proteins without functional annotation is quickly accumulating. In this study, a new feature extracted from the predicted protein 3D structural information is proposed and integrated with traditional features to form discriminative features. Based on the extracted features, a random forest regression model is trained to predict protein disulfide connectivity. We compare the proposed method with popular existing predictors by performing both cross-validation and independent validation tests on benchmark datasets. The experimental results demonstrate the superiority of the proposed method over existing predictors. We believe the superiority of the proposed method benefits from both the good discriminative capability of the newly developed features and the powerful modelling capability of the random forest. The web server implementation, called TargetDisulfide, and the benchmark datasets are freely available at: http://csbio.njust.edu.cn/bioinf/TargetDisulfide for academic use.

  15. PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota

    2016-07-01

    PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs computation on server-side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems.

  16. A radar-based hydrological model for flash flood prediction in the dry regions of Israel

    Science.gov (United States)

    Ronen, Alon; Peleg, Nadav; Morin, Efrat

    2014-05-01

    Flash floods are floods which follow shortly after rainfall events, and are among the most destructive natural disasters that strike people and infrastructures in humid and arid regions alike. Using a hydrological model for the prediction of flash floods in gauged and ungauged basins can help mitigate the risk and damage they cause. The sparsity of rain gauges in arid regions requires the use of radar measurements in order to get reliable quantitative precipitation estimations (QPE). While many hydrological models use radar data, only a handful do so in dry climate. This research presents a robust radar-based hydro-meteorological model built specifically for dry climate. Using this model we examine the governing factors of flash floods in the arid and semi-arid regions of Israel in particular and in dry regions in general. The hydrological model built is a semi-distributed, physically-based model, which represents the main hydrological processes in the area, namely infiltration, flow routing and transmission losses. Three infiltration functions were examined - Initial & Constant, SCS-CN and Green&Ampt. The parameters for each function were found by calibration based on 53 flood events in three catchments, and validation was performed using 55 flood events in six catchments. QPE were obtained from a C-band weather radar and adjusted using a weighted multiple regression method based on a rain gauge network. Antecedent moisture conditions were calculated using a daily recharge assessment model (DREAM). We found that the SCS-CN infiltration function performed better than the other two, with reasonable agreement between calculated and measured peak discharge. Effects of storm characteristics were studied using synthetic storms from a high resolution weather generator (HiReS-WG), and showed a strong correlation between storm speed, storm direction and rain depth over desert soils to flood volume and peak discharge.
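
    A minimal sketch of the standard SCS-CN infiltration/runoff relation, the function reported above as performing best in calibration; the curve number and rainfall depths are illustrative values, not the calibrated parameters for the Israeli catchments.

```python
# Sketch: event runoff depth from the standard SCS-CN relation.
def scs_cn_runoff(p_mm, cn, lambda_ia=0.2):
    """Runoff depth [mm] from rainfall depth p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0          # potential maximum retention [mm]
    ia = lambda_ia * s                # initial abstraction [mm]
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

for p in (10.0, 30.0, 60.0):          # storm depths [mm] (illustrative)
    print(p, round(scs_cn_runoff(p, cn=85), 2))
```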

  17. Finite Element Method Based Modeling for Prediction of Cutting Forces in Micro-end Milling

    Science.gov (United States)

    Pratap, Tej; Patra, Karali

    2017-02-01

    Micro-end milling is one of the widely used processes for producing micro features/components in micro-fluidic systems, biomedical applications, aerospace applications, electronics and many more fields. However, in these applications, the forces generated in the micro-end milling process can cause tool vibration, process instability and even tool breakage if not minimized. Therefore, an accurate prediction of cutting forces in micro-end milling is essential. In this work, a finite element method based model is developed using ABAQUS/Explicit 6.12 software for prediction of cutting forces in micro-end milling with due consideration of the tool edge radius effect, thermo-mechanical properties and failure parameters of the workpiece material, including friction behaviour at the tool-chip interface. Experiments have been performed for manufacturing of microchannels on a copper plate using a 500 µm diameter tungsten carbide micro-end mill, and cutting forces are acquired through a dynamometer. Predicted cutting forces in feed and cross feed directions are compared with experimental results and are found to be in good agreement. Results also show that FEM based simulations can be applied to analyze size effects of specific cutting forces in the micro-end milling process.

  18. Coupled agent-based and finite-element models for predicting scar structure following myocardial infarction.

    Science.gov (United States)

    Rouillard, Andrew D; Holmes, Jeffrey W

    2014-08-01

    Following myocardial infarction, damaged muscle is gradually replaced by collagenous scar tissue. The structural and mechanical properties of the scar are critical determinants of heart function, as well as the risk of serious post-infarction complications such as infarct rupture, infarct expansion, and progression to dilated heart failure. A number of therapeutic approaches currently under development aim to alter infarct mechanics in order to reduce complications, such as implantation of mechanical restraint devices, polymer injection, and peri-infarct pacing. Because mechanical stimuli regulate scar remodeling, the long-term consequences of therapies that alter infarct mechanics must be carefully considered. Computational models have the potential to greatly improve our ability to understand and predict how such therapies alter heart structure, mechanics, and function over time. Toward this end, we developed a straightforward method for coupling an agent-based model of scar formation to a finite-element model of tissue mechanics, creating a multi-scale model that captures the dynamic interplay between mechanical loading, scar deformation, and scar material properties. The agent-based component of the coupled model predicts how fibroblasts integrate local chemical, structural, and mechanical cues as they deposit and remodel collagen, while the finite-element component predicts local mechanics at any time point given the current collagen fiber structure and applied loads. We used the coupled model to explore the balance between increasing stiffness due to collagen deposition and increasing wall stress due to infarct thinning and left ventricular dilation during the normal time course of healing in myocardial infarcts, as well as the negative feedback between strain anisotropy and the structural anisotropy it promotes in healing scar. The coupled model reproduced the observed evolution of both collagen fiber structure and regional deformation following coronary

  19. Predicting biological parameters of estuarine benthic communities using models based on environmental data

    Directory of Open Access Journals (Sweden)

    José Souto Rosa-Filho

    2004-08-01

    Full Text Available This study aimed to predict the biological parameters (species composition, abundance, richness, diversity and evenness) of benthic assemblages in southern Brazil estuaries using models based on environmental data (sediment characteristics, salinity, air and water temperature, and depth). Samples were collected seasonally from five estuaries between the winter of 1996 and the summer of 1998. At each estuary, samples were taken in unpolluted areas with similar characteristics with respect to presence or absence of vegetation, depth and distance from the mouth. In order to obtain predictive models, two methods were used, the first one based on Multiple Discriminant Analysis (MDA) and the second based on Multiple Linear Regression (MLR). Models using MDA gave better results than those based on linear regression. The best results using MLR were obtained for diversity and richness. It can be concluded that the use of prediction models based on environmental data would be very useful in environmental monitoring studies in estuaries.

  20. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.

  1. The grain grading model and prediction of deleterious porosity of cement-based materials

    Institute of Scientific and Technical Information of China (English)

    FENG Qi; LIU Jun-zhe

    2008-01-01

    The calculating model for the packing degree of a spherical particle system was modified. The grain grading model of cement-based materials was established and can be applied in the global grading system as well as in the nano-fiber reinforced system. According to the grain grading model, two kinds of mortar were designed by using the global grain materials and nano-fiber materials such as fly ash, silica fume and NR powder. In this paper, the densities of the two above systems cured for 90 d were tested and the relationship between the deleterious porosity and the total porosity of hardened mortar was discussed. Research results show that nano-fiber materials such as NR powder can increase the density of cement-based materials. The relationship between the deleterious porosity and the total porosity of hardened mortar accords with a logarithmic curve. The deleterious porosity and the rationality of the grading can be roughly predicted by calculating the packing degree with the grain grading model of cement-based materials.

  2. A Predictive Model of Cell Traction Forces Based on Cell Geometry

    OpenAIRE

    Lemmon, Christopher A.; Romer, Lewis H

    2010-01-01

    Recent work has indicated that the shape and size of a cell can influence how a cell spreads, develops focal adhesions, and exerts forces on the substrate. However, it is unclear how cell shape regulates these events. Here we present a computational model that uses cell shape to predict the magnitude and direction of forces generated by cells. The predicted results are compared to experimentally measured traction forces, and show that the model can predict traction force direction, relative m...

  3. GOBF-ARMA based model predictive control for an ideal reactive distillation column.

    Science.gov (United States)

    Seban, Lalu; Kirubakaran, V; Roy, B K; Radhakrishnan, T K

    2015-11-01

    This paper discusses the control of an ideal reactive distillation column (RDC) using model predictive control (MPC) based on a combination of deterministic generalized orthonormal basis filter (GOBF) and stochastic autoregressive moving average (ARMA) models. Reactive distillation (RD) integrates reaction and distillation in a single process, resulting in process and energy integration and promoting green chemistry principles. Improved selectivity of products, increased conversion, better utilization and control of reaction heat, scope for difficult separations and the avoidance of azeotropes are some of the advantages that reactive distillation offers over the conventional arrangement of a reactor followed by a distillation column. The introduction of an in situ separation in the reaction zone leads to complex interactions between vapor-liquid equilibrium, mass transfer rates, diffusion and chemical kinetics. RD, with its high order and nonlinear dynamics and multiple steady states, is a good candidate for testing and verification of new control schemes. Here a combination of GOBF-ARMA models is used to capture and represent the dynamics of the RDC. This GOBF-ARMA model is then used to design an MPC scheme for the control of product purity of the RDC under different operating constraints and conditions. The performance of the proposed modeling and control using GOBF-ARMA based MPC is simulated and analyzed. The proposed controller is found to perform satisfactorily for reference tracking and disturbance rejection in the RDC.

  4. Contact modeling and prediction-based routing in sparse mobile networks

    Institute of Scientific and Technical Information of China (English)

    GUO Yang; QU Yugui; BAI Ronggang; ZHAO Baohua

    2007-01-01

    Mobile ad-hoc networks (MANETs) provide the highly robust and self-configuring network capacity required in many critical applications, such as battlefields, disaster relief, and wildlife tracking. In this paper, we focus on efficient message forwarding in sparse MANETs, which suffer from frequent and long-duration partitions. Asynchronous contacts become the basic way of communication in this kind of network, instead of the data links of traditional ad-hoc networks. Current approaches are primarily based on estimation with pure probability calculation. Stochastic forwarding decisions from statistical results can lead to disastrous routing performance when wrong choices are made. This paper introduces a new routing protocol, based on contact modeling and contact prediction, to address the problem. Our contact model focuses on the periodic contact pattern of nodes with actual inter-contact time involved, in order to get an accurate realization of network cooperation and connectivity status. The corresponding contact prediction algorithm makes use of both statistical and time-sequence information of contacts and allows choosing the relay that has the earliest contact with the destination, which results in low average latency. Simulation is used to compare the routing performance of our algorithm with three other categories of forwarding algorithms proposed previously. The results demonstrate that our scheme is more efficient in both data delivery and energy consumption than previously proposed schemes.

  5. Obstacle avoidance using predictive vision based on a dynamic 3D world model

    Science.gov (United States)

    Benjamin, D. Paul; Lyons, Damian; Achtemichuk, Tom

    2006-10-01

    We have designed and implemented a fast predictive vision system for a mobile robot based on the principles of active vision. This vision system is part of a larger project to design a comprehensive cognitive architecture for mobile robotics. The vision system represents the robot's environment with a dynamic 3D world model based on a 3D gaming platform (Ogre3D). This world model contains a virtual copy of the robot and its environment, and outputs graphics showing what the virtual robot "sees" in the virtual world; this is what the real robot expects to see in the real world. The vision system compares this output in real time with the visual data. Any large discrepancies are flagged and sent to the robot's cognitive system, which constructs a plan for focusing on the discrepancies and resolving them, e.g. by updating the position of an object or by recognizing a new object. An object is recognized only once; thereafter its observed data are monitored for consistency with the predictions, greatly reducing the cost of scene understanding. We describe the implementation of this vision system and how the robot uses it to locate and avoid obstacles.

  6. Model Predictive Control of A Matrix-Converter Based Solid State Transformer for Utility Grid Interaction

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Yaosuo [ORNL]

    2016-01-01

    The matrix converter solid state transformer (MC-SST), formed from the back-to-back connection of two three-to-single-phase matrix converters, is studied for use in the interconnection of two ac grids. The matrix converter topology provides a light weight and low volume single-stage bidirectional ac-ac power conversion without the need for a dc link. Thus, the lifetime limitations of dc-bus storage capacitors are avoided. However, space vector modulation of this type of MC-SST requires computing switching vectors for each of the two MCs, which must be carefully coordinated to avoid commutation failure. An additional controller is also required to control power exchange between the two ac grids. In this paper, model predictive control (MPC) is proposed for an MC-SST connecting two different ac power grids. The proposed MPC predicts the circuit variables based on the discrete model of the MC-SST system, and the cost function is formulated so that the optimal switch vector for the next sample period is selected, thereby generating the required grid currents for the SST. Simulation and experimental studies are carried out to demonstrate the effectiveness and simplicity of the proposed MPC for such MC-SST-based grid interfacing systems.

  7. Yule-Nielsen based recto-verso color halftone transmittance prediction model.

    Science.gov (United States)

    Hébert, Mathieu; Hersch, Roger D

    2011-02-01

    The transmittance spectrum of halftone prints on paper is predicted thanks to a model inspired by the Yule-Nielsen modified spectral Neugebauer model used for reflectance predictions. This model is well adapted for strongly scattering printing supports and applicable to recto-verso prints. Model parameters are obtained by a few transmittance measurements of calibration patches printed on one side of the paper. The model was verified with recto-verso specimens printed by inkjet with classical and custom inks, at different halftone frequencies and on various types of paper. Predictions are as accurate as those obtained with a previously developed reflectance and transmittance prediction model relying on the multiple reflections of light between the paper and the print-air interfaces. Optimal n values are smaller in transmission mode compared with the reflection model. This indicates a smaller amount of lateral light propagation in the transmission mode.
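
    As a rough illustration of the Yule-Nielsen modified spectral Neugebauer relation that this transmittance model builds on, the sketch below predicts a halftone transmittance spectrum as the n-th power of a coverage-weighted sum of the primaries' transmittances raised to 1/n. The primaries, coverages and n value are hypothetical placeholders, not data from the study.

      import numpy as np

      def ynsn_transmittance(coverages, colorant_transmittances, n):
          """Yule-Nielsen modified spectral Neugebauer prediction:
          T(lambda)^(1/n) = sum_i a_i * T_i(lambda)^(1/n)."""
          a = np.asarray(coverages)                # fractional areas of the Neugebauer primaries
          T = np.asarray(colorant_transmittances)  # one spectrum per primary, shape (primaries, wavelengths)
          return (a[:, None] * T ** (1.0 / n)).sum(axis=0) ** n

      # Hypothetical two-primary example (bare paper and full ink) at 50% ink coverage.
      T_paper = np.array([0.60, 0.62, 0.64])
      T_ink = np.array([0.10, 0.30, 0.55])
      print(ynsn_transmittance([0.5, 0.5], [T_paper, T_ink], n=1.8))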

  8. A hybrid predictive model for acoustic noise in urban areas based on time series analysis and artificial neural network

    Science.gov (United States)

    Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine

    2017-06-01

    The dangerous effect of noise on human health is well known. Both the auditory and non-auditory effects are largely documented in the literature and represent an important hazard in human activities. Particular care is devoted to road traffic noise, since it grows with the growth of residential, industrial and commercial areas. For these reasons, it is important to develop effective models able to predict the noise in a certain area. In this paper, a hybrid predictive model is presented. The model is based on the mixing of two different approaches: Time Series Analysis (TSA) and the Artificial Neural Network (ANN). The TSA model is based on the evaluation of trend and seasonality in the data, while the ANN model is based on the capacity of the network to "learn" the behavior of the data. The mixed approach consists in the evaluation of noise levels by means of TSA and, once the differences (residuals) between the TSA estimates and the observed data have been calculated, in the training of an ANN on the residuals. This hybrid model exhibits interesting features and results, with a significant variation related to the number of steps forward in the prediction. It is shown that the best results, in terms of prediction, are achieved predicting one step ahead in the future. Nevertheless, a 7-day prediction can be performed, with a slightly larger error but offering a wider prediction range, with respect to the single-day-ahead predictive model.
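
    A minimal sketch of the hybrid idea described above: fit a trend-plus-seasonality model, train a neural network on the residuals, and add its one-step-ahead residual forecast back to the TSA estimate. The synthetic hourly noise series, the linear-trend/daily-seasonality decomposition and the scikit-learn MLP below are assumptions for illustration, not the series or network used in the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      t = np.arange(24 * 60)                                   # hypothetical hourly noise levels, 60 days
      y = 55 + 0.01 * t + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

      # TSA part: linear trend plus daily seasonal means
      trend = np.polyfit(t, y, 1)
      season = np.array([(y - np.polyval(trend, t))[t % 24 == h].mean() for h in range(24)])
      tsa_fit = np.polyval(trend, t) + season[t % 24]
      residuals = y - tsa_fit

      # ANN part: learn the residual one step ahead from its recent lags
      lags = 24
      X = np.array([residuals[i - lags:i] for i in range(lags, residuals.size)])
      target = residuals[lags:]
      ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, target)

      # Hybrid one-step-ahead forecast = TSA estimate + ANN-predicted residual
      t_next = t[-1] + 1
      tsa_next = np.polyval(trend, t_next) + season[t_next % 24]
      resid_next = ann.predict(residuals[-lags:].reshape(1, -1))[0]
      print("hybrid forecast:", tsa_next + resid_next)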

  9. A Prediction Model of Peasants’ Income in China Based on BP Neural Network

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    According to the related data affecting peasants' income in China in the years 1978-2008, a total of 13 indices are selected, such as agricultural population, output value of primary industry, and rural employees. Based on the standardization method and the BP neural network method, an artificial neural network model of peasants' income is established and analyzed. Results show that the simulated values agree well with the real values; the neural network model with the improved BP algorithm has high prediction accuracy, rapid convergence rate and good generalization ability. Finally, suggestions are put forward to increase peasants' income, such as promoting the process of urbanization, developing small and medium-sized enterprises in rural areas, encouraging intensive operation, and strengthening rural infrastructure and agricultural science and technology input.

  10. Predicting the Water Level Fluctuation in an Alpine Lake Using Physically Based, Artificial Neural Network, and Time Series Forecasting Models

    Directory of Open Access Journals (Sweden)

    Chih-Chieh Young

    2015-01-01

    Full Text Available Accurate prediction of water level fluctuation is important in lake management due to its significant impacts in various aspects. This study utilizes four model approaches to predict water levels in the Yuan-Yang Lake (YYL) in Taiwan: a three-dimensional hydrodynamic model, an artificial neural network (ANN) model (back propagation neural network, BPNN), a time series forecasting (autoregressive moving average with exogenous inputs, ARMAX) model, and a combined hydrodynamic and ANN model. Particularly, the black-box ANN model and physically based hydrodynamic model are coupled to more accurately predict water level fluctuation. Hourly water level data (a total of 7296 observations) was collected for model calibration (training) and validation. Three statistical indicators (mean absolute error, root mean square error, and coefficient of correlation) were adopted to evaluate model performances. Overall, the results demonstrate that the hydrodynamic model can satisfactorily predict hourly water level changes during the calibration stage but not for the validation stage. The ANN and ARMAX models better predict the water level than the hydrodynamic model does. Meanwhile, the results from an ANN model are superior to those by the ARMAX model in both training and validation phases. The novel proposed concept using a three-dimensional hydrodynamic model in conjunction with an ANN model has clearly shown the improved prediction accuracy for the water level fluctuation.

  11. Probabilistic prediction of hydrologic drought using a conditional probability approach based on the meta-Gaussian model

    Science.gov (United States)

    Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.; Sun, Alexander Y.; Xia, Youlong

    2016-11-01

    Prediction of drought plays an important role in drought preparedness and mitigation, especially because of large impacts of drought and increasing demand for water resources. An important aspect for improving drought prediction skills is the identification of drought predictability sources. In general, a drought originates from precipitation deficit and thus the antecedent meteorological drought may provide predictive information for other types of drought. In this study, a hydrological drought (represented by Standardized Runoff Index (SRI)) prediction method is proposed based on the meta-Gaussian model taking into account the persistence and its prior meteorological drought condition (represented by Standardized Precipitation Index (SPI)). Considering the inherent nature of standardized drought indices, the meta-Gaussian model arises as a suitable model for constructing the joint distribution of multiple drought indices. Accordingly, the conditional distribution of hydrological drought can be derived analytically, which enables the probabilistic prediction of hydrological drought in the target period and uncertainty quantifications. Based on monthly precipitation and surface runoff of climate divisions of Texas, U.S., 1-month and 2-month lead predictions of hydrological drought are illustrated and compared to the prediction from Ensemble Streamflow Prediction (ESP). Results, based on 10 climate divisions in Texas, show that the proposed meta-Gaussian model provides useful drought prediction information with performance depending on regions and seasons.
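
    Because the standardized indices are (approximately) standard normal, the meta-Gaussian construction makes the conditional distribution of a future SRI given the current SPI Gaussian as well, which is what enables analytical probabilistic prediction. A single-predictor sketch, with the lag correlation value below as a placeholder rather than one estimated from the Texas data:

      import numpy as np
      from scipy import stats

      def predict_sri(spi_now, rho):
          """Conditional distribution of next-period SRI given current SPI under a
          bivariate standard-normal (meta-Gaussian) model: SRI | SPI ~ N(rho*SPI, 1 - rho^2)."""
          mean = rho * spi_now
          std = np.sqrt(1.0 - rho ** 2)
          return mean, std

      rho = 0.65                       # hypothetical lag correlation between SPI and SRI
      mean, std = predict_sri(spi_now=-1.2, rho=rho)
      # probability of hydrological drought (e.g. SRI below -0.8) in the target period
      print("P(SRI < -0.8) =", stats.norm.cdf(-0.8, loc=mean, scale=std))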

  12. Predictive modelling for shelf life determination of nutricereal based fermented baby food.

    Science.gov (United States)

    Rasane, Prasad; Jha, Alok; Sharma, Nitya

    2015-08-01

    A shelf life model based on storage temperatures was developed for a nutricereal based fermented baby food formulation. The formulated baby food samples were packaged and stored at 10, 25, 37 and 45 °C for a test storage period of 180 days. A shelf life study was conducted using consumer and semi-trained panels, along with chemical analysis (moisture and acidity). The chemical parameters (moisture and titratable acidity) were found inadequate in determining the shelf life of the formulated product. Weibull hazard analysis was used to determine the shelf life of the product based on sensory evaluation. Considering 25 and 50 % rejection probability, the shelf life of the baby food formulation was predicted to be 98 and 322 days, 84 and 271 days, 71 and 221 days and 58 and 171 days for the samples stored at 10, 25, 37 and 45 °C, respectively. A shelf life equation was proposed using the rejection times obtained from the consumer study. Finally, the formulated baby food samples were subjected to microbial analysis for the predicted shelf life period and were found microbiologically safe for consumption during the storage period of 360 days.
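
    A small sketch of how a shelf life at a chosen consumer-rejection probability follows from a fitted Weibull hazard model, using the relation t_p = scale * (-ln(1 - p))^(1/shape); the shape and scale values below are illustrative placeholders, not the parameters estimated in the study.

      import numpy as np

      def weibull_shelf_life(rejection_prob, shape, scale):
          """Time at which the cumulative probability of consumer rejection
          F(t) = 1 - exp(-(t/scale)**shape) reaches the chosen level."""
          return scale * (-np.log(1.0 - rejection_prob)) ** (1.0 / shape)

      # Hypothetical Weibull parameters for one storage temperature
      shape, scale = 1.6, 600.0   # scale in days
      for p in (0.25, 0.50):
          print(f"shelf life at {p:.0%} rejection: {weibull_shelf_life(p, shape, scale):.0f} days")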

  13. A PSO-SVM Model for Short-Term Travel Time Prediction Based on Bluetooth Technology

    Institute of Scientific and Technical Information of China (English)

    Qun Wang; Zhuyun Liu; Zhongren Peng

    2015-01-01

    The accurate prediction of travel time along a roadway provides valuable traffic information for travelers and traffic managers. Aiming at short-term travel time forecasting on urban arterials, a prediction model (PSO-SVM) combining support vector machine (SVM) and particle swarm optimization (PSO) is developed. Travel time data collected with Bluetooth devices are used to calibrate the proposed model. Field experiments show that the PSO-SVM model's error indicators are lower than those of the single SVM model and the BP neural network (BPNN) model. Particularly, the mean absolute percentage error (MAPE) of PSO-SVM is only 9.4534%, which is less than that of the single SVM model (12.2302%) and the BPNN model (15.3147%). The results indicate that the proposed PSO-SVM model is feasible and more effective than other models for short-term travel time prediction on urban arterials.
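
    A compact sketch of the PSO-SVM idea: a particle swarm searches over the SVM regression hyperparameters, with fitness given by cross-validated error. The synthetic data, swarm settings and hyperparameter ranges below are assumptions for illustration, not the Bluetooth data or calibration used in the study.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(150, 4))                                        # hypothetical travel-time features
      y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(0, 0.3, 150)    # hypothetical travel times

      def fitness(p):                              # p = (log10 C, log10 gamma)
          model = SVR(C=10.0 ** p[0], gamma=10.0 ** p[1])
          return -cross_val_score(model, X, y, cv=3, scoring="neg_mean_absolute_error").mean()

      # Plain global-best PSO over the two hyperparameters
      n_particles, n_iter = 10, 20
      pos = rng.uniform([-1, -3], [3, 1], size=(n_particles, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_val.argmin()].copy()
      for _ in range(n_iter):
          r1, r2 = rng.random((2, n_particles, 1))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = pos + vel
          vals = np.array([fitness(p) for p in pos])
          better = vals < pbest_val
          pbest[better], pbest_val[better] = pos[better], vals[better]
          gbest = pbest[pbest_val.argmin()].copy()
      print("best (C, gamma):", 10.0 ** gbest, " CV MAE:", pbest_val.min())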

  14. Coordinated Voltage Control of a Wind Farm based on Model Predictive Control

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Guo, Qinglai

    2016-01-01

    This paper presents an autonomous wind farm voltage controller based on Model Predictive Control (MPC). The reactive power compensation and voltage regulation devices of the wind farm include Static Var Compensators (SVCs), Static Var Generators (SVGs), Wind Turbine Generators (WTGs) and an On-Load Tap Changing (OLTC) Transformer, and they are coordinated to keep the voltages of all the buses within the feasible range. Moreover, the reactive power distribution is optimized throughout the wind farm in order to maximize the dynamic reactive power reserve. The sensitivity coefficients are calculated based on an analytical method to improve the computation efficiency and overcome the convergence problem. Two control modes are designed for both voltage violated and normal operation conditions. A wind farm with 20 wind turbines was used to conduct case studies to verify the proposed coordinated...

  15. Development of Neural Network Model for Predicting Peak Ground Acceleration Based on Microtremor Measurement and Soil Boring Test Data

    National Research Council Canada - National Science Library

    Kerh, T; Lin, J. S; Gunaratnam, D

    2012-01-01

    .... This paper is therefore aimed at developing a neural network model, based on available microtremor measurement and on-site soil boring test data, for predicting peak ground acceleration at a site...

  16. Prediction of Activities in Fe-Based Ternary Liquid Alloys by Hoch-Arpshofen Model

    Institute of Scientific and Technical Information of China (English)

    YANG Hong-wei; LIAN Chao; TAO Dong-ping

    2012-01-01

    Thermodynamic properties of an alloy system play an important role in materials science and engineering. Therefore, theoretical calculations having the flexibility to deal with complexity are very useful and have scientific meaning. The Hoch-Arpshofen model was deduced from physical principles and is applicable to binary, ternary and larger systems using its binary interaction parameters only. The activities of Fe-based liquid alloys are calculated using the Hoch-Arpshofen model from data on the binary subsystems. Results for the activities of the Fe-Au-Ni, Fe-Cr-Ni, Fe-Co-Cr and Fe-Co-Ni systems at the required temperatures are presented. The average relative errors of prediction are 7.8%, 4.5%, 4.9% and 2.7%, respectively. The calculated results are in good agreement with the experimental data except for the Fe-Au-Ni system, which exhibits strong interaction between unlike atoms. The model provides a simple, reliable and general method for calculating the activities of Fe-based liquid alloys.

  17. Study on Apparent Kinetic Prediction Model of the Smelting Reduction Based on the Time-Series

    Directory of Open Access Journals (Sweden)

    Guo-feng Fan

    2012-01-01

    Full Text Available A series of direct smelting reduction experiments has been carried out with high-phosphorus iron ore of different basicity using a thermogravimetric analyzer. The derivative thermogravimetric (DTG) data have been obtained from the experiments. The one-step-forward local weighted linear (LWL) method, one of the most suitable chaotic time-series prediction methods focusing on the errors, is used to predict the DTG. Meanwhile, empirical mode decomposition-autoregressive (EMD-AR) modelling, a data mining technique in signal processing, is also used to predict the DTG. The results show that (1) EMD-AR(4) is the most appropriate and its error is smaller than that of the former; (2) the root mean square error (RMSE) has decreased by about two-thirds; (3) the normalized root mean square error (NMSE) has decreased by an order of magnitude. Finally, in this paper, the EMD-AR method has been improved by golden section weighting, and its error becomes smaller than before. Therefore, the improved EMD-AR model is a promising alternative for predicting the apparent reaction rate (DTG). The analytical results provide an important reference in the field of industrial control.

  18. Reliability-based economic model predictive control for generalised flow-based networks including actuators' health-aware capabilities

    Directory of Open Access Journals (Sweden)

    Grosso Juan M.

    2016-09-01

    Full Text Available This paper proposes a reliability-based economic model predictive control (MPC) strategy for the management of generalised flow-based networks, integrating some ideas on network service reliability, dynamic safety stock planning, and degradation of equipment health. The proposed strategy is based on a single-layer economic optimisation problem with dynamic constraints, which includes two enhancements with respect to existing approaches. The first enhancement considers chance-constraint programming to compute an optimal inventory replenishment policy based on a desired risk acceptability level, leading to dynamic allocation of safety stocks in flow-based networks to satisfy non-stationary flow demands. The second enhancement computes a smart distribution of the control effort and maximises actuators' availability by estimating their degradation and reliability. The proposed approach is illustrated with an application to water transport networks, using the Barcelona network as the case study.
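
    A minimal sketch of the chance-constraint idea behind the first enhancement: requiring that the stored volume covers an uncertain demand with probability at least 1 - risk, which under a Gaussian demand assumption reduces to a safety stock proportional to an inverse-normal quantile. The demand statistics and risk level below are placeholders, not values from the Barcelona case study.

      from scipy.stats import norm

      def safety_stock(mean_demand, std_demand, risk):
          """Smallest stock s with P(demand <= s) >= 1 - risk, assuming Gaussian demand:
          s = mean + z_{1-risk} * std."""
          return mean_demand + norm.ppf(1.0 - risk) * std_demand

      # Hypothetical hourly demand forecast for one tank of a water network (m^3)
      print(safety_stock(mean_demand=120.0, std_demand=15.0, risk=0.05))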

  19. Development and external validation of a faecal immunochemical test-based prediction model for colorectal cancer detection in symptomatic patients

    OpenAIRE

    Alves, Maria Teresa; Álvarez-Sánchez, Victoria; Ferrandez, Ángel; Rodríguez-Alcalde, Daniel; Cubiella, Joaquín; Vega, Pablo; Salve, María; Díaz-Ondina, Marta; Blanco, Irene; Macía, Pedro; Sánchez, Eloy; Fernández-Seara, Javier; Alves, María Teresa; Quintero, Enrique

    2016-01-01

    Background Risk prediction models for colorectal cancer (CRC) detection in symptomatic patients based on available biomarkers may improve CRC diagnosis. Our aim was to develop, compare with the NICE referral criteria and externally validate a CRC prediction model, COLONPREDICT, based on clinical and laboratory variables. Methods This prospective cross-sectional study included consecutive patients with gastrointestinal symptoms referred for colonoscopy between March 2012 and September 2013 in ...

  20. Artificial Neural Network and Response Surface Methodology Modeling in Ionic Conductivity Predictions of Phthaloylchitosan-Based Gel Polymer Electrolyte

    Directory of Open Access Journals (Sweden)

    Ahmad Danial Azzahari

    2016-01-01

    Full Text Available A gel polymer electrolyte system based on phthaloylchitosan was prepared. The effects of process variables, such as lithium iodide, caesium iodide, and 1-butyl-3-methylimidazolium iodide, were investigated using a distance-based ternary mixture experimental design. A comparative approach was made between response surface methodology (RSM) and artificial neural network (ANN) to predict the ionic conductivity. The predictive capabilities of the two methodologies were compared in terms of coefficient of determination R2 based on the validation data set. It was shown that the developed ANN model had better predictive outcome as compared to the RSM model.

  1. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Directory of Open Access Journals (Sweden)

    S. Raia

    2014-03-01

    Full Text Available Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are based on deterministic laws. These models extend spatially the static stability models adopted in geotechnical engineering, and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the operation of the existing models lies in the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of rainfall-induced shallow landslides. For this purpose, we have modified the transient rainfall infiltration and grid-based regional slope-stability analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a probabilistic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. The outputs
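
    A simplified single-cell sketch of the probabilistic idea: sample the uncertain soil parameters from assumed distributions and evaluate the infinite-slope factor of safety for each sample, turning a single deterministic answer into a probability of failure. The geometry, pore pressure and parameter distributions below are illustrative assumptions; the actual code couples this sampling with the transient infiltration solution on every grid cell.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000

      # Hypothetical cell geometry and transient condition (as supplied by the infiltration model)
      slope = np.radians(35.0)        # slope angle
      depth = 1.5                     # depth of the potential failure surface, m
      psi = 0.4                       # pressure head at that depth, m
      gamma_w, gamma_s = 9.81, 19.0   # unit weights of water and soil, kN/m^3

      # Uncertain material properties sampled from assumed distributions
      cohesion = rng.normal(4.0, 1.0, n).clip(min=0.0)   # kPa
      phi = np.radians(rng.normal(32.0, 3.0, n))         # friction angle

      # Infinite-slope factor of safety for every sample
      fs = (np.tan(phi) / np.tan(slope)
            + (cohesion - psi * gamma_w * np.tan(phi))
            / (gamma_s * depth * np.sin(slope) * np.cos(slope)))

      print("mean FS:", fs.mean(), " P(FS < 1):", (fs < 1.0).mean())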

  2. Godunov-Based Model of Swash Zone Dynamics to Advance Coastal Flood Prediction

    Science.gov (United States)

    Shakeri Majd, M.; Sanders, B. F.

    2012-12-01

    Urbanized lowlands in southern California are defended against coastal flooding by sandy beaches that dynamically adjust to changes in water level and wave conditions, particularly during storm events. Recent research has shown that coastal flood impacts are scaled by the volume of beach overtopping flows, and an improved characterization of dynamic overtopping rates is needed to improve coastal flood forecasting (Gallien et al. 2012). However, uncertainty in the beach slope and height makes it difficult to predict the onset of overtopping and the magnitude of resulting flooding. That is, beaches may evolve significantly over a storm event. Sallenger (2000) describes Impact Levels to distinguish different impact regimes (swash, collision, overwash and inundation) on dunes and barrier islands. Our goal is to model the processes in the different regimes he describes. Godunov-based models adopt a depth-integrated, two-phase approach and the shallow-water hypothesis to resolve flow and sediment transport in a tightly coupled manner that resolves shocks in the air/fluid and fluid/sediment interface. These models are best known in the context of debris flow modeling, where the ability to predict the flow of highly concentrated sediment/fluid mixtures is required. Here, the approach is directed at the swash zone. Existing Godunov-based models are reviewed and shown to have drawbacks relative to wetting and drying and "avalanching"—important processes in the swash zone. These drawbacks manifest as nonphysical erosion, the natural tendency of the schemes to smear out steep bed slopes. To identify and reduce these numerical errors, new numerical methods are presented to address these limitations, and the resulting model is applied to a set of laboratory-scale test problems. The shallow-water hypothesis limits the applicability of the model to the swash zone, so it is forced by a time series of water level and cross-shore velocity that accounts for surf zone wave

  3. explICU: A web-based visualization and predictive modeling toolkit for mortality in intensive care patients.

    Science.gov (United States)

    Chen, Robert; Kumar, Vikas; Fitch, Natalie; Jagadish, Jitesh; Lifan Zhang; Dunn, William; Duen Horng Chau

    2015-01-01

    Preventing mortality in intensive care units (ICUs) has been a top priority in American hospitals. Predictive modeling has been shown to be effective in prediction of mortality based upon data from patients' past medical histories from electronic health records (EHRs). Furthermore, visualization of timeline events is imperative in the ICU setting in order to quickly identify trends in patient histories that may lead to mortality. With the increasing adoption of EHRs, a wealth of medical data is becoming increasingly available for secondary uses such as data exploration and predictive modeling. While data exploration and predictive modeling are useful for finding risk factors in ICU patients, the process is time consuming and requires a high level of computer programming ability. We propose explICU, a web service that hosts EHR data, displays timelines of patient events based upon user-specified preferences, performs predictive modeling in the back end, and displays results to the user via intuitive, interactive visualizations.

  4. Assessment of a remote sensing-based model for predicting malaria transmission risk in villages of Chiapas, Mexico

    Science.gov (United States)

    Beck, L. R.; Rodriguez, M. H.; Dister, S. W.; Rodriguez, A. D.; Washino, R. K.; Roberts, D. R.; Spanner, M. A.

    1997-01-01

    A blind test of two remote sensing-based models for predicting adult populations of Anopheles albimanus in villages, an indicator of malaria transmission risk, was conducted in southern Chiapas, Mexico. One model was developed using a discriminant analysis approach, while the other was based on regression analysis. The models were developed in 1992 for an area around Tapachula, Chiapas, using Landsat Thematic Mapper (TM) satellite data and geographic information system functions. Using two remotely sensed landscape elements, the discriminant model was able to successfully distinguish between villages with high and low An. albimanus abundance with an overall accuracy of 90%. To test the predictive capability of the models, multitemporal TM data were used to generate a landscape map of the Huixtla area, northwest of Tapachula, where the models were used to predict risk for 40 villages. The resulting predictions were not disclosed until the end of the test. Independently, An. albimanus abundance data were collected in the 40 randomly selected villages for which the predictions had been made. These data were subsequently used to assess the models' accuracies. The discriminant model accurately predicted 79% of the high-abundance villages and 50% of the low-abundance villages, for an overall accuracy of 70%. The regression model correctly identified seven of the 10 villages with the highest mosquito abundance. This test demonstrated that remote sensing-based models generated for one area can be used successfully in another, comparable area.

  5. An Observer-Based Finite Control Set Model Predictive Control for Three-Phase Power Converters

    Directory of Open Access Journals (Sweden)

    Tao Liu

    2014-01-01

    Full Text Available Finite control set model predictive control (FCS-MPC) for three-phase power converters uses a discrete mathematical model of the power converter to predict the future current value for all possible switching states. The circuit parameters and measured input currents are necessary components. For this reason, parameter error and time delay of the current signals may degrade the performance of the control system. In previous studies of FCS-MPC, few articles study these aspects in detail and almost no method has been proposed to avoid these negative influences. This paper, first, investigates the negative impacts of inductance inaccuracy and AC-side current distortion due to the time delay caused by the filter on the FCS-MPC system. Then, it proposes an observer-based FCS-MPC approach with which the inductance error can be corrected and the current signal's time delay caused by the filter can be compensated, thereby improving the performance of FCS-MPC. At last, as an example, it illustrates the effectiveness of the proposed approach with experimental testing results for a power converter.
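
    A bare-bones sketch of the FCS-MPC selection step that the paper builds on: enumerate the admissible switching states, predict the next-step current from a discrete model for each, and keep the state minimizing a tracking cost. The single-phase RL model, parameter values and candidate voltage levels below are simplifications for illustration; the paper's three-phase converter, observer and filter-delay compensation are not reproduced here.

      import numpy as np

      # Hypothetical discrete single-phase model: L di/dt = v_conv - v_grid - R*i
      R, L, Ts, Vdc = 0.5, 10e-3, 50e-6, 400.0
      candidate_v = np.array([-Vdc, 0.0, Vdc])    # voltages producible by the admissible switch states

      def fcs_mpc_step(i_meas, v_grid, i_ref):
          """Enumerate switching states, predict i(k+1) with forward Euler, keep the best."""
          i_pred = i_meas + (Ts / L) * (candidate_v - v_grid - R * i_meas)
          cost = (i_ref - i_pred) ** 2            # simple current-tracking cost
          best = int(np.argmin(cost))
          return best, i_pred[best]

      state, i_next = fcs_mpc_step(i_meas=3.0, v_grid=150.0, i_ref=5.0)
      print("selected state:", state, "predicted current:", i_next)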

  6. Osmotic pressure of ionic liquids in an electric double layer: Prediction based on a continuum model.

    Science.gov (United States)

    Moon, Gi Jong; Ahn, Myung Mo; Kang, In Seok

    2015-12-01

    An analysis has been performed for the osmotic pressure of ionic liquids in the electric double layer (EDL). By using the electromechanical approach, we first derive a differential equation that is valid for computing the osmotic pressure in the continuum limit of any incompressible fluid in EDL. Then a specific model for ionic liquids proposed by Bazant et al. [M. Z. Bazant, B. D. Storey, and A. A. Kornyshev, Phys. Rev. Lett. 106, 046102 (2011)] is adopted for more detailed computation of the osmotic pressure. Ionic liquids are characterized by the correlation and the steric effects of ions and their effects are analyzed. In the low voltage cases, the correlation effect is dominant and the problem becomes linear. For this low voltage limit, a closed form formula is derived for predicting the osmotic pressure in EDL with no overlapping. It is found that the osmotic pressure decreases as the correlation effect increases. The osmotic pressures at the nanoslit surface and nanoslit centerline are also obtained for the low voltage limit. For the cases of moderately high voltage with high correlation factor, approximate formulas are derived for estimating osmotic pressure values based on the concept of a condensed layer near the electrode. In order to corroborate the results predicted by analytical studies, the full nonlinear model has been solved numerically.

  7. Model predictive control of an air suspension system with damping multi-mode switching damper based on hybrid model

    Science.gov (United States)

    Sun, Xiaoqiang; Yuan, Chaochun; Cai, Yingfeng; Wang, Shaohua; Chen, Long

    2017-09-01

    This paper presents the hybrid modeling and model predictive control of an air suspension system with a damping multi-mode switching damper. Unlike a traditional damper with continuously adjustable damping, in this study a new damper with four discrete damping modes is applied to a vehicle semi-active air suspension. The new damper can achieve different damping modes by simply controlling the on-off statuses of two solenoid valves, which makes its damping adjustment more efficient and more reliable. However, since damping mode switching induces different modes of operation, the air suspension system with the new damper poses a challenging hybrid control problem. To model both the continuous/discrete dynamics and the switching between different damping modes, the framework of mixed logical dynamical (MLD) systems is used to establish the system hybrid model. Based on the resulting hybrid dynamical model, the system control problem is recast as a model predictive control (MPC) problem, which allows us to optimize the switching sequences of the damping modes by taking into account the suspension performance requirements. Numerical simulation results demonstrate the efficacy of the proposed control method.

  8. Bending Angle Prediction Model Based on BPNN-Spline in Air Bending Springback Process

    OpenAIRE

    Zhefeng Guo; Wencheng Tang

    2017-01-01

    In order to rapidly and accurately predict the springback bending angle in V-die air bending process, a springback bending angle prediction model on the combination of error back propagation neural network and spline function (BPNN-Spline) is presented in this study. An orthogonal experimental sample set for training BPNN-Spline is obtained by finite element simulation. Through the analysis of network structure, the BPNN-Spline black box function of bending angle prediction is established, an...

  9. Score-based prediction of genomic islands in prokaryotic genomes using hidden Markov models

    Directory of Open Access Journals (Sweden)

    Surovcik Katharina

    2006-03-01

    Full Text Available Abstract Background Horizontal gene transfer (HGT) is considered a strong evolutionary force shaping the content of microbial genomes in a substantial manner. It is the difference in speed enabling the rapid adaptation to changing environmental demands that distinguishes HGT from gene genesis, duplications or mutations. For a precise characterization, algorithms are needed that identify transfer events with high reliability. Frequently, the transferred pieces of DNA have a considerable length, comprise several genes and are called genomic islands (GIs) or more specifically pathogenicity or symbiotic islands. Results We have implemented the program SIGI-HMM that predicts GIs and the putative donor of each individual alien gene. It is based on the analysis of codon usage (CU) of each individual gene of a genome under study. CU of each gene is compared against a carefully selected set of CU tables representing microbial donors or highly expressed genes. Multiple tests are used to identify putatively alien genes, to predict putative donors and to mask putatively highly expressed genes. Thus, we determine the states and emission probabilities of an inhomogeneous hidden Markov model working on gene level. For the transition probabilities, we draw upon classical test theory with the intention of integrating a sensitivity controller in a consistent manner. SIGI-HMM was written in JAVA and is publicly available. It accepts as input any file created according to the EMBL-format. It generates output in the common GFF format readable for genome browsers. Benchmark tests showed that the output of SIGI-HMM is in agreement with known findings. Its predictions were both consistent with annotated GIs and with predictions generated by different methods. Conclusion SIGI-HMM is a sensitive tool for the identification of GIs in microbial genomes. It allows to interactively analyze genomes in detail and to generate or to test hypotheses about the origin of acquired
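
    To make the hidden-Markov idea concrete, the sketch below runs generic Viterbi decoding over a short sequence of genes, labelling each as native or putatively alien from per-gene emission scores; the emission and transition probabilities are made-up placeholders, not the codon-usage-based values SIGI-HMM actually derives.

      import numpy as np

      # Hypothetical two-state gene-level HMM: state 0 = "native", state 1 = "alien" (putative GI).
      # Per-gene emission probabilities would come from codon-usage tests; here they are invented.
      log_emis = np.log(np.array([[0.9, 0.1],
                                  [0.8, 0.2],
                                  [0.2, 0.8],
                                  [0.1, 0.9],
                                  [0.7, 0.3]]))
      log_trans = np.log(np.array([[0.95, 0.05],   # transition probabilities act as a sensitivity control
                                   [0.10, 0.90]]))
      log_start = np.log(np.array([0.99, 0.01]))

      # Viterbi decoding of the most likely state path over the gene sequence
      n, k = log_emis.shape
      delta = log_start + log_emis[0]
      back = np.zeros((n, k), dtype=int)
      for t in range(1, n):
          scores = delta[:, None] + log_trans     # scores[i, j]: best path ending in i then moving to j
          back[t] = scores.argmax(axis=0)
          delta = scores.max(axis=0) + log_emis[t]
      path = [int(delta.argmax())]
      for t in range(n - 1, 0, -1):
          path.append(int(back[t][path[-1]]))
      print("gene labels (0=native, 1=alien):", path[::-1])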

  10. Model-based evaluation of subsurface monitoring networks for improved efficiency and predictive certainty of regional groundwater models

    Science.gov (United States)

    Gosses, M. J.; Wöhling, Th.; Moore, C. R.; Dann, R.; Scott, D. M.; Close, M.

    2012-04-01

    -specific prediction target under consideration. Therefore, the worth of individual observation locations may differ for different prediction targets. Our evaluation is based on predictions of lowland stream discharge resulting from changes in land use and irrigation in the upper Central Plains catchment. In our analysis, we adopt the model predictive uncertainty analysis method by Moore and Doherty (2005) which accounts for contributions from both measurement errors and uncertain structural heterogeneity. The method is robust and efficient due to a linearity assumption in the governing equations and readily implemented for application in the model-independent parameter estimation and uncertainty analysis toolkit PEST (Doherty, 2010). The proposed methods can be applied not only for the evaluation of monitoring networks, but also for the optimization of networks, to compare alternative monitoring strategies, as well as to identify best cost-benefit monitoring design even prior to any data acquisition.

  11. Predictive Modeling of Mechanical Properties of Welded Joints Based on Dynamic Fuzzy RBF Neural Network

    Directory of Open Access Journals (Sweden)

    ZHANG Yongzhi

    2016-10-01

    Full Text Available A dynamic fuzzy RBF neural network model was built to predict the mechanical properties of welded joints, with the purpose of overcoming the shortcomings of static neural networks in structure identification, dynamic sample training and the learning algorithm. The structure and parameters of the model are no longer fixed in advance but are adaptively adjusted during training, making the model suitable for learning from dynamic sample data. The learning algorithm introduces hierarchical learning and a fuzzy rule pruning strategy to accelerate the training speed and make the model more compact. Simulation of the model was carried out using TIG welding test data for TC4 titanium alloy of three thicknesses and different process parameters. The results show that the model has high prediction accuracy and is suitable for predicting the mechanical properties of welded joints, opening up a new way for the on-line control of the welding process.

  12. Radiation induced dissolution of UO2 based nuclear fuel - A critical review of predictive modelling approaches

    Science.gov (United States)

    Eriksen, Trygve E.; Shoesmith, David W.; Jonsson, Mats

    2012-01-01

    Radiation induced dissolution of uranium dioxide (UO2) nuclear fuel and the consequent release of radionuclides to intruding groundwater are key processes in the safety analysis of future deep geological repositories for spent nuclear fuel. For several decades, these processes have been studied experimentally using both spent fuel and various types of simulated spent fuels. The latter have been employed since it is difficult to draw mechanistic conclusions from real spent nuclear fuel experiments. Several predictive modelling approaches have been developed over the last two decades. These models are largely based on experimental observations. In this work we have performed a critical review of the modelling approaches developed based on the large body of chemical and electrochemical experimental data. The main conclusions are: (1) the use of measured interfacial rate constants gives results in generally good agreement with experimental results compared to simulations where homogeneous rate constants are used; (2) the use of spatial dose rate distributions is particularly important when simulating the behaviour over short time periods; and (3) the steady-state approach (the rate of oxidant consumption is equal to the rate of oxidant production) provides a simple but fairly accurate alternative, but errors in the reaction mechanism and in the kinetic parameters used may not be revealed by simple benchmarking. It is essential to use experimentally determined rate constants and verified reaction mechanisms, irrespective of whether the approach is chemical or electrochemical.

  13. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    Science.gov (United States)

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  14. Soil erosion model predictions using parent material/soil texture-based parameters compared to using site-specific parameters

    Science.gov (United States)

    R. B. Foltz; W. J. Elliot; N. S. Wagenbrenner

    2011-01-01

    Forested areas disturbed by access roads produce large amounts of sediment. One method to predict erosion and, hence, manage forest roads is the use of physically based soil erosion models. A perceived advantage of a physically based model is that it can be parameterized at one location and applied at another location with similar soil texture or geological parent...

  15. A Dynamic Web Page Prediction Model Based on Access Patterns to Offer Better User Latency

    CERN Document Server

    Mukhopadhyay, Debajyoti; Saha, Dwaipayan; Kim, Young-Chon

    2011-01-01

    The growth of the World Wide Web has emphasized the need for improvement in user latency. One of the techniques that are used for improving user latency is Caching and another is Web Prefetching. Approaches that bank solely on caching offer limited performance improvement because it is difficult for caching to handle the large number of increasingly diverse files. Studies have been conducted on prefetching models based on decision trees, Markov chains, and path analysis. However, the increased uses of dynamic pages, frequent changes in site structure and user access patterns have limited the efficacy of these static techniques. In this paper, we have proposed a methodology to cluster related pages into different categories based on the access patterns. Additionally we use page ranking to build up our prediction model at the initial stages when users haven't already started sending requests. This way we have tried to overcome the problems of maintaining huge databases which is needed in case of log based techn...

  16. Early Prediction of Alzheimer’s Disease Using Null Longitudinal Model-Based Classifiers

    Science.gov (United States)

    Kanaan-Izquierdo, Samir; Mataró-Serrat, María; Perera-Lluna, Alexandre

    2017-01-01

    Incipient Alzheimer’s Disease (AD) is characterized by a slow onset of clinical symptoms, with pathological brain changes starting several years earlier. Consequently, it is necessary to first understand and differentiate age-related changes in brain regions in the absence of disease, and then to support early and accurate AD diagnosis. However, there is poor understanding of the initial stage of AD; seemingly healthy elderly brains lose matter in regions related to AD, but similar changes can also be found in non-demented subjects having mild cognitive impairment (MCI). By using a Linear Mixed Effects approach, we modelled the change of 166 Magnetic Resonance Imaging (MRI)-based biomarkers available at a 5-year follow up on healthy elderly control (HC, n = 46) subjects. We hypothesized that, by identifying their significant variant (vr) and quasi-variant (qvr) brain regions over time, it would be possible to obtain an age-based null model, which would characterize their normal atrophy and growth patterns as well as the correlation between these two regions. By using the null model on those subjects who had been clinically diagnosed as HC (n = 161), MCI (n = 209) and AD (n = 331), normal age-related changes were estimated and deviation scores (residuals) from the observed MRI-based biomarkers were computed. Subject classification, as well as the early prediction of conversion to MCI and AD, were addressed through residual-based Support Vector Machines (SVM) modelling. We found reductions in most cortical volumes and thicknesses (with evident gender differences) as well as in sub-cortical regions, including greater atrophy in the hippocampus. The average accuracies (ACC) recorded for men and women were: AD-HC: 94.11%, MCI-HC: 83.77% and MCI converted to AD (cAD)-MCI non-converter (sMCI): 76.72%. Likewise, as compared to standard clinical diagnosis methods, SVM classifiers predicted the conversion of cAD to be 1.9 years earlier for females (ACC:72.5%) and 1.4 years

  17. Construction of risk prediction model of type 2 diabetes mellitus based on logistic regression

    Directory of Open Access Journals (Sweden)

    Li Jian

    2017-01-01

    Full Text Available Objective: to construct a multi-factor prediction model for the individual risk of T2DM, and to explore new ideas for early warning, prevention and personalized health services for T2DM. Methods: logistic regression techniques were used to screen the risk factors for T2DM and to construct the risk prediction model. Results: the male risk prediction model is given by the logistic regression equation logit(P) = BMI × 0.735 + vegetables × (−0.671) + age × 0.838 + diastolic pressure × 0.296 + physical activity × (−2.287) + sleep × (−0.009) + smoking × 0.214; the female risk prediction model is given by logit(P) = BMI × 1.979 + vegetables × (−0.292) + age × 1.355 + diastolic pressure × 0.522 + physical activity × (−2.287) + sleep × (−0.010). The area under the ROC curve for males was 0.83, with sensitivity 0.72 and specificity 0.86; the area under the ROC curve for females was 0.84, with sensitivity 0.75 and specificity 0.90. Conclusion: the data for this model come from a nested case-control study; the risk prediction model was established using mature logistic regression techniques and shows high predictive sensitivity, specificity and stability.
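
    Using the male-model coefficients reported above, an individual risk follows from the logistic link P = 1/(1 + exp(-logit)). A small sketch; the coding and units of the predictors (and any intercept term) are not given in the abstract, so the example inputs below are placeholders.

      import math

      # Coefficients of the male risk model as reported in the abstract (no intercept given there)
      MALE_COEFS = {"BMI": 0.735, "vegetables": -0.671, "age": 0.838,
                    "diastolic_pressure": 0.296, "physical_activity": -2.287,
                    "sleep": -0.009, "smoking": 0.214}

      def t2dm_risk(predictors, coefs):
          """P = 1 / (1 + exp(-logit)), with logit = sum(coef * predictor)."""
          logit = sum(coefs[name] * value for name, value in predictors.items())
          return 1.0 / (1.0 + math.exp(-logit))

      # Placeholder inputs; the coding/units of each variable are defined in the original study.
      example = {"BMI": 1, "vegetables": 0, "age": 1, "diastolic_pressure": 1,
                 "physical_activity": 0, "sleep": 1, "smoking": 1}
      print(t2dm_risk(example, MALE_COEFS))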

  18. KiDoQ: using docking based energy scores to develop ligand based model for predicting antibacterials

    Directory of Open Access Journals (Sweden)

    Tewari Rupinder

    2010-03-01

    Full Text Available Abstract Background Identification of novel drug targets and their inhibitors is a major challenge in the field of drug designing and development. The diaminopimelic acid (DAP) pathway is a unique lysine biosynthetic pathway present in bacteria but absent in mammals. This pathway is vital for bacteria due to its critical role in cell wall biosynthesis. One of the essential enzymes of this pathway is dihydrodipicolinate synthase (DHDPS), considered to be crucial for bacterial survival. In view of its importance, the development and prediction of potent inhibitors against DHDPS may be valuable to design effective drugs against bacteria in general. Results This paper describes a methodology for predicting novel/potent inhibitors against DHDPS. Here, quantitative structure activity relationship (QSAR) models were trained and tested on 23 experimentally verified enzyme inhibitors having inhibitory values (Ki) in the range of 0.005-22 mM. These inhibitors were docked at the active site of DHDPS (1YXD) using AutoDock software, which resulted in 11 energy-based descriptors. For QSAR modeling, a Multiple Linear Regression (MLR) model was engendered using the best four energy-based descriptors, yielding correlation values R/q2 of 0.82/0.67 and MAE of 2.43. Additionally, a Support Vector Machine (SVM) based model was developed with three crucial descriptors selected using an F-stepping remove-one approach, which enhanced the performance by attaining R/q2 values of 0.93/0.80 and MAE of 1.89. To validate the performance of the QSAR models, an external cross-validation procedure was adopted which accomplished high training/testing correlation values (q2/r2) in the range of 0.78-0.83/0.93-0.95. Conclusions Our results suggest that modeling ligand-receptor binding interactions for DHDPS by means of QSAR seems to be a promising approach for prediction of antibacterial agents. To serve the experimentalist to develop novel/potent inhibitors, a webserver "KiDoQ" has been developed http

  19. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Directory of Open Access Journals (Sweden)

    S. Raia

    2013-02-01

    Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are deterministic. These models extend spatially the static stability models adopted in geotechnical engineering and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the existing models is the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of shallow rainfall-induced landslides. For the purpose, we have modified the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a stochastic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. The outputs of several model runs obtained varying the input parameters
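    The probabilistic idea behind this approach can be illustrated with a minimal Monte Carlo sketch: sample uncertain slope properties and recompute an infinite-slope factor of safety for each sample. The simplified FS formula, parameter ranges and pore-pressure treatment below are assumptions for illustration only, not the TRIGRS-P implementation.

```python
import numpy as np

# Monte Carlo sketch: sample soil properties and evaluate the infinite-slope
# factor of safety FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi)] / [gamma*z*sin(beta)*cos(beta)].
rng = np.random.default_rng(42)
n = 10_000
cohesion = rng.uniform(2e3, 8e3, n)        # effective cohesion c' [Pa] (assumed range)
phi = np.radians(rng.uniform(25, 35, n))   # friction angle [rad] (assumed range)
gamma = 19e3                               # unit weight of soil [N/m^3]
z, beta = 1.5, np.radians(30)              # slip depth [m], slope angle
psi = rng.uniform(-0.5, 0.5, n)            # pressure head at slip depth [m]
u = 9.81e3 * np.maximum(psi, 0.0)          # pore-water pressure [Pa], only positive heads

fs = (cohesion + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)) / (
    gamma * z * np.sin(beta) * np.cos(beta)
)
print("Probability of failure (FS < 1):", (fs < 1.0).mean())
```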

  20. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Science.gov (United States)

    Raia, S.; Alvioli, M.; Rossi, M.; Baum, R.L.; Godt, J.W.; Guzzetti, F.

    2013-01-01

    Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are deterministic. These models extend spatially the static stability models adopted in geotechnical engineering and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the existing models is the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of shallow rainfall-induced landslides. For the purpose, we have modified the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a stochastic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. The outputs of several model runs obtained varying the input parameters

  1. A study on model fidelity for model predictive control-based obstacle avoidance in high-speed autonomous ground vehicles

    Science.gov (United States)

    Liu, Jiechao; Jayakumar, Paramsothy; Stein, Jeffrey L.; Ersal, Tulga

    2016-11-01

    This paper investigates the level of model fidelity needed in order for a model predictive control (MPC)-based obstacle avoidance algorithm to be able to safely and quickly avoid obstacles even when the vehicle is close to its dynamic limits. The context of this work is large autonomous ground vehicles that manoeuvre at high speed within unknown, unstructured, flat environments and have significant vehicle dynamics-related constraints. Five different representations of vehicle dynamics models are considered: four variations of the two degrees-of-freedom (DoF) representation as lower fidelity models and a fourteen DoF representation with combined-slip Magic Formula tyre model as a higher fidelity model. It is concluded that the two DoF representation that accounts for tyre nonlinearities and longitudinal load transfer is necessary for the MPC-based obstacle avoidance algorithm in order to operate the vehicle at its limits within an environment that includes large obstacles. For less challenging environments, however, the two DoF representation with linear tyre model and constant axle loads is sufficient.
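    For readers unfamiliar with the lower-fidelity representations compared in the paper, the sketch below rolls out a linear two-DoF (bicycle) handling model of the general kind used as an MPC prediction model; all parameter values, the constant forward speed and the Euler integration are illustrative assumptions, not those of the vehicle or discretization studied.

```python
import numpy as np

# Minimal linear two-DoF (bicycle) handling model: states are lateral velocity v and yaw rate r.
m, Iz = 2500.0, 4500.0      # mass [kg], yaw inertia [kg m^2] (assumed values)
a, b = 1.4, 1.6             # CG-to-axle distances [m]
Cf, Cr = 8.0e4, 9.0e4       # linear cornering stiffnesses [N/rad]
u = 20.0                    # constant forward speed [m/s]

def two_dof_deriv(state, delta):
    """state = [lateral velocity v, yaw rate r]; delta = front steering angle [rad]."""
    v, r = state
    alpha_f = (v + a * r) / u - delta   # front slip angle
    alpha_r = (v - b * r) / u           # rear slip angle
    Fyf, Fyr = -Cf * alpha_f, -Cr * alpha_r
    v_dot = (Fyf + Fyr) / m - u * r
    r_dot = (a * Fyf - b * Fyr) / Iz
    return np.array([v_dot, r_dot])

# Forward-Euler rollout of the kind an MPC prediction horizon would use.
state, dt = np.array([0.0, 0.0]), 0.01
for _ in range(100):
    state = state + dt * two_dof_deriv(state, delta=0.05)
print("v, r after 1 s of constant steering:", state.round(3))
```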

  2. Predicting debris flow occurrence in Eastern Italian Alps based on hydrological and geomorphological modelling

    Science.gov (United States)

    Nikolopoulos, Efthymios I.; Borga, Marco; Destro, Elisa; Marchi, Lorenzo

    2015-04-01

    Most of the work so far on the prediction of debris flow occurrence has focused on the identification of critical rainfall conditions. However, findings in the literature have shown that critical rainfall thresholds cannot always accurately identify debris flow occurrence, leading to false detections (positive or negative). One of the main reasons for this limitation is that critical rainfall thresholds do not account for the characteristics of the underlying land surface (e.g. geomorphology, moisture conditions, sediment availability), which are strongly related to debris flow triggering. In addition, in areas where debris flows occur predominantly as a result of channel-bed failure (as in many Alpine basins), the triggering factor is runoff, which suggests that identification of critical runoff conditions is more pertinent for debris flow prediction than critical rainfall. The primary objective of this study is to investigate the potential of a triggering index (TI), which combines variables related to runoff generation and channel morphology, for predicting debris flow occurrence. TI is based on a threshold criterion developed in past works (Tognacca et al., 2000; Berti and Simoni, 2005; Gregoretti and Dalla Fontana, 2008) and combines information on unit-width peak flow, local channel slope and mean grain size. Estimation of peak discharge is based on the application of a distributed hydrologic model, while local channel slope is derived from a high-resolution (5 m) DEM. Scaling functions of peak flow and channel width with drainage area are adopted, since it is not possible to measure channel width or simulate peak flow at all channel nodes. TI values are mapped over the channel network, thus allowing spatially distributed prediction; however, instead of identifying debris flow occurrence at single points, we identify occurrence with reference to the tributary catchment involved. Evaluation of TI is carried out for five different basins
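    A minimal sketch of the triggering-index idea: compare the simulated unit-width peak flow at a channel node with a critical discharge that depends on local slope and grain size. The functional form and coefficient of q_crit below are hypothetical placeholders; the actual threshold criteria come from the cited works.

```python
import numpy as np

# Sketch of a triggering-index (TI) style check: TI = q_peak_unit / q_crit(d50, slope).
# The form and coefficient of q_crit are illustrative assumptions, not the published criteria.
def q_crit(d50, slope, k=1.0):
    """Hypothetical critical unit-width discharge [m^2/s]: grows with grain size, falls with slope."""
    return k * d50 ** 1.5 / np.tan(slope)

def triggering_index(q_peak_unit, d50, slope):
    """TI > 1 flags channel reaches where runoff may trigger a debris flow."""
    return q_peak_unit / q_crit(d50, slope)

# One hypothetical channel node: unit-width peak flow, median grain size, local slope.
ti = triggering_index(q_peak_unit=0.8, d50=0.08, slope=np.radians(20))
print("TI =", round(ti, 2), "->", "triggering possible" if ti > 1 else "below threshold")
```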

  3. Prediction of Hemodynamic Response to Epinephrine via Model-Based System Identification.

    Science.gov (United States)

    Bighamian, Ramin; Soleymani, Sadaf; Reisner, Andrew T; Seri, Istvan; Hahn, Jin-Oh

    2016-01-01

    In this study, we present a system identification approach to the mathematical modeling of hemodynamic responses to vasopressor-inotrope agents. We developed a hybrid model called the latency-dose-response-cardiovascular (LDC) model that incorporated 1) a low-order lumped latency model to reproduce the delay associated with the transport of vasopressor-inotrope agent and the onset of physiological effect, 2) phenomenological dose-response models to dictate the steady-state inotropic, chronotropic, and vasoactive responses as a function of vasopressor-inotrope dose, and 3) a physiological cardiovascular model to translate the agent's actions into the ultimate response of blood pressure. We assessed the validity of the LDC model to fit vasopressor-inotrope dose-response data using data collected from five piglet subjects during variable epinephrine infusion rates. The results suggested that the LDC model was viable in modeling the subjects' dynamic responses: After tuning the model to each subject, the r² values for measured versus model-predicted mean arterial pressure were consistently higher than 0.73. The results also suggested that intersubject variability in the dose-response models, rather than the latency models, had significantly more impact on the model's predictive capability: Fixing the latency model to population-averaged parameter values resulted in r² values higher than 0.57 between measured versus model-predicted mean arterial pressure, while fixing the dose-response model to population-averaged parameter values yielded nonphysiological predictions of mean arterial pressure. We conclude that the dose-response relationship must be individualized, whereas a population-averaged latency-model may be acceptable with minimal loss of model fidelity.

  4. Combined Active and Reactive Power Control of Wind Farms based on Model Predictive Control

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Wang, Jianhui;

    2017-01-01

    This paper proposes a combined wind farm controller based on Model Predictive Control (MPC). Compared with the conventional decoupled active and reactive power control, the proposed control scheme considers the significant impact of active power on voltage variations due to the low X/R ratio...... of wind farm collector systems. The voltage control is improved. In addition, by coordination of active and reactive power, the Var capacity is optimized to prevent potential failures due to Var shortage, especially when the wind farm operates close to its full load. An analytical method is used to calculate...... the sensitivity coefficients to improve the computation efficiency and overcome the convergence problem. Two control modes are designed for both normal and emergency conditions. A wind farm with 20 wind turbines was used to verify the proposed combined control scheme....

  5. Iterated non-linear model predictive control based on tubes and contractive constraints.

    Science.gov (United States)

    Murillo, M; Sánchez, G; Giovanini, L

    2016-05-01

    This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter-type unmanned aerial vehicle.

  6. Predicting lung dosimetry of inhaled particleborne benzo[a]pyrene using physiologically based pharmacokinetic modeling

    Science.gov (United States)

    Campbell, Jerry; Franzen, Allison; Van Landingham, Cynthia; Lumpkin, Michael; Crowell, Susan; Meredith, Clive; Loccisano, Anne; Gentry, Robinan; Clewell, Harvey

    2016-01-01

    Benzo[a]pyrene (BaP) is a by-product of incomplete combustion of fossil fuels and plant/wood products, including tobacco. A physiologically based pharmacokinetic (PBPK) model for BaP in the rat was extended to simulate inhalation exposures to BaP in rats and humans, including particle deposition, dissolution of absorbed BaP and renal elimination of 3-hydroxy benzo[a]pyrene (3-OH BaP) in humans. The clearance of particle-associated BaP from the lung, based on existing data in rats and dogs, suggests that the process is bi-phasic. An initial rapid clearance was represented by BaP released from particles, followed by a slower first-order clearance that follows particle kinetics. Parameter values for BaP-particle dissociation were estimated using inhalation data from isolated/ventilated/perfused rat lungs and optimized in the extended inhalation model using available rat data. Simulations of acute inhalation exposures in rats identified specific data needs, including systemic elimination of BaP metabolites, diffusion-limited transfer rates of BaP from lung tissue to blood and the quantitative role of macrophage-mediated and ciliated clearance mechanisms. The updated BaP model provides very good prediction of urinary 3-OH BaP concentrations and of the relative difference between measured 3-OH BaP in nonsmokers versus smokers. This PBPK model for inhaled BaP is a preliminary tool for quantifying lung BaP dosimetry in rats and humans and was used to prioritize data needs that would provide significant model refinement and robust internal dosimetry capabilities. PMID:27569524

  7. Case studies of extended model-based flood forecasting: prediction of dike strength and flood impacts

    Science.gov (United States)

    Stuparu, Dana; Bachmann, Daniel; Bogaard, Tom; Twigt, Daniel; Verkade, Jan; de Bruijn, Karin; de Leeuw, Annemargreet

    2017-04-01

    Flood forecasts, warnings and emergency response are important components in flood risk management. Most flood forecasting systems use models to translate weather predictions into forecasted discharges or water levels. However, this information is often not sufficient for real-time decisions. A sound understanding of the reliability of embankments and of flood dynamics is needed to react in a timely manner and reduce the negative effects of the flood. Where are the weak points in the dike system? When, where and how much water will flow? When and where is the greatest impact expected? Model-based flood impact forecasting tries to answer these questions by adding new dimensions to existing forecasting systems, providing forecasted information about: (a) the dike strength during the event (reliability), (b) the flood extent in case of an overflow or a dike failure (flood spread) and (c) the assets at risk (impacts). This work presents three case studies in which such a set-up is applied, and highlights their special features. Forecasting of dike strength. The first case study focuses on the forecast of dike strength in the Netherlands for the river Rhine branches Waal, Nederrijn and IJssel. A so-called reliability transformation is used to translate the predicted water levels at selected dike sections into failure probabilities during a flood event. The reliability of a dike section is defined by fragility curves - a summary of the dike strength conditional on the water level. The reliability information enhances the emergency management and inspection of embankments. Ensemble forecasting. The second case study shows the setup of a flood impact forecasting system in Dumfries, Scotland. The existing forecasting system is extended with a 2D flood spreading model in combination with the Delft-FIAT impact model. Ensemble forecasts are used to make use of the uncertainty in the precipitation forecasts, which is useful to quantify the certainty of a forecasted flood event. From global
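    A small sketch of the reliability transformation described for the first case study: a fragility curve gives the failure probability of a dike section conditional on the water level, so a forecasted level maps directly to a forecasted failure probability. The curve points below are invented for illustration.

```python
import numpy as np

# Fragility curve of one hypothetical dike section: conditional failure probability vs. water level.
fragility_levels = np.array([2.0, 3.0, 4.0, 5.0, 6.0])      # water level [m]
fragility_pf     = np.array([0.0, 0.01, 0.10, 0.45, 0.90])  # conditional P(failure)

def failure_probability(forecast_level):
    """Linear interpolation on the fragility curve (clipped at its end points)."""
    return float(np.interp(forecast_level, fragility_levels, fragility_pf))

for level in (3.5, 4.8, 6.5):
    print(f"forecasted level {level:.1f} m -> P(failure) = {failure_probability(level):.2f}")
```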

  8. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  9. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  10. Finite element based model predictive control for active vibration suppression of a one-link flexible manipulator.

    Science.gov (United States)

    Dubay, Rickey; Hassan, Marwan; Li, Chunying; Charest, Meaghan

    2014-09-01

    This paper presents a unique approach for active vibration control of a one-link flexible manipulator. The method combines a finite element model of the manipulator and an advanced model predictive controller to suppress vibration at its tip. This hybrid methodology improves significantly over the standard application of a predictive controller for vibration control. The finite element model used in place of standard modelling in the control algorithm provides a more accurate prediction of dynamic behavior, resulting in enhanced control. Closed loop control experiments were performed using the flexible manipulator, instrumented with strain gauges and piezoelectric actuators. In all instances, experimental and simulation results demonstrate that the finite element based predictive controller provides improved active vibration suppression in comparison with using a standard predictive control strategy.

  11. A Short-Term Climate Prediction Model Based on a Modular Fuzzy Neural Network

    Institute of Scientific and Technical Information of China (English)

    JIN Long; JIN Jian; YAO Cai

    2005-01-01

    In terms of the modular fuzzy neural network (MFNN) combining fuzzy c-means (FCM) clustering and a single-layer neural network, a short-term climate prediction model is developed. The modeling results show that the MFNN model for short-term climate prediction has the advantages of a simple structure, no hidden layer and stable network parameters, because it assembles the self-adaptive learning, association and fuzzy information processing capabilities of fuzzy mathematics and neural network methods. The case computational results for Guangxi flood season (JJA) rainfall show that the mean absolute error (MAE) and mean relative error (MRE) of the prediction during 1998-2002 are 68.8 mm and 9.78%, whereas for the regression method, under the conditions of the same predictors and period, they are 97.8 mm and 12.28%, respectively. Furthermore, the stability analysis of the modular model shows that the change of the prediction results of independent samples with training times, within the stably convergent interval of the model, is less than 1.3 mm. The obvious oscillation of prediction results with training times, as occurs in the common back-propagation neural network (BPNN) model, does not appear, indicating a better practical application potential of the MFNN model.

  12. Model predictive control-based scheduler for repetitive discrete event systems with capacity constraints

    Directory of Open Access Journals (Sweden)

    Hiroyuki Goto

    2013-07-01

    A model predictive control-based scheduler for a class of discrete event systems is designed and developed. We focus on repetitive, multiple-input, multiple-output, and directed acyclic graph structured systems on which capacity constraints can be imposed. The target system's behaviour is described by linear equations in max-plus algebra, referred to as a state-space representation. Assuming that the system's performance can be improved by paying additional cost, we adjust the system parameters and determine control inputs for which the reference output signals can be observed. The main contribution of this research is twofold: (1) for systems with capacity constraints, we derived an output prediction equation as a function of the adjustable variables in recursive form; (2) regarding the construct for the system's representation, we improved the structure to accomplish general operations which are essential for adjusting the system parameters. The result of a numerical simulation in a later section demonstrates the effectiveness of the developed controller.
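    For readers new to the max-plus state-space form mentioned above, the sketch below evaluates one state update x(k) = A ⊗ x(k−1) ⊕ B ⊗ u(k), where ⊕ is maximization and ⊗ is ordinary addition; the matrices and input times are invented examples, not the scheduler from the paper.

```python
import numpy as np

# Max-plus state update: x(k) = A (x) x(k-1) (+) B (x) u(k), with (+) = max and (x) = +.
NEG_INF = -np.inf  # the max-plus "zero" element

def mp_matvec(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (A_ij + x_j)."""
    return np.max(A + x[np.newaxis, :], axis=1)

A = np.array([[5.0, NEG_INF],
              [3.0, 2.0]])          # processing/transfer times between events (invented)
B = np.array([[0.0], [NEG_INF]])    # how the external input feeds event 1

x = np.array([0.0, 0.0])            # previous event occurrence times
u = np.array([4.0])                 # release time of the next job
x_next = np.maximum(mp_matvec(A, x), mp_matvec(B, u))
print("next event times:", x_next)
```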

  13. A Novel Adaptive Conditional Probability-Based Predicting Model for User’s Personality Traits

    Directory of Open Access Journals (Sweden)

    Mengmeng Wang

    2015-01-01

    With the pervasive increase in social media use, the explosion of user-generated data provides a potentially very rich source of information, which plays an important role in helping online researchers understand users' behaviors deeply. Because users' personality traits are the driving force of their behaviors, in this paper, along with social network features, we first extract linguistic features, emotional statistical features, and topic features from users' Facebook status updates, and then quantify the importance of features via the Kendall correlation coefficient. Then, on the basis of weighted features and dynamically updated thresholds of personality traits, we deploy a novel adaptive conditional probability-based prediction model, which considers prior knowledge of correlations between personality traits, to predict a user's Big Five personality traits. In the experimental work, we explore the existence of correlations between users' personality traits, which provides better theoretical support for our proposed method. Moreover, on the same Facebook dataset, compared to other methods, our method achieves an F1-measure of 80.6% when taking into account correlations between personality traits, an impressive improvement of 5.8% over other approaches.

  14. Predictive modelling-based design and experiments for synthesis and spinning of bioinspired silk fibres

    Science.gov (United States)

    Lin, Shangchao; Ryu, Seunghwa; Tokareva, Olena; Gronau, Greta; Jacobsen, Matthew M.; Huang, Wenwen; Rizzo, Daniel J.; Li, David; Staii, Cristian; Pugno, Nicola M.; Wong, Joyce Y.; Kaplan, David L.; Buehler, Markus J.

    2015-05-01

    Scalable computational modelling tools are required to guide the rational design of complex hierarchical materials with predictable functions. Here, we utilize mesoscopic modelling, integrated with genetic block copolymer synthesis and bioinspired spinning process, to demonstrate de novo materials design that incorporates chemistry, processing and material characterization. We find that intermediate hydrophobic/hydrophilic block ratios observed in natural spider silks and longer chain lengths lead to outstanding silk fibre formation. This design by nature is based on the optimal combination of protein solubility, self-assembled aggregate size and polymer network topology. The original homogeneous network structure becomes heterogeneous after spinning, enhancing the anisotropic network connectivity along the shear flow direction. Extending beyond the classical polymer theory, with insights from the percolation network model, we illustrate the direct proportionality between network conductance and fibre Young's modulus. This integrated approach provides a general path towards de novo functional network materials with enhanced mechanical properties and beyond (optical, electrical or thermal) as we have experimentally verified.

  15. Predictive modelling-based design and experiments for synthesis and spinning of bioinspired silk fibres.

    Science.gov (United States)

    Lin, Shangchao; Ryu, Seunghwa; Tokareva, Olena; Gronau, Greta; Jacobsen, Matthew M; Huang, Wenwen; Rizzo, Daniel J; Li, David; Staii, Cristian; Pugno, Nicola M; Wong, Joyce Y; Kaplan, David L; Buehler, Markus J

    2015-05-28

    Scalable computational modelling tools are required to guide the rational design of complex hierarchical materials with predictable functions. Here, we utilize mesoscopic modelling, integrated with genetic block copolymer synthesis and bioinspired spinning process, to demonstrate de novo materials design that incorporates chemistry, processing and material characterization. We find that intermediate hydrophobic/hydrophilic block ratios observed in natural spider silks and longer chain lengths lead to outstanding silk fibre formation. This design by nature is based on the optimal combination of protein solubility, self-assembled aggregate size and polymer network topology. The original homogeneous network structure becomes heterogeneous after spinning, enhancing the anisotropic network connectivity along the shear flow direction. Extending beyond the classical polymer theory, with insights from the percolation network model, we illustrate the direct proportionality between network conductance and fibre Young's modulus. This integrated approach provides a general path towards de novo functional network materials with enhanced mechanical properties and beyond (optical, electrical or thermal) as we have experimentally verified.

  16. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com; Gupta, Shikha

    2014-03-15

    Ensemble learning based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish a quantitative structure–toxicity relationship (QSTR) for the prediction of the toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using the Tanimoto similarity index. DTB and DTF models, supplemented with stochastic gradient boosting and bagging algorithms, were constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was paid to the prediction ability and robustness of the models, investigated both in external and 10-fold cross-validation processes. On the complete data, the optimal DTB and DTF models rendered accuracies of 98.90% and 98.83% in two-category and 98.14% and 98.14% in four-category toxicity classifications. Both models further yielded classification accuracies of 100% on external toxicity data of T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R²) of 0.945 and 0.944 between the measured and predicted toxicities, with mean squared errors (MSEs) of 0.059 and 0.064 on the complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R² and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest the wide applicability of the inter-species models in predicting the toxicity of new chemicals for regulatory purposes. These approaches provide a useful strategy and robust tools for screening the ecotoxicological risk or environmental hazard potential of chemicals. Graphical abstract: importance of input variables in the DTB and DTF classification models for (a) two-category and (b) four-category toxicity intervals in T. pyriformis data.
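    A hedged sketch of the two ensemble ideas used here, boosted decision trees (cf. DTB) and bagged decision trees (cf. DTF), evaluated with 10-fold cross-validation; scikit-learn is assumed, and the descriptors and toxicity classes are random placeholders rather than the 1450-chemical dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

# Surrogate data standing in for molecular descriptors and two-category toxicity labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))                      # eight molecular descriptors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)      # surrogate two-category toxicity

boosted = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1)  # boosted trees
bagged = BaggingClassifier(n_estimators=200)       # bagged decision trees (default base learner)

for name, model in [("boosted trees", boosted), ("bagged trees", bagged)]:
    acc = cross_val_score(model, X, y, cv=10).mean()   # 10-fold cross-validation
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```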

  17. Model-based Pedestrian Trajectory Prediction using Environmental Sensor for Mobile Robots Navigation

    Directory of Open Access Journals (Sweden)

    Haruka Tonoki

    2017-02-01

    Safety is of the utmost importance for mobile robots that coexist with humans. Many studies investigate obstacle detection and collision avoidance by predicting obstacles' trajectories several seconds into the future using mounted sensors such as cameras and laser range finders (LRF) for the safe behavior control of robots. In environments such as road crossings, where blind areas occur because of visual barriers like walls, obstacle detection might be delayed and collisions might be difficult to avoid. Using environmental sensors to detect obstacles is effective in such environments. At a crossing there are several passages a pedestrian might take, and it is difficult to describe movement toward each passage with a single movement model. We therefore hypothesize that a more effective way to predict pedestrian movement is to predict which passage the pedestrian will take and to estimate the trajectory toward that passage. We acquire pedestrian trajectory data using an environmental LRF with an extended Kalman filter (EKF) and construct pedestrian movement models using vector auto-regressive (VAR) models, in which the pedestrian state consists of position, speed and direction. We then test the validity of the constructed pedestrian movement models using experimental data. We narrow down the selection of a pedestrian movement model by comparing, for each path, the prediction error between the pedestrian state estimated with the EKF and the state predicted by each movement model, and we predict the trajectory using the selected movement model. Finally, we confirm that an appropriate path model that the pedestrian can actually move through is selected before the crossing area and that only the appropriate model is selected near the crossing area.
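    A minimal sketch of the VAR-based prediction step: iterate a first-order vector auto-regressive model of the pedestrian state to obtain a short trajectory forecast. The coefficient matrix, time step and initial state are invented, and the EKF filtering stage described above is omitted.

```python
import numpy as np

# VAR(1) model of a pedestrian state [x, y, speed, heading]: s_{k+1} = c + A @ s_k.
A = np.array([[1.0, 0.0, 0.1,  0.0],   # x  <- x + dt*speed, with dt = 0.1 and heading ~ 0
              [0.0, 1.0, 0.0,  0.0],   # y
              [0.0, 0.0, 0.95, 0.0],   # speed decays slightly
              [0.0, 0.0, 0.0,  1.0]])  # heading persists
c = np.zeros(4)                         # intercept term of the VAR model

def predict(state, steps=10):
    """Iterate the VAR(1) model for a short prediction horizon."""
    trajectory = [state]
    for _ in range(steps):
        trajectory.append(c + A @ trajectory[-1])
    return np.array(trajectory)

traj = predict(np.array([0.0, 0.0, 1.2, 0.0]))
print("predicted positions (x, y) over 10 steps:\n", traj[:, :2].round(2))
```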

  18. Past, present and prospect of an Artificial Intelligence (AI) based model for sediment transport prediction

    Science.gov (United States)

    Afan, Haitham Abdulmohsin; El-shafie, Ahmed; Mohtar, Wan Hanna Melini Wan; Yaseen, Zaher Mundher

    2016-10-01

    An accurate model for sediment prediction is a priority for all hydrological researchers. Many conventional methods have shown an inability to achieve an accurate prediction of suspended sediment. These methods are unable to capture the behaviour of sediment transport in rivers due to the complexity, noise, non-stationarity, and dynamism of the sediment pattern. In the past two decades, Artificial Intelligence (AI) and computational approaches have become a remarkable tool for developing accurate models. These approaches are considered a powerful tool for solving any non-linear model, as they can deal easily with large amounts of data and sophisticated models. This paper is a review of all AI approaches that have been applied in sediment modelling. The current research focuses on the development of AI applications in sediment transport. In addition, the review identifies major challenges and opportunities for prospective research. Throughout the literature, complementary models have proved superior to classical modelling.

  19. Evaluation of atmospheric dust prediction models using ground-based observations

    Science.gov (United States)

    Terradellas, Enric; María Baldasano, José; Cuevas, Emilio; Basart, Sara; Huneeus, Nicolás; Camino, Carlos; Dundar, Cinhan; Benincasa, Francesco

    2013-04-01

    An important step in the numerical prediction of mineral dust is model evaluation, aimed at assessing the performance of a model in forecasting the atmospheric dust content and at leading to new directions in model development and improvement. The first problem in addressing the evaluation is the scarcity of ground-based routine observations intended for dust monitoring. An alternative option would be the use of satellite products. They have the advantage of large spatial coverage and regular availability. However, they have numerous drawbacks that make quantitative retrievals of aerosol-related variables difficult and imprecise. This work presents the use of different ground-based observing systems for the evaluation of dust models in the Regional Center for Northern Africa, Middle East and Europe of the World Meteorological Organization (WMO) Sand and Dust Storm Warning Advisory and Assessment System (SDS-WAS). The dust optical depth at 550 nm forecast by different models is regularly compared with the AERONET measurements of Aerosol Optical Depth (AOD) for 40 selected stations. Photometric measurements are a powerful tool for remote sensing of the atmosphere, allowing retrieval of aerosol properties such as AOD. This variable integrates the contribution of different aerosol types, but may be complemented with spectral information that enables hypotheses about the nature of the particles. The comparison is restricted to cases with low Ångström exponent values in order to ensure that coarse mineral dust is the dominant aerosol type. In addition to the column dust load, it is important to evaluate dust surface concentration and dust vertical profiles. Air quality monitoring stations are the main source of data for the evaluation of surface concentration; however, they are concentrated in populated and industrialized areas around the Mediterranean. In the present contribution, results of different models are compared with observations of PM10 from the Turkish air quality network for

  20. Long-Term Sunspot Number Prediction based on EMD Analysis and AR Model

    Institute of Scientific and Technical Information of China (English)

    Tong Xu; Jian Wu; Zhen-Sen Wu; Qiang Li

    2008-01-01

    The Empirical Mode Decomposition (EMD) and Auto-Regressive (AR) model are applied to the long-term prediction of sunspot numbers. With the sample data of sunspot numbers from 1848 to 1992, the method is evaluated by comparing the prediction with the measured data of solar cycle 23: different time-scale components are obtained by the EMD method and multi-step predicted values are combined to reconstruct the sunspot number time series. The result is remarkably good in comparison to the predictions made by the solar dynamo and precursor approaches for cycle 23. Sunspot numbers of the coming solar cycle 24 are obtained with the data from 1848 to 2007; the maximum amplitude of the next solar cycle is predicted to be about 112 in 2011-2012.
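    A sketch of the EMD + AR scheme on a synthetic series: decompose into intrinsic mode functions, fit an auto-regressive model to each component and sum the component forecasts. The PyEMD and statsmodels packages are assumed to be available, and the series, lag order and horizon are illustrative choices, not those of the paper.

```python
import numpy as np
from PyEMD import EMD                      # assumes the PyEMD (EMD-signal) package
from statsmodels.tsa.ar_model import AutoReg

# Synthetic "sunspot-like" series standing in for the 1848-2007 record.
t = np.arange(400)
series = 60 + 50 * np.sin(2 * np.pi * t / 128) + np.random.default_rng(0).normal(0, 5, t.size)

imfs = EMD().emd(series)                   # rows are IMFs plus the residual trend
horizon, forecast = 24, np.zeros(24)
for imf in imfs:
    model = AutoReg(imf, lags=12).fit()    # AR model for each time-scale component
    forecast += model.predict(start=len(imf), end=len(imf) + horizon - 1)

print("combined multi-step forecast (first 5 values):", forecast[:5].round(1))
```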

  1. Bending Angle Prediction Model Based on BPNN-Spline in Air Bending Springback Process

    Directory of Open Access Journals (Sweden)

    Zhefeng Guo

    2017-01-01

    In order to rapidly and accurately predict the springback bending angle in the V-die air bending process, a springback bending angle prediction model based on the combination of an error back-propagation neural network and a spline function (BPNN-Spline) is presented in this study. An orthogonal experimental sample set for training the BPNN-Spline is obtained by finite element simulation. Through the analysis of the network structure, the BPNN-Spline black-box function for bending angle prediction is established, and the advantage of the BPNN-Spline is discussed in comparison with a traditional BPNN. Application examples show close agreement with simulated and experimental results, which means that the BPNN-Spline model in this study has higher prediction accuracy and better applicability. Therefore, it could be adopted in a numerical control bending machine system.

  2. Predicting high-cost pediatric patients: derivation and validation of a population-based model.

    Science.gov (United States)

    Leininger, Lindsey J; Saloner, Brendan; Wherry, Laura R

    2015-08-01

    Health care administrators often lack feasible methods to prospectively identify new pediatric patients with high health care needs, precluding the ability to proactively target appropriate population health management programs to these children. To develop and validate a predictive model identifying high-cost pediatric patients using parent-reported health (PRH) measures that can be easily collected in clinical and administrative settings. Retrospective cohort study using 2-year panel data from the 2001 to 2011 rounds of the Medical Expenditure Panel Survey. A total of 24,163 children aged 5-17 with family incomes below 400% of the federal poverty line were included in this study. Predictive performance, including the c-statistic, sensitivity, specificity, and predictive values, of multivariate logistic regression models predicting top-decile health care expenditures over a 1-year period. Seven independent domains of PRH measures were tested for predictive capacity relative to basic sociodemographic information: the Children with Special Health Care Needs (CSHCN) Screener; subjectively rated health status; prior year health care utilization; behavioral problems; asthma diagnosis; access to health care; and parental health status and access to care. The CSHCN screener and prior year utilization domains exhibited the highest incremental predictive gains over the baseline model. A model including sociodemographic characteristics, the CSHCN screener, and prior year utilization had a c-statistic of 0.73 (95% confidence interval, 0.70-0.74), surpassing the commonly used threshold to establish sufficient predictive capacity (c-statistic>0.70). The proposed prediction tool, comprising a simple series of PRH measures, accurately stratifies pediatric populations by their risk of incurring high health care costs.

  3. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve

    Science.gov (United States)

    Li, Yi; Chen, Yuren

    2016-01-01

    To make driving assistance system more humanized, this study focused on the prediction and assistance of drivers’ perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers’ vision. A multinomial log-linear model was established to predict perception-response time with traffic/road environment information, driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers’ perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements did have important influence on drivers’ perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers’ perception-response time. PMID:28042851

  4. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve

    Directory of Open Access Journals (Sweden)

    Yi Li

    2016-12-01

    To make driving assistance system more humanized, this study focused on the prediction and assistance of drivers’ perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers’ vision. A multinomial log-linear model was established to predict perception-response time with traffic/road environment information, driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers’ perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements did have important influence on drivers’ perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers’ perception-response time.

  5. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve.

    Science.gov (United States)

    Li, Yi; Chen, Yuren

    2016-12-30

    To make driving assistance system more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time with traffic/road environment information, driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements did have important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.

  6. Model-Based Predictive Control Scheme for Cost Optimization and Balancing Services for Supermarket Refrigeration Systems

    DEFF Research Database (Denmark)

    Weerts, Hermanus H. M.; Shafiei, Seyed Ehsan; Stoustrup, Jakob

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate the regulatory power services as well as energy cost optimization of such systems in the smart grid. Nonlinear dynamics present in large-scale refrigeration plants challenge the predictive...... control design. It is however shown that taking into account the knowledge of different time scales in the dynamical subsystems makes possible a linear formulation of a centralized predictive controller. A realistic scenario of regulatory power services in the smart grid is considered and formulated......

  7. To Set Up a Logistic Regression Prediction Model for Hepatotoxicity of Chinese Herbal Medicines Based on Traditional Chinese Medicine Theory

    Science.gov (United States)

    Liu, Hongjie; Li, Tianhao; Zhan, Sha; Pan, Meilan; Ma, Zhiguo; Li, Chenghua

    2016-01-01

    Aims. To establish a logistic regression (LR) prediction model for hepatotoxicity of Chinese herbal medicines (HMs) based on traditional Chinese medicine (TCM) theory and to provide a statistical basis for predicting hepatotoxicity of HMs. Methods. The correlations of hepatotoxic and nonhepatotoxic Chinese HMs with the four properties, the five flavors, and channel tropism were analyzed with the chi-square test for two-way unordered categorical data. An LR prediction model was established and the accuracy of the prediction by this model was evaluated. Results. The hepatotoxic and nonhepatotoxic Chinese HMs were related with the four properties (p < 0.05) and the five flavors (p < 0.05), but not with channel tropism (p > 0.05). There were in total 12 variables from the four properties and five flavors for the LR. Four variables, warm and neutral of the four properties and pungent and salty of the five flavors, were selected to establish the LR prediction model, with the cutoff value being 0.204. Conclusions. Warm and neutral of the four properties and pungent and salty of the five flavors were the variables affecting hepatotoxicity. Based on these results, the established LR prediction model has some predictive power for hepatotoxicity of Chinese HMs. PMID:27656240

  8. To Set Up a Logistic Regression Prediction Model for Hepatotoxicity of Chinese Herbal Medicines Based on Traditional Chinese Medicine Theory.

    Science.gov (United States)

    Liu, Hongjie; Li, Tianhao; Chen, Lingxiu; Zhan, Sha; Pan, Meilan; Ma, Zhiguo; Li, Chenghua; Zhang, Zhe

    2016-01-01

    Aims. To establish a logistic regression (LR) prediction model for hepatotoxicity of Chinese herbal medicines (HMs) based on traditional Chinese medicine (TCM) theory and to provide a statistical basis for predicting hepatotoxicity of HMs. Methods. The correlations of hepatotoxic and nonhepatotoxic Chinese HMs with the four properties, the five flavors, and channel tropism were analyzed with the chi-square test for two-way unordered categorical data. An LR prediction model was established and the accuracy of the prediction by this model was evaluated. Results. The hepatotoxic and nonhepatotoxic Chinese HMs were related with the four properties (p < 0.05) and the five flavors (p < 0.05), but not with channel tropism (p > 0.05). There were in total 12 variables from the four properties and five flavors for the LR. Four variables, warm and neutral of the four properties and pungent and salty of the five flavors, were selected to establish the LR prediction model, with the cutoff value being 0.204. Conclusions. Warm and neutral of the four properties and pungent and salty of the five flavors were the variables affecting hepatotoxicity. Based on these results, the established LR prediction model has some predictive power for hepatotoxicity of Chinese HMs.

  9. Research on the Concentration Prediction of Nitrogen in Red Tide Based on an Optimal Grey Verhulst Model

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2016-01-01

    In order to reduce the harm of red tide to the marine ecological balance, marine fisheries, aquatic resources, and human health, an optimal Grey Verhulst model is proposed to predict the concentration of nitrogen in seawater, which is the key factor in red tide. The Grey Verhulst model is established according to the existing concentration data series of nitrogen in seawater and is then optimized with respect to the background value and the time response formula to predict future changes in the nitrogen concentration in seawater. Finally, the accuracy of the model is tested by a posterior test. The results show that the prediction values based on the optimal Grey Verhulst model are in good agreement with the measured nitrogen concentration in seawater, which proves the effectiveness of the optimal Grey Verhulst model in the forecasting of red tide.
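    The sketch below fits a standard (non-optimized) Grey Verhulst model to an invented concentration series and extrapolates it; the paper's contribution is precisely the optimization of the background value and time-response formula, which is not reproduced here.

```python
import numpy as np

# Standard Grey Verhulst model: x0(k) + a*z1(k) = b*z1(k)^2, fitted by least squares.
x0 = np.array([0.8, 1.1, 1.6, 2.3, 3.0, 3.5, 3.8])   # invented raw concentration series
x1 = np.cumsum(x0)                                    # 1-AGO accumulated series
z1 = 0.5 * (x1[1:] + x1[:-1])                         # background values

B = np.column_stack([-z1, z1 ** 2])
Y = x0[1:]
a, b = np.linalg.lstsq(B, Y, rcond=None)[0]           # estimated development/driving coefficients

def x1_hat(k):
    """Time-response function of the Grey Verhulst model (k = 0, 1, 2, ...)."""
    return a * x0[0] / (b * x0[0] + (a - b * x0[0]) * np.exp(a * k))

# Restore predictions of the raw series by inverse accumulation (first differences).
x1_pred = np.array([x1_hat(k) for k in range(len(x0) + 3)])
x0_pred = np.diff(x1_pred, prepend=0.0)
print("predicted next three values:", x0_pred[len(x0):].round(2))
```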

  10. Self-adaptive prediction of cloud resource demands using ensemble model and subtractive-fuzzy clustering based fuzzy neural network.

    Science.gov (United States)

    Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong

    2015-01-01

    In an IaaS (infrastructure as a service) cloud environment, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurate prediction of resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. Then the architecture of the prediction model is constructed. We adopt several base predictors to compose the ensemble model. Then the structure and learning algorithm of the fuzzy neural network are studied. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, we adopt different criteria to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands.

  11. Self-Adaptive Prediction of Cloud Resource Demands Using Ensemble Model and Subtractive-Fuzzy Clustering Based Fuzzy Neural Network

    Science.gov (United States)

    Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong

    2015-01-01

    In an IaaS (infrastructure as a service) cloud environment, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurate prediction of resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. Then the architecture of the prediction model is constructed. We adopt several base predictors to compose the ensemble model. Then the structure and learning algorithm of the fuzzy neural network are studied. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, we adopt different criteria to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands. PMID:25691896

  12. Prediction of Combine Economic Life Based on Repair and Maintenance Costs Model

    Directory of Open Access Journals (Sweden)

    A Rohani

    2014-09-01

    Farm machinery managers often need to make complex economic decisions on machinery replacement. Repair and maintenance costs can have significant impacts on this economic decision. The farm manager must be able to predict farm machinery repair and maintenance costs. This study aimed to identify a regression model that can adequately represent the repair and maintenance costs in terms of machine age in cumulative hours of use. The regression model has the ability to predict the repair and maintenance costs for longer time periods and can therefore be used to estimate the economic life. The study was conducted using field data collected from 11 John Deere 955 combine harvesters used in several western provinces of Iran. It was found that a power model performs better for the prediction of combine repair and maintenance costs. The results showed that the optimum replacement age of the John Deere 955 combine was 54300 cumulative hours.
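    A minimal sketch of fitting the power model, cumulative repair cost = a · hours^b, by linear regression in log-log space; the hour/cost pairs and the resulting coefficients are invented, not the John Deere 955 field data.

```python
import numpy as np

# Invented cumulative repair-and-maintenance cost data (cost as % of purchase price).
hours = np.array([2000, 5000, 10000, 20000, 35000, 50000], dtype=float)
costs = np.array([1.2, 4.0, 11.0, 30.0, 70.0, 120.0])

# Fit log(cost) = log(a) + b*log(hours), i.e. the power model cost = a * hours**b.
b, log_a = np.polyfit(np.log(hours), np.log(costs), deg=1)
a = np.exp(log_a)
print(f"repair cost model: cost = {a:.3g} * hours^{b:.2f}")
print("predicted cost at 54,300 h:", round(a * 54_300 ** b, 1))
```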

  13. Genome-wide prediction, display and refinement of binding sites with information theory-based models

    Directory of Open Access Journals (Sweden)

    Leeder J Steven

    2003-09-01

    Background: We present Delila-genome, a software system for the identification, visualization and analysis of protein binding sites in complete genome sequences. Binding sites are predicted by scanning genomic sequences with information theory-based (or user-defined) weight matrices. Matrices are refined by adding experimentally defined binding sites to published binding sites. Delila-genome was used to examine the accuracy of individual information contents of binding sites detected with refined matrices as a measure of the strengths of the corresponding protein-nucleic acid interactions. The software can then be used to predict novel sites by rescanning the genome with the refined matrices. Results: Parameters for genome scans are entered using a Java-based GUI interface and backend scripts in Perl. Multi-processor CPU load-sharing minimized the average response time for scans of different chromosomes. Scans of human genome assemblies required 4–6 hours for transcription factor binding sites and 10–19 hours for splice sites, respectively, on 24- and 3-node Mosix and Beowulf clusters. Individual binding sites are displayed either as high-resolution sequence walkers or in low-resolution custom tracks in the UCSC genome browser. For large datasets, we applied a data reduction strategy that limited displays of binding sites exceeding a threshold information content to specific chromosomal regions within or adjacent to genes. An HTML document is produced listing binding sites ranked by binding site strength or chromosomal location and hyperlinked to the UCSC custom track, other annotation databases and binding site sequences. Post-genome scan tools parse binding site annotations of selected chromosome intervals and compare the results of genome scans using different weight matrices. Comparisons of multiple genome scans can display binding sites that are unique to each scan and identify sites with significantly altered binding strengths
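    A toy sketch of scoring candidate sites with an individual-information weight matrix in the spirit of the Delila tools: per-position weights Riw(b, l) = 2 + log2 f(b, l) are summed over a site. The frequency matrix is invented and the small-sample correction used by the real software is omitted.

```python
import numpy as np

# Toy base-frequency matrix for a 4-position DNA site (columns = positions, rows = A, C, G, T).
bases = "ACGT"
freqs = np.array([
    [0.70, 0.05, 0.10, 0.25],
    [0.10, 0.05, 0.70, 0.25],
    [0.10, 0.85, 0.10, 0.25],
    [0.10, 0.05, 0.10, 0.25],
])
riw = 2.0 + np.log2(freqs)            # individual information weights [bits], no correction

def site_information(seq):
    """Sum the per-position weights for a candidate site; higher = stronger site."""
    return sum(riw[bases.index(b), l] for l, b in enumerate(seq.upper()))

for site in ("AGCA", "TTTT"):
    print(site, "->", round(site_information(site), 2), "bits")
```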

  14. 3D structure prediction of lignolytic enzymes lignin peroxidase and manganese peroxidase based on homology modelling

    Directory of Open Access Journals (Sweden)

    SWAPNIL K. KALE

    2016-04-01

    Lignolytic enzymes have great biotechnological value in biopulping, biobleaching, and bioremediation. Manganese peroxidase (EC 1.11.1.13) and lignin peroxidase (EC 1.11.1.14) are extracellular, heme-containing peroxidases that catalyze the H2O2-dependent oxidation of lignin. Because of their ability to catalyze the oxidation of a wide range of organic compounds and even some inorganic compounds, they have gained tremendous industrial importance. In this study, the 3D structures of lignin and manganese peroxidase were predicted on the basis of homology modeling using the Swiss PDB workspace. The physicochemical properties, such as molecular weight, isoelectric point, grand average of hydropathy, instability index and aliphatic index, of the target enzymes were computed using ProtParam. The predicted secondary structure of MnP has 18 helices and 6 strands, while LiP has 20 helices and 4 strands. The generated 3D structures were visualized in PyMOL. The generated models for MnP and LiP have QMEAN Z-scores of 0.01 and -0.71, respectively. The predicted models were validated through Ramachandran plots, which indicated that 96.1% and 95.5% of the residues are in the most favored regions for MnP and LiP, respectively. The quality of the predicted models was assessed and confirmed by VERIFY 3D, PROCHECK and ERRAT. The modeled structures of MnP and LiP were submitted to the Protein Model Database.

  15. Nonlinear quantitative radiation sensitivity prediction model based on NCI-60 cancer cell lines.

    Science.gov (United States)

    Zhang, Chunying; Girard, Luc; Das, Amit; Chen, Sun; Zheng, Guangqiang; Song, Kai

    2014-01-01

    We proposed a nonlinear model to perform a novel quantitative radiation sensitivity prediction. We used the NCI-60 panel, which consists of nine different cancer types, as the platform to train our model. Important radiation therapy (RT) related genes were selected by significance analysis of microarrays (SAM). Orthogonal latent variables (LVs) were then extracted by the partial least squares (PLS) method as the new compressive input variables. Finally, support vector machine (SVM) regression model was trained with these LVs to predict the SF2 (the surviving fraction of cells after a radiation dose of 2 Gy γ-ray) values of the cell lines. Comparison with the published results showed significant improvement of the new method in various ways: (a) reducing the root mean square error (RMSE) of the radiation sensitivity prediction model from 0.20 to 0.011; and (b) improving prediction accuracy from 62% to 91%. To test the predictive performance of the gene signature, three different types of cancer patient datasets were used. Survival analysis across these different types of cancer patients strongly confirmed the clinical potential utility of the signature genes as a general prognosis platform. The gene regulatory network analysis identified six hub genes that are involved in canonical cancer pathways.

  16. Nonlinear Quantitative Radiation Sensitivity Prediction Model Based on NCI-60 Cancer Cell Lines

    Directory of Open Access Journals (Sweden)

    Chunying Zhang

    2014-01-01

    We proposed a nonlinear model to perform a novel quantitative radiation sensitivity prediction. We used the NCI-60 panel, which consists of nine different cancer types, as the platform to train our model. Important radiation therapy (RT) related genes were selected by significance analysis of microarrays (SAM). Orthogonal latent variables (LVs) were then extracted by the partial least squares (PLS) method as the new compressive input variables. Finally, a support vector machine (SVM) regression model was trained with these LVs to predict the SF2 (the surviving fraction of cells after a radiation dose of 2 Gy γ-ray) values of the cell lines. Comparison with the published results showed significant improvement of the new method in various ways: (a) reducing the root mean square error (RMSE) of the radiation sensitivity prediction model from 0.20 to 0.011; and (b) improving prediction accuracy from 62% to 91%. To test the predictive performance of the gene signature, three different types of cancer patient datasets were used. Survival analysis across these different types of cancer patients strongly confirmed the clinical potential utility of the signature genes as a general prognosis platform. The gene regulatory network analysis identified six hub genes that are involved in canonical cancer pathways.

  17. Predictions of fundamental frequency changes during phonation based on a biomechanical model of the vocal fold lamina propria

    OpenAIRE

    Zhang, Kai; Siegmund, Thomas; Chan, Roger W.; Fu, Min

    2008-01-01

    This study examines the local and global changes of fundamental frequency (F0) during phonation and proposes a biomechanical model for the prediction of F0 contours based on the mechanics of vibration of the vocal fold lamina propria. The biomechanical model integrates the constitutive description of the tissue mechanical response with a structural model of beam vibration. The constitutive model accounts for the nonlinear and time-dependent response of the vocal fold cover and the vocal ligament. The...

  18. Geoelectrical parameter-based multivariate regression borehole yield model for predicting aquifer yield in managing groundwater resource sustainability

    Directory of Open Access Journals (Sweden)

    Kehinde Anthony Mogaji

    2016-07-01

    Full Text Available This study developed a GIS-based multivariate regression (MVR) yield rate prediction model of groundwater resource sustainability in the hard-rock geology terrain of southwestern Nigeria. This model can economically manage the aquifer yield rate potential predictions that are often overlooked in groundwater resources development. The proposed model relates the borehole yield rate inventory of the area to geoelectrically derived parameters. Three sets of borehole yield rate conditioning geoelectrically derived parameters—aquifer unit resistivity (ρ), aquifer unit thickness (D) and coefficient of anisotropy (λ)—were determined from the acquired and interpreted geophysical data. The extracted borehole yield rate values and the geoelectrically derived parameter values were regressed to develop the MVR relationship model by applying linear regression and GIS techniques. The sensitivity analysis results of the MVR model evaluated at P ⩽ 0.05 for the predictors ρ, D and λ provided values of 2.68 × 10−05, 2 × 10−02 and 2.09 × 10−06, respectively. The accuracy and predictive power tests conducted on the MVR model using the Theil inequality coefficient measurement approach, coupled with the sensitivity analysis results, confirmed the model's yield rate estimation and prediction capability. The MVR borehole yield prediction model estimates were processed in a GIS environment to model an aquifer yield potential prediction map of the area. The information on the prediction map can serve as a scientific basis for predicting aquifer yield potential rates relevant in groundwater resources sustainability management. The developed MVR borehole yield rate prediction model provides a good alternative to other methods used for this purpose.
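
    A minimal sketch of the kind of multivariate regression relationship described above: borehole yield regressed on aquifer unit resistivity, aquifer unit thickness and coefficient of anisotropy, with predictor p-values as a simple sensitivity check. All numbers are illustrative assumptions, not values from the study.

```python
# Ordinary least squares fit of yield rate on three geoelectrically derived predictors.
import numpy as np
import statsmodels.api as sm

rho = np.array([120.0, 250.0, 90.0, 310.0, 180.0, 75.0])     # aquifer unit resistivity (ohm-m)
D   = np.array([14.0, 22.0, 9.0, 30.0, 18.0, 8.0])           # aquifer unit thickness (m)
lam = np.array([1.10, 1.25, 1.05, 1.40, 1.18, 1.02])          # coefficient of anisotropy
yield_rate = np.array([1.2, 2.4, 0.8, 3.1, 1.9, 0.6])         # borehole yield (L/s, synthetic)

X = sm.add_constant(np.column_stack([rho, D, lam]))
fit = sm.OLS(yield_rate, X).fit()
print(fit.params)     # intercept and coefficients of rho, D, lam
print(fit.pvalues)    # predictor significance, cf. the P <= 0.05 screening
```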

  19. Scenario-based, closed-loop model predictive control with application to emergency vehicle scheduling

    Science.gov (United States)

    Goodwin, Graham. C.; Medioli, Adrian. M.

    2013-08-01

    Model predictive control has been a major success story in process control. More recently, the methodology has been used in other contexts, including automotive engine control, power electronics and telecommunications. Most applications focus on set-point tracking and use single-sequence optimisation. Here we consider an alternative class of problems, motivated by the scheduling of emergency vehicles, in which disturbances are the dominant feature. We develop a novel closed-loop model predictive control strategy aimed at this class of problems. We motivate, and illustrate, the ideas via the problem of fluid deployment of ambulance resources.

  20. Predictive model of nicotine dependence based on mental health indicators and self-concept

    Directory of Open Access Journals (Sweden)

    Hamid Kazemi Zahrani

    2014-12-01

    Full Text Available Background: The purpose of this research was to investigate the predictive power of anxiety, depression, stress and self-concept dimensions (mental ability, job efficiency, physical attractiveness, social skills, and deficiencies and merits) as predictors of nicotine dependency among university students in Isfahan. Methods: In this correlational study, 110 male nicotine-dependent students at Isfahan University were selected by convenience sampling. All samples were assessed by the Depression Anxiety Stress Scale (DASS), a self-concept test and the Nicotine Dependence Syndrome Scale. Data were analyzed by Pearson correlation and stepwise regression. Results: The results showed that anxiety had the highest strength to predict nicotine dependence. In addition, self-concept and its dimensions predicted only 12% of the variance in nicotine dependence, which was not significant. Conclusion: Emotional processing variables involved in mental health play a more important role than identity variables, such as the different dimensions of self-concept, in a model predicting students' dependence on nicotine.

  1. Multiscale modeling of interwoven Kevlar fibers based on random walk to predict yarn structural response

    Science.gov (United States)

    Recchia, Stephen

    Kevlar is the most common high-end plastic filament yarn used in body armor, tire reinforcement, and wear resistant applications. Kevlar is a trade name for an aramid fiber. These are fibers in which the chain molecules are highly oriented along the fiber axis, so the strength of the chemical bond can be exploited. The bulk material is extruded into filaments that are bound together into yarn, which may be corded with other materials as in car tires, woven into a fabric, or layered in an epoxy to make composite panels. The high tensile strength to low weight ratio makes this material ideal for designs that decrease weight and inertia, such as automobile tires, body panels, and body armor. For designs that use Kevlar, increasing the strength, or tenacity, to weight ratio would improve performance or reduce cost of all products that are based on this material. This thesis computationally and experimentally investigates the tenacity and stiffness of Kevlar yarns with varying twist ratios. The test boundary conditions were replicated with a geometrically accurate finite element model, and a customized code that can reproduce tortuous filaments in a yarn was developed. The solid model geometry capturing filament tortuosity was implemented through a random walk method of axial geometry creation. A finite element analysis successfully recreated the yarn strength and stiffness dependency observed during the tests. The physics applied in the finite element model was reproduced in an analytical equation that was able to predict the failure strength and strain dependency of twist ratio. The analytical solution can be employed to optimize yarn design for high strength applications.
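
    A minimal sketch of generating tortuous filament centerlines by a random walk along the yarn axis, in the spirit of the axial geometry creation mentioned above. Step size, lateral jitter and filament count are illustrative assumptions.

```python
# Build filament centerlines: a steady axial march in z with a small lateral random walk.
import numpy as np

def filament_centerline(n_steps=200, dz=1.0e-3, jitter=5.0e-5, seed=0):
    """Return an (n_steps, 3) array of points approximating one tortuous filament."""
    rng = np.random.default_rng(seed)
    lateral = np.cumsum(rng.normal(0.0, jitter, size=(n_steps, 2)), axis=0)  # x, y walk
    z = np.arange(n_steps) * dz                                              # axial coordinate
    return np.column_stack([lateral, z])

# e.g. generate 50 filaments; twisting about the yarn axis would be applied afterwards
filaments = [filament_centerline(seed=i) for i in range(50)]
print(filaments[0].shape)
```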

  2. A Comparison of Energy Consumption Prediction Models Based on Neural Networks of a Bioclimatic Building

    Directory of Open Access Journals (Sweden)

    Hamid R. Khosravani

    2016-01-01

    Full Text Available Energy consumption has been increasing steadily due to globalization and industrialization. Studies have shown that buildings are responsible for the biggest proportion of energy consumption; for example in European Union countries, energy consumption in buildings represents around 40% of the total energy consumption. In order to control energy consumption in buildings, different policies have been proposed, from utilizing bioclimatic architectures to the use of predictive models within control approaches. There are mainly three groups of predictive models including engineering, statistical and artificial intelligence models. Nowadays, artificial intelligence models such as neural networks and support vector machines have also been proposed because of their high potential capabilities of performing accurate nonlinear mappings between inputs and outputs in real environments which are not free of noise. The main objective of this paper is to compare a neural network model which was designed utilizing statistical and analytical methods, with a group of neural network models designed benefiting from a multi-objective genetic algorithm. Moreover, the neural network models were compared to a naïve autoregressive baseline model. The models are intended to predict electric power demand at the Solar Energy Research Center (Centro de Investigación en Energía SOLar, or CIESOL, in Spanish) bioclimatic building located at the University of Almería, Spain. Experimental results show that the models obtained from the multi-objective genetic algorithm (MOGA) perform comparably to the model obtained through a statistical and analytical approach, but they use only 0.8% of data samples and have lower model complexity.

  3. Prediction of protein continuum secondary structure with probabilistic models based on NMR solved structures

    Directory of Open Access Journals (Sweden)

    Bailey Timothy L

    2006-02-01

    Full Text Available Abstract Background The structure of proteins may change as a result of the inherent flexibility of some protein regions. We develop and explore probabilistic machine learning methods for predicting a continuum secondary structure, i.e. assigning probabilities to the conformational states of a residue. We train our methods using data derived from high-quality NMR models. Results Several probabilistic models not only successfully estimate the continuum secondary structure, but also provide a categorical output on par with models directly trained on categorical data. Importantly, models trained on the continuum secondary structure are also better than their categorical counterparts at identifying the conformational state for structurally ambivalent residues. Conclusion Cascaded probabilistic neural networks trained on the continuum secondary structure exhibit better accuracy in structurally ambivalent regions of proteins, while sustaining an overall classification accuracy on par with standard, categorical prediction methods.

  4. Prediction of Boundary Layer Transition Based on Modeling of Laminar Fluctuations Using RANS Approach

    Institute of Scientific and Technical Information of China (English)

    Reza Taghavi Z.; Mahmood Salary; Amir Kolaei

    2009-01-01

    This article presents a linear eddy-viscosity turbulence model for predicting bypass and natural transition in boundary layers by using Reynolds-averaged Navier-Stokes (RANS) equations. The model includes three transport equations, separately, to compute laminar kinetic energy, turbulent kinetic energy, and dissipation rate in a flow field. It needs neither correlations of intermittency factors nor knowledge of the transition onset. Two transition tests are carried out: flat plate boundary layer under zero ...

  5. Financial crisis early-warning model of listed companies based on predicted value

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    To establish a financial early-warning model with high accuracy of discrimination and achieve the aim of long-term prediction, principal component analysis (PCA), Fisher discriminant, together with grey forecasting models are used at the same time. 110 A-share companies listed on the Shanghai and Shenzhen stock exchange are selected as research samples. And 10 extractive factors with 89.746% of all the original information are determined by applying PCA, which obtains the goal of dimension reduction without...

  6. A Comparison of Three Models to Predict Liquidity Flows between Banks Based on Daily Payments Transactions

    NARCIS (Netherlands)

    Triepels, Ron; Daniels, Hennie

    2016-01-01

    The analysis of payment data has become an important task for operators and overseers of financial market infrastructures. Payment data provide an accurate description of how banks manage their liquidity over time. In this paper we compare three models to predict future liquidity flows from payment

  7. Prediction of organ toxicity endpoints by QSAR modeling based on precise chemical-histopathology annotations.

    Science.gov (United States)

    Myshkin, Eugene; Brennan, Richard; Khasanova, Tatiana; Sitnik, Tatiana; Serebriyskaya, Tatiana; Litvinova, Elena; Guryanov, Alexey; Nikolsky, Yuri; Nikolskaya, Tatiana; Bureeva, Svetlana

    2012-09-01

    The ability to accurately predict the toxicity of drug candidates from their chemical structure is critical for guiding experimental drug discovery toward safer medicines. Under the guidance of the MetaTox consortium (Thomson Reuters, CA, USA), which comprised toxicologists from the pharmaceutical industry and government agencies, we created a comprehensive ontology of toxic pathologies for 19 organs, classifying pathology terms by pathology type and functional organ substructure. By manual annotation of full-text research articles, the ontology was populated with chemical compounds causing specific histopathologies. Annotated compound-toxicity associations defined histologically from rat and mouse experiments were used to build quantitative structure-activity relationship models predicting subcategories of liver and kidney toxicity: liver necrosis, liver relative weight gain, liver lipid accumulation, nephron injury, kidney relative weight gain, and kidney necrosis. All models were validated using two independent test sets and demonstrated overall good performance: initial validation showed 0.80-0.96 sensitivity (correctly predicted toxic compounds) and 0.85-1.00 specificity (correctly predicted non-toxic compounds). Later validation against a test set of compounds newly added to the database in the 2 years following initial model generation showed 75-87% sensitivity and 60-78% specificity. General hepatotoxicity and nephrotoxicity models were less accurate, as expected for more complex endpoints.

  8. Development and validation of a prediction model for long-term sickness absence based on occupational health survey variables.

    Science.gov (United States)

    Roelen, Corné; Thorsen, Sannie; Heymans, Martijn; Twisk, Jos; Bültmann, Ute; Bjørner, Jakob

    2016-11-10

    The purpose of this study is to develop and validate a prediction model for identifying employees at increased risk of long-term sickness absence (LTSA), by using variables commonly measured in occupational health surveys. Based on the literature, 15 predictor variables were retrieved from the DAnish National working Environment Survey (DANES) and included in a model predicting incident LTSA (≥4 consecutive weeks) during 1-year follow-up in a sample of 4000 DANES participants. The 15-predictor model was reduced by backward stepwise statistical techniques and then validated in a sample of 2524 DANES participants, not included in the development sample. Identification of employees at increased LTSA risk was investigated by receiver operating characteristic (ROC) analysis; the area-under-the-ROC-curve (AUC) reflected discrimination between employees with and without LTSA during follow-up. The 15-predictor model was reduced to a 9-predictor model including age, gender, education, self-rated health, mental health, prior LTSA, work ability, emotional job demands, and recognition by the management. Discrimination by the 9-predictor model was significant (AUC = 0.68; 95% CI 0.61-0.76), but not practically useful. A prediction model based on occupational health survey variables identified employees with an increased LTSA risk, but should be further developed into a practically useful tool to predict the risk of LTSA in the general working population. Implications for rehabilitation: Long-term sickness absence risk predictions would enable healthcare providers to refer high-risk employees to rehabilitation programs aimed at preventing or reducing work disability. A prediction model based on health survey variables discriminates between employees at high and low risk of long-term sickness absence, but discrimination was not practically useful. Health survey variables provide insufficient information to determine long-term sickness absence risk profiles. There is a need for
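
    A minimal sketch of the development/validation workflow described above: a logistic model for LTSA risk is fitted on a development sample and scored by the area under the ROC curve on a held-out validation sample. The predictor set and the synthetic data are assumptions for illustration only.

```python
# Fit on a development sample, evaluate discrimination (AUC) on a validation sample.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 6524                                   # development + validation together (synthetic)
X = rng.normal(size=(n, 9))                # e.g. age, self-rated health, prior LTSA, ...
y = rng.binomial(1, 0.08, size=n)          # incident LTSA during 1-year follow-up

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=2524, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
print(f"validation AUC = {auc:.2f}")       # the study reports AUC = 0.68
```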

  9. Wave modelling as a proxy for seagrass ecological modelling: Comparing fetch and process-based predictions for a bay and reef lagoon

    Science.gov (United States)

    Callaghan, David P.; Leon, Javier X.; Saunders, Megan I.

    2015-02-01

    The distribution, abundance, behaviour, and morphology of marine species is affected by spatial variability in the wave environment. Maps of wave metrics (e.g. significant wave height Hs, peak energy wave period Tp, and benthic wave orbital velocity URMS) are therefore useful for predictive ecological models of marine species and ecosystems. A number of techniques are available to generate maps of wave metrics, with varying levels of complexity in terms of input data requirements, operator knowledge, and computation time. Relatively simple "fetch-based" models are generated using geographic information system (GIS) layers of bathymetry and dominant wind speed and direction. More complex, but computationally expensive, "process-based" models are generated using numerical models such as the Simulating Waves Nearshore (SWAN) model. We generated maps of wave metrics based on both fetch-based and process-based models and asked whether predictive performance in models of benthic marine habitats differed. Predictive models of seagrass distribution for Moreton Bay, Southeast Queensland, and Lizard Island, Great Barrier Reef, Australia, were generated using maps based on each type of wave model. For Lizard Island, performance of the process-based wave maps was significantly better for describing the presence of seagrass, based on Hs, Tp, and URMS. Conversely, for the predictive model of seagrass in Moreton Bay, based on benthic light availability and Hs, there was no difference in performance using the maps of the different wave metrics. For predictive models where wave metrics are the dominant factor determining ecological processes it is recommended that process-based models be used. Our results suggest that for models where wave metrics provide secondarily useful information, either fetch- or process-based models may be equally useful.

  10. A New Predictive Model Based on the ABC Optimized Multivariate Adaptive Regression Splines Approach for Predicting the Remaining Useful Life in Aircraft Engines

    Directory of Open Access Journals (Sweden)

    Paulino José García Nieto

    2016-05-01

    Full Text Available Remaining useful life (RUL) estimation is considered as one of the most central points in prognostics and health management (PHM). The present paper describes a nonlinear hybrid ABC–MARS-based model for the prediction of the remaining useful life of aircraft engines. Indeed, it is well-known that an accurate RUL estimation allows failure prevention in a more controllable way so that the effective maintenance can be carried out in appropriate time to correct impending faults. The proposed hybrid model combines multivariate adaptive regression splines (MARS), which have been successfully adopted for regression problems, with the artificial bee colony (ABC) technique. This optimization technique involves parameter setting in the MARS training procedure, which significantly influences the regression accuracy. However, its use in reliability applications has not yet been widely explored. Bearing this in mind, remaining useful life values have been predicted here by using the hybrid ABC–MARS-based model from the remaining measured parameters (input variables) for aircraft engines with success. A correlation coefficient equal to 0.92 was obtained when this hybrid ABC–MARS-based model was applied to experimental data. The agreement of this model with experimental data confirmed its good performance. The main advantage of this predictive model is that it does not require information about the previous operation states of the aircraft engine.

  11. Network delay predictive compensation based on time-delay modelling as disturbance

    Science.gov (United States)

    Florin Caruntu, Constantin; Lazar, Corneliu

    2014-10-01

    In this paper, a control design methodology that can assure the closed-loop performances of a physical plant, while compensating the network-induced time-varying delays, is proposed. First, the error caused by the time-varying delays is modelled as a disturbance and a novel method of bounding the disturbance is proposed. Second, a robust one step ahead predictive controller based on flexible control Lyapunov functions is designed, which explicitly takes into account the bounds of the disturbances and guarantees also the input-to-state stability of the system in a non-conservative way. The methodology was tested on a vehicle drivetrain controlled through controller area network, with the aim of damping driveline oscillations. The comparison with a proportional-integral-derivative (PID) controller using TrueTime simulator shows that the proposed control scheme can outperform classical controllers and it can handle the performance/physical constraints. Moreover, the handling of the strict limitations on the computational complexity was tested using a real-time test-bench.

  12. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...

  13. Response surface and neural network based predictive models of cutting temperature in hard turning

    Directory of Open Access Journals (Sweden)

    Mozammel Mia

    2016-11-01

    Full Text Available The present study aimed to develop predictive models of the average tool-workpiece interface temperature in hard turning of AISI 1060 steels by coated carbide insert. The Response Surface Methodology (RSM) and Artificial Neural Network (ANN) were employed to predict the temperature in respect of cutting speed, feed rate and material hardness. The number and orientation of the experimental trials, conducted in both dry and high pressure coolant (HPC) environments, were planned using full factorial design. The temperature was measured by using the tool-work thermocouple. In the RSM model, two quadratic equations of temperature were derived from experimental data. The analysis of variance (ANOVA) and mean absolute percentage error (MAPE) were performed to verify the adequacy of the models. In the ANN model, 80% of the data were used to train and 20% of the data were employed for testing. As with RSM, the error analysis was also conducted. The accuracy of the RSM and ANN models was found to be ⩾99%. The ANN models exhibit an error of ∼5% MAE for testing data. The regression coefficient was found to be greater than 99.9% for both dry and HPC. Both these models are acceptable, although the ANN model demonstrated a higher accuracy. These models, if employed, are expected to provide a better control of cutting temperature in turning of hardened steel.
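
    A minimal sketch of the RSM part of the study: fitting a full quadratic response surface for interface temperature in cutting speed, feed rate and hardness and reporting MAPE. The data points and their values are illustrative assumptions, not the measured results.

```python
# Quadratic response-surface regression for cutting temperature with MAPE as the error measure.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

speed    = np.array([58, 81, 115, 58, 81, 115, 58, 81, 115], dtype=float)   # m/min
feed     = np.array([0.10, 0.12, 0.14, 0.10, 0.12, 0.14, 0.10, 0.12, 0.14]) # mm/rev
hardness = np.array([40, 40, 40, 48, 48, 48, 56, 56, 56], dtype=float)      # HRC
temp     = np.array([620, 680, 750, 650, 710, 780, 690, 760, 830], dtype=float)  # deg C

X = np.column_stack([speed, feed, hardness])
quad = PolynomialFeatures(degree=2, include_bias=False)   # full quadratic terms
rsm = LinearRegression().fit(quad.fit_transform(X), temp)

pred = rsm.predict(quad.transform(X))
mape = np.mean(np.abs((temp - pred) / temp)) * 100
print(f"MAPE = {mape:.2f}%")
```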

  14. Establishment of Neural Network Prediction Model for Terminative Temperature Based on Grey Theory in Hot Metal Pretreatment

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hui-ning; XU An-jun; CUI Jian; HE Dong-feng; TIAN Nai-yuan

    2012-01-01

    In order to improve the accuracy of the model for terminative temperature in steelmaking, it is necessary to predict and control it before decarburization. Thus, an optimized neural network model of terminative temperature in the process of dephosphorization, which assigns correlative degree weights to all related input factors, was used. Then a simulation experiment of the newly established model was conducted using 210 data sets from a domestic steel plant. The results show that the hit rate reaches 56.45% when the error is within ±5%, and 100% when within ±10%. Compared to the traditional neural network prediction model, the accuracy increases by almost 6.839‰. Thus, the simulation prediction fits the real data well, which shows that the neural network model for terminative temperature based on grey theory can accurately reflect practice in dephosphorization. Naturally, this method is effective and practicable.

  15. Modeling and simulation of adaptive Neuro-fuzzy based intelligent system for predictive stabilization in structured overlay networks

    Directory of Open Access Journals (Sweden)

    Ramanpreet Kaur

    2017-02-01

    Full Text Available Intelligent prediction of neighboring node (k well-defined neighbors as specified by the DHT protocol) dynamism is helpful to improve the resilience and can reduce the overhead associated with topology maintenance of structured overlay networks. The dynamic behavior of overlay nodes depends on many factors such as the underlying user's online behavior, geographical position, time of the day, day of the week etc., as reported in many applications. We can exploit these characteristics for efficient maintenance of structured overlay networks by implementing an intelligent predictive framework for setting stabilization parameters appropriately. Considering the fact that human-driven behavior usually goes beyond intermittent availability patterns, we use a hybrid Neuro-fuzzy based predictor to enhance the accuracy of the predictions. In this paper, we discuss our predictive stabilization approach, implement Neuro-fuzzy based prediction in a MATLAB simulation and apply this predictive stabilization model in a chord based overlay network using OverSim as a simulation tool. The MATLAB simulation results show that the behavior of neighboring nodes is predictable to a large extent, as indicated by the very small RMSE. The OverSim based simulation results also show significant improvements in the performance of the chord based overlay network in terms of lookup success ratio, lookup hop count and maintenance overhead as compared to the periodic stabilization approach.

  16. Dynamic Regional Viscosity Prediction Model of Blast Furnace Slag Based on the Partial Least-Squares Regression

    Science.gov (United States)

    Guo, Hongwei; Zhu, Mengyi; Yan, Bingji; Deng, Shichan; Li, Xinyu; Liu, Feng

    2016-11-01

    Viscosity is considered to be a significant indicator of the metallurgical property of blast furnace (BF) slag. A model for viscosity prediction based on the partial least-squares regression of varietal quantity reference points is presented in this article. The present model proposes a dynamic regional algorithm for reference point selection. The study applied the partial least-squares regression to establish the dynamic regional viscosity prediction model on the basis of limited discrete points data. Then an actual prediction was carried out with a large amount of viscosity data of real and synthesized BF slags that was obtained from a certain steel plant in China. The results show that this advanced method turns out to be satisfactory in the viscosity prediction of BF slags with a low averaging error and mean value deviation.

  17. Dynamic Regional Viscosity Prediction Model of Blast Furnace Slag Based on the Partial Least-Squares Regression

    Science.gov (United States)

    Guo, Hongwei; Zhu, Mengyi; Yan, Bingji; Deng, Shichan; Li, Xinyu; Liu, Feng

    2017-02-01

    Viscosity is considered to be a significant indicator of the metallurgical property of blast furnace (BF) slag. A model for viscosity prediction based on the partial least-squares regression of varietal quantity reference points is presented in this article. The present model proposes a dynamic regional algorithm for reference point selection. The study applied the partial least-squares regression to establish the dynamic regional viscosity prediction model on the basis of limited discrete points data. Then an actual prediction was carried out with a large amount of viscosity data of real and synthesized BF slags that was obtained from a certain steel plant in China. The results show that this advanced method turns out to be satisfactory in the viscosity prediction of BF slags with a low averaging error and mean value deviation.

  18. Prediction of Ketoconazole absorption using an updated in vitro transfer model coupled to physiologically based pharmacokinetic modelling.

    Science.gov (United States)

    Ruff, Aaron; Fiolka, Tom; Kostewicz, Edmund S

    2017-03-30

    The aim of this study was to optimize the in vitro transfer model and to increase its biorelevance to more accurately mimic the in vivo supersaturation and precipitation behaviour of weak basic drugs. Therefore, disintegration of the formulation, volumes of the stomach and intestinal compartments, transfer rate, bile salt concentration, pH range and paddle speed were varied over a physiologically relevant range. The supersaturation and precipitation data from these experiments for Ketoconazole (KTZ) were coupled to a physiologically based pharmacokinetic (PBPK) model using Stella® software, which also incorporated the disposition kinetics of KTZ taken from the literature, in order to simulate the oral absorption and plasma profile in humans. As expected for a poorly soluble weak base, KTZ demonstrated supersaturation followed by precipitation under various in vitro conditions simulating the proximal small intestine, with the results influenced by transfer rate, hydrodynamics, volume, bile salt concentration and pH values. When the in vitro data representing the "average" GI conditions were coupled to the PBPK model, the simulated profiles came closest to the observed mean plasma profiles for KTZ. In line with the high permeability of KTZ, the simulated profiles were highly influenced by supersaturation whilst precipitation was not predicted to occur in vivo. A physiologically relevant in vitro "standard" transfer model setup to investigate supersaturation and precipitation was established. For translating the in vitro data to the in vivo setting, it is important that permeability is considered, which can be achieved by coupling the in vitro data to PBPK modelling. Copyright © 2016. Published by Elsevier B.V.
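
    A minimal sketch, under simplifying assumptions, of a gastric-to-intestinal transfer simulation in the spirit of the setup above: dissolved drug is transferred at a first-order rate into an intestinal compartment where it may supersaturate and precipitate toward an equilibrium solubility. All parameter values are placeholders, not the study's settings, and the PBPK disposition model is omitted.

```python
# Two-compartment transfer model with first-order precipitation above the solubility limit.
import numpy as np
from scipy.integrate import solve_ivp

k_transfer = 0.03      # 1/min, gastric transfer rate (assumed)
k_precip = 0.02        # 1/min, precipitation rate constant (assumed)
solubility = 0.02      # mg/mL, equilibrium solubility in intestinal medium (assumed)
V_intestine = 250.0    # mL, intestinal compartment volume (assumed)

def rhs(t, y):
    gastric, dissolved = y                      # mg in stomach, mg dissolved in intestine
    transfer = k_transfer * gastric
    conc = dissolved / V_intestine
    precip = k_precip * max(conc - solubility, 0.0) * V_intestine
    return [-transfer, transfer - precip]

sol = solve_ivp(rhs, (0, 240), [200.0, 0.0], max_step=1.0)   # 200 mg dose over 4 h
print(sol.y[1, -1] / V_intestine)   # final dissolved intestinal concentration (mg/mL)
```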

  19. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    Science.gov (United States)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.
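
    A minimal sketch of the idea behind a MODDRFS-informed MAT adjustment for a temperature-index melt calculation. The degree-day formulation and the linear adjustment coefficient below are illustrative assumptions, not SNOW17 internals or the CBRFC's operational relationship.

```python
# Degree-day melt before and after a dust-informed mean areal temperature (MAT) adjustment.
import numpy as np

def degree_day_melt(mat_c, melt_factor=3.0, base_temp=0.0):
    """Daily melt (mm) from mean areal temperature (deg C)."""
    return melt_factor * np.maximum(mat_c - base_temp, 0.0)

mat = np.array([1.5, 2.0, 3.2, 4.1, 5.0])            # deg C, forecast MAT (synthetic)
dust_forcing = np.array([0, 0, 20, 35, 35])          # W/m^2 from MODDRFS (synthetic)

mat_adjusted = mat + 0.05 * dust_forcing             # assumed linear MAT adjustment
print(degree_day_melt(mat).sum(), degree_day_melt(mat_adjusted).sum())
```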

  20. Energy production and consumption prediction and their response to environment based on coupling model in China

    Institute of Scientific and Technical Information of China (English)

    LI Qiang; REN Zhiyuan

    2012-01-01

    The paper presents the prediction of total energy production and consumption in all provinces and autonomous regions as well as determination of the variation of the gravity center of the energy production, consumption and total discharge of industrial waste water, gas and residue of China via the energy and environmental quality data from 1978 to 2009 in China, by use of the GM(1,1) model and the gravity center model, based on which the paper also analyzes the dynamic variation in regional difference in energy production, consumption and environmental quality and their relationship. The results are shown as follows. 1) The gravity center of energy production is gradually moving southwestward and the entire movement track approximates to linear variation, indicating that the difference of energy production between the east and west, south and north is narrowing to a certain extent, with the difference between the east and the west narrowing faster than that between the south and the north. 2) The gravity center of energy consumption is moving southwestward with perceptible fluctuation, of which the gravity center position from 2000 to 2005 was relatively stable, with slight annual position variation, indicating that the growth rates of all provinces and autonomous regions are basically the same. 3) The gravity center of the total discharge of industrial waste water, gas and residue is characterized by fluctuation in longitude and latitude to a certain degree, but it shows a southwestward trend on the whole. 4) There are common ground and discrepancy in the variation track of the gravity center of the energy production and consumption of China, and the comparative analysis of their gravity centers and that of the total discharge of industrial waste water, gas and residue shows that the environmental quality level is closely associated with the energy production and consumption (especially the energy consumption), indicating that the environmental cost of energy use is higher in China.
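
    A minimal sketch of the GM(1,1) grey forecasting model used above for the energy series, fitted by least squares on the accumulated series. The input series below is synthetic.

```python
# GM(1,1): accumulate the series, fit the grey differential equation, then difference back.
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Fit GM(1,1) to a short positive series x0 and forecast `horizon` extra steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])   # back to the original scale

energy = [5.7, 6.1, 6.6, 7.2, 7.9, 8.7]             # e.g. annual totals (synthetic)
print(gm11_forecast(energy, horizon=3))
```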

  1. Molecular surface area based predictive models for the adsorption and diffusion of disperse dyes in polylactic acid matrix.

    Science.gov (United States)

    Xu, Suxin; Chen, Jiangang; Wang, Bijia; Yang, Yiqi

    2015-11-15

    Two predictive models were presented for the adsorption affinities and diffusion coefficients of disperse dyes in polylactic acid matrix. Quantitative structure-sorption behavior relationship would not only provide insights into sorption process, but also enable rational engineering for desired properties. The thermodynamic and kinetic parameters for three disperse dyes were measured. The predictive model for adsorption affinity was based on two linear relationships derived by interpreting the experimental measurements with molecular structural parameters and compensation effect: ΔH° vs. dye size and ΔS° vs. ΔH°. Similarly, the predictive model for diffusion coefficient was based on two derived linear relationships: activation energy of diffusion vs. dye size and logarithm of pre-exponential factor vs. activation energy of diffusion. The only required parameters for both models are temperature and solvent accessible surface area of the dye molecule. These two predictive models were validated by testing the adsorption and diffusion properties of new disperse dyes. The models offer fairly good predictive ability. The linkage between structural parameter of disperse dyes and sorption behaviors might be generalized and extended to other similar polymer-penetrant systems.

  2. Equivalent Alkane Carbon Number of Live Crude Oil: A Predictive Model Based on Thermodynamics

    Directory of Open Access Journals (Sweden)

    Creton Benoit

    2016-09-01

    Full Text Available We took advantage of recently published works and new experimental data to propose a model for the prediction of the Equivalent Alkane Carbon Number of live crude oil (EACNlo) for EOR processes. The model necessitates the a priori knowledge of reservoir pressure and temperature conditions as well as the initial gas to oil ratio. Additionally, some required volumetric properties for hydrocarbons were predicted using an equation of state. The model has been validated both on our own experimental data and on data from the literature. These various case studies cover broad ranges of conditions in terms of API gravity index, gas to oil ratio, reservoir pressure and temperature, and composition of representative gas. The predicted EACNlo values agree reasonably with experimental EACN values, i.e. those determined by comparison with salinity scans for a series of n-alkanes from nC8 to nC18. The model has been used to generate high pressure high temperature data, showing competing effects of the gas to oil ratio, pressure and temperature. The proposed model allows one to strongly narrow down the spectrum of possibilities in terms of EACNlo values, and thus enables a more rational use of equipment.

  3. Prediction Model Based on the Grey Theory for Tackling Wax Deposition in Oil Pipelines

    Institute of Scientific and Technical Information of China (English)

    Ming Wu; Shujuan Qiu; Jianfeng Liu; Ling Zhao

    2005-01-01

    Problems involving wax deposition threaten seriously crude pipelines both economically and operationally. Wax deposition in oil pipelines is a complicated problem having a number of uncertainties and indeterminations. The Grey System Theory is a suitable theory for coping with systems in which some information is clear and some is not, so it is an adequate model for studying the process of wax deposition.In order to predict accurately wax deposition along a pipeline, the Grey Model was applied to fit the data of wax deposition rate and the thickness of the deposited wax layer on the pipe-wall, and to give accurate forecast on wax deposition in oil pipelines. The results showed that the average residential error of the Grey Prediction Model is smaller than 2%. They further showed that this model exhibited high prediction accuracy. Our investigation proved that the Grey Model is a viable means for forecasting wax deposition.These findings offer valuable references for the oil industry and for firms dealing with wax cleaning in oil pipelines.

  4. A meteo-hydrological prediction system based on a multi-model approach for precipitation forecasting

    Directory of Open Access Journals (Sweden)

    S. Davolio

    2008-02-01

    Full Text Available The precipitation forecasted by a numerical weather prediction model, even at high resolution, suffers from errors which can be considerable at the scales of interest for hydrological purposes. In the present study, a fraction of the uncertainty related to meteorological prediction is taken into account by implementing a multi-model forecasting approach, aimed at providing multiple precipitation scenarios driving the same hydrological model. Therefore, the estimation of that uncertainty associated with the quantitative precipitation forecast (QPF), conveyed by the multi-model ensemble, can be exploited by the hydrological model, propagating the error into the hydrological forecast.

    The proposed meteo-hydrological forecasting system is implemented and tested in a real-time configuration for several episodes of intense precipitation affecting the Reno river basin, a medium-sized basin located in northern Italy (Apennines. These episodes are associated with flood events of different intensity and are representative of different meteorological configurations responsible for severe weather affecting northern Apennines.

    The simulation results show that the coupled system is promising in the prediction of discharge peaks (both in terms of amount and timing for warning purposes. The ensemble hydrological forecasts provide a range of possible flood scenarios that proved to be useful for the support of civil protection authorities in their decision.

  5. A Thermodynamically-Based Model For Predicting Microbial Growth And Community Composition Coupled To System Geochemistry

    Science.gov (United States)

    Istok, J. D.

    2007-12-01

    We present an approach that couples thermodynamic descriptions for microbial growth and geochemical reactions to provide quantitative predictions for the effects of substrate addition or other environmental perturbations on microbial community composition. A synthetic microbial community is defined as a collection of defined microbial groups; each with a growth equation derived from bioenergetic principles. The growth equations and standard-state free energy yields are appended to a thermodynamic database for geochemical reactions and the combined equations are solved simultaneously to predict coupled changes in microbial biomass, community composition, and system geochemistry. This approach, with a single set of thermodynamic parameters (one for each growth equation), was used to predict the results of laboratory and field experiments at three geochemically diverse research sites. Predicted effects of ethanol or acetate addition on radionuclide and heavy metal solubility, major ion geochemistry, mineralogy, microbial biomass and community composition were in general agreement with experimental observations although the available experimental data precluded rigorous model testing. Model simulations provide insight into the long-standing difficulty in transferring experimental results from the laboratory to the field and from one site to the next, especially if the form, concentration, or delivery of growth substrate is varied from one experiment to the next. Although originally developed for use in better understanding bioimmobilization of radionuclides and heavy metals via reductive precipitation, the modeling approach is potentially useful for exploring the coupling of microbial growth and geochemical reactions in a variety of basic and applied biotechnology research settings.

  6. New mechanistically based model for predicting reduction of biosolids waste by ozonation of return activated sludge.

    Science.gov (United States)

    Isazadeh, Siavash; Feng, Min; Urbina Rivas, Luis Enrique; Frigon, Dominic

    2014-04-15

    Two pilot-scale activated sludge reactors were operated for 98 days to provide the necessary data to develop and validate a new mathematical model predicting the reduction of biosolids production by ozonation of the return activated sludge (RAS). Three ozone doses were tested during the study. In addition to the pilot-scale study, laboratory-scale experiments were conducted with mixed liquor suspended solids and with pure cultures to parameterize the biomass inactivation process during exposure to ozone. The experiments revealed that biomass inactivation occurred even at the lowest doses, but that it was not associated with extensive COD solubilization. For validation, the model was used to simulate the temporal dynamics of the pilot-scale operational data. Increasing the description accuracy of the inactivation process improved the precision of the model in predicting the operational data.

  7. Ligand efficiency-based support vector regression models for predicting bioactivities of ligands to drug target proteins.

    Science.gov (United States)

    Sugaya, Nobuyoshi

    2014-10-27

    The concept of ligand efficiency (LE) indices is widely accepted throughout the drug design community and is frequently used in a retrospective manner in the process of drug development. For example, LE indices are used to investigate LE optimization processes of already-approved drugs and to re-evaluate hit compounds obtained from structure-based virtual screening methods and/or high-throughput experimental assays. However, LE indices could also be applied in a prospective manner to explore drug candidates. Here, we describe the construction of machine learning-based regression models in which LE indices are adopted as an end point and show that LE-based regression models can outperform regression models based on pIC50 values. In addition to pIC50 values traditionally used in machine learning studies based on chemogenomics data, three representative LE indices (ligand lipophilicity efficiency (LLE), binding efficiency index (BEI), and surface efficiency index (SEI)) were adopted, then used to create four types of training data. We constructed regression models by applying a support vector regression (SVR) method to the training data. In cross-validation tests of the SVR models, the LE-based SVR models showed higher correlations between the observed and predicted values than the pIC50-based models. Application tests to new data displayed that, generally, the predictive performance of SVR models follows the order SEI > BEI > LLE > pIC50. Close examination of the distributions of the activity values (pIC50, LLE, BEI, and SEI) in the training and validation data implied that the performance order of the SVR models may be ascribed to the much higher diversity of the LE-based training and validation data. In the application tests, the LE-based SVR models can offer better predictive performance of compound-protein pairs with a wider range of ligand potencies than the pIC50-based models. This finding strongly suggests that LE-based SVR models are better than pIC50-based

  8. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) that underwent extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk to develop melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi OR was 2.672, 95% CI 1.572-4.540; for more than 10 naevi OR was 6.487, 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were

  9. Wave Disturbance Reduction of a Floating Wind Turbine Using a Reference Model-based Predictive Control

    DEFF Research Database (Denmark)

    Christiansen, Søren; Tabatabaeipour, Seyed Mojtaba; Bak, Thomas;

    2013-01-01

    Floating wind turbines are considered as a new and promising solution for reaching higher wind resources beyond the water depth restriction of monopile wind turbines. But on a floating structure, the wave-induced loads significantly increase the oscillations of the structure. Furthermore, using a controller designed for an onshore wind turbine yields instability in the fore-aft rotation. In this paper, we propose a general framework, where a reference model models the desired closed-loop behavior of the system. Model predictive control combined with a state estimator finds the optimal rotor blade ... compared to a baseline floating wind turbine controller at the cost of more pitch action.

  10. Predicting individual responses to pravastatin using a physiologically based kinetic model for plasma cholesterol concentrations

    NARCIS (Netherlands)

    Pas, N.C.A. van de; Rullmann, J.A.C.; Woutersen, R.A.; Ommen, B. van; Rietjens, I.M.C.M.; Graaf, A.A. de

    2014-01-01

    We used a previously developed physiologically based kinetic (PBK) model to analyze the effect of individual variations in metabolism and transport of cholesterol on pravastatin response. The PBK model is based on kinetic expressions for 21 reactions that interconnect eight different body

  11. Effect of experimental design on the prediction performance of calibration models based on near-infrared spectroscopy for pharmaceutical applications.

    Science.gov (United States)

    Bondi, Robert W; Igne, Benoît; Drennen, James K; Anderson, Carl A

    2012-12-01

    Near-infrared spectroscopy (NIRS) is a valuable tool in the pharmaceutical industry, presenting opportunities for online analyses to achieve real-time assessment of intermediates and finished dosage forms. The purpose of this work was to investigate the effect of experimental designs on prediction performance of quantitative models based on NIRS using a five-component formulation as a model system. The following experimental designs were evaluated: five-level, full factorial (5-L FF); three-level, full factorial (3-L FF); central composite; I-optimal; and D-optimal. The factors for all designs were acetaminophen content and the ratio of microcrystalline cellulose to lactose monohydrate. Other constituents included croscarmellose sodium and magnesium stearate (content remained constant). Partial least squares-based models were generated using data from individual experimental designs that related acetaminophen content to spectral data. The effect of each experimental design was evaluated by determining the statistical significance of the difference in bias and standard error of the prediction for that model's prediction performance. The calibration model derived from the I-optimal design had similar prediction performance as did the model derived from the 5-L FF design, despite containing 16 fewer design points. It also outperformed all other models estimated from designs with similar or fewer numbers of samples. This suggested that experimental-design selection for calibration-model development is critical, and optimum performance can be achieved with efficient experimental designs (i.e., optimal designs).

  12. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    Science.gov (United States)

    Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of the uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data is used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using the measured data of an expressway in Xiamen, China. And the SSA-KELM model is compared with several well-known prediction models, including support vector machine, extreme learning machine, and the single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from accuracy improvement, the proposed model is more robust.
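
    A minimal sketch of the KELM regressor at the core of the hybrid above (the SSA de-noising and GSA parameter optimization steps are omitted). KELM with an RBF kernel has the closed-form solution beta = (I/C + K)^-1 y; the lagged traffic-flow data and kernel settings are synthetic assumptions.

```python
# Kernel extreme learning machine as a closed-form kernel regression.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=100.0, gamma=0.5):
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, y)   # output weights beta

def kelm_predict(X_train, beta, X_new, gamma=0.5):
    return rbf_kernel(X_new, X_train, gamma) @ beta

# lagged traffic-flow vectors (phase-space reconstruction) -> next-interval flow
rng = np.random.default_rng(2)
X = rng.uniform(200, 800, size=(50, 4))       # 4 lagged counts (synthetic)
y = X.mean(axis=1) + rng.normal(0, 10, 50)    # next-interval flow (synthetic)

Xs = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize before the RBF kernel
beta = kelm_fit(Xs, y)
print(kelm_predict(Xs, beta, Xs[:3]))
```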

  13. Prediction of SWCC of Saline Soil in Western Jilin Based on Arya-Paris Model

    Directory of Open Access Journals (Sweden)

    Bao Shuochao

    2016-01-01

    Full Text Available The saline soil distributed in Western Jilin Province can cause serious damage to local construction engineering and agriculture. The relationship between water content and soil suction has a great influence on engineering properties, and affects the water migration and formation of saline soil. This paper focuses on the saline soil in the Zhenlai area of Western Jilin Province; basic property tests were carried out in the laboratory, and the Arya-Paris prediction model was chosen to predict the SWCC of saline soil in Western Jilin. The results show that the 30 cm soil sample has a lower water holding capacity than the 50 cm soil sample, which means the water migration rate is higher at 30 cm. The results may provide theoretical support and a beneficial reference for research and prediction of the engineering properties and forming mechanism of saline soil.

  14. Prediction Model of Soil Nutrients Loss Based on Artificial Neural Network

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    On the basis of Artificial Neural Network theory, a back-propagation neural network with one hidden layer is built in this paper, and its algorithm is also given. Using this BP network model, the case of the Malian River basin is studied. The calculated results show that the solutions based on the BP algorithm are consistent with those based on a multiple-variable linear regression model. They also indicate that the BP model in this paper is reasonable and the BP algorithm is feasible.

  15. hβ2R-Gαs complex: prediction versus crystal structure--how valuable are predictions based on molecular modeling studies?

    Science.gov (United States)

    Straßer, Andrea; Wittmann, Hans-Joachim

    2012-07-01

    In 2010, we predicted two models for the hβ(2)R-Gα(s) complex by combining the technique of homology modeling with a potential energy surface scan, since a complete crystal structure of the hβ(2)R-Gα(s) complex was not available. The crystal structure of opsin co-crystallized with part of the C-terminus of Gα (3DQB) was used as a template to model the hβ(2)R, whereas the crystal structure of Gα (1AZT) was used as a template to model Gα(s). Utilizing a potential energy surface scan between hβ(2)R and Gα(s), a six-dimensional potential energy surface was obtained. Two significant minimum regions were located on this surface, and each was associated with a distinct hβ(2)R-Gα(s) complex, namely model I and model II [Straßer A, Wittmann H-J (2010) J Mol Model 16:1307-1318]. The crystal structure of the hβ(2)R-Gα(s)βγ complex has recently been published. Thus, the aim of the current study was, on the one hand, to compare our predicted structures with the true crystal structure, and on the other to discuss the question: how valuable are predictions based on molecular modeling studies?

  16. Predicting rice hybrid performance using univariate and multivariate GBLUP models based on North Carolina mating design II.

    Science.gov (United States)

    Wang, X; Li, L; Yang, Z; Zheng, X; Yu, S; Xu, C; Hu, Z

    2017-03-01

    Genomic selection (GS) is more efficient than traditional phenotype-based methods in hybrid breeding. The present study investigated the predictive ability of genomic best linear unbiased prediction models for rice hybrids based on the North Carolina mating design II, in which a total of 115 inbred rice lines were crossed with 5 male sterile lines. Using 8 traits of the 575 (115 × 5) hybrids from two environments, both univariate (UV) and multivariate (MV) prediction analyses, including additive and dominance effects, were performed. Using UV models, the prediction results of cross-validation indicated that including dominance effects could improve the predictive ability for some traits in rice hybrids. Additionally, we could take advantage of GS even for a low-heritability trait, such as grain yield per plant (GY), because a modest increase in the number of top selection could generate a higher, more stable mean phenotypic value for rice hybrids. Thus this strategy was used to select superior potential crosses between the 115 inbred lines and those between the 5 male sterile lines and other genotyped varieties. In our MV research, an MV model (MV-ADV) was developed utilizing a MV relationship matrix constructed with auxiliary variates. Based on joint analysis with multi-trait (MT) or with multi-environment, the prediction results confirmed the superiority of MV-ADV over an UV model, particularly in the MT scenario for a low-heritability target trait (such as GY), with highly correlated auxiliary traits. For a high-heritability trait (such as thousand-grain weight), MT prediction is unnecessary, and UV prediction is sufficient.
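
    A minimal sketch of a univariate GBLUP prediction: a VanRaden-style genomic relationship matrix is built from centred marker codes and additive values are obtained from the standard BLUP solution. Marker data, heritability and sample sizes are synthetic assumptions, not the rice dataset.

```python
# GBLUP: genomic relationship matrix G plus a ridge-type BLUP of additive values.
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 500
M = rng.integers(0, 3, size=(n, m)).astype(float)    # 0/1/2 marker codes (synthetic)
p = M.mean(axis=0) / 2                                # allele frequencies
W = M - 2 * p                                         # centred markers
G = W @ W.T / (2 * (p * (1 - p)).sum())               # VanRaden genomic relationship matrix

y = rng.normal(10.0, 2.0, size=n)                     # phenotype, e.g. grain yield (synthetic)
h2 = 0.3                                              # assumed heritability
lam = (1 - h2) / h2                                   # variance ratio sigma_e^2 / sigma_u^2

# BLUP of additive values: u_hat = G (G + lam I)^-1 (y - mean(y))
u_hat = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())
print(u_hat[:5])
```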

  17. One Prediction Model Based on BP Neural Network for Newcastle Disease

    Science.gov (United States)

    Wang, Hongbin; Gong, Duqiang; Xiao, Jianhua; Zhang, Ru; Li, Lin

    The purpose of this paper is to investigate the correlation between meteorological factors and Newcastle disease incidence, and to determine the key factors that affect Newcastle disease. Having built a BP neural network forecasting model in Matlab 7.0, we tested the performance of the model using the coefficient of determination (R2) and the absolute values of the differences between predicted values and observed incidence. The results showed that six relevant meteorological factors were identified, that the model's coefficient of determination is 0.760, and that the performance of the model is very good. On this basis we built the Newcastle disease forecasting model, applying BP neural network theory to animal disease forecasting research for the first time.

  18. Improved prediction of higher heating value of biomass using an artificial neural network model based on proximate analysis.

    Science.gov (United States)

    Uzun, Harun; Yıldız, Zeynep; Goldfarb, Jillian L; Ceylan, Selim

    2017-06-01

    As biomass becomes more integrated into our energy feedstocks, the ability to predict its combustion enthalpies from routine data such as carbon, ash, and moisture content enables rapid decisions about utilization. The present work constructs a novel artificial neural network model with a 3-3-1 tangent sigmoid architecture to predict biomasses' higher heating values from only their proximate analyses, requiring minimal specificity as compared to models based on elemental composition. The model presented has a considerably higher correlation coefficient (0.963) and lower root mean square (0.375), mean absolute (0.328), and mean bias errors (0.010) than other models presented in the literature which, at least when applied to the present data set, tend to under-predict the combustion enthalpy. Copyright © 2017 Elsevier Ltd. All rights reserved.
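
    As an illustration of the kind of network described, here is a hedged sketch of a 3-3-1 tanh model fitted with scikit-learn's MLPRegressor; the proximate-analysis inputs and heating values are invented placeholders rather than the authors' data set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# columns: fixed carbon, volatile matter, ash (wt%) -- hypothetical example values
X = np.array([[45.0, 40.0, 5.0],
              [20.0, 70.0, 8.0],
              [15.0, 75.0, 4.0],
              [48.0, 35.0, 10.0],
              [30.0, 60.0, 6.0]])
y = np.array([18.5, 17.0, 16.8, 19.2, 17.9])    # HHV in MJ/kg (made-up values)

# 3 inputs -> 3 tanh hidden units -> 1 linear output, as in the 3-3-1 architecture
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                 solver="lbfgs", max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[35.0, 55.0, 7.0]]))       # predicted HHV for a new biomass sample
```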

  19. Prognostic model for predicting survival of patients with metastatic urothelial cancer treated with cisplatin-based chemotherapy.

    Science.gov (United States)

    Apolo, Andrea B; Ostrovnaya, Irina; Halabi, Susan; Iasonos, Alexia; Philips, George K; Rosenberg, Jonathan E; Riches, Jamie; Small, Eric J; Milowsky, Matthew I; Bajorin, Dean F

    2013-04-03

    A prognostic model that predicts overall survival (OS) for metastatic urothelial cancer (MetUC) patients treated with cisplatin-based chemotherapy was developed, validated, and compared with a commonly used Memorial Sloan-Kettering Cancer Center (MSKCC) risk-score model. Data from 7 protocols that enrolled 308 patients with MetUC were pooled. An external multi-institutional dataset was used to validate the model. The primary measurement of predictive discrimination was Harrell's c-index, computed with 95% confidence interval (CI). The final model included four pretreatment variables to predict OS: visceral metastases, albumin, performance status, and hemoglobin. The Harrell's c-index was 0.67 for the four-variable model and 0.64 for the MSKCC risk-score model, with a prediction improvement for OS (the U statistic and its standard deviation were used to calculate the two-sided P = .002). In the validation cohort, the c-indices for the four-variable and the MSKCC risk-score models were 0.63 (95% CI = 0.56 to 0.69) and 0.58 (95% CI = 0.52 to 0.65), respectively, with superiority of the four-variable model compared with the MSKCC risk-score model for OS (the U statistic and its standard deviation were used to calculate the two-sided P = .02).
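
    Harrell's c-index is the central performance measure quoted above; a minimal pairwise implementation, run on made-up survival data rather than the study cohort, looks like this.

```python
import numpy as np

def harrell_c_index(time, event, risk_score):
    """Fraction of usable pairs in which the higher-risk patient fails earlier."""
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # a pair is usable if the earlier time is an observed event
            if time[i] < time[j] and event[i] == 1:
                usable += 1
                if risk_score[i] > risk_score[j]:
                    concordant += 1.0
                elif risk_score[i] == risk_score[j]:
                    concordant += 0.5
    return concordant / usable

time = np.array([5, 8, 12, 20, 25, 30])          # months of follow-up (illustrative)
event = np.array([1, 1, 0, 1, 0, 1])             # 1 = death observed, 0 = censored
risk = np.array([2.1, 1.8, 1.0, 1.2, 0.4, 0.3])  # e.g. the model's linear predictor
print("c-index:", harrell_c_index(time, event, risk))
```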

  20. Mortality Prediction Model of Septic Shock Patients Based on Routinely Recorded Data.

    Science.gov (United States)

    Carrara, Marta; Baselli, Giuseppe; Ferrario, Manuela

    2015-01-01

    We studied the problem of mortality prediction in two datasets, the first composed of 23 septic shock patients and the second composed of 73 septic subjects selected from the public database MIMIC-II. For each patient we derived hemodynamic variables, laboratory results, and clinical information of the first 48 hours after shock onset and we performed univariate and multivariate analyses to predict mortality in the following 7 days. The results show interesting features that individually identify significant differences between survivors and nonsurvivors and features which gain importance only when considered together with the others in a multivariate regression model. This preliminary study on two small septic shock populations represents a novel contribution towards new personalized models for an integration of multiparameter patient information to improve critical care management of shock patients.

  1. Mortality Prediction Model of Septic Shock Patients Based on Routinely Recorded Data

    Directory of Open Access Journals (Sweden)

    Marta Carrara

    2015-01-01

    Full Text Available We studied the problem of mortality prediction in two datasets, the first composed of 23 septic shock patients and the second composed of 73 septic subjects selected from the public database MIMIC-II. For each patient we derived hemodynamic variables, laboratory results, and clinical information of the first 48 hours after shock onset and we performed univariate and multivariate analyses to predict mortality in the following 7 days. The results show interesting features that individually identify significant differences between survivors and nonsurvivors and features which gain importance only when considered together with the others in a multivariate regression model. This preliminary study on two small septic shock populations represents a novel contribution towards new personalized models for an integration of multiparameter patient information to improve critical care management of shock patients.

  2. Mortality Prediction Model of Septic Shock Patients Based on Routinely Recorded Data

    Science.gov (United States)

    Carrara, Marta; Baselli, Giuseppe; Ferrario, Manuela

    2015-01-01

    We studied the problem of mortality prediction in two datasets, the first composed of 23 septic shock patients and the second composed of 73 septic subjects selected from the public database MIMIC-II. For each patient we derived hemodynamic variables, laboratory results, and clinical information of the first 48 hours after shock onset and we performed univariate and multivariate analyses to predict mortality in the following 7 days. The results show interesting features that individually identify significant differences between survivors and nonsurvivors and features which gain importance only when considered together with the others in a multivariate regression model. This preliminary study on two small septic shock populations represents a novel contribution towards new personalized models for an integration of multiparameter patient information to improve critical care management of shock patients. PMID:26557154

  3. Uncertainty-based calibration and prediction with a stormwater surface accumulation-washoff model based on coverage of sampled Zn, Cu, Pb and Cd field data

    DEFF Research Database (Denmark)

    Lindblom, Erik Ulfson; Ahlman, S.; Mikkelsen, Peter Steen

    2011-01-01

    μg/l ±80% for Pb and 0.6 μg/l ±35% for Cd. This uncertainty-based calibration procedure adequately describes the prediction uncertainty conditioned on the used model and data, but seasonal and site-to-site variation is not considered, i.e. predicting metal concentrations in stormwater runoff from...

  4. Prediction Model for Fatigue Stiffness Decay of Concrete Beam Based on Neural Network

    Institute of Scientific and Technical Information of China (English)

    王海超; 何世钦; 贡金鑫

    2003-01-01

    With the neural network method, the processes of fatigue stiffness decrease and deflection increase of reinforced concrete beams under cyclic loading were simulated. The simulation system was built with the given experimental data, and the neural network structure of the prediction model and the corresponding parameters were obtained. The precision of the results was satisfactory, and the model can be used to investigate the fatigue properties of reinforced concrete beams in complex environments and under repeated loads.

  5. Prediction Model of Antibacterial Activities for Inorganic Antibacterial Agents Based on Artificial Neural Networks

    Institute of Scientific and Technical Information of China (English)

    刘雪峰; 张利; 涂铭旌

    2004-01-01

    Quantitative evaluation of the antibacterial activities of inorganic antibacterial agents is an urgent problem to be solved. Using experimental data from an orthogonal design, a prediction model of the relation between the preparation conditions of inorganic antibacterial agents and their antibacterial activities has been developed. This is accomplished by introducing BP artificial neural networks into the study of inorganic antibacterial agents. It provides theoretical support for the development of, and research on, inorganic antibacterial agents.

  6. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    Science.gov (United States)

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

    Model Predictive Control is a valuable tool for the process control engineer in a wide variety of applications. Because of this the structure of an MPC can vary dramatically from application to application. There have been a number of works dedicated to MPC tuning for specific cases. Since MPCs can differ significantly, this means that these tuning methods become inapplicable and a trial and error tuning approach must be used. This can be quite time consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. This approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages to this approach are that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC. As well, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, in addition to being able to use multiple inputs to determine tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study will be presented in order to illustrate the use of the tuning algorithm. This will include how different definitions of "optimum" control can arise, and how they are accounted for in the multi-objective decision making algorithm. The resulting tuning parameters from each of the definition sets will be compared, and in doing so show that the tuning parameters vary in order to meet each definition of optimum control, thus showing the generalized automated tuning algorithm approach for tuning MPCs is feasible.

  7. Predictions on the Development Dimensions of Provincial Tourism Discipline Based on the Artificial Neural Network BP Model

    Science.gov (United States)

    Yang, Yang; Hu, Jun; Lv, Yingchun; Zhang, Mu

    2013-01-01

    As the tourism industry has gradually become the strategic mainstay industry of the national economy, the scope of the tourism discipline has developed rigorously. This paper makes a predictive study on the development of the scope of Guangdong provincial tourism discipline based on the artificial neural network BP model in order to find out how…

  8. The Relevance Voxel Machine (RVoxM): A Self-Tuning Bayesian Model for Informative Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2012-01-01

    This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially...

  9. A data-driven SVR model for long-term runoff prediction and uncertainty analysis based on the Bayesian framework

    Science.gov (United States)

    Liang, Zhongmin; Li, Yujie; Hu, Yiming; Li, Binquan; Wang, Jun

    2017-06-01

    Accurate and reliable long-term forecasting plays an important role in water resources management and utilization. In this paper, a hybrid model called SVR-HUP is presented to predict long-term runoff and quantify the prediction uncertainty. The model is created based on three steps. First, appropriate predictors are selected according to the correlations between meteorological factors and runoff. Second, a support vector regression (SVR) model is structured and optimized based on the LibSVM toolbox and a genetic algorithm. Finally, using forecasted and observed runoff, a hydrologic uncertainty processor (HUP) based on a Bayesian framework is used to estimate the posterior probability distribution of the simulated values, and the associated uncertainty of prediction is quantitatively analyzed. Six precision evaluation indexes, including the correlation coefficient (CC), relative root mean square error (RRMSE), relative error (RE), mean absolute percentage error (MAPE), Nash-Sutcliffe efficiency (NSE), and qualification rate (QR), are used to measure the prediction accuracy. As a case study, the proposed approach is applied in the Han River basin, South Central China. Three types of SVR models are established to forecast the monthly, flood season and annual runoff volumes. The results indicate that SVR yields satisfactory accuracy and reliability at all three scales. In addition, the results suggest that the HUP can not only quantify the uncertainty of prediction based on a confidence interval but also provide a more accurate single-value prediction than the initial SVR forecasting result. Thus, the SVR-HUP model provides an alternative method for long-term runoff forecasting.
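
    To make the SVR step concrete, here is a simplified sketch in which scikit-learn's SVR with a small grid search stands in for the LibSVM-plus-genetic-algorithm setup of the paper; the predictors and runoff series are synthetic placeholders, and the HUP step is omitted.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))                 # e.g. an SST index, prior rainfall, prior runoff
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=120)   # monthly runoff (synthetic)

# grid search over C, gamma and epsilon plays the role of the GA-based optimization
search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0], "epsilon": [0.01, 0.1]},
    cv=TimeSeriesSplit(n_splits=4))
search.fit(X[:100], y[:100])                  # fit on the first 100 months
pred = search.predict(X[100:])                # forecast the remaining months
print("CC on hold-out:", round(float(np.corrcoef(pred, y[100:])[0, 1]), 3))
```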

  10. Prediction of Severe Acute Pancreatitis Using a Decision Tree Model Based on the Revised Atlanta Classification of Acute Pancreatitis.

    Directory of Open Access Journals (Sweden)

    Zhiyong Yang

    Full Text Available To develop a model for the early prediction of severe acute pancreatitis based on the revised Atlanta classification of acute pancreatitis. Clinical data of 1308 patients with acute pancreatitis (AP) were included in the retrospective study. A total of 603 patients who were admitted to the hospital within 36 hours of the onset of the disease were included at last according to the inclusion criteria. The clinical data were collected within 12 hours after admission. All the patients were classified as having mild acute pancreatitis (MAP), moderately severe acute pancreatitis (MSAP) and severe acute pancreatitis (SAP) based on the revised Atlanta classification of acute pancreatitis. All the 603 patients were randomly divided into training group (402 cases) and test group (201 cases). Univariate and multiple regression analyses were used to identify the independent risk factors for the development of SAP in the training group. Then the prediction model was constructed using the decision tree method, and this model was applied to the test group to evaluate its validity. The decision tree model was developed using creatinine, lactate dehydrogenase, and oxygenation index to predict SAP. The diagnostic sensitivity and specificity of SAP in the training group were 80.9% and 90.0%, respectively, and the sensitivity and specificity in the test group were 88.6% and 90.4%, respectively. The decision tree model based on creatinine, lactate dehydrogenase, and oxygenation index is more likely to predict the occurrence of SAP.
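
    As an illustration of the modelling and evaluation steps described above, the sketch below fits a small decision tree on three synthetic variables (fabricated toy values, not patient data) and reports sensitivity and specificity on a held-out split.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 400
creatinine = rng.normal(90, 30, n)         # umol/L (synthetic)
ldh = rng.normal(300, 120, n)              # U/L (synthetic)
oxygenation = rng.normal(380, 80, n)       # PaO2/FiO2 (synthetic)
# synthetic label: higher creatinine/LDH and lower oxygenation -> more likely SAP
risk = 0.01 * (creatinine - 90) + 0.005 * (ldh - 300) - 0.01 * (oxygenation - 380)
sap = (risk + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X = np.column_stack([creatinine, ldh, oxygenation])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:267], sap[:267])
tn, fp, fn, tp = confusion_matrix(sap[267:], tree.predict(X[267:])).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```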

  11. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J. D. (Prostat, Mesa, AZ); Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.

  12. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop con-.

  13. Model predictive control of attitude maneuver of a geostationary flexible satellite based on genetic algorithm

    Science.gov (United States)

    TayyebTaher, M.; Esmaeilzadeh, S. Majid

    2017-07-01

    This article presents an application of a Model Predictive Controller (MPC) to the attitude control of a geostationary flexible satellite. A SIMO model has been used for the geostationary satellite, derived using the Lagrange equations, and flexibility is included in the modelling equations. The state-space equations are expressed in order to simplify the controller. Naturally, there is no specific tuning rule that yields the best parameters of an MPC controller for a desired behaviour. As an intelligent optimization method, a genetic algorithm has been used to optimize the performance of the MPC controller by tuning the controller parameters with respect to the rise time, settling time, and overshoot of the target point of the flexible structure and its mode-shape amplitudes, so as to make large attitude maneuvers possible. The model includes the geosynchronous orbit environment and the geostationary satellite parameters. The simulation results of the flexible satellite performing an attitude maneuver show the efficiency of the proposed optimization method in comparison with an LQR optimal controller.

  14. Group-based trajectory models: a new approach to classifying and predicting long-term medication adherence.

    Science.gov (United States)

    Franklin, Jessica M; Shrank, William H; Pakes, Juliana; Sanfélix-Gimeno, Gabriel; Matlin, Olga S; Brennan, Troyen A; Choudhry, Niteesh K

    2013-09-01

    Classifying medication adherence is important for efficiently targeting adherence improvement interventions. The purpose of this study was to evaluate the use of a novel method, group-based trajectory models, for classifying patients by their long-term adherence. We identified patients who initiated a statin between June 1, 2006 and May 30, 2007 in prescription claims from CVS Caremark and evaluated adherence over the subsequent 15 months. We compared several adherence summary measures, including proportion of days covered (PDC) and trajectory models with 2-6 groups, with the observed adherence pattern, defined by monthly indicators of full adherence (defined as having ≥24 d covered of 30). We also compared the accuracy of adherence prediction based on patient characteristics when adherence was defined by either a trajectory model or PDC. In 264,789 statin initiators, the 6-group trajectory model summarized long-term adherence best (C=0.938), whereas PDC summarized less well (C=0.881). The accuracy of adherence predictions was similar whether adherence was classified by PDC or by trajectory model. Trajectory models summarized adherence patterns better than traditional approaches and were similarly predicted by covariates. Group-based trajectory models may facilitate targeting of interventions and may be useful to adjust for confounding by health-seeking behavior.
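
    The two adherence summaries being compared can be made concrete with a short sketch: proportion of days covered (PDC) over 15 months, and the monthly full-adherence indicators (at least 24 of 30 days covered) that would feed a trajectory model. The fill dates below are invented for illustration.

```python
import numpy as np

def monthly_days_covered(fill_days, days_supply, n_months=15):
    """Count covered days per 30-day month from (fill day, days supplied) pairs."""
    covered = np.zeros(n_months * 30, dtype=bool)
    for start, supply in zip(fill_days, days_supply):
        covered[start:start + supply] = True
    return covered.reshape(n_months, 30).sum(axis=1)

fills = [0, 30, 75, 150, 210, 300, 360]          # days since statin initiation (hypothetical)
supply = [30, 30, 30, 30, 30, 30, 30]
per_month = monthly_days_covered(fills, supply)

pdc = per_month.sum() / (15 * 30)                # single-number adherence summary
full_adherence = (per_month >= 24).astype(int)   # monthly indicators used for trajectories
print("PDC:", round(pdc, 2), "monthly pattern:", full_adherence)
```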

  15. Preventive Maintenance Interval Prediction: a Spare Parts Inventory Cost and Lost Earning Based Model

    Directory of Open Access Journals (Sweden)

    O. A. Adebimpe

    2015-06-01

    Full Text Available In this paper, some preventive maintenance parameters in manufacturing firms were identified and used to develop cost based functions in terms of machine preventive maintenance. The proposed cost based model considers system’s reliability, cost of keeping spare parts inventory and lost earnings in deriving optimal maintenance interval. A case of a manufacturing firm in Nigeria was observed and the data was used to evaluate the model.

  16. [Prediction of potential distribution area of Erigeron philadelphicus in China based on MaxEnt model].

    Science.gov (United States)

    Zhang, Ying; Li, Jun; Lin, Wei; Qiang, Sheng

    2011-11-01

    Erigeron philadelphicus, an alien weed originating from North America, has already invaded Shanghai, Jiangsu, Anhui, and some other places in China, caused harm to local ecosystems and demonstrated a huge invasive potential. By using the MaxEnt model and a geographic information system (GIS), this paper analyzed the environmental variables affecting the distribution of E. philadelphicus, and intuitively and quantitatively predicted its potential distribution regions in China. The prediction was verified by the ROC curve, and the results showed that E. philadelphicus has a wide potential distribution range, with the main suitable distribution areas in Shanghai, Jiangsu, Zhejiang, Anhui, Henan, Hubei, Hunan and Jiangxi. At present, the actual invasive range of E. philadelphicus is far narrower than its potential maximum invasive range, and the weed is likely to continue to spread. The ROC curve test indicated that the prediction with the MaxEnt model had a high precision and was credible. Air temperature and precipitation could be the main environmental variables affecting the potential distribution of E. philadelphicus. More attention should be paid to the harmfulness of the weed. Eradicating the existing E. philadelphicus populations and strictly monitoring the invasion of E. philadelphicus into its most suitable distribution areas could be effective measures to prevent and control the further invasion of this alien weed.

  17. Survival prediction based on compound covariate under Cox proportional hazard models.

    Directory of Open Access Journals (Sweden)

    Takeshi Emura

    Full Text Available Survival prediction from a large number of covariates is a current focus of statistical and medical research. In this paper, we study a methodology known as the compound covariate prediction performed under univariate Cox proportional hazard models. We demonstrate via simulations and real data analysis that the compound covariate method generally competes well with ridge regression and Lasso methods, both already well-studied methods for predicting survival outcomes with a large number of covariates. Furthermore, we develop a refinement of the compound covariate method by incorporating likelihood information from multivariate Cox models. The new proposal is an adaptive method that borrows information contained in both the univariate and multivariate Cox regression estimators. We show that the new proposal has a theoretical justification from a statistical large sample theory and is naturally interpreted as a shrinkage-type estimator, a popular class of estimators in statistical literature. Two datasets, the primary biliary cirrhosis of the liver data and the non-small-cell lung cancer data, are used for illustration. The proposed method is implemented in R package "compound.Cox" available in CRAN at http://cran.r-project.org/.

  18. A Model to Predict Nitrogen Losses in Advanced Soil-Based Wastewater Treatment Systems

    Science.gov (United States)

    Morales, I.; Cooper, J.; Loomis, G.; Kalen, D.; Amador, J.; Boving, T. B.

    2014-12-01

    Most of the non-point source Nitrogen (N) load in rural areas is attributed to onsite wastewater treatment systems (OWTS). Nitrogen compounds are considered environmental pollutants because they deplete the oxygen availability in water bodies and produce eutrophication. The objective of this study was to simulate the fate and transport of Nitrogen in OWTS. The commercially-available 2D/3D HYDRUS software was used to develop a transport and fate model. Experimental data from a laboratory meso-cosm study included the soil moisture content, NH4 and NO3- data. That data set was used to calibrate the model. Three types of OWTS were simulated: (1) pipe-and-stone (P&S), (2) advanced soil drainfields, pressurized shallow narrow drainfield (SND) and (3) Geomat (GEO), a variation of SND. To better understand the nitrogen removal mechanism and the performance of OWTS technologies, replicate (n = 3) intact soil mesocosms were used with 15N-labelled nitrogen inputs. As a result, it was estimated that N removal by denitrification was predominant in P&S. However, it is suggested that N was removed by nitrification in SND and GEO. The calibrated model was used to estimate Nitrogen fluxes for both conventional and advanced OWTS. Also, the model predicted the N losses from nitrification and denitrification in all OWTS. These findings help to provide practitioners with guidelines to estimate N removal efficiencies for OWTS, and predict N loads and spatial distribution for identifying non-point sources.

  19. An Analytical Model for Fatigue Life Prediction Based on Fracture Mechanics and Crack Closure

    DEFF Research Database (Denmark)

    Ibsø, Jan Behrend; Agerskov, Henning

    1996-01-01

    Fatigue in steel structures subjected to stochastic loading is studied. Of special interest is the problem of fatigue damage accumulation and in this connection, a comparison between experimental results and results obtained using fracture mechanics. Fatigue test results obtained for welded plate...... test specimens are compared with fatigue life predictions using a fracture mechanics approach. In the calculation of the fatigue life, the influence of the welding residual stresses and crack closure on the fatigue crack growth is considered. A description of the crack closure model for analytical...... of the analytical fatigue lives. Both the analytical and experimental results obtained show that the Miner rule may give quite unconservative predictions of the fatigue life for the types of stochastic loading studied....

  20. A Temperature Prediction Model for Oil-immersed Transformer Based on Thermal-circuit Theory

    Institute of Scientific and Technical Information of China (English)

    WEI Ben-gang; LIN Jun; TAN Li; GAO Kai; LIU Jia-yu; LI Hua-long; LI Jiang-tao

    2012-01-01

    This paper proposes an improved temperature prediction model for oil-immersed transformers. The influences of the environmental temperature and of the heat-sinking capability changing with temperature are considered. When calculating the heat dissipation from the transformer tank to the surroundings, the average oil temperature is selected as the node value in the thermal circuit. The new thermal model is validated against the delivery test data of three transformers: a 220 kV-300 MV·A unit, a 110 kV-40 MV·A unit and a 220 kV-75 MV·A unit. Meanwhile, the results from the proposed model are also compared with two methods recommended in the IEC loading guide.

  1. A multi-scale continuum model of skeletal muscle mechanics predicting force enhancement based on actin-titin interaction.

    Science.gov (United States)

    Heidlauf, Thomas; Klotz, Thomas; Rode, Christian; Altan, Ekin; Bleiler, Christian; Siebert, Tobias; Röhrle, Oliver

    2016-12-01

    Although recent research emphasises the possible role of titin in skeletal muscle force enhancement, this property is commonly ignored in current computational models. This work presents the first biophysically based continuum-mechanical model of skeletal muscle that considers, in addition to actin-myosin interactions, force enhancement based on actin-titin interactions. During activation, titin attaches to actin filaments, which results in a significant reduction in titin's free molecular spring length and therefore results in increased titin forces during a subsequent stretch. The mechanical behaviour of titin is included on the microscopic half-sarcomere level of a multi-scale chemo-electro-mechanical muscle model, which is based on the classic sliding-filament and cross-bridge theories. In addition to titin stress contributions in the muscle fibre direction, the continuum-mechanical constitutive relation accounts for geometrically motivated, titin-induced stresses acting in the muscle's cross-fibre directions. Representative simulations of active stretches under maximal and submaximal activation levels predict realistic magnitudes of force enhancement in fibre direction. For example, stretching the model by 20 % from optimal length increased the isometric force at the target length by about 30 %. Predicted titin-induced stresses in the muscle's cross-fibre directions are rather insignificant. Including the presented development in future continuum-mechanical models of muscle function in dynamic situations will lead to more accurate model predictions during and after lengthening contractions.

  2. Research on the Wire Network Signal Prediction Based on the Improved NNARX Model

    Science.gov (United States)

    Zhang, Zipeng; Fan, Tao; Wang, Shuqing

    It is difficult to accurately obtain the wire network signal of a power system's high-voltage transmission lines in the process of monitoring and repair. In order to solve this problem, the signal measured in a remote substation or laboratory is employed to make multipoint predictions to obtain the needed data; however, the obtained power grid frequency signal is delayed. To address this, an improved NNARX network that can predict the frequency signal based on multi-point data collected by a remote substation PMU is described in this paper. As the error surface of the NNARX network is rather complicated, the Levenberg-Marquardt (L-M) algorithm is used to train the network. The simulation results show that the NNARX network has good prediction performance, which provides accurate real-time data for field testing and maintenance.

  3. Spatial characterization and prediction of Neanderthal sites based on environmental information and stochastic modelling

    Science.gov (United States)

    Maerker, Michael; Bolus, Michael

    2014-05-01

    We present a unique spatial dataset of Neanderthal sites in Europe that was used to train a set of stochastic models to reveal the correlations between the site locations and environmental indices. In order to assess the relations between the Neanderthal sites and the environmental variables, we applied a boosted regression tree approach (TREENET), a statistical mechanics approach (MAXENT) and support vector machines. The stochastic models employ a learning algorithm to identify a model that best fits the relationship between the attribute set (the predictor, i.e. environmental, variables) and the classified response variable, which is in this case the type of Neanderthal site. A quantitative evaluation of model performance was done by determining the suitability of the models for geo-archaeological applications and by helping to identify those aspects of the methodology that need improvement. The models' predictive performances were assessed by constructing Receiver Operating Characteristics (ROC) curves for each Neanderthal class, both for training and test data. In a ROC curve the sensitivity is plotted over the false positive rate (1-specificity) for all possible cut-off points, and the quality of a ROC curve is quantified by the area under the curve. The dependent or target variable in this study is the locations of Neanderthal sites described by latitude and longitude. The information on the site locations was collected from the literature and our own research, and all sites were checked for positional accuracy using high-resolution maps and Google Earth. The study illustrates that the models show a distinct ranking in performance, with TREENET outperforming the other approaches. Moreover, Pre-Neanderthals, Early Neanderthals and Classic Neanderthals show specific spatial distributions. However, all models show a wide correspondence in the selection of the most important predictor variables, generally showing less
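
    The ROC/AUC evaluation mentioned above can be sketched in a few lines; gradient boosting is used here only as a rough stand-in for TREENET, and the site and environmental data are random placeholders rather than the actual dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                  # e.g. elevation, slope, distance to water, ...
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=1.0, size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]

fpr, tpr, _ = roc_curve(y_te, prob)            # sensitivity vs. false positive rate
print("area under the ROC curve:", round(roc_auc_score(y_te, prob), 3))
```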

  4. Antioxidant-capacity-based models for the prediction of acrylamide reduction by flavonoids.

    Science.gov (United States)

    Cheng, Jun; Chen, Xinyu; Zhao, Sheng; Zhang, Yu

    2015-02-01

    The aim of this study was to investigate the applicability of artificial neural network (ANN) and multiple linear regression (MLR) models for the estimation of acrylamide reduction by flavonoids, using multiple antioxidant capacities of Maillard reaction products as variables via a microwave food processing workstation. The addition of selected flavonoids could effectively reduce acrylamide formation, which may be closely related to the number of phenolic hydroxyl groups of flavonoids (R: 0.735-0.951, P < 0.05). The reduction of acrylamide formation correlated well with the change of trolox equivalent antioxidant capacity (ΔTEAC) measured by DPPH (R(2)=0.833), ABTS (R(2)=0.860) or FRAP (R(2)=0.824) assay. Both ANN and MLR models could effectively serve as predictive tools for estimating the reduction of acrylamide affected by flavonoids. The current predictive model study provides a low-cost and easy-to-use approach to the estimation of rates at which acrylamide is degraded, while avoiding tedious sample pretreatment procedures and advanced instrumental analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Technical Note on a Track-pattern-based Model for Predicting Seasonal Tropical Cyclone Activity over the Western North Pacific

    Institute of Scientific and Technical Information of China (English)

    Chang-Hoi HO; Joo-Hong KIM; Hyeong-Seog KIM; Woosuk CHOI; Min-Hee LEE; Hee-Dong YOO; Tae-Ryong KIM

    2013-01-01

    Recently, the National Typhoon Center (NTC) at the Korea Meteorological Administration launched a track-pattern-based model that predicts the horizontal distribution of tropical cyclone (TC) track density from June to October. This model is the first approach to target seasonal TC track clusters covering the entire western North Pacific (WNP) basin, and may represent a milestone for seasonal TC forecasting, using a simple statistical method that can be applied at weather operation centers. In this note, we describe the procedure of the track-pattern-based model with brief technical background to provide practical information on the use and operation of the model. The model comprises three major steps. First, long-term data of WNP TC tracks reveal seven climatological track clusters. Second, the TC counts for each cluster are predicted using a hybrid statistical-dynamical method, using the seasonal prediction of large-scale environments. Third, the final forecast map of track density is constructed by merging the spatial probabilities of the seven clusters and applying necessary bias corrections. Although the model is developed to issue the seasonal forecast in mid-May, it can be applied to alternative dates and target seasons following the procedure described in this note. Work continues on establishing an automatic system for this model at the NTC.

  6. Predictive accuracy of the PanCan lung cancer risk prediction model - external validation based on CT from the Danish Lung Cancer Screening Trial

    Energy Technology Data Exchange (ETDEWEB)

    Winkler Wille, Mathilde M.; Dirksen, Asger [Gentofte Hospital, Department of Respiratory Medicine, Hellerup (Denmark); Riel, Sarah J. van; Jacobs, Colin; Scholten, Ernst T.; Ginneken, Bram van [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Saghir, Zaigham [Herlev Hospital, Department of Respiratory Medicine, Herlev (Denmark); Pedersen, Jesper Holst [Copenhagen University Hospital, Department of Thoracic Surgery, Rigshospitalet, Koebenhavn Oe (Denmark); Hohwue Thomsen, Laura [Hvidovre Hospital, Department of Respiratory Medicine, Hvidovre (Denmark); Skovgaard, Lene T. [University of Copenhagen, Department of Biostatistics, Koebenhavn Oe (Denmark)

    2015-10-15

    Lung cancer risk models should be externally validated to test generalizability and clinical usefulness. The Danish Lung Cancer Screening Trial (DLCST) is a population-based prospective cohort study, used to assess the discriminative performances of the PanCan models. From the DLCST database, 1,152 nodules from 718 participants were included. Parsimonious and full PanCan risk prediction models were applied to DLCST data, and also coefficients of the model were recalculated using DLCST data. Receiver operating characteristics (ROC) curves and area under the curve (AUC) were used to evaluate risk discrimination. AUCs of 0.826-0.870 were found for DLCST data based on PanCan risk prediction models. In the DLCST, age and family history were significant predictors (p = 0.001 and p = 0.013). Female sex was not confirmed to be associated with higher risk of lung cancer; in fact opposing effects of sex were observed in the two cohorts. Thus, female sex appeared to lower the risk (p = 0.047 and p = 0.040) in the DLCST. High risk discrimination was validated in the DLCST cohort, mainly determined by nodule size. Age and family history of lung cancer were significant predictors and could be included in the parsimonious model. Sex appears to be a less useful predictor. (orig.)

  7. The Price Model of Aquatic Products Based on Predictive Control Theory

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    This paper discusses a disequilibrium cobweb model of the price of aquatic products and applies predictive control theory so that the system operates stably and the deviation between supply and demand of aquatic products smoothly tracks the pre-given target. It defines the supply and demand change model and studies the impact of parameter selection in this model on the dynamic behaviour and robustness of the system. Simulations are conducted in Matlab to obtain the response curve of the model. The results show that in the early period after commodities enter the market, affected by a lack of market information and many other factors, the price fluctuates greatly over a short time. The market gradually achieves a balance between supply and demand over time, and the price fluctuations in two neighbouring periods become broadly consistent. Increasing the model parameter can decrease overshoot and promote the stability of the system, but the slower the dynamic response, the longer it takes for the deviation between supply and demand to accurately track the given target. Therefore, by selecting different parameters, decision-makers can establish different models of supply and demand changes to meet actual needs and ensure stable development of the market. Simulation results verify the excellent performance of this algorithm.

  8. Modeling and Prediction of Coal Ash Fusion Temperature based on BP Neural Network

    Directory of Open Access Journals (Sweden)

    Miao Suzhen

    2016-01-01

    Full Text Available Coal ash is the residue generated from the combustion of coal. The ash fusion temperature (AFT) of coal gives detailed information on the suitability of a coal source for gasification procedures, and specifically on the extent to which ash agglomeration or clinkering is likely to occur within the gasifier. To investigate the contribution of oxides in coal ash to the AFT, data on coal ash chemical compositions and Softening Temperature (ST) in different regions of China were collected in this work and a BP neural network model was established with the XD-APC PLATFORM. In the BP model, the inputs were the ash compositions and the output was the ST. In addition, the ash fusion temperature prediction model was obtained from industrial data and the model was generalized with different industrial data. Compared to empirical formulas, the BP neural network obtained better results. Through different tests, the best result and the best configuration for the model were obtained: the number of hidden-layer nodes of the BP network was set to three, the component contents (SiO2, Al2O3, Fe2O3, CaO, MgO) were used as inputs, and ST was used as the output of the model.

  9. Stability predictions for high-order ΣΔ modulators based on quasilinear modeling

    DEFF Research Database (Denmark)

    Risbo, Lars

    1994-01-01

    This paper introduces a novel interpretation of the instability mechanisms in high-order one-bit Sigma-Delta modulators. Furthermore, it is demonstrated how the maximum stable amplitude range can be predicted very well. The results are obtained using an extension of the well-known quasilinear modeling of the one-bit quantizer. The theoretical results are verified by numerical simulations of a number of realistic 4th order modulators designed by means of standard filter design tools. The results are useful for automated design and optimization of loop filters.

  10. Boundary-layer transition prediction using a simplified correlation-based model

    Directory of Open Access Journals (Sweden)

    Xia Chenchao

    2016-02-01

    Full Text Available This paper describes a simplified transition model based on the recently developed correlation-based γ-Reθt transition model. The transport equation of transition momentum thickness Reynolds number is eliminated for simplicity, and new transition length function and critical Reynolds number correlation are proposed. The new model is implemented into an in-house computational fluid dynamics (CFD code and validated for low and high-speed flow cases, including the zero pressure flat plate, airfoils, hypersonic flat plate and double wedge. Comparisons between the simulation results and experimental data show that the boundary-layer transition phenomena can be reasonably illustrated by the new model, which gives rise to significant improvements over the fully laminar and fully turbulent results. Moreover, the new model has comparable features of accuracy and applicability when compared with the original γ-Reθt model. In the meantime, the newly proposed model takes only one transport equation of intermittency factor and requires fewer correlations, which simplifies the original model greatly. Further studies, especially on separation-induced transition flows, are required for the improvement of the new model.

  11. Boundary-layer transition prediction using a simplified correlation-based model

    Institute of Scientific and Technical Information of China (English)

    Xia Chenchao; Chen Weifang

    2016-01-01

    This paper describes a simplified transition model based on the recently developed correlation-based γ-Reθt transition model. The transport equation of transition momentum thickness Reynolds number is eliminated for simplicity, and new transition length function and critical Reynolds number correlation are proposed. The new model is implemented into an in-house computational fluid dynamics (CFD) code and validated for low and high-speed flow cases, including the zero pressure flat plate, airfoils, hypersonic flat plate and double wedge. Comparisons between the simulation results and experimental data show that the boundary-layer transition phenomena can be reasonably illustrated by the new model, which gives rise to significant improvements over the fully laminar and fully turbulent results. Moreover, the new model has comparable features of accuracy and applicability when compared with the original γ-Reθt model. In the meantime, the newly proposed model takes only one transport equation of intermittency factor and requires fewer correlations, which simplifies the original model greatly. Further studies, especially on separation-induced transition flows, are required for the improvement of the new model.

  12. A GIS based spatial prediction model for human – elephant conflicts (HEC

    Directory of Open Access Journals (Sweden)

    G Prasad

    2011-09-01

    Full Text Available A spatially explicit model was used to predict the probable human-elephant conflict (HEC) zones in the Agali range of Mannarghat forest division, Western Ghats, India. In the study, various geo-environmental factors such as DEM, slope, aspect, canopy density, distance to forest, distance to hamlets, distance to drainage and distance to river were used for the analysis. Weights-of-Evidence analysis has been applied to find the HEC probability zones using Arc SDM, an extension of ArcGIS. Weight tables were created for each factor, which gave information about the contrast, showing its positive or negative influence. Using these weight tables, a final predicted map, the posterior probability map, was created, which predicts the future human-elephant conflict zones. The class (-0.29-0.30) of 'canopy density' shows a contrast value of 2.8515, which is the highest positive influence, while the class 1000-1500 (m) of 'distance to forest' shows the least positive influence, with a value of 0.8462. The final posterior probability map shows a positive trend towards the north-east and negative trends towards the south. The relationships between the different zones in the map were compared with each class of the eight multiclass geo-environmental variables and land use. We found a strong correlation of HEC occurrence with distance to hamlets, agricultural land, etc.
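
    To make the weights-of-evidence quantities concrete, the following sketch computes W+, W- and the contrast for one binary evidential class from presence/absence cell counts; the counts are invented and only illustrate the formulae, not the study's data.

```python
import numpy as np

def weights_of_evidence(n_class_with, n_class_without, n_total_with, n_total_without):
    """W+ and W- compare how a class covers conflict vs. non-conflict cells; contrast = W+ - W-."""
    w_plus = np.log((n_class_with / n_total_with) /
                    (n_class_without / n_total_without))
    w_minus = np.log(((n_total_with - n_class_with) / n_total_with) /
                     ((n_total_without - n_class_without) / n_total_without))
    return w_plus, w_minus, w_plus - w_minus

# e.g. 120 of 200 conflict cells fall inside one canopy-density class, but only
# 800 of 5000 non-conflict cells do (all counts hypothetical)
w_plus, w_minus, contrast = weights_of_evidence(120, 800, 200, 5000)
print("W+ =", round(w_plus, 3), "W- =", round(w_minus, 3), "contrast =", round(contrast, 3))
```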

  13. Prediction of shear bands in sand based on granular flow model and two-phase equilibrium

    Institute of Scientific and Technical Information of China (English)

    张义同; 齐德瑄; 杜如虚; 任述光

    2008-01-01

    In contrast to the traditional interpretation of shear bands in sand as a bifurcation problem in continuum mechanics, shear bands in sand are considered as the high-strain phase (plastic phase) of sand, while the material outside the bands is still in the low-strain phase (elastic phase); namely, the two phases of sand can coexist under certain conditions. As a one-dimensional example, the results show that, for materials with strain-softening behavior, the two-phase solution is a stable branch of solutions, but the method to find two-phase solutions is very different from the one for bifurcation analysis. The theory of multi-phase equilibrium and the slow plastic flow model are applied to predict the formation and patterns of shear bands in sand specimens; the discontinuity of deformation gradient and stress across the interfaces between shear bands and other regions is considered, the continuity of displacements and traction across the interfaces is imposed, and the Maxwell relation is satisfied. The governing equations are deduced. The critical stress for the formation of a shear band, the stresses and strains both inside and outside the band, and the inclination angle of the band can all be predicted. The predicted results are consistent with experimental measurements.

  14. Scale-up of a physiologically-based pharmacokinetic model to predict the disposition of monoclonal antibodies in monkeys.

    Science.gov (United States)

    Glassman, Patrick M; Chen, Yang; Balthasar, Joseph P

    2015-10-01

    Preclinical assessment of monoclonal antibody (mAb) disposition during drug development often includes investigations in non-human primate models. In many cases, mAbs exhibit non-linear disposition that relates to mAb-target binding [i.e., target-mediated disposition (TMD)]. The goal of this work was to develop a physiologically-based pharmacokinetic (PBPK) model to predict non-linear mAb disposition in plasma and in tissues in monkeys. Physiological parameters for monkeys were collected from several sources, and plasma data for several mAbs associated with linear pharmacokinetics were digitized from prior literature reports. The digitized data displayed great variability; therefore, parameters describing inter-antibody variability in the rates of pinocytosis and convection were estimated. For prediction of the disposition of individual antibodies, we incorporated tissue concentrations of target proteins, where concentrations were estimated based on categorical immunohistochemistry scores, and with assumed localization of target within the interstitial space of each organ. Kinetics of target-mAb binding and target turnover, in the presence or absence of mAb, were implemented. The model was then employed to predict concentration versus time data, via Monte Carlo simulation, for two mAbs that have been shown to exhibit TMD (2F8 and tocilizumab). Model predictions, performed a priori with no parameter fitting, were found to provide good prediction of dose-dependencies in plasma clearance, the areas under plasma concentration versus time curves, and the time-course of plasma concentration data. This PBPK model may find utility in predicting plasma and tissue concentration versus time data and, potentially, the time-course of receptor occupancy (i.e., mAb-target binding) to support the design and interpretation of preclinical pharmacokinetic-pharmacodynamic investigations in non-human primates.

  15. A universal computational model for predicting antigenic variants of influenza A virus based on conserved antigenic structures

    Science.gov (United States)

    Peng, Yousong; Wang, Dayan; Wang, Jianhong; Li, Kenli; Tan, Zhongyang; Shu, Yuelong; Jiang, Taijiao

    2017-01-01

    Rapid determination of the antigenicity of influenza A virus could help identify the antigenic variants in time. Currently, there is a lack of computational models for predicting antigenic variants of some common hemagglutinin (HA) subtypes of influenza A viruses. By means of sequence analysis, we demonstrate here that multiple HA subtypes of influenza A virus undergo similar mutation patterns of HA1 protein (the immunogenic part of HA). Further analysis on the antigenic variation of influenza A virus H1N1, H3N2 and H5N1 showed that the amino acid residues’ contribution to antigenic variation highly differed in these subtypes, while the regional bands, defined based on their distance to the top of HA1, played conserved roles in antigenic variation of these subtypes. Moreover, the computational models for predicting antigenic variants based on regional bands performed much better in the testing HA subtype than those did based on amino acid residues. Therefore, a universal computational model, named PREDAV-FluA, was built based on the regional bands to predict the antigenic variants for all HA subtypes of influenza A viruses. The model achieved an accuracy of 0.77 when tested with avian influenza H9N2 viruses. It may help for rapid identification of antigenic variants in influenza surveillance. PMID:28165025

  16. Urban climate model MUKLIMO_3 in prediction mode - evaluation of model performance based on the case study of Vienna

    Science.gov (United States)

    Hollosi, Brigitta; Zuvela-Aloise, Maja

    2017-04-01

    To reduce the negative health impacts of extreme heat load in urban areas, the application of early warning systems that use weather forecast models to predict forthcoming heat events is of utmost importance. In state-of-the-art operational heat warning systems, the meteorological information relies on weather forecasts from regional numerical models and on monitoring stations that do not include details of the urban structure. In this study, the dynamical urban climate model MUKLIMO_3 (horizontal resolution of 100-200 m) is initialized with vertical profiles from the archived daily forecast data of the ZAMG hydrostatic ALARO numerical weather prediction model run at 0600 UTC to simulate the development of the urban heat island in Vienna on a daily basis. The aim is to evaluate the performance of the urban climate model, so far applied only in climatological studies, in a weather prediction mode, using the summer period 2011-2015 as a test period. The focus of the investigation is on the assessment of the urban heat load during the daytime. The model output has been evaluated against the monitoring data at the weather stations in the area of the city. The model results for daily maximum temperature show good agreement with the observations, especially at the urban and suburban stations where the mean bias is low. The results are highly dependent on the input data from the meso-scale model, which leads to larger deviations from observations if the prediction is not representative of the given day. This study can be used to support urban planning strategies and to improve existing practices to alert decision-makers and the public to impending dangers of excessive heat.

  17. A new structure-based QSAR method affords both descriptive and predictive models for phosphodiesterase-4 inhibitors.

    Science.gov (United States)

    Dong, Xialan; Zheng, Weifan

    2008-11-06

    We describe the application of a new QSAR (quantitative structure-activity relationship) formalism to the analysis and modeling of PDE-4 inhibitors. This new method takes advantage of the X-ray structural information of the PDE-4 enzyme to characterize the small molecule inhibitors. It calculates molecular descriptors based on the matching of their pharmacophore feature pairs with those (the reference) of the target binding pocket. Since the reference is derived from the X-ray crystal structures of the target under study, these descriptors are target-specific and easy to interpret. We have analyzed 35 indole derivative-based PDE-4 inhibitors where Partial Least Square (PLS) analysis has been employed to obtain the predictive models. Compared to traditional QSAR methods such as CoMFA and CoMSIA, our models are more robust and predictive measured by statistics for both the training and test sets of molecules. Our method can also identify critical pharmacophore features that are responsible for the inhibitory potency of the small molecules. Thus, this structure-based QSAR method affords both descriptive and predictive models for phosphodiesterase-4 inhibitors. The success of this study has also laid a solid foundation for systematic QSAR modeling of the PDE family of enzymes, which will ultimately contribute to chemical genomics research and drug discovery targeting the PDE enzymes.
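
    The PLS step named above can be sketched compactly with scikit-learn's PLSRegression; the descriptor matrix and activity values below are random placeholders rather than the 35 indole derivatives, so the sketch shows only the workflow.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random(size=(35, 40))                 # target-specific pharmacophore-pair descriptors (synthetic)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.2, size=35)   # synthetic pIC50 values

pls = PLSRegression(n_components=3)           # latent-variable regression, as in PLS-based QSAR
q2 = cross_val_score(pls, X, y, cv=5, scoring="r2")   # cross-validated predictivity
pls.fit(X, y)                                  # final model on all compounds
print("mean cross-validated r2:", round(float(q2.mean()), 3))
```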

  18. Predictive Skill of Meteorological Drought Based on Multi-Model Ensemble Forecasts: A Real-Time Assessment

    Science.gov (United States)

    Chen, L. C.; Mo, K. C.; Zhang, Q.; Huang, J.

    2014-12-01

    Drought prediction from monthly to seasonal time scales is of critical importance to disaster mitigation, agricultural planning, and multi-purpose reservoir management. Starting in December 2012, NOAA Climate Prediction Center (CPC) has been providing operational Standardized Precipitation Index (SPI) Outlooks using the North American Multi-Model Ensemble (NMME) forecasts, to support CPC's monthly drought outlooks and briefing activities. The current NMME system consists of six model forecasts from U.S. and Canada modeling centers, including the CFSv2, CM2.1, GEOS-5, CCSM3.0, CanCM3, and CanCM4 models. In this study, we conduct an assessment of the predictive skill of meteorological drought using real-time NMME forecasts for the period from May 2012 to May 2014. The ensemble SPI forecasts are the equally weighted mean of the six model forecasts. Two performance measures, the anomaly correlation coefficient and root-mean-square errors against the observations, are used to evaluate forecast skill. Similar to the assessment based on NMME retrospective forecasts, predictive skill of monthly-mean precipitation (P) forecasts is generally low after the second month and errors vary among models. Although P forecast skill is not large, SPI predictive skill is high and the differences among models are small. The skill mainly comes from the P observations appended to the model forecasts. This factor also contributes to the similarity of SPI prediction among the six models. Still, NMME SPI ensemble forecasts have higher skill than those based on individual models or persistence, and the 6-month SPI forecasts are skillful out to four months. The three major drought events that occurred during the 2012-2014 period, the 2012 Central Great Plains drought, the 2013 Upper Midwest flash drought, and the 2013-2014 California drought, are used as examples to illustrate the system's strength and limitations. For precipitation-driven drought events, such as the 2012 Central Great Plains drought
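
    The SPI itself is straightforward to compute once a precipitation record is available; the rough sketch below fits a gamma distribution to a synthetic series of accumulated precipitation for one calendar month and maps its CDF to a standard normal (an operational SPI additionally handles months with zero precipitation).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hist_precip = rng.gamma(shape=2.0, scale=40.0, size=30)    # 30 years of, e.g., 3-month totals (mm)
current_total = 45.0                                       # this year's accumulation (mm), hypothetical

shape, loc, scale = stats.gamma.fit(hist_precip, floc=0)   # fit gamma with location fixed at zero
cdf_value = stats.gamma.cdf(current_total, shape, loc=loc, scale=scale)
spi = stats.norm.ppf(cdf_value)                            # negative SPI values indicate drought
print("SPI:", round(float(spi), 2))
```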

  19. Warranty optimisation based on the prediction of costs to the manufacturer using neural network model and Monte Carlo simulation

    Science.gov (United States)

    Stamenkovic, Dragan D.; Popovic, Vladimir M.

    2015-02-01

    Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master techniques for warranty cost prediction based on the reliability characteristics of the product. In this paper, a combination free replacement and pro rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs under various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate prediction method. The reliability parameters obtained in this way are then used in a Monte Carlo simulation to predict the times to failure needed for the warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower the costs and increase the profit.
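
    The Monte Carlo step can be sketched as follows: draw simulated times to failure from a fitted life distribution and price each failure according to the combined free replacement / pro rata policy. The Weibull parameters, warranty limits and unit cost below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the Monte Carlo step: simulate times to failure from a
# reliability model and estimate the expected cost of a combination free
# replacement / pro rata warranty. All numerical values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
shape, scale = 1.8, 1500.0      # assumed Weibull life distribution (hours)
W1, W2 = 500.0, 1200.0          # free replacement up to W1, pro rata rebate up to W2
unit_cost = 2.0                 # manufacturing cost per bulb (arbitrary units)
n_sim = 100_000

t = scale * rng.weibull(shape, size=n_sim)   # simulated times to failure

cost = np.where(t <= W1, unit_cost,                           # full replacement
       np.where(t <= W2, unit_cost * (W2 - t) / (W2 - W1),    # pro rata rebate
                0.0))                                         # out of warranty
print("expected warranty cost per unit sold:", cost.mean())
```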

  20. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled-data feedback controller from the iterative solution of open-loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance, and the assumptions needed in order to rigorously ensure these properties in a nomina...

  1. Nominal Model Predictive Control

    OpenAIRE

    Grüne, Lars

    2014-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled-data feedback controller from the iterative solution of open-loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance, and the assumptions needed in order to rigorously ensure these properties in a nomina...
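
    The receding-horizon principle described in these two records can be illustrated numerically: at each sampling instant a finite-horizon open-loop problem is solved and only the first input is applied. The plant (a discretized double integrator), the weights and the horizon below are illustrative assumptions, and constraints are omitted for brevity.

```python
# Minimal sketch of the receding-horizon idea behind MPC. The finite-horizon
# quadratic cost is minimized analytically (no constraints), and only the first
# optimal input is applied before the problem is re-solved at the next step.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete-time double integrator (dt = 0.1)
B = np.array([[0.005], [0.1]])
Q, R, N = np.diag([10.0, 1.0]), np.array([[0.1]]), 20

def mpc_input(x0):
    # Prediction matrices: stacked states X = Phi x0 + Gamma U over the horizon N.
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((2 * N, N))
    for i in range(N):
        for j in range(i + 1):
            Gamma[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    H = Gamma.T @ Qbar @ Gamma + Rbar
    f = Gamma.T @ Qbar @ Phi @ x0
    U = np.linalg.solve(H, -f)             # unconstrained finite-horizon optimum
    return U[0]                            # apply only the first input

x = np.array([1.0, 0.0])
for _ in range(50):
    u = mpc_input(x)
    x = A @ x + B.flatten() * u
print("state after 50 steps:", x)
```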

  2. Effective automated prediction of vertebral column pathologies based on logistic model tree with SMOTE preprocessing.

    Science.gov (United States)

    Karabulut, Esra Mahsereci; Ibrikci, Turgay

    2014-05-01

    This study develops a logistic model tree based automation system for accurate recognition of types of vertebral column pathologies. Six biomechanical measures are used for this purpose: pelvic incidence, pelvic tilt, lumbar lordosis angle, sacral slope, pelvic radius and grade of spondylolisthesis. A two-phase classification model is employed, in which the first step is preprocessing the data with the Synthetic Minority Over-sampling Technique (SMOTE), and the second is feeding the preprocessed data to a Logistic Model Tree (LMT) classifier. We achieved an accuracy of 89.73% and an Area Under the Curve (AUC) of 0.964 in computer-based automatic detection of the pathology. This was validated via a 10-fold cross-validation experiment conducted on clinical records of 310 patients. The study also presents a comparative analysis of the vertebral column data with the use of several machine learning algorithms.
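
    A two-phase pipeline of this kind (oversampling followed by classification, evaluated with 10-fold cross-validation) can be sketched with scikit-learn and imbalanced-learn. Since scikit-learn has no Logistic Model Tree, a plain logistic regression stands in for the LMT, and the data below are synthetic rather than the 310 clinical records.

```python
# Sketch of the two-phase scheme described above: SMOTE preprocessing followed
# by a classifier, evaluated with 10-fold cross-validation on synthetic data.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=310, n_features=6, weights=[0.7, 0.3],
                           random_state=0)            # 6 features mimic the biomechanical measures

pipe = Pipeline([("smote", SMOTE(random_state=0)),    # oversample the minority class (training folds only)
                 ("clf", LogisticRegression(max_iter=1000))])

auc = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc")
print("10-fold AUC: %.3f +/- %.3f" % (auc.mean(), auc.std()))
```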

  3. A unified model for prediction of CSOR in steam-based bitumen recovery

    Energy Technology Data Exchange (ETDEWEB)

    Edmunds, N.; Peterson, J. [Laricina Energy Ltd., Calgary, AB (Canada)

    2007-07-01

    A model was used to estimate the cumulative steam-oil ratio (CSOR) of steam-assisted gravity drainage (SAGD) processes in high-permeability reservoirs. The model was designed to correlate the CSOR performance of current SAGD projects against single reservoir variables, for use in evaluations and economic analyses. The model assumed that depletion was gravity driven and that oil produced from the steam zone was uniformly reduced to residual oil saturation. An empirical constant was used to account for heat stored below the chamber as a factor of the overburden transient losses. CSOR was predicted as a function of time. Model parameters were tested at EnCana's Foster Creek McMurray project and at Canadian Natural Resources' Primrose Grand Rapids site, while CSS data were obtained from the Cold Lake Clearwater project. The model was then used to compare SAGD and CSS projects. Results showed that vertical-radial geometry was thermally inefficient when compared to the horizontal trough used in SAGD operations. Vertical CSS methods also made inefficient use of the gravity potential of the oil, due to the radial inflow constriction in the near-wellbore region. It was concluded that horizontal drainage geometry was able to sustain higher production rates at higher bottom-hole viscosities. 10 refs., 2 tabs., 9 figs.

  4. MLP based models to predict PM10, O3 concentrations, in Sines industrial area

    Science.gov (United States)

    Durao, R.; Pereira, M. J.

    2012-04-01

    Sines is an important Portuguese industrial area located on the southwest coast of Portugal, with important protected natural areas nearby. The main economic activities are related to this industrial area, the deep-water port, and the petrochemical and thermo-electric industries. Tourism is also an important economic activity, especially in summer, with potential to grow. The aim of this study is to develop prediction models of pollutant concentration categories (e.g. low concentration and high concentration) in order to provide early warnings to the competent authorities responsible for air quality management. Knowing in advance that high pollutant concentrations will occur allows the implementation of mitigation actions and the release of precautionary alerts to the population. The regional air quality monitoring network consists of three monitoring stations where a set of pollutant concentrations is recorded on a continuous basis. Tropospheric ozone (O3) and particulate matter (PM10) stand out from this set due to the high concentrations occurring in the region and their adverse effects on human health. Moreover, the major industrial plants of the region monitor the emitted flows of SO2, NO2 and particles at the principal chimneys (point sources), also on a continuous basis. Artificial neural networks (ANN) were therefore applied to predict next-day pollutant concentrations; due to their structure, ANNs are able to capture the non-linear relationships between predictor variables. The first step of this study was thus to apply multivariate exploratory techniques to select the best predictor variables; the classification trees methodology (CART) was revealed to be the most appropriate in this case. Results showed that atmospheric pollutant concentrations depend mainly on industrial emissions and a complex combination of meteorological factors and the time of year. In the second step, the Multi
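
    The classification step of such a system can be sketched with a small multilayer perceptron that maps selected predictors to next-day concentration categories. The feature set, threshold and data below are illustrative placeholders, not the Sines monitoring records.

```python
# Hedged sketch of an MLP classifier predicting next-day concentration
# categories (0 = low, 1 = high) from a handful of predictor variables.
# Features and labels are synthetic stand-ins for the selected predictors.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 1000
X = rng.normal(size=(n, 5))      # e.g. previous-day PM10, point-source emissions, wind, temperature
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=1))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```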

  5. Security Solutions for Networked Control Systems Based on DES Algorithm and Improved Grey Prediction Model

    Directory of Open Access Journals (Sweden)

    Liying Zhang

    2013-11-01

    Compared with conventional control systems, networked control systems (NCSs) are more open to the external network. As a result, they are more vulnerable to attacks from disgruntled insiders or malicious cyber-terrorist organizations. Therefore, the security issues of NCSs have been receiving a lot of attention recently. In this brief, we review the existing literature on security issues of NCSs and propose some security solutions for a DC motor networked control system. The typical Data Encryption Standard (DES) algorithm is adopted to implement data encryption and decryption. Furthermore, we design a Detection and Reaction Mechanism (DARM) on the basis of the DES algorithm and an improved grey prediction model. Finally, our proposed security solutions are tested against established models of deception and DoS attacks. The results of the numerical experiments clearly demonstrate the feasibility and effectiveness of the proposed solutions.
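
    The grey prediction component can be illustrated with the classical GM(1,1) model, which forecasts a short, positive series from its accumulated sum. The paper uses an improved variant; the sketch below implements only the textbook form, and the input series is an arbitrary example.

```python
# Hedged sketch of the classical GM(1,1) grey prediction model. This is the
# textbook form, for illustration only, not the improved variant in the paper.
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to the positive series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # developing coefficient, grey input
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # time-response function
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])       # inverse AGO back to the original series
    x0_hat[0] = x0[0]
    return x0_hat[n:]                                 # forecast values only

print(gm11_forecast([2.87, 3.28, 3.34, 3.59, 3.81], steps=2))
```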

  6. An Analytical Model for Fatigue Life Prediction Based on Fracture Mechanics and Crack Closure

    DEFF Research Database (Denmark)

    Ibsø, Jan Behrend; Agerskov, Henning

    1996-01-01

    Fatigue in steel structures subjected to stochastic loading is studied. Of special interest is the problem of fatigue damage accumulation and, in this connection, a comparison between experimental results and results obtained using fracture mechanics. Fatigue test results obtained for welded plate... test specimens are compared with fatigue life predictions using a fracture mechanics approach. In the calculation of the fatigue life, the influence of the welding residual stresses and crack closure on the fatigue crack growth is considered. A description of the crack closure model for analytical... determination of the fatigue life is included. Furthermore, the results obtained in studies of the various parameters that have an influence on the fatigue life are given. A very good agreement between experimental and analytical results is obtained when the crack closure model is used in determination...
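
    As a point of reference for the fracture mechanics approach, the sketch below integrates a plain Paris-type crack growth law cycle by cycle to obtain a fatigue life. The constants, geometry factor and crack sizes are illustrative assumptions, and the residual stress and crack closure effects treated in the paper are not included.

```python
# Minimal fracture-mechanics sketch: fatigue life estimated by integrating a
# Paris-type crack growth law da/dN = C * (dK)^m cycle by cycle under a
# constant-amplitude stress range. All numerical values are illustrative.
import math

C, m = 3.0e-12, 3.0          # Paris constants (units consistent with MPa*sqrt(m))
Y = 1.12                     # assumed geometry factor
delta_sigma = 100.0          # constant-amplitude stress range [MPa]
a, a_crit = 1.0e-3, 20.0e-3  # initial and critical crack depths [m]

n_cycles = 0
while a < a_crit:
    dK = Y * delta_sigma * math.sqrt(math.pi * a)   # stress intensity factor range
    a += C * dK ** m                                # crack growth in this cycle
    n_cycles += 1

print("predicted fatigue life: %d cycles" % n_cycles)
```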

  7. State-Space Modeling and Performance Analysis of Variable-Speed Wind Turbine Based on a Model Predictive Control Approach

    Directory of Open Access Journals (Sweden)

    H. Bassi

    2017-04-01

    Advancements in wind energy technologies have moved wind turbines from fixed-speed to variable-speed operation. This paper introduces an innovative version of a variable-speed wind turbine based on a model predictive control (MPC) approach. The proposed approach provides maximum power point tracking (MPPT), whose main objective is to capture the maximum wind energy in spite of the variable nature of the wind speed. The proposed MPC approach also reduces the constraints of the two main functional parts of the wind turbine: the full-load and partial-load segments. The pitch angle for full load and the rotating force for partial load are set concurrently in order to balance power generation as well as to reduce pitch angle operations. A mathematical analysis of the proposed system using a state-space approach is introduced. The simulation results using MATLAB/SIMULINK show that the performance of the wind turbine with the MPC approach is improved compared to a traditional PID controller at both low and high wind speeds.
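
    The MPPT objective mentioned above amounts to operating the rotor at the tip-speed ratio that maximizes the power coefficient for the current wind speed. The sketch below performs that search using a widely used generic Cp(lambda, beta) approximation; the coefficient values, rotor radius and wind speed are assumptions, not the turbine model from the paper.

```python
# Hedged MPPT illustration: for a given wind speed, sweep candidate rotor
# speeds and pick the one that maximizes captured power. The Cp expression is
# a common generic approximation, not the specific turbine model of the paper.
import numpy as np

rho, R = 1.225, 40.0                    # air density [kg/m^3], rotor radius [m] (assumed)

def cp(lam, beta=0.0):
    lam_i = 1.0 / (1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
    return 0.5176 * (116.0 / lam_i - 0.4 * beta - 5.0) * np.exp(-21.0 / lam_i) \
           + 0.0068 * lam

v_wind = 9.0                            # wind speed [m/s]
omega = np.linspace(0.5, 3.0, 500)      # candidate rotor speeds [rad/s]
lam = omega * R / v_wind                # tip-speed ratio
power = 0.5 * rho * np.pi * R**2 * cp(lam) * v_wind**3

best = np.argmax(power)
print("MPPT rotor speed: %.2f rad/s, power: %.0f kW" % (omega[best], power[best] / 1e3))
```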

  8. Advancing hydrometeorological prediction capabilities through standards-based cyberinfrastructure development: The community WRF-Hydro modeling system

    Science.gov (United States)

    gochis, David; Parodi, Antonio; Hooper, Rick; Jha, Shantenu; Zaslavsky, Ilya

    2013-04-01

    The need for improved assessments and predictions of many key environmental variables is driving a multitude of model development efforts in the geosciences. The proliferation of weather and climate impacts research is driving a host of new environmental prediction model development efforts as society seeks to understand how climate does and will impact key societal activities and resources and, in turn, how human activities influence climate and the environment. This surge in model development has highlighted the role of model coupling as a fundamental activity in itself and, at times, a significant bottleneck in weather and climate impacts research. This talk explores some of the recent activities and the progress that has been made in assessing the attributes of various approaches to the coupling of physics-based process models for hydrometeorology. One example modeling system that is emerging from these efforts is the community 'WRF-Hydro' modeling system, which is based on the modeling architecture of the Weather Research and Forecasting (WRF) model. An overview of the structural components of WRF-Hydro will be presented, as will results from several recent applications, which include the prediction of flash flooding events in the Rocky Mountain Front Range region of the U.S. and along the Ligurian coastline in the northern Mediterranean. Efficient integration of the coupled modeling system with distributed infrastructure for collecting and sharing hydrometeorological observations is one of the core themes of the work. Specifically, we aim to demonstrate how data management infrastructures used in the US and Europe, in particular data sharing technologies developed within the CUAHSI Hydrologic Information System and UNIDATA, can interoperate based on international standards for data discovery and exchange, such as standards developed by the Open Geospatial Consortium and adopted by GEOSS. The data system we envision will help manage WRF-Hydro prediction model data flows, enabling

  9. Combining process-based and correlative models improves predictions of climate change effects on Schistosoma mansoni transmission in eastern Africa

    Directory of Open Access Journals (Sweden)

    Anna-Sofie Stensgaard

    2016-03-01

    Currently, two broad types of approach for predicting the impact of climate change on vector-borne diseases can be distinguished: (i) empirical-statistical (correlative) approaches that use statistical models of relationships between vector and/or pathogen presence and environmental factors; and (ii) process-based (mechanistic) approaches that seek to simulate detailed biological or epidemiological processes that explicitly describe system behavior. Both have advantages and disadvantages, but it is generally acknowledged that both approaches have value in assessing the response of species in general to climate change. Here, we combine a previously developed dynamic, agent-based model of the temperature-sensitive stages of the Schistosoma mansoni and intermediate host snail lifecycles with a statistical model of snail habitat suitability for eastern Africa. Baseline model output compared to empirical prevalence data suggests that the combined model performs better than a temperature-driven model alone, and highlights the importance of including snail habitat suitability when modeling schistosomiasis risk. There was general agreement among models in predicting changes in risk, with 24-36% of the eastern Africa region predicted to experience an increase in risk of up to 20% as a result of increasing temperatures over the next 50 years. Conversely, the models predicted a general decrease in risk in 30-37% of the study area. The snail habitat suitability models also suggest that anthropogenically altered habitats play a vital role in the current distribution of the intermediate snail host, and hence we stress the importance of accounting for land use changes in models of future changes in schistosomiasis risk.

  10. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  11. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate, as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
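
    For readers unfamiliar with Luigi, the underlying task-dependency idea can be shown with a tiny plain-Luigi workflow (not the SciLuigi extension itself): each task declares its requirements, its output target and its work, and the scheduler resolves the dependency graph. The task names, file paths and placeholder logic below are hypothetical.

```python
# Minimal plain-Luigi sketch of a two-step modelling workflow: preprocess data,
# then train a model that depends on the preprocessed output. Task names and
# paths are hypothetical; SciLuigi adds further abstractions on top of this.
import luigi

class PreprocessData(luigi.Task):
    def output(self):
        return luigi.LocalTarget("preprocessed.csv")

    def run(self):
        with self.output().open("w") as f:
            f.write("feature,label\n1.0,0\n2.0,1\n")    # placeholder preprocessing step

class TrainModel(luigi.Task):
    def requires(self):
        return PreprocessData()                          # dependency on the preprocessing task

    def output(self):
        return luigi.LocalTarget("model.txt")

    def run(self):
        with self.input().open() as f:
            n_rows = sum(1 for _ in f) - 1               # stand-in for model fitting
        with self.output().open("w") as f:
            f.write("trained on %d rows\n" % n_rows)

if __name__ == "__main__":
    luigi.build([TrainModel()], local_scheduler=True)
```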

  12. Multi-gene genetic programming based predictive models for municipal solid waste gasification in a fluidized bed gasifier.

    Science.gov (United States)

    Pandey, Daya Shankar; Pan, Indranil; Das, Saptarshi; Leahy, James J; Kwapinski, Witold

    2015-03-01

    A multi-gene genetic programming technique is proposed as a new method to predict syngas yield production and th