WorldWideScience

Sample records for model predictions based

  1. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models and the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, and a dominant resonant field, and on the consequences of these in terms of limitations in the theory and in the practical use of the models.

  2. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR based robust MPC as well as to estimate the maximum performance improvements by robust MPC.
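
    A minimal numerical sketch of the underlying idea (not the authors' implementation): predictions over a horizon are formed from truncated impulse response coefficients, and a regularized l2 objective in the inputs and input rates is minimized. The impulse response, horizon and weights below are invented, and the input and input-rate constraints of the paper are omitted for brevity.

        import numpy as np

        # Hypothetical FIR coefficients (truncated impulse response) and horizon.
        h = np.array([0.0, 0.1, 0.25, 0.2, 0.1, 0.05])   # assumed example values
        N = 20                                            # prediction horizon
        r = np.ones(N)                                    # setpoint trajectory

        # Dynamic (convolution) matrix so that y = Phi @ u for the input sequence u.
        Phi = np.zeros((N, N))
        for i in range(N):
            for j in range(i + 1):
                if i - j < len(h):
                    Phi[i, j] = h[i - j]

        lam = 0.1                                         # l2 weight on input moves
        D = np.eye(N) - np.eye(N, k=-1)                   # first-difference (input-rate) operator

        # Unconstrained regularized least squares: min ||r - Phi u||^2 + lam ||D u||^2
        u = np.linalg.solve(Phi.T @ Phi + lam * D.T @ D, Phi.T @ r)
        print(u[:5])                                      # first planned inputs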

  3. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

Based on a model combining the grey model and the Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved to obtain an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. A rolling operation method was used in these prediction processes to improve the prediction precision. The results indicate that the improvement of the grey model is effective, that the prediction precision of the model combining the optimized unbiased grey model and the Markov model is better, and that the rolling operation method may improve the prediction precision further. (authors)
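
    A compact sketch of the basic GM(1,1) grey model that such grey-Markov schemes build on; the optimized unbiased variant, the rolling operation and the Markov correction of residuals described above are not reproduced here, and the corrosion-rate series is invented for illustration.

        import numpy as np

        x0 = np.array([0.82, 0.87, 0.95, 1.04, 1.10])      # hypothetical corrosion rates (mm/a)
        x1 = np.cumsum(x0)                                  # accumulated generating operation (AGO)

        z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
        B = np.column_stack([-z1, np.ones(len(z1))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development coefficient, grey input

        k = np.arange(len(x0) + 3)                          # fit plus a 3-step-ahead forecast
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # GM(1,1) time response function
        x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)]) # inverse AGO gives rate predictions
        print(x0_hat[-3:])                                  # a Markov chain would then classify
                                                            # and correct the residual errors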

  4. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

Aim: Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  5. Modeling of Complex Life Cycle Prediction Based on Cell Division

    Directory of Open Access Journals (Sweden)

    Fucheng Zhang

    2017-01-01

Effective fault diagnosis and reasonable life prediction are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and its working environment. At present, equipment life prediction methods include prediction based on condition monitoring, combined forecasting models, and data-driven approaches, most of which require a large amount of data. To address this issue, we propose learning from the mechanism of cell division in organisms. By studying the complex multifactor correlation life model, we have established a life prediction model of moderate complexity. In this paper, we model life prediction on the basis of cell division. Experiments show that our model can effectively simulate the state of cell division. Using this model as a reference, we will apply it to complex equipment life prediction.

  6. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

A reference test dataset is used to test the model. The sensitivity of gender prediction is increased compared with the current “genotype composition in ChrX” based approach. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher quality of the sequenced data.

  7. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
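
    A hedged sketch of the kind of parsimonious model described (demographics, self-reported balance problems and prior falls entered into a logistic regression and evaluated by AUC). The data below are simulated placeholders, not NHATS variables.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 1000
        X = np.column_stack([
            rng.integers(65, 95, n),        # age (years)
            rng.integers(0, 2, n),          # self-reported balance/coordination problem
            rng.integers(0, 2, n),          # fall in the previous year
        ])
        # Simulated 1-year fall outcome with an assumed association to the predictors.
        logit = -8.0 + 0.06 * X[:, 0] + 0.8 * X[:, 1] + 1.1 * X[:, 2]
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
        print("AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))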

  8. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
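
    A small sketch of the Markov chain (MC) idea mentioned above: pavement sections are assigned to discrete condition states, and a transition probability matrix, in practice estimated from repeated surveys, propagates the state distribution forward in time. The states and probabilities below are invented.

        import numpy as np

        # Hypothetical faulting condition states (1 = best ... 4 = worst) and a one-year
        # transition matrix; in practice each row would be estimated from survey data.
        P = np.array([[0.85, 0.15, 0.00, 0.00],
                      [0.00, 0.80, 0.20, 0.00],
                      [0.00, 0.00, 0.75, 0.25],
                      [0.00, 0.00, 0.00, 1.00]])

        state = np.array([1.0, 0.0, 0.0, 0.0])     # all sections start in the best state
        for year in range(10):
            state = state @ P                       # one-step Markov prediction
        print("state distribution after 10 years:", np.round(state, 3))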

  9. Moment based model predictive control for systems with additive uncertainty

    NARCIS (Netherlands)

    Saltik, M.B.; Ozkan, L.; Weiland, S.; Ludlage, J.H.A.

    2017-01-01

    In this paper, we present a model predictive control (MPC) strategy based on the moments of the state variables and the cost functional. The statistical properties of the state predictions are calculated through the open loop iteration of dynamics and used in the formulation of MPC cost function. We

  10. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
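
    A generic sketch of the bootstrap procedure described, using a toy one-parameter exponential dose-response model in place of the authors' salivary-function model: resampling the fit data yields a histogram of alternative parameter values, and adding the estimated residual noise yields a histogram of predicted outcomes for a new plan. All numbers below are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy outcome model: relative salivary function = exp(-k * mean_dose); k is fitted.
        dose = rng.uniform(10, 50, 40)                          # hypothetical mean gland doses (Gy)
        outcome = np.exp(-0.03 * dose) + rng.normal(0, 0.05, dose.size)

        def fit_k(d, y):
            # Least-squares fit of k on the log scale (clipped to keep the log defined).
            return -np.polyfit(d, np.log(np.clip(y, 1e-3, None)), 1)[0]

        # Bootstrap resampling of patients -> histogram of alternative parameter values.
        k_boot = np.array([fit_k(dose[idx], outcome[idx])
                           for idx in (rng.integers(0, dose.size, dose.size) for _ in range(2000))])

        # For a new plan (mean dose 30 Gy), propagate parameter uncertainty plus residual
        # noise into a histogram of predicted outcomes.
        resid_sd = np.std(outcome - np.exp(-fit_k(dose, outcome) * dose))
        pred = np.exp(-k_boot * 30.0) + rng.normal(0, resid_sd, k_boot.size)
        print("median and 90% interval:", np.percentile(pred, [50, 5, 95]))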

  11. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, and the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data shows that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  12. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  13. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

We propose a weather prediction model in this article based on a neural network and a fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than those of complex numerical forecasting models, which occupy large computational resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  14. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20 years of measured data of global solar radiation. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with three other models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and long. 29°34′E), and then, the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. The most common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulas of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
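
    For orientation, the Hargreaves-type relation that the cited Annandale and Allen models modify estimates global solar radiation from the diurnal temperature range and extraterrestrial radiation; a sketch with placeholder coefficients is shown below (the seventeen new models of the study are not reproduced).

        import numpy as np

        def hargreaves_radiation(t_max, t_min, ra, k_rs=0.16):
            """Temperature-based estimate of daily global solar radiation (MJ m-2 day-1).

            Rs = k_rs * sqrt(Tmax - Tmin) * Ra, the Hargreaves-type relation; the
            adjustment coefficient k_rs of roughly 0.16 (interior) or 0.19 (coastal)
            is a commonly quoted default, used here only as a placeholder calibration.
            """
            return k_rs * np.sqrt(np.maximum(t_max - t_min, 0.0)) * ra

        # Example: Tmax = 31 C, Tmin = 22 C, extraterrestrial radiation Ra = 40 MJ m-2 day-1
        print(hargreaves_radiation(31.0, 22.0, 40.0, k_rs=0.19))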

  15. Driver's mental workload prediction model based on physiological indices.

    Science.gov (United States)

    Yan, Shengyuan; Tran, Cong Chi; Wei, Yingying; Habiyaremye, Jean Luc

    2017-09-15

Developing an early warning model to predict the driver's mental workload (MWL) is critical and helpful, especially for new or less experienced drivers. The present study aims to investigate the correlation between new drivers' MWL and their work performance, as measured by the number of errors. Additionally, the group method of data handling is used to establish the driver's MWL predictive model based on a subjective rating (NASA task load index [NASA-TLX]) and six physiological indices. The results indicate that the NASA-TLX and the number of errors are positively correlated, and the predictive model demonstrates its validity with an R² value of 0.745. The proposed model is expected to provide new drivers with a reference value for their MWL based on the physiological indices, and driving lesson plans can be proposed to sustain an appropriate MWL as well as improve the driver's work performance.

  16. Cloud Based Metalearning System for Predictive Modeling of Biomedical Data

    Directory of Open Access Journals (Sweden)

    Milan Vukićević

    2014-01-01

The rapid growth and storage of biomedical data has enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework for ranking and selecting the best predictive algorithms for the data at hand with open-source big data technologies for the analysis of biomedical data.

  17. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    Science.gov (United States)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.

  18. PNN-based Rockburst Prediction Model and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-07-01

Rock burst is one of the main engineering geological problems significantly threatening the safety of construction. Prediction of rock burst is always an important issue concerning the safety of workers and equipment in tunnels. In this paper, a novel PNN-based rock burst prediction model is proposed to determine whether rock burst will happen in underground rock projects and how intense it will be. The probabilistic neural network (PNN) is developed based on Bayesian criteria of multivariate pattern classification. Because PNN has the advantages of low training complexity, high stability, quick convergence, and simple construction, it can be well applied to the prediction of rock burst. Some main control factors, such as the rock's maximum tangential stress, uniaxial compressive strength, uniaxial tensile strength, and elastic energy index, are chosen as the characteristic vector of the PNN. The PNN model is obtained by training on data sets of rock burst samples from underground rock projects at home and abroad, and other samples are tested with the model. The testing results agree with the practical records. At the same time, two real-world applications are used to verify the proposed method. The prediction results are the same as those of existing methods and consistent with what actually occurred on site, which verifies the effectiveness and applicability of the proposed work.
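
    A minimal sketch of a probabilistic neural network classifier of the kind described: one Gaussian kernel per training pattern, summed per class, with the largest class-conditional density deciding the rock burst intensity. The control-factor values and labels below are invented, not the paper's samples.

        import numpy as np

        def pnn_predict(X_train, y_train, x_new, sigma=0.6):
            """PNN: Gaussian kernel density per class; the class with the largest
            density wins (Bayes rule with equal priors)."""
            scores = {}
            for cls in np.unique(y_train):
                d2 = np.sum((X_train[y_train == cls] - x_new) ** 2, axis=1)
                scores[cls] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
            return max(scores, key=scores.get), scores

        # Illustrative control factors (e.g. stress ratio, strength ratio, elastic
        # energy index) and intensity labels; values are invented, not field data.
        X = np.array([[0.30, 20.0, 2.0], [0.45, 25.0, 3.5], [0.60, 18.0, 5.0], [0.75, 15.0, 7.0]])
        y = np.array(["none", "light", "moderate", "strong"])
        mu, sd = X.mean(axis=0), X.std(axis=0)                # normalize features
        label, _ = pnn_predict((X - mu) / sd, y, (np.array([0.55, 19.0, 4.8]) - mu) / sd)
        print(label)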

  19. Construction Worker Fatigue Prediction Model Based on System Dynamic

    Directory of Open Access Journals (Sweden)

    Wahyu Adi Tri Joko

    2017-01-01

Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. Tight construction schedules force workers to work overtime for long periods, and this situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlations among internal and external factors and to simulate the level of worker fatigue. To validate the model, 93 construction workers who worked on a high-rise building construction project were used as a case study. The results show that excessive workload, working elevation, and age are the main factors leading to construction worker fatigue. The simulation results also show that these factors can increase the worker fatigue level by 21.2% compared to the normal condition. Besides predicting the worker fatigue level, this model can also be used as an early warning system to prevent construction worker accidents.

  20. Construction Worker Fatigue Prediction Model Based on System Dynamic

    OpenAIRE

    Wahyu Adi Tri Joko; Ayu Ratnawinanda Lila

    2017-01-01

Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. Tight construction schedules force workers to work overtime for long periods, and this situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlations among internal and external factors and to simulate the level of worker fatigue. To validate...

  1. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only slightly higher encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
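
    A conceptual sketch (not the BMAP implementation) of the block classification step: a background is modeled from past frames, each block is labelled by its difference against that background, and background blocks would then be predicted from the modeled background (BRP-like) while hybrid blocks would be coded in the difference domain (BDP-like). Frames, block size and thresholds below are placeholders.

        import numpy as np

        def classify_blocks(frame, background, block=16, t_low=4.0, t_high=20.0):
            """Label each block 'background', 'hybrid' or 'foreground' by its mean
            absolute difference against the modeled background (thresholds made up)."""
            labels = {}
            for y in range(0, frame.shape[0], block):
                for x in range(0, frame.shape[1], block):
                    diff = np.abs(frame[y:y+block, x:x+block].astype(float)
                                  - background[y:y+block, x:x+block])
                    m = diff.mean()
                    labels[(y, x)] = ("background" if m < t_low
                                      else "hybrid" if m < t_high else "foreground")
            return labels

        rng = np.random.default_rng(5)
        base = rng.integers(0, 255, (64, 64)).astype(float)            # static scene (placeholder)
        frames = [base + rng.normal(0, 2, base.shape) for _ in range(30)]
        background = np.mean(frames, axis=0)                           # simple averaged background model
        current = frames[-1].copy()
        current[16:32, 16:32] += 60.0                                  # a "moving object" in one block
        labels = classify_blocks(current, background)
        # Background blocks -> predict from `background` (BRP-like); hybrid blocks ->
        # encode current - background (BDP-like); pure foreground -> conventional prediction.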

  2. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

Prediction of foundation or subgrade settlement is very important during engineering construction. Given that there are many settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.

  3. Decline curve based models for predicting natural gas well performance

    Directory of Open Access Journals (Sweden)

    Arash Kamari

    2017-06-01

The productivity of a gas well declines over its production life, eventually to the point where production can no longer meet economic targets. To overcome such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, the least squares support vector machine (LSSVM) approach, the adaptive neuro-fuzzy inference system (ANFIS), and the decision tree (DT) method, for the prediction of cumulative gas production as well as initial decline rate multiplied by time as a function of the Arps decline-curve exponent and the ratio of initial gas flow rate to total gas flow rate. It was concluded that the results obtained from the models developed in the current study are in satisfactory agreement with the actual gas well production data. Furthermore, the results of the comparative study performed demonstrate that the LSSVM strategy is superior to the other models investigated for the prediction of both cumulative gas production and initial decline rate multiplied by time.
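
    For reference, the Arps decline curve underlying such analyses can also be fitted directly; a sketch with an invented production series is shown below (the ANN, LSSVM, ANFIS and DT models themselves are not reproduced).

        import numpy as np
        from scipy.optimize import curve_fit

        def arps_hyperbolic(t, qi, di, b):
            """Arps hyperbolic decline: q(t) = qi / (1 + b*Di*t)**(1/b)."""
            return qi / (1.0 + b * di * t) ** (1.0 / b)

        # Synthetic monthly rate data generated from assumed parameters plus noise.
        t = np.arange(0, 36, dtype=float)
        rng = np.random.default_rng(2)
        q_obs = arps_hyperbolic(t, 1000.0, 0.08, 0.7) * (1 + 0.03 * rng.standard_normal(t.size))

        (qi, di, b), _ = curve_fit(arps_hyperbolic, t, q_obs, p0=[900.0, 0.1, 0.5],
                                   bounds=([0, 0, 0.01], [np.inf, 1.0, 2.0]))
        # Cumulative production forecast over 10 years by numerical integration of the fit.
        cum = np.trapz(arps_hyperbolic(np.arange(0, 120, dtype=float), qi, di, b), dx=1.0)
        print(qi, di, b, cum)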

  4. Coal demand prediction based on a support vector machine model

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Cun-liang; Wu, Hai-shan; Gong, Dun-wei [China University of Mining & Technology, Xuzhou (China). School of Information and Electronic Engineering

    2007-01-15

A forecasting model for the coal demand of China using support vector regression was constructed. With the selected embedding dimension, the output vectors and input vectors were constructed based on the coal demand of China from 1980 to 2002. After comparison with the linear kernel and the sigmoid kernel, a radial basis function (RBF) was adopted as the kernel function. By analyzing the relationship between the prediction error margin and the model parameters, proper parameters were chosen. A support vector machine (SVM) model with multiple inputs and a single output was proposed. Comparing this predictor with one based on RBF neural networks on test datasets, the results show that the SVM predictor has higher precision and greater generalization ability. In the end, the coal demand from 2003 to 2006 is accurately forecasted. 10 refs., 2 figs., 4 tabs.
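
    A short sketch of the approach, with scikit-learn's SVR standing in for the paper's SVM formulation: the series is lag-embedded with a chosen embedding dimension and an RBF-kernel regressor produces a one-step-ahead forecast. The demand figures and hyperparameters below are invented.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # Made-up annual demand series (arbitrary units), not the 1980-2002 data.
        demand = np.array([610, 640, 680, 720, 760, 800, 860, 930, 1000, 1080,
                           1160, 1250, 1340, 1450, 1560, 1680, 1800, 1930], float)

        m = 3                                                    # embedding dimension (lags)
        X = np.array([demand[i:i + m] for i in range(len(demand) - m)])
        y = demand[m:]

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
        model.fit(X, y)
        next_year = model.predict(demand[-m:].reshape(1, -1))    # one-step-ahead forecast
        print(next_year)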

  5. Optimization of arterial age prediction models based in pulse wave

    Energy Technology Data Exchange (ETDEWEB)

    Scandurra, A G [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Meschino, G J [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Passoni, L I [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Dai Pra, A L [Engineering Aplied Artificial Intelligence Group, Mathematics Department, Mar del Plata University (Argentina); Introzzi, A R [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Clara, F M [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina)

    2007-11-15

We propose the detection of early arterial ageing through a prediction model of arterial age based on the assumed coherence between the pulse wave morphology and the patient's chronological age. Several methods are evaluated, and a Sugeno fuzzy inference system is selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection on the main factors of the Principal Components Analysis. The model performance was tested using the .632E bootstrap error. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.

  6. Optimization of arterial age prediction models based in pulse wave

    International Nuclear Information System (INIS)

    Scandurra, A G; Meschino, G J; Passoni, L I; Dai Pra, A L; Introzzi, A R; Clara, F M

    2007-01-01

We propose the detection of early arterial ageing through a prediction model of arterial age based on the assumed coherence between the pulse wave morphology and the patient's chronological age. Several methods are evaluated, and a Sugeno fuzzy inference system is selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection on the main factors of the Principal Components Analysis. The model performance was tested using the .632E bootstrap error. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.

  7. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from ... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  9. Human Posture and Movement Prediction based on Musculoskeletal Modeling

    DEFF Research Database (Denmark)

    Farahani, Saeed Davoudabadi

    2014-01-01

This thesis explores an optimization-based formulation, so-called inverse-inverse dynamics, for the prediction of human posture and motion dynamics performing various tasks. It is explained how this technique enables us to predict natural kinematic and kinetic patterns for human posture and motion using the AnyBody Modeling System (AMS). AMS uses inverse dynamics to analyze musculoskeletal systems and is, therefore, limited by its dependency on input kinematics. We propose to alleviate this dependency by assuming that voluntary postures and movement strategies in humans are guided by a desire ... expenditure, joint forces and other physiological properties derived from the detailed musculoskeletal analysis. Several attempts have been made to uncover the principles underlying motion control strategies in the literature. In case of some movements, like human squat jumping, there is almost no doubt...

  10. Predicting chick body mass by artificial intelligence-based models

    Directory of Open Access Journals (Sweden)

    Patricia Ferreira Ponciano Ferraz

    2014-07-01

The objective of this work was to develop, validate, and compare 190 artificial intelligence-based models for predicting the body mass of chicks from 2 to 21 days of age subjected to different durations and intensities of thermal challenge. The experiment was conducted inside four climate-controlled wind tunnels using 210 chicks. A database containing 840 datasets (from 2- to 21-day-old chicks) - with the variables dry-bulb air temperature, duration of thermal stress (days), chick age (days), and the daily body mass of chicks - was used for network training, validation, and tests of models based on artificial neural networks (ANNs) and neuro-fuzzy networks (NFNs). The ANNs were most accurate in predicting the body mass of chicks from 2 to 21 days of age given the input variables, and they showed an R² of 0.9993 and a standard error of 4.62 g. The ANNs enable the simulation of different scenarios, which can assist in managerial decision-making, and they can be embedded in heating control systems.

  11. Predictive Multiscale Modeling of Nanocellulose Based Materials and Systems

    International Nuclear Information System (INIS)

    Kovalenko, Andriy

    2014-01-01

Cellulose Nanocrystals (CNC) is a renewable biodegradable biopolymer with outstanding mechanical properties made from a highly abundant natural source, and therefore is very attractive as a reinforcing additive to replace petroleum-based plastics in biocomposite materials, foams, and gels. Large-scale applications of CNC are currently limited due to its low solubility in non-polar organic solvents used in existing polymerization technologies. The solvation properties of CNC can be improved by chemical modification of its surface. Development of effective surface modifications has been rather slow because extensive chemical modifications destabilize the hydrogen bonding network of cellulose and deteriorate the mechanical properties of CNC. We employ predictive multiscale theory, modeling, and simulation to gain a fundamental insight into the effect of CNC surface modifications on hydrogen bonding, CNC crystallinity, solvation thermodynamics, and CNC compatibilization with the existing polymerization technologies, so as to rationally design green nanomaterials with improved solubility in non-polar solvents, controlled liquid crystal ordering and optimized extrusion properties. An essential part of this multiscale modeling approach is the statistical-mechanical 3D-RISM-KH molecular theory of solvation, coupled with quantum mechanics, molecular mechanics, and multistep molecular dynamics simulation. The 3D-RISM-KH theory provides predictive modeling of both polar and non-polar solvents, solvent mixtures, and electrolyte solutions in a wide range of concentrations and thermodynamic states. It properly accounts for effective interactions in solution such as steric effects, hydrophobicity and hydrophilicity, hydrogen bonding, salt bridges, buffer, co-solvent, and successfully predicts solvation effects and processes in bulk liquids, solvation layers at solid surfaces, and in pockets and other inner spaces of macromolecules and supramolecular assemblies. This methodology

  12. Predictive Multiscale Modeling of Nanocellulose Based Materials and Systems

    Science.gov (United States)

    Kovalenko, Andriy

    2014-08-01

Cellulose Nanocrystals (CNC) is a renewable biodegradable biopolymer with outstanding mechanical properties made from a highly abundant natural source, and therefore is very attractive as a reinforcing additive to replace petroleum-based plastics in biocomposite materials, foams, and gels. Large-scale applications of CNC are currently limited due to its low solubility in non-polar organic solvents used in existing polymerization technologies. The solvation properties of CNC can be improved by chemical modification of its surface. Development of effective surface modifications has been rather slow because extensive chemical modifications destabilize the hydrogen bonding network of cellulose and deteriorate the mechanical properties of CNC. We employ predictive multiscale theory, modeling, and simulation to gain a fundamental insight into the effect of CNC surface modifications on hydrogen bonding, CNC crystallinity, solvation thermodynamics, and CNC compatibilization with the existing polymerization technologies, so as to rationally design green nanomaterials with improved solubility in non-polar solvents, controlled liquid crystal ordering and optimized extrusion properties. An essential part of this multiscale modeling approach is the statistical-mechanical 3D-RISM-KH molecular theory of solvation, coupled with quantum mechanics, molecular mechanics, and multistep molecular dynamics simulation. The 3D-RISM-KH theory provides predictive modeling of both polar and non-polar solvents, solvent mixtures, and electrolyte solutions in a wide range of concentrations and thermodynamic states. It properly accounts for effective interactions in solution such as steric effects, hydrophobicity and hydrophilicity, hydrogen bonding, salt bridges, buffer, co-solvent, and successfully predicts solvation effects and processes in bulk liquids, solvation layers at solid surfaces, and in pockets and other inner spaces of macromolecules and supramolecular assemblies. This methodology

  13. Stand diameter distribution modelling and prediction based on Richards function.

    Directory of Open Access Journals (Sweden)

    Ai-guo Duan

The objective of this study was to introduce the application of the Richards equation to the modelling and prediction of stand diameter distributions. Long-term repeated measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used; 150 stands were used as fitting data and the other 159 stands were used for testing. The nonlinear regression method (NRM) or the maximum likelihood estimates method (MLEM) were applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the R distribution presented a more accurate simulation than the three-parameter Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, and have a deep relationship with stand characteristics, which means the parameters of the R distribution have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution has significant relativity with its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed that the diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM, or the combination of PPM and PRM, under the condition that only the quadratic mean DBH, or the quadratic mean DBH plus stand age, is known; the non-rejection rates were near 80%, which is higher than the 72.33% non-rejection rate of the three-parameter Weibull function based on the combination of PPM and PRM.
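
    A hedged sketch of fitting a Richards-type cumulative form to an empirical diameter distribution; the parameterization below is one common variant and may differ from the paper's, and the DBH data are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def richards_cdf(d, p, q, r):
            """One common Richards-type cumulative form: F(d) = (1 + r*exp(-p*(d - q)))**(-1/r),
            with p acting as a rate/scale, q as location and r as shape."""
            return (1.0 + r * np.exp(-p * (d - q))) ** (-1.0 / r)

        # Hypothetical stand: DBH classes (cm) and empirical cumulative frequencies.
        dbh = np.array([6, 8, 10, 12, 14, 16, 18, 20, 22], float)
        ecdf = np.array([0.03, 0.10, 0.24, 0.42, 0.60, 0.76, 0.88, 0.95, 0.99])

        (p, q, r), _ = curve_fit(richards_cdf, dbh, ecdf, p0=[0.4, 13.0, 1.0],
                                 bounds=([1e-3, 0.0, 1e-3], [5.0, 40.0, 10.0]))
        print(p, q, r)   # fitted parameters; a parameter prediction/recovery step would
                         # then relate them to stand characteristics such as quadratic mean DBH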

  14. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

This paper presents a predictive control strategy based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. In the paper, the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon are described, and some commen...

  15. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Scanpath Based N-Gram Models for Predicting Reading Behavior

    DEFF Research Database (Denmark)

    Mishra, Abhijit; Bhattacharyya, Pushpak; Carl, Michael

    2013-01-01

Predicting reading behavior is a difficult task. Reading behavior depends on various linguistic factors (e.g., sentence length, structural complexity) and other factors (e.g., an individual's reading style, age). Ideally, a reading model should be similar to a language model where the model i...

  17. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

Demand management (DM) is the process that helps companies to sell the right product to the right customer, at the right time, and for the right price. Therefore, the challenge for any company is to determine how much to sell, at what price, and to which market segment while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a proper dynamical system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.

  18. Embryo quality predictive models based on cumulus cells gene expression

    Directory of Open Access Journals (Sweden)

    Devjak R

    2016-06-01

Since the introduction of in vitro fertilization (IVF) into the clinical practice of infertility treatment, indicators for high quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they are surrounding, and therefore, specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe improvement in the prediction value of embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. This study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real time polymerase chain reaction (qPCR)] for five genes, analyzed according to embryo quality level, was performed. Two prediction models were tested for embryo quality prediction: a binary logistic and a decision tree model. As the main outcome, gene expression levels for five genes were taken and the area under the curve (AUC) for the two prediction models was calculated. Among the tested genes, AMHR2 and LIF showed a significant expression difference between high quality and low quality embryos. These two genes were used for the construction of the two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model yielded an AUC of 0.73 ± 0.03. The two different prediction models yielded similar predictive power to differentiate high and low quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.

  19. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

A speech-in-noise experiment was used for training and an ideal binary mask experiment was used for evaluation. All three models were able to capture the trends in the speech-in-noise training data well, but the proposed model provides a better prediction of the binary mask test data, particularly when the binary masks degenerate to a noise vocoder.

  20. Model predictive control based on reduced order models applied to belt conveyor system.

    Science.gov (United States)

    Chen, Wei; Li, Xin

    2016-11-01

In the paper, a model predictive controller based on a reduced-order model is proposed to control a belt conveyor system, which is a complex electro-mechanical system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced-order model of the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced-order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced-order model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
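
    A minimal sketch of balanced truncation for a stable LTI state-space model, using Gramians from continuous Lyapunov equations and the square-root balancing transform; the fourth-order system below is a toy placeholder, not the belt conveyor model.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

        def balanced_truncation(A, B, C, k):
            """Reduce a stable LTI model (A, B, C) to order k by balanced truncation."""
            Wc = solve_continuous_lyapunov(A, -B @ B.T)        # controllability Gramian
            Wo = solve_continuous_lyapunov(A.T, -C.T @ C)      # observability Gramian
            Lc = cholesky(Wc, lower=True)
            Lo = cholesky(Wo, lower=True)
            U, s, Vt = svd(Lo.T @ Lc)                          # Hankel singular values in s
            T = Lc @ Vt.T[:, :k] / np.sqrt(s[:k])              # balancing/truncating transform
            Ti = (U[:, :k] / np.sqrt(s[:k])).T @ Lo.T
            return Ti @ A @ T, Ti @ B, C @ T, s

        # Toy stable 4th-order model reduced to order 2 (placeholder values).
        rng = np.random.default_rng(3)
        A = -np.diag([1.0, 2.0, 8.0, 15.0]) + 0.1 * rng.standard_normal((4, 4))
        B = rng.standard_normal((4, 1))
        C = rng.standard_normal((1, 4))
        Ar, Br, Cr, hsv = balanced_truncation(A, B, C, k=2)
        # The discarded Hankel singular values hsv[2:] bound the reduction error, which is
        # why an additional estimator (e.g. a Kalman filter) is useful in the MPC loop.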

  1. Decline Curve Based Models for Predicting Natural Gas Well Performance

    OpenAIRE

    Kamari, Arash; Mohammadi, Amir H.; Lee, Moonyong; Mahmood, Tariq; Bahadori, Alireza

    2016-01-01

The productivity of a gas well declines over its production life, eventually to the point where production can no longer meet economic targets. To overcome such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, the least squares support vector machine (LSSVM) approach, adaptive neuro-fuzzy ...

  2. Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.

    Science.gov (United States)

    Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O

    2017-08-01

    To investigate whether a more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management, compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with reduced amount of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.
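
    For context, a widely used semi-mechanistic transit-compartment structure for chemotherapy-induced myelosuppression (a proliferating pool, a maturation chain, circulating neutrophils, and feedback on proliferation) illustrates how such ANC time-courses can be simulated and forecast. The sketch below uses invented parameters and an invented drug-effect profile; it is not the authors' published docetaxel model.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Placeholder parameters: mean transit time (h), feedback power, baseline ANC (10^9/L).
        MTT, gamma, circ0 = 90.0, 0.16, 5.0
        ktr = 4.0 / MTT

        def drug_effect(t):
            conc = 2.0 * np.exp(-0.05 * t)       # hypothetical mono-exponential concentration decay
            return min(0.9, 0.3 * conc)          # assumed linear effect, capped below 1

        def rhs(t, y):
            prol, t1, t2, t3, circ = y
            e = drug_effect(t)
            dprol = ktr * prol * (1.0 - e) * (circ0 / max(circ, 1e-6)) ** gamma - ktr * prol
            return [dprol, ktr * (prol - t1), ktr * (t1 - t2), ktr * (t2 - t3), ktr * (t3 - circ)]

        sol = solve_ivp(rhs, (0.0, 21 * 24.0), [circ0] * 5, max_step=2.0)   # one 21-day cycle
        anc = sol.y[-1]                          # simulated circulating neutrophil counts
        print("nadir day:", sol.t[np.argmin(anc)] / 24.0, "nadir ANC:", anc.min())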

  3. Cloud-based Predictive Modeling System and its Application to Asthma Readmission Prediction

    Science.gov (United States)

    Chen, Robert; Su, Hang; Khalilia, Mohammed; Lin, Sizhe; Peng, Yue; Davis, Tod; Hirsh, Daniel A; Searles, Elizabeth; Tejedor-Sojo, Javier; Thompson, Michael; Sun, Jimeng

    2015-01-01

    The predictive modeling process is time consuming and requires clinical researchers to handle complex electronic health record (EHR) data in restricted computational environments. To address this problem, we implemented a cloud-based predictive modeling system via a hybrid setup combining a secure private server with the Amazon Web Services (AWS) Elastic MapReduce platform. EHR data is preprocessed on a private server and the resulting de-identified event sequences are hosted on AWS. Based on user-specified modeling configurations, an on-demand web service launches a cluster of Elastic Compute 2 (EC2) instances on AWS to perform feature selection and classification algorithms in a distributed fashion. Afterwards, the secure private server aggregates results and displays them via interactive visualization. We tested the system on a pediatric asthma readmission task on a de-identified EHR dataset of 2,967 patients. We conduct a larger scale experiment on the CMS Linkable 2008–2010 Medicare Data Entrepreneurs’ Synthetic Public Use File dataset of 2 million patients, which achieves over 25-fold speedup compared to sequential execution. PMID:26958172

  4. Model-predictive control based on Takagi-Sugeno fuzzy model for electrical vehicles delayed model

    DEFF Research Database (Denmark)

    Khooban, Mohammad-Hassan; Vafamand, Navid; Niknam, Taher

    2017-01-01

Electric vehicles (EVs) play a significant role in different applications, such as commuter vehicles and short distance transport applications. This study presents a new structure of model-predictive control based on the Takagi-Sugeno fuzzy model, linear matrix inequalities, and a non-quadratic Lyapunov function for the speed control of EVs including time-delay states and parameter uncertainty. Experimental data, using the Federal Test Procedure (FTP-75), is applied to test the performance and robustness of the suggested controller in the presence of time-varying parameters. Besides, a comparison is made between the results of the suggested robust strategy and those obtained from some of the most recent studies on the same topic, to assess the efficiency of the suggested controller. Finally, the experimental results based on a TMS320F28335 DSP are performed on a direct current motor. Simulation...

  5. Prediction of Geological Subsurfaces Based on Gaussian Random Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamsen, Petter

    1997-12-31

    During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.
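
    A minimal ordinary kriging sketch in the spirit of the methods reviewed: an exponential covariance, a Lagrange-multiplier constraint for unbiasedness, and a kriging variance as the associated precision measure. Locations, depths and covariance parameters below are placeholders.

        import numpy as np

        def ordinary_kriging(xy, z, xy_new, sill=1.0, rang=500.0, nugget=1e-6):
            """Ordinary kriging with an exponential covariance C(h) = sill*exp(-h/rang)."""
            def cov(a, b):
                h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
                return sill * np.exp(-h / rang)

            n = len(z)
            K = np.zeros((n + 1, n + 1))
            K[:n, :n] = cov(xy, xy) + nugget * np.eye(n)
            K[:n, n] = K[n, :n] = 1.0                      # unbiasedness constraint
            k = np.append(cov(xy, xy_new[None, :]).ravel(), 1.0)
            w = np.linalg.solve(K, k)
            pred = w[:n] @ z
            var = sill - w @ k                             # kriging (prediction) variance
            return pred, var

        xy = np.array([[0, 0], [400, 100], [150, 600], [700, 700]], float)   # well locations (m)
        z = np.array([1210.0, 1195.0, 1232.0, 1251.0])                        # observed depths (m)
        print(ordinary_kriging(xy, z, np.array([300.0, 300.0])))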

  6. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, and approximation components and detail components are obtained. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, and an improved free search algorithm is utilized to optimize the predictive model parameters. An autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused predictions is smaller than that of any single model, so the prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz time series and the Mackey–Glass time series. The simulation results show that the prediction method in this paper achieves better prediction accuracy.
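
    As an illustration of the fusion step described in this record, the following minimal Python sketch combines two component forecasts by inverse-variance (Gauss–Markov) weighting; the component predictions and variance estimates are invented placeholders, not values from the paper.

```python
import numpy as np

def gauss_markov_fusion(pred_a, pred_b, var_a, var_b):
    """Fuse two unbiased predictions by inverse-variance weighting.

    The fused estimate has variance 1 / (1/var_a + 1/var_b), which is
    never larger than either individual variance.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    fused = w_a * pred_a + w_b * pred_b
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return fused, fused_var

# Hypothetical predictions from two component models (e.g. an LSSVM on the
# approximation component and an ARIMA model on the detail components).
pred_lssvm = np.array([0.91, 0.87, 0.93])
pred_arima = np.array([0.91, 0.90, 0.89])
fused, fused_var = gauss_markov_fusion(pred_lssvm, pred_arima, var_a=0.04, var_b=0.09)
print(fused, fused_var)
```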

  7. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in buildings is one of the key environmental issues, as it is in the industrial, transportation and residential sectors. About half of the total energy consumption in a building is accounted for by HVAC (Heating, Ventilating and Air Conditioning) systems. In order to realize energy conservation of HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach combining a physical model and a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for learning the load prediction model are scarce; (2) it has a self-checking function that continuously supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something is wrong in the load prediction procedure; (3) it can adjust the load prediction in real time against sudden changes in model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement in load prediction performance is illustrated.

  8. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    Gastrointestinal (GI) drug absorption is a complex process determined by formulation, physicochemical and biopharmaceutical factors, and GI physiology. Physiologically based in silico absorption models have emerged as a widely used and promising supplement to traditional in vitro assays and preclinical in vivo studies. However, there remains a lack of comparative studies between different models. The aim of this study was to explore the strengths and limitations of the in silico absorption models Simcyp 13.1, GastroPlus 8.0, and GI-Sim 4.1, with respect to their performance in predicting human intestinal drug absorption. This was achieved by adopting an a priori modeling approach and using well-defined input data for 12 drugs associated with incomplete GI absorption and related challenges in predicting the extent of absorption. This approach better mimics the real situation during formulation development where predictive in silico models would be beneficial. Plasma concentration-time profiles for 44 oral drug administrations were calculated by convolution of model-predicted absorption-time profiles and reported pharmacokinetic parameters. Model performance was evaluated by comparing the predicted plasma concentration-time profiles, Cmax, tmax, and exposure (AUC) with observations from clinical studies. The overall prediction accuracies for AUC, given as the absolute average fold error (AAFE) values, were 2.2, 1.6, and 1.3 for Simcyp, GastroPlus, and GI-Sim, respectively. The corresponding AAFE values for Cmax were 2.2, 1.6, and 1.3, respectively, and those for tmax were 1.7, 1.5, and 1.4, respectively. Simcyp was associated with underprediction of AUC and Cmax; the accuracy decreased with decreasing predicted fabs. A tendency for underprediction was also observed for GastroPlus, but there was no correlation with predicted fabs. There were no obvious trends for over- or underprediction for GI-Sim. The models performed similarly in capturing dependencies on dose and

  9. Evaluation of Artificial Intelligence Based Models for Chemical Biodegradability Prediction

    Directory of Open Access Journals (Sweden)

    Aleksandar Sabljic

    2004-12-01

    Full Text Available This study presents a review of biodegradability modeling efforts including a detailed assessment of two models developed using an artificial intelligence based methodology. Validation results for these models using an independent, quality reviewed database, demonstrate that the models perform well when compared to another commonly used biodegradability model, against the same data. The ability of models induced by an artificial intelligence methodology to accommodate complex interactions in detailed systems, and the demonstrated reliability of the approach evaluated by this study, indicate that the methodology may have application in broadening the scope of biodegradability models. Given adequate data for biodegradability of chemicals under environmental conditions, this may allow for the development of future models that include such things as surface interface impacts on biodegradability for example.

  10. Output-Feedback Model Predictive Control of a Pasteurization Pilot Plant based on an LPV model

    Science.gov (United States)

    Karimi Pour, Fatemeh; Ocampo-Martinez, Carlos; Puig, Vicenç

    2017-01-01

    This paper presents a model predictive control (MPC) of a pasteurization pilot plant based on an LPV model. Since not all the states are measured, an observer is also designed, which allows implementing an output-feedback MPC scheme. However, the model of the plant is not completely observable when augmented with the disturbance models. In order to solve this problem, the following strategies are used: (i) the whole system is decoupled into two subsystems, (ii) an inner state-feedback controller is implemented into the MPC control scheme. A real-time example based on the pasteurization pilot plant is simulated as a case study for testing the behavior of the approaches.

  11. Tuning SISO offset-free Model Predictive Control based on ARX models

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2012-01-01

    , the proposed controller is simple to tune as it has only one free tuning parameter. These two features are advantageous in predictive process control as they simplify industrial commissioning of MPC. Disturbance rejection and offset-free control is important in industrial process control. To achieve offset......In this paper, we present a tuning methodology for a simple offset-free SISO Model Predictive Controller (MPC) based on autoregressive models with exogenous inputs (ARX models). ARX models simplify system identification as they can be identified from data using convex optimization. Furthermore......-free control in face of unknown disturbances or model-plant mismatch, integrators must be introduced in either the estimator or the regulator. Traditionally, offset-free control is achieved using Brownian disturbance models in the estimator. In this paper we achieve offset-free control by extending the noise...

  12. Feature-Based and String-Based Models for Predicting RNA-Protein Interaction

    Directory of Open Access Journals (Sweden)

    Donald Adjeroh

    2018-03-01

    Full Text Available In this work, we study two approaches for the problem of RNA-Protein Interaction (RPI). In the first approach, we use a feature-based technique by combining extracted features from both sequences and secondary structures. The feature-based approach enhanced the prediction accuracy as it included much more available information about the RNA-protein pairs. In the second approach, we apply search algorithms and data structures to extract effective string patterns for prediction of RPI, using both sequence information (protein and RNA sequences), and structure information (protein and RNA secondary structures). This led to different string-based models for predicting interacting RNA-protein pairs. We show results that demonstrate the effectiveness of the proposed approaches, including comparative results against leading state-of-the-art methods.

  13. Optimization of condition-based asset management using a predictive health model

    NARCIS (Netherlands)

    Bajracharya, G.; Koltunowicz, T.; Negenborn, R.R.; Papp, Z.; Djairam, D.; De Schutter, B.; Smit, J.J.

    2009-01-01

    In this paper, a model predictive framework is used to optimize the operation and maintenance actions of power system equipment based on the predicted health state of this equipment. In particular, this framework is used to predict the health state of transformers based on their usage. The health

  14. The prediction of surface temperature in the new seasonal prediction system based on the MPI-ESM coupled climate model

    Science.gov (United States)

    Baehr, J.; Fröhlich, K.; Botzet, M.; Domeisen, D. I. V.; Kornblueh, L.; Notz, D.; Piontek, R.; Pohlmann, H.; Tietsche, S.; Müller, W. A.

    2015-05-01

    A seasonal forecast system is presented, based on the global coupled climate model MPI-ESM as used for CMIP5 simulations. We describe the initialisation of the system and analyse its predictive skill for surface temperature. The presented system is initialised in the atmospheric, oceanic, and sea ice component of the model from reanalysis/observations with full field nudging in all three components. For the initialisation of the ensemble, bred vectors with a vertically varying norm are implemented in the ocean component to generate initial perturbations. In a set of ensemble hindcast simulations, starting each May and November between 1982 and 2010, we analyse the predictive skill. Bias-corrected ensemble forecasts for each start date reproduce the observed surface temperature anomalies at 2-4 months lead time, particularly in the tropics. Niño3.4 sea surface temperature anomalies show a small root-mean-square error and predictive skill up to 6 months. Away from the tropics, predictive skill is mostly limited to the ocean, and to regions which are strongly influenced by ENSO teleconnections. In summary, the presented seasonal prediction system based on a coupled climate model shows predictive skill for surface temperature at seasonal time scales comparable to other seasonal prediction systems using different underlying models and initialisation strategies. As the same model underlying our seasonal prediction system—with a different initialisation—is presently also used for decadal predictions, this is an important step towards seamless seasonal-to-decadal climate predictions.

  15. Efficient predictive model-based and fuzzy control for green urban mobility

    NARCIS (Netherlands)

    Jamshidnejad, A.

    2017-01-01

    In this thesis, we develop efficient predictive model-based control approaches, including model-predictive control (MPC) and model-based fuzzy control, for application in urban traffic networks with the aim of reducing a combination of the total time spent by the vehicles within the network and the

  16. Quality prediction modeling for sintered ores based on mechanism models of sintering and extreme learning machine based error compensation

    Science.gov (United States)

    Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang

    2018-06-01

    To address the difficulty of quality prediction for sintered ores, a hybrid prediction model is established that combines mechanism models of sintering with time-weighted error compensation based on the extreme learning machine (ELM). First, mechanism models of drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanism and conservation of matter in the sintering process. Because the process is simplified in the mechanism models, these models cannot describe the high nonlinearity, so errors are inevitable. For this reason, a time-weighted ELM-based error compensation model is established. Simulation results verify that the hybrid model has high accuracy and can meet the requirements of industrial applications.

  17. A rule-based backchannel prediction model using pitch and pause information

    NARCIS (Netherlands)

    Truong, Khiet Phuong; Poppe, Ronald Walter; Heylen, Dirk K.J.

    We manually designed rules for a backchannel (BC) prediction model based on pitch and pause information. In short, the model predicts a BC when there is a pause of a certain length that is preceded by a falling or rising pitch. This model was validated against the Dutch IFADV Corpus in a

  18. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  19. Comparison of short term rainfall forecasts for model based flow prediction in urban drainage systems

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Poulsen, Troels Sander; Bøvith, Thomas

    2012-01-01

    Forecast-based flow prediction in drainage systems can be used to implement real-time control of drainage systems. This study compares two different types of rainfall forecasts – a radar rainfall extrapolation based nowcast model and a numerical weather prediction model. The models are applied...... performance of the system is found using the radar nowcast for the short lead times and the weather model for longer lead times.

  20. Predictive analytics technology review: Similarity-based modeling and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, James; Doan, Don; Gandhi, Devang; Nieman, Bill

    2010-09-15

    Over 11 years ago, SmartSignal introduced Predictive Analytics for eliminating equipment failures, using its patented SBM technology. SmartSignal continues to lead and dominate the market and, in 2010, went one step further and introduced Predictive Diagnostics. Now, SmartSignal is combining Predictive Diagnostics with RCM methodology and industry expertise. FMEA logic reengineers maintenance work management, eliminates unneeded inspections, and focuses efforts on the real issues. This integrated solution significantly lowers maintenance costs, protects against critical asset failures, improves commercial availability, and reduces work orders by 20-40%. Learn how.

  1. An Efficient Deterministic Approach to Model-based Prediction Uncertainty

    Data.gov (United States)

    National Aeronautics and Space Administration — Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the...

  2. Characteristic Model-Based Robust Model Predictive Control for Hypersonic Vehicles with Constraints

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2017-06-01

    Full Text Available Designing robust control for hypersonic vehicles in reentry is difficult, due to features of the vehicles including strong coupling, non-linearity, and multiple constraints. This paper proposes a characteristic model-based robust model predictive control (MPC) for hypersonic vehicles with reentry constraints. First, the hypersonic vehicle is modeled by a characteristic model composed of a linear time-varying system and a lumped disturbance. Then, the identification data are regenerated using the accumulative sum idea from gray theory, which weakens the effects of random noise and strengthens the regularity of the identification data. Based on the regenerated data, the time-varying parameters and the disturbance are estimated online according to gray identification. Finally, the mixed H2/H∞ robust predictive control law is derived based on linear matrix inequalities (LMIs) and receding horizon optimization techniques. Because MPC actively handles system constraints, the input and state constraints are satisfied in the closed-loop control system. The validity of the proposed control is verified theoretically according to Lyapunov theory and illustrated by simulation results.

  3. BAYESIAN FORECASTS COMBINATION TO IMPROVE THE ROMANIAN INFLATION PREDICTIONS BASED ON ECONOMETRIC MODELS

    Directory of Open Access Journals (Sweden)

    Mihaela Simionescu

    2014-12-01

    Full Text Available There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts’ forecasts are utilized as prior information; for Romania these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models such as VAR, Bayesian VAR, simultaneous equations models, dynamic models, and log-linear models. The Bayesian combinations that used experts’ predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, also outperforming zero-weight and equal-weight predictions and naïve forecasts.
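
    The following sketch illustrates one plausible reading of the shrinkage idea in this record: a model forecast is pulled toward an expert prior by a shrinkage parameter, so the combination reduces to the prior as the parameter grows. The weighting scheme and all numbers are illustrative assumptions, not the estimator used in the study.

```python
import numpy as np

def shrinkage_combination(model_forecast, expert_prior, lam):
    """Shrink a model forecast toward an expert prior.

    lam = 0 returns the pure model forecast; as lam grows the combination
    converges to the expert prior (mirroring the "shrinkage parameter tends
    to infinity" case discussed above).
    """
    weight_prior = lam / (1.0 + lam)
    return weight_prior * expert_prior + (1.0 - weight_prior) * model_forecast

# Hypothetical quarterly inflation forecasts (percent).
model = np.array([3.1, 3.4, 3.0, 2.8])   # e.g. an econometric model's output
expert = np.array([3.3, 3.2, 3.1, 3.0])  # e.g. an institutional expert forecast
for lam in (0.0, 1.0, 100.0):
    print(lam, shrinkage_combination(model, expert, lam))
```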

  4. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together

  5. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work

  6. A CBR-Based and MAHP-Based Customer Value Prediction Model for New Product Development

    Science.gov (United States)

    Zhao, Yu-Jie; Luo, Xin-xing; Deng, Li

    2014-01-01

    In a fierce market environment, an enterprise that wants to meet customer needs and boost its market profit and share must focus on new product development. To overcome the limitations of previous research, Chan et al. proposed a dynamic decision support system to predict the customer lifetime value (CLV) for new product development. However, to better meet customer needs, there are still some deficiencies in their model, so this study proposes a CBR-based and MAHP-based customer value prediction model for a new product (C&M-CVPM). CBR (case based reasoning) can reduce experts' workload and evaluation time, while MAHP (multiplicative analytic hierarchy process) can use the actual but averaged effectiveness of influencing factors in simulation, and at the same time C&M-CVPM uses dynamic customer transition probabilities, which is closer to reality. This study not only introduces the realization of CBR and MAHP, but also elaborates C&M-CVPM's three main modules. The application of the proposed model is illustrated and confirmed to be sensible and convincing through a simulation experiment. PMID:25162050

  7. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  8. Finite element prediction of the swift effect based on Taylor-type polycrystal plasticity models

    OpenAIRE

    Duchene, Laurent; Delannay, L.; Habraken, Anne

    2004-01-01

    This paper describes the main concepts of the stress-strain interpolation model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full-constraints Taylor model. The prediction of the Swift effect is investigated, and the influence of the texture evolution is demonstrated. The LAMEL model is also investigated for the Swift effect prediction. Peer reviewed

  9. Population PK modelling and simulation based on fluoxetine and norfluoxetine concentrations in milk: a milk concentration-based prediction model.

    Science.gov (United States)

    Tanoshima, Reo; Bournissen, Facundo Garcia; Tanigawara, Yusuke; Kristensen, Judith H; Taddio, Anna; Ilett, Kenneth F; Begg, Evan J; Wallach, Izhar; Ito, Shinya

    2014-10-01

    Population pharmacokinetic (pop PK) modelling can be used for PK assessment of drugs in breast milk. However, complex mechanistic modelling of a parent and an active metabolite using both blood and milk samples is challenging. We aimed to develop a simple predictive pop PK model for milk concentration-time profiles of a parent and a metabolite, using data on fluoxetine (FX) and its active metabolite, norfluoxetine (NFX), in milk. Using a previously published data set of drug concentrations in milk from 25 women treated with FX, a pop PK model predictive of milk concentration-time profiles of FX and NFX was developed. Simulation was performed with the model to generate FX and NFX concentration-time profiles in milk of 1000 mothers. This milk concentration-based pop PK model was compared with the previously validated plasma/milk concentration-based pop PK model of FX. Milk FX and NFX concentration-time profiles were described reasonably well by a one compartment model with a FX-to-NFX conversion coefficient. Median values of the simulated relative infant dose on a weight basis (sRID: weight-adjusted daily doses of FX and NFX through breastmilk to the infant, expressed as a fraction of therapeutic FX daily dose per body weight) were 0.028 for FX and 0.029 for NFX. The FX sRID estimates were consistent with those of the plasma/milk-based pop PK model. A predictive pop PK model based on only milk concentrations can be developed for simultaneous estimation of milk concentration-time profiles of a parent (FX) and an active metabolite (NFX). © 2014 The British Pharmacological Society.
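
    A minimal sketch of the kind of one-compartment parent/metabolite model described here, integrated with a simple Euler scheme; the rate constants, conversion coefficient, dose and volume are invented placeholders rather than the published population estimates.

```python
import numpy as np

def simulate_milk_profiles(dose, ke_parent, k_conv, ke_metab, v_milk,
                           t_end=240.0, dt=0.1):
    """Euler integration of a one-compartment parent/metabolite model.

    d(parent)/dt     = -ke_parent * parent
    d(metabolite)/dt =  k_conv * parent - ke_metab * metabolite
    Concentrations are amounts divided by an apparent distribution volume.
    """
    steps = int(t_end / dt)
    parent, metab = dose, 0.0
    t = np.arange(steps) * dt
    c_parent = np.empty(steps)
    c_metab = np.empty(steps)
    for i in range(steps):
        c_parent[i] = parent / v_milk
        c_metab[i] = metab / v_milk
        d_parent = -ke_parent * parent
        d_metab = k_conv * parent - ke_metab * metab
        parent += d_parent * dt
        metab += d_metab * dt
    return t, c_parent, c_metab

# Placeholder parameter values chosen only to produce plausible curve shapes.
t, c_fx, c_nfx = simulate_milk_profiles(dose=20.0, ke_parent=0.05,
                                        k_conv=0.01, ke_metab=0.02, v_milk=400.0)
print(c_fx[:3], c_nfx[:3])
```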

  10. A Method for Driving Route Predictions Based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Ning Ye

    2015-01-01

    Full Text Available We present a driving route prediction method that is based on the Hidden Markov Model (HMM). This method can accurately predict a vehicle’s entire route as early in a trip’s lifetime as possible without inputting origins and destinations beforehand. Firstly, we propose the route recommendation system architecture, where route predictions play an important role in the system. Secondly, we define a road network model, normalize each driving route in the rectangular coordinate system, and build the HMM to prepare for route predictions, using a method of training set extension based on K-means++ and the add-one (Laplace) smoothing technique. Thirdly, we present the route prediction algorithm. Finally, experimental results demonstrating the effectiveness of the HMM-based route predictions are shown.
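
    The following pure-Python sketch illustrates add-one (Laplace) smoothing of road-segment transition counts and a greedy next-segment prediction, which is one simple way to read the HMM-based route prediction described above; the road-segment IDs and historical routes are invented.

```python
from collections import Counter, defaultdict

def build_transition_matrix(routes, smoothing=1.0):
    """Estimate P(next_segment | current_segment) with add-one (Laplace) smoothing."""
    segments = sorted({s for route in routes for s in route})
    counts = defaultdict(Counter)
    for route in routes:
        for cur, nxt in zip(route, route[1:]):
            counts[cur][nxt] += 1
    probs = {}
    for cur in segments:
        total = sum(counts[cur].values()) + smoothing * len(segments)
        probs[cur] = {nxt: (counts[cur][nxt] + smoothing) / total for nxt in segments}
    return probs

def predict_route(probs, start, length=4):
    """Greedily follow the most probable transition from each segment."""
    route = [start]
    for _ in range(length - 1):
        route.append(max(probs[route[-1]], key=probs[route[-1]].get))
    return route

# Hypothetical historical routes over road-segment IDs.
history = [["A", "B", "C", "D"], ["A", "B", "C", "E"], ["B", "C", "D", "F"]]
probs = build_transition_matrix(history)
print(predict_route(probs, "A"))
```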

  11. Physics-based Modeling Tools for Life Prediction and Durability Assessment of Advanced Materials, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The technical objectives of this program are: (1) to develop a set of physics-based modeling tools to predict the initiation of hot corrosion and to address pit and...

  12. Using physiologically based models for clinical translation: predictive modelling, data interpretation or something in-between?

    Science.gov (United States)

    Niederer, Steven A; Smith, Nic P

    2016-12-01

    Heart disease continues to be a significant clinical problem in Western society. Predictive models and simulations that integrate physiological understanding with patient information derived from clinical data have huge potential to contribute to improving our understanding of both the progression and treatment of heart disease. In particular they provide the potential to improve patient selection and optimisation of cardiovascular interventions across a range of pathologies. Currently a significant proportion of this potential is still to be realised. In this paper we discuss the opportunities and challenges associated with this realisation. Reviewing the successful elements of model translation for biophysically based models and the emerging supporting technologies, we propose three distinct modes of clinical translation. Finally we outline the challenges ahead that will be fundamental to overcome if the ultimate goal of fully personalised clinical cardiac care is to be achieved. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.

  13. Thematic and spatial resolutions affect model-based predictions of tree species distribution.

    Science.gov (United States)

    Liang, Yu; He, Hong S; Fraser, Jacob S; Wu, ZhiWei

    2013-01-01

    Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different thematic (different numbers of land types) and spatial resolutions combinations, and then statistically examined the differences of species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution.

  14. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, which is based on the analysis of the characteristics and defects of the genetic algorithm and the support vector machine. In the cloud computing environment, firstly, SVM parameters are optimized by the parallel genetic algorithm, and then this optimized parallel SVM model is used to predict traffic flow. On the basis of the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.

  15. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Science.gov (United States)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into one or zero; then the rule creation step creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
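
    A compact pure-Python sketch of the binarization-then-rule-creation idea in this record: frequent itemsets are found by support counting (an Apriori-style pass limited to pairs for brevity) and turned into IF-THEN rules filtered by confidence. The failure tags and thresholds are illustrative assumptions.

```python
from itertools import combinations

def frequent_itemsets(records, min_support):
    """Return itemsets (as frozensets) whose support meets min_support."""
    n = len(records)
    items = sorted({i for r in records for i in r})
    frequent = {}
    for size in (1, 2):  # a small fixed depth keeps the sketch short
        for combo in combinations(items, size):
            support = sum(set(combo) <= r for r in records) / n
            if support >= min_support:
                frequent[frozenset(combo)] = support
    return frequent

def rules(frequent, records, min_confidence):
    """Generate IF-THEN rules A -> B with confidence >= min_confidence."""
    n = len(records)
    out = []
    for itemset, support in frequent.items():
        for antecedent_size in range(1, len(itemset)):
            for antecedent in combinations(itemset, antecedent_size):
                a = frozenset(antecedent)
                a_support = sum(a <= r for r in records) / n
                confidence = support / a_support
                if confidence >= min_confidence:
                    out.append((set(a), set(itemset - a), round(confidence, 2)))
    return out

# Hypothetical binarized machine-failure records (cause and failure-type tags).
records = [{"overheat", "bearing_fault"}, {"overheat", "bearing_fault"},
           {"vibration", "misalignment"}, {"overheat", "misalignment"}]
freq = frequent_itemsets(records, min_support=0.5)
print(rules(freq, records, min_confidence=0.6))
```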

  16. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters in both the physical and application layers over Universal Mobile Telecommunication Systems (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on the Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold. First, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video. Second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.

  17. Ontological model for predicting cyberattacks based on virtualized Honeynets

    Directory of Open Access Journals (Sweden)

    Gaona-García, Pablo

    2016-12-01

    Full Text Available Honeynet security tools are widely used today for the purpose of gathering information from potential attackers about vulnerabilities in our network. To use them correctly, it is necessary to understand the existing types, the structures proposed, the tools used and current developments. However, a poorly planned honeypot or honeynet could provide unwanted users with an access point to the network we want to protect. The purpose of this article is to propose an ontological model for identifying the most common attack types from the use of honeynets, and its implementation in working scenarios. This model will facilitate decision-making about where to locate computing elements and components in an organization.

  18. A Rule-Based Model for Bankruptcy Prediction Based on an Improved Genetic Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2013-01-01

    Full Text Available In this paper, we proposed a hybrid system to predict corporate bankruptcy. The whole procedure consists of the following four stages: first, sequential forward selection was used to extract the most important features; second, a rule-based model was chosen to fit the given dataset since it can present physical meaning; third, a genetic ant colony algorithm (GACA) was introduced; the fitness scaling strategy and the chaotic operator were incorporated with GACA, forming a new algorithm, fitness-scaling chaotic GACA (FSCGACA), which was used to seek the optimal parameters of the rule-based model; and finally, the stratified K-fold cross-validation technique was used to enhance the generalization of the model. Simulation experiments on 1000 corporations’ data collected from 2006 to 2009 demonstrated that the proposed model was effective. It selected the 5 most important factors as “net income to stock broker’s equality,” “quick ratio,” “retained earnings to total assets,” “stockholders’ equity to total assets,” and “financial expenses to sales.” The total misclassification error of the proposed FSCGACA was only 7.9%, outperforming the results of the genetic algorithm (GA), ant colony algorithm (ACA), and GACA. The average computation time of the model is 2.02 s.
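
    The stratified K-fold validation stage mentioned in this record can be sketched as follows with scikit-learn; the synthetic data and the decision-tree stand-in for the rule-based model are assumptions made only for illustration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # stand-in for five financial ratios
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # stand-in for bankrupt / non-bankrupt labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
errors = []
for train_idx, test_idx in skf.split(X, y):
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)  # interpretable, rule-like model
    clf.fit(X[train_idx], y[train_idx])
    errors.append(1.0 - clf.score(X[test_idx], y[test_idx]))
print("mean misclassification error:", np.mean(errors))
```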

  19. Can multivariate models based on MOAKS predict OA knee pain? Data from the Osteoarthritis Initiative

    Science.gov (United States)

    Luna-Gómez, Carlos D.; Zanella-Calzada, Laura A.; Galván-Tejada, Jorge I.; Galván-Tejada, Carlos E.; Celaya-Padilla, José M.

    2017-03-01

    Osteoarthritis is the most common rheumatic disease in the world. Knee pain is the most disabling symptom of the disease, and the prediction of pain is one of the targets of preventive medicine; this can be applied to new therapies or treatments. Using magnetic resonance imaging and the grading scales, a multivariate model based on genetic algorithms is presented. A predictive model can be useful for associating minor structural changes in the joint with future knee pain. Results suggest that multivariate models can be predictive of future chronic knee pain. All models (T0, T1 and T2) were statistically significant, all p values were 0.60.

  20. Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.

    Science.gov (United States)

    Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir

    2014-01-01

    Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting the noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, East of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. The multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. Prediction errors of all prediction models based on fuzzy approaches were within the acceptable level (lower than one dB). However, the neuro-fuzzy model (RMSE = 0.53 dB and R² = 0.88) slightly improved the accuracy of noise prediction compared with the generated fuzzy model. Moreover, the fuzzy approaches provided more accurate predictions than the regression technique. The developed models based on fuzzy approaches, as useful prediction tools, give professionals the opportunity to make an optimal decision about the effectiveness of acoustic treatment scenarios in embroidery workrooms.

  1. Occupant feedback based model predictive control for thermal comfort and energy optimization: A chamber experimental evaluation

    International Nuclear Information System (INIS)

    Chen, Xiao; Wang, Qian; Srebric, Jelena

    2016-01-01

    Highlights: • This study evaluates an occupant-feedback driven Model Predictive Controller (MPC). • The MPC adjusts indoor temperature based on a dynamic thermal sensation (DTS) model. • A chamber model for predicting chamber air temperature is developed and validated. • Experiments show that MPC using DTS performs better than using Predicted Mean Vote. - Abstract: In current centralized building climate control, occupants do not have much opportunity to intervene in the automated control system. This study explores the benefit of using thermal comfort feedback from occupants in the model predictive control (MPC) design based on a novel dynamic thermal sensation (DTS) model. This DTS model based MPC was evaluated in chamber experiments. A hierarchical structure for thermal control was adopted in the chamber experiments. At the high level, an MPC controller calculates the optimal supply air temperature of the chamber heating, ventilation, and air conditioning (HVAC) system, using the feedback of occupants’ votes on thermal sensation. At the low level, the actual supply air temperature is controlled by the chiller/heater using a PI control to achieve the optimal set point. This DTS-based MPC was also compared to an MPC designed based on the Predicted Mean Vote (PMV) model for thermal sensation. The experimental results demonstrated that the DTS-based MPC using occupant feedback allows significant energy saving while maintaining occupant thermal comfort compared to the PMV-based MPC.
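
    A toy receding-horizon sketch of the high-level MPC step described here: candidate supply air temperatures are scored over a short horizon against a comfort penalty plus an energy proxy. The first-order room model, the comfort target standing in for the DTS feedback, and all weights are invented placeholders, not the chamber model from the study.

```python
import numpy as np

def room_step(t_room, t_supply, alpha=0.3):
    """Toy first-order room model: air temperature relaxes toward the supply temperature."""
    return t_room + alpha * (t_supply - t_room)

def mpc_setpoint(t_room, horizon=5, candidates=np.arange(16.0, 30.5, 0.5),
                 t_comfort=24.0, w_energy=0.05, t_outdoor=32.0):
    """Pick the supply temperature minimising comfort error plus an energy proxy over the horizon."""
    best_u, best_cost = None, np.inf
    for u in candidates:
        t, cost = t_room, 0.0
        for _ in range(horizon):
            t = room_step(t, u)
            cost += (t - t_comfort) ** 2            # discomfort penalty (stand-in for DTS feedback)
            cost += w_energy * abs(u - t_outdoor)   # rough cooling-energy proxy
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

print(mpc_setpoint(t_room=28.0))
```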

  2. [Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].

    Science.gov (United States)

    Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang

    2016-07-12

    To explore the effect of the autoregressive integrated moving average model-nonlinear auto-regressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates in the population, the ARIMA model, the NARNN model and the ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and the NARNN model, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the lowest, with values of 0.011 1, 0.090 0 and 0.282 4, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates in the population, which might have great application value for the prevention and control of schistosomiasis.
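
    For reference, the three comparison metrics reported in this record can be computed as in the short sketch below; the observed and fitted series are illustrative, not the Jiangsu data.

```python
import numpy as np

def error_metrics(observed, predicted):
    """Return (MSE, MAE, MAPE) as used above to compare the fitted models."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    err = observed - predicted
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / observed))
    return mse, mae, mape

# Hypothetical monthly infection rates (percent) and one model's fitted values.
obs = [0.31, 0.28, 0.35, 0.30]
fit = [0.29, 0.30, 0.33, 0.31]
print(error_metrics(obs, fit))
```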

  3. Bearing Degradation Process Prediction Based on the Support Vector Machine and Markov Model

    Directory of Open Access Journals (Sweden)

    Shaojiang Dong

    2014-01-01

    Full Text Available Predicting the degradation process of bearings before they reach the failure threshold is extremely important in industry. This paper proposes a novel method based on the support vector machine (SVM) and the Markov model to achieve this goal. Firstly, features are extracted by time-domain and time-frequency-domain methods. However, the extracted original features are still high-dimensional and include superfluous information, so the nonlinear multi-feature fusion technique LTSA is used to merge the features and reduce the dimension. Then, based on the extracted features, the SVM model is used to predict the bearing degradation process, and the CAO method is used to determine the embedding dimension of the SVM model. After the bearing degradation process is predicted by the SVM model, the Markov model is used to improve the prediction accuracy. The proposed method was validated by two bearing run-to-failure experiments, and the results proved the effectiveness of the methodology.
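
    The two-stage idea in this record (a learned regressor predicts the degradation index, then a Markov model over discretized residual states corrects it) can be sketched as follows; the synthetic degradation signal, the SVR hyperparameters and the three-state residual discretization are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(100, dtype=float)
degradation = 0.01 * t + 0.05 * np.sin(t / 5.0) + rng.normal(0, 0.01, t.size)

# Stage 1: an SVR predicts the degradation index from time (placeholder feature).
svr = SVR(kernel="rbf", C=10.0, gamma=0.01).fit(t[:80, None], degradation[:80])
pred = svr.predict(t[:, None])

# Stage 2: a Markov model over discretized residual states on the training part.
residual = degradation[:80] - pred[:80]
edges = np.quantile(residual, [0.33, 0.66])
states = np.digitize(residual, edges)          # 3 residual states
trans = np.full((3, 3), 1.0)                    # add-one smoothed transition counts
for a, b in zip(states[:-1], states[1:]):
    trans[a, b] += 1
trans /= trans.sum(axis=1, keepdims=True)
centers = [residual[states == s].mean() for s in range(3)]

# Correct the next prediction by the expected residual given the last observed state.
last_state = states[-1]
expected_residual = trans[last_state] @ np.array(centers)
print("corrected next value:", pred[80] + expected_residual)
```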

  4. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

    Full Text Available Accurate long-term prediction of time series data (TSD) is a very useful research challenge in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Traditional linear models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models like the ANN maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing step and a partitioning and interpolation (PI) technique are incorporated in the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step-ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preserving the data trend.
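
    The moving-average pre-filtering and partitioning steps mentioned in this record can be sketched as below; the window length, number of partitions and price series are illustrative assumptions.

```python
import numpy as np

def moving_average_filter(series, window=5):
    """Centered moving-average smoothing; endpoints keep the original values."""
    series = np.asarray(series, float)
    kernel = np.ones(window) / window
    smooth = np.convolve(series, kernel, mode="same")
    half = window // 2
    smooth[:half], smooth[-half:] = series[:half], series[-half:]
    return smooth

def partition(series, n_parts=4):
    """Split the filtered series into contiguous partitions for per-partition modelling."""
    return np.array_split(np.asarray(series, float), n_parts)

# Hypothetical daily closing prices.
prices = 100 + np.cumsum(np.random.default_rng(2).normal(0, 1, 60))
filtered = moving_average_filter(prices)
parts = partition(filtered)
print([p.shape[0] for p in parts])
```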

  5. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  6. Predictive model for early math skills based on structural equations.

    Science.gov (United States)

    Aragón, Estíbaliz; Navarro, José I; Aguilar, Manuel; Cerda, Gamal; García-Sedeño, Manuel

    2016-12-01

    Early math skills are determined by higher cognitive processes that are particularly important for acquiring and developing skills during a child's early education. Such processes could be a critical target for identifying students at risk for math learning difficulties. Few studies have considered the use of a structural equation method to rationalize these relations. Participating in this study were 207 preschool students ages 59 to 72 months, 108 boys and 99 girls. Performance with respect to early math skills, early literacy, general intelligence, working memory, and short-term memory was assessed. A structural equation model explaining 64.3% of the variance in early math skills was applied. Early literacy exhibited the highest statistical significance (β = 0.443, p < 0.05), followed by intelligence (β = 0.286, p < 0.05), working memory (β = 0.220, p < 0.05), and short-term memory (β = 0.213, p < 0.05). Correlations between the independent variables were also significant (p < 0.05). According to the results, cognitive variables should be included in remedial intervention programs. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  7. A Deep Learning Prediction Model Based on Extreme-Point Symmetric Mode Decomposition and Cluster Analysis

    OpenAIRE

    Li, Guohui; Zhang, Songling; Yang, Hong

    2017-01-01

    To address the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and cluster analysis is proposed. Firstly, the original data is decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and residuals. Secondly, fuzzy c-means is used to cluster the decomposed components, and then a deep belief network (DBN) is used to predict them. Finally, the reconstructed ...

  8. Probability-based collaborative filtering model for predicting gene–disease associations

    OpenAIRE

    Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan

    2017-01-01

    Background Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene–disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. Methods We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our mo...

  9. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud; Sørensen, John Dalsgaard

    2018-01-01

    The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficient as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation...... monitoring, fault prediction and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution...

  10. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud; Sørensen, John Dalsgaard

    2018-01-01

    monitoring, fault prediction and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution......The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficient as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation...

  11. Comparison of RNA-seq and microarray-based models for clinical endpoint prediction.

    Science.gov (United States)

    Zhang, Wenqian; Yu, Ying; Hertwig, Falk; Thierry-Mieg, Jean; Zhang, Wenwei; Thierry-Mieg, Danielle; Wang, Jian; Furlanello, Cesare; Devanarayan, Viswanath; Cheng, Jie; Deng, Youping; Hero, Barbara; Hong, Huixiao; Jia, Meiwen; Li, Li; Lin, Simon M; Nikolsky, Yuri; Oberthuer, André; Qing, Tao; Su, Zhenqiang; Volland, Ruth; Wang, Charles; Wang, May D; Ai, Junmei; Albanese, Davide; Asgharzadeh, Shahab; Avigad, Smadar; Bao, Wenjun; Bessarabova, Marina; Brilliant, Murray H; Brors, Benedikt; Chierici, Marco; Chu, Tzu-Ming; Zhang, Jibin; Grundy, Richard G; He, Min Max; Hebbring, Scott; Kaufman, Howard L; Lababidi, Samir; Lancashire, Lee J; Li, Yan; Lu, Xin X; Luo, Heng; Ma, Xiwen; Ning, Baitang; Noguera, Rosa; Peifer, Martin; Phan, John H; Roels, Frederik; Rosswog, Carolina; Shao, Susan; Shen, Jie; Theissen, Jessica; Tonini, Gian Paolo; Vandesompele, Jo; Wu, Po-Yen; Xiao, Wenzhong; Xu, Joshua; Xu, Weihong; Xuan, Jiekun; Yang, Yong; Ye, Zhan; Dong, Zirui; Zhang, Ke K; Yin, Ye; Zhao, Chen; Zheng, Yuanting; Wolfinger, Russell D; Shi, Tieliu; Malkas, Linda H; Berthold, Frank; Wang, Jun; Tong, Weida; Shi, Leming; Peng, Zhiyu; Fischer, Matthias

    2015-06-25

    Gene expression profiling is being widely applied in cancer research to identify biomarkers for clinical endpoint prediction. Since RNA-seq provides a powerful tool for transcriptome-based applications beyond the limitations of microarrays, we sought to systematically evaluate the performance of RNA-seq-based and microarray-based classifiers in this MAQC-III/SEQC study for clinical endpoint prediction using neuroblastoma as a model. We generate gene expression profiles from 498 primary neuroblastomas using both RNA-seq and 44 k microarrays. Characterization of the neuroblastoma transcriptome by RNA-seq reveals that more than 48,000 genes and 200,000 transcripts are being expressed in this malignancy. We also find that RNA-seq provides much more detailed information on specific transcript expression patterns in clinico-genetic neuroblastoma subgroups than microarrays. To systematically compare the power of RNA-seq and microarray-based models in predicting clinical endpoints, we divide the cohort randomly into training and validation sets and develop 360 predictive models on six clinical endpoints of varying predictability. Evaluation of factors potentially affecting model performances reveals that prediction accuracies are most strongly influenced by the nature of the clinical endpoint, whereas technological platforms (RNA-seq vs. microarrays), RNA-seq data analysis pipelines, and feature levels (gene vs. transcript vs. exon-junction level) do not significantly affect performances of the models. We demonstrate that RNA-seq outperforms microarrays in determining the transcriptomic characteristics of cancer, while RNA-seq and microarray-based models perform similarly in clinical endpoint prediction. Our findings may be valuable to guide future studies on the development of gene expression-based predictive models and their implementation in clinical practice.

  12. Short-term wind power prediction based on LSSVM–GSA model

    International Nuclear Information System (INIS)

    Yuan, Xiaohui; Chen, Chen; Yuan, Yanbin; Huang, Yuehua; Tan, Qingxiong

    2015-01-01

    Highlights: • A hybrid model is developed for short-term wind power prediction. • The model is based on LSSVM and gravitational search algorithm. • Gravitational search algorithm is used to optimize parameters of LSSVM. • Effect of different kernel function of LSSVM on wind power prediction is discussed. • Comparative studies show that prediction accuracy of wind power is improved. - Abstract: Wind power forecasting can improve the economical and technical integration of wind energy into the existing electricity grid. Due to its intermittency and randomness, it is hard to forecast wind power accurately. For the purpose of utilizing wind power to the utmost extent, it is very important to make an accurate prediction of the output power of a wind farm under the premise of guaranteeing the security and the stability of the operation of the power system. In this paper, a hybrid model (LSSVM–GSA) based on the least squares support vector machine (LSSVM) and gravitational search algorithm (GSA) is proposed to forecast the short-term wind power. As the kernel function and the related parameters of the LSSVM have a great influence on the performance of the prediction model, the paper establishes LSSVM model based on different kernel functions for short-term wind power prediction. And then an optimal kernel function is determined and the parameters of the LSSVM model are optimized by using GSA. Compared with the Back Propagation (BP) neural network and support vector machine (SVM) model, the simulation results show that the hybrid LSSVM–GSA model based on exponential radial basis kernel function and GSA has higher accuracy for short-term wind power prediction. Therefore, the proposed LSSVM–GSA is a better model for short-term wind power prediction
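
    As a rough illustration of the LSSVM–GSA idea described above, the sketch below tunes an RBF kernel ridge regressor (a stand-in for LSSVM regression) with a simplified gravitational search over its two hyperparameters; the wind power series, lag features, and all GSA settings are invented for illustration and are not the paper's data or implementation.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for a wind power series driven by lagged values.
t = np.arange(600)
power = 50 + 20 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 3, t.size)
X = np.column_stack([power[i:-(4 - i)] for i in range(4)])  # 4 lagged inputs
y = power[4:]
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, shuffle=False)

def fitness(params):
    """Validation MSE of an RBF kernel ridge model (LSSVM-like) for (alpha, gamma)."""
    alpha, gamma = params
    model = KernelRidge(kernel="rbf", alpha=alpha, gamma=gamma).fit(X_tr, y_tr)
    return mean_squared_error(y_va, model.predict(X_va))

# Minimal gravitational search algorithm (GSA) over (alpha, gamma).
lo, hi = np.array([1e-3, 1e-4]), np.array([10.0, 1.0])
n_agents, n_iter, G0 = 12, 30, 100.0
pos = rng.uniform(lo, hi, size=(n_agents, 2))
vel = np.zeros_like(pos)
for it in range(n_iter):
    fit = np.array([fitness(p) for p in pos])
    best, worst = fit.min(), fit.max()
    m = (worst - fit + 1e-12) / (worst - best + 1e-12)   # heavier mass = better agent
    M = m / m.sum()
    G = G0 * np.exp(-20.0 * it / n_iter)                 # decaying gravitational constant
    acc = np.zeros_like(pos)
    for i in range(n_agents):
        for j in range(n_agents):
            if i == j:
                continue
            diff = pos[j] - pos[i]
            dist = np.linalg.norm(diff) + 1e-12
            acc[i] += rng.random() * G * M[j] * diff / dist
    vel = rng.random(pos.shape) * vel + acc
    pos = np.clip(pos + vel, lo, hi)

fit = np.array([fitness(p) for p in pos])
alpha_opt, gamma_opt = pos[fit.argmin()]
print(f"best (alpha, gamma) = ({alpha_opt:.4f}, {gamma_opt:.4f}), val MSE = {fit.min():.3f}")
```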

  13. Copula based prediction models: an application to an aortic regurgitation study

    Directory of Open Access Journals (Sweden)

    Shoukri Mohamed M

    2007-06-01

    Full Text Available Abstract Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = - 0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fractions measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula based prediction model is estimated as: Post-operative ejection fraction = - 0.0933 + 0
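
    The abstract's simulation idea can be illustrated with a minimal sketch: draw dependent uniforms from a Clayton (Archimedean) copula by conditional inversion, map them to gamma marginals, and read off a conditional prediction. The gamma parameters and the dependence parameter theta below are assumptions, not the study's fitted values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative gamma marginals for pre- and post-operative ejection fraction
# (shape/scale values are assumptions, not the study's fitted parameters).
pre_dist = stats.gamma(a=20.0, scale=0.55 / 20.0)    # mean ~0.55
post_dist = stats.gamma(a=18.0, scale=0.40 / 18.0)   # mean ~0.40
theta = 2.0  # Clayton dependence parameter (theta > 0 => positive dependence)

def clayton_sample(n, theta, rng):
    """Sample (u, v) from a Clayton copula via the conditional-distribution method."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, v

u, v = clayton_sample(5000, theta, rng)
pre = pre_dist.ppf(u)     # transform uniform margins to gamma margins
post = post_dist.ppf(v)

# A copula-based "prediction" of post-op EF: conditional mean given the pre-op value,
# approximated here by local averaging of the simulated pairs.
def predict_post(pre_value, pre, post, width=0.02):
    mask = np.abs(pre - pre_value) < width
    return post[mask].mean()

print("Kendall's tau:", round(stats.kendalltau(pre, post)[0], 3))
print("predicted post-op EF at pre-op EF = 0.60:", round(predict_post(0.60, pre, post), 3))
```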

  14. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    Science.gov (United States)

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

    Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data using sensing
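
    A hedged sketch of the three-stage pipeline on synthetic deformation data is given below: a scalar Kalman filter denoises the series, an ARIMA model (statsmodels) captures the mean, and a GARCH(1,1) model on the residuals (via the third-party arch package) supplies the conditional variance. All data and model orders are illustrative, not the paper's GNSS series or calibration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(2)

# Synthetic "GNSS deformation" series: slow drift plus heteroskedastic noise.
n = 500
trend = 0.01 * np.arange(n)
noise = rng.normal(0, 1 + 0.5 * np.sin(np.arange(n) / 25.0) ** 2, n)
raw = trend + noise

# Stage 1: scalar Kalman filter (random-walk state) to denoise the raw series.
def kalman_denoise(z, q=0.05, r=1.0):
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                      # predict
        kgain = p / (p + r)            # Kalman gain
        x = x + kgain * (zk - x)       # update with measurement zk
        p = (1 - kgain) * p
        out[k] = x
    return out

smooth = kalman_denoise(raw)

# Stage 2: ARIMA for the (denoised) mean behaviour.
arima = ARIMA(smooth, order=(1, 1, 1)).fit()
mean_forecast = arima.forecast(steps=5)

# Stage 3: GARCH(1,1) on the ARIMA residuals for the conditional variance.
garch = arch_model(arima.resid, vol="GARCH", p=1, q=1, mean="Zero").fit(disp="off")
var_forecast = garch.forecast(horizon=5).variance.iloc[-1].to_numpy()

for h, (m, v) in enumerate(zip(mean_forecast, var_forecast), start=1):
    print(f"step {h}: mean {m:.3f} mm, +/- {1.96 * np.sqrt(v):.3f} mm (95% band)")
```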

  15. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    Directory of Open Access Journals (Sweden)

    Jingzhou Xin

    2018-01-01

    Full Text Available Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data

  16. A polynomial based model for cell fate prediction in human diseases.

    Science.gov (United States)

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation based and apoptosis pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, within both the two considered gene selection methods, the prediction accuracies of polynomials of different degrees show little differences. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than others. When comparing the linear polynomials based on the two gene selection methods, it shows that although the accuracy of the linear polynomial that uses correlation analysis outcomes is a little higher (achieves 86.62%), the one within genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.
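
    A minimal sketch of the workflow on synthetic expression data is shown below: correlation-based gene selection, polynomial feature expansion of degree 1 versus 2, and 10-fold cross-validation of the resulting classifier. The data, gene counts, and selected thresholds are invented; the pancreatic dataset is not reproduced.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(3)

# Synthetic expression matrix: 200 cells x 500 genes, binary "fate" label.
n_cells, n_genes = 200, 500
X = rng.normal(size=(n_cells, n_genes))
true_w = np.zeros(n_genes)
true_w[:10] = 1.5                                       # only 10 informative genes
y = (X @ true_w + rng.normal(0, 1, n_cells) > 0).astype(int)

# Correlation-based feature selection: keep the genes most correlated with the label.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_genes)])
top = np.argsort(corr)[::-1][:20]

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for degree in (1, 2):
    model = make_pipeline(PolynomialFeatures(degree=degree, include_bias=False),
                          LogisticRegression(max_iter=2000))
    acc = cross_val_score(model, X[:, top], y, cv=cv, scoring="accuracy")
    print(f"degree {degree}: 10-fold accuracy = {acc.mean():.3f} +/- {acc.std():.3f}")
```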

  17. Multirule Based Diagnostic Approach for the Fog Predictions Using WRF Modelling Tool

    Directory of Open Access Journals (Sweden)

    Swagata Payra

    2014-01-01

    Full Text Available The prediction of fog onset remains difficult despite the progress in numerical weather prediction. It is a complex process and requires adequate representation of the local perturbations in weather prediction models. It mainly depends upon microphysical and mesoscale processes that act within the boundary layer. This study utilizes a multirule based diagnostic (MRD) approach using postprocessing of the model simulations for fog predictions. The empiricism involved in this approach is mainly to bridge the gap between mesoscale and microscale variables, which are related to mechanism of the fog formation. Fog occurrence is a common phenomenon during winter season over Delhi, India, with the passage of the western disturbances across northwestern part of the country accompanied with significant amount of moisture. This study implements the above cited approach for the prediction of occurrences of fog and its onset time over Delhi. For this purpose, a high resolution weather research and forecasting (WRF) model is used for fog simulations. The study involves depiction of model validation and postprocessing of the model simulations for MRD approach and its subsequent application to fog predictions. Through this approach, the model identified foggy and nonfoggy days successfully 94% of the time. Further, the onset of fog events is well captured within an accuracy of 30–90 minutes. This study demonstrates that the multirule based postprocessing approach is a useful and highly promising tool in improving the fog predictions.

  18. Addressing issues associated with evaluating prediction models for survival endpoints based on the concordance statistic.

    Science.gov (United States)

    Wang, Ming; Long, Qi

    2016-09-01

    Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
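
    The IPCW idea can be sketched as follows: estimate the censoring distribution G(t) with a Kaplan-Meier fit on the flipped event indicator (here via the lifelines package, an assumption) and weight each comparable pair by 1/G(T_i)^2 in a Uno-style concordance estimate. The risk scores and survival data below are synthetic, and ties are ignored for brevity.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(4)

# Synthetic prognostic study: higher risk score -> shorter survival, with censoring.
n = 300
risk = rng.normal(size=n)
event_time = rng.exponential(scale=np.exp(-risk))
censor_time = rng.exponential(scale=1.5, size=n)
time = np.minimum(event_time, censor_time)
event = (event_time <= censor_time).astype(int)

# Kaplan-Meier estimate of the censoring distribution G(t) (flip the event indicator).
kmf = KaplanMeierFitter().fit(time, event_observed=1 - event)

def G(t):
    return kmf.survival_function_at_times(t).to_numpy()

def ipcw_cindex(time, event, risk, G):
    """Uno-style IPCW concordance: weight each comparable pair by 1/G(T_i)^2."""
    num = den = 0.0
    g = np.clip(G(time), 1e-4, None)
    for i in range(len(time)):
        if event[i] == 0:
            continue
        comparable = time > time[i]          # j still at risk after i's event
        w = 1.0 / g[i] ** 2
        num += w * np.sum(risk[i] > risk[comparable])
        den += w * np.sum(comparable)
    return num / den

print("IPCW c-index:", round(ipcw_cindex(time, event, risk, G), 3))
```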

  19. Capacity Prediction Model Based on Limited Priority Gap-Acceptance Theory at Multilane Roundabouts

    Directory of Open Access Journals (Sweden)

    Zhaowei Qu

    2014-01-01

    Full Text Available Capacity is an important design parameter for roundabouts, and it is the premise of computing their delay and queue. Roundabout capacity has been studied for decades, and empirical regression model and gap-acceptance model are the two main methods to predict it. Based on gap-acceptance theory, by considering the effect of limited priority, especially the relationship between limited priority factor and critical gap, a modified model was built to predict the roundabout capacity. We then compare the results between Raff's method and maximum likelihood estimation (MLE) method, and the MLE method was used to predict the critical gaps. Finally, the predicted capacities from different models were compared with the observed capacity from field surveys, which verifies the performance of the proposed model.
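
    For orientation, the sketch below computes the classical gap-acceptance entry capacity under exponential circulating headways and applies a purely illustrative limited-priority adjustment. The critical gap, follow-up time, and the adjustment factor are placeholders and do not reproduce the calibrated values or the exact limited-priority formulation derived in the paper.

```python
import numpy as np

def gap_acceptance_capacity(q_c_vph, t_c=4.5, t_f=2.5):
    """Classic gap-acceptance entry capacity (veh/h) for one lane, assuming
    exponential (M1) headways in the circulating stream.
    q_c_vph: circulating flow (veh/h); t_c: critical gap (s); t_f: follow-up time (s)."""
    q = q_c_vph / 3600.0                                   # circulating flow in veh/s
    if q == 0:
        return 3600.0 / t_f
    return 3600.0 * q * np.exp(-q * t_c) / (1.0 - np.exp(-q * t_f))

def limited_priority_capacity(q_c_vph, t_c=4.5, t_f=2.5, beta=0.9):
    """Hypothetical limited-priority adjustment: circulating vehicles occasionally
    yield, represented here as a simple factor applied to the critical gap.
    (The paper derives its factor from theory; beta here is only illustrative.)"""
    return gap_acceptance_capacity(q_c_vph, t_c=beta * t_c, t_f=t_f)

for q_c in (300, 600, 900, 1200):
    print(q_c, round(gap_acceptance_capacity(q_c)), round(limited_priority_capacity(q_c)))
```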

  20. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    Science.gov (United States)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
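
    A compact sketch of the approach is given below: sigma points are generated from the current state estimate, each is run through a toy end-of-life simulation, and the weighted outputs give the EOL mean and variance. The two-state degradation model and its parameters are invented for illustration and do not represent the paper's solenoid valve model.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Standard unscented-transform sigma points and weights."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def simulate_eol(state, threshold=1.0, dt=1.0):
    """Toy end-of-life simulation: damage grows at a state-dependent rate until a
    threshold (stands in for the physics-based component simulation)."""
    damage, rate = state
    t = 0.0
    while damage < threshold and t < 1e4:
        damage += rate * dt
        t += dt
    return t

# Current state estimate: [damage, degradation rate] with uncertainty.
mean = np.array([0.20, 0.004])
cov = np.diag([0.01 ** 2, 0.0008 ** 2])

pts, w = sigma_points(mean, cov, kappa=1.0)
eols = np.array([simulate_eol(p) for p in pts])
eol_mean = np.dot(w, eols)
eol_var = np.dot(w, (eols - eol_mean) ** 2)
print(f"EOL mean ~ {eol_mean:.0f} cycles, std ~ {np.sqrt(eol_var):.0f} (from {len(pts)} simulations)")
```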

  1. Spherical and cylindrical cavity expansion models based prediction of penetration depths of concrete targets.

    Directory of Open Access Journals (Sweden)

    Xiaochao Jin

    Full Text Available The cavity expansion theory is most widely used to predict the depth of penetration of concrete targets. The main purpose of this work is to clarify the differences between the spherical and cylindrical cavity expansion models and their scope of application in predicting the penetration depths of concrete targets. The factors that influence the dynamic cavity expansion process of concrete materials were first examined. Based on numerical results, the relationship between expansion pressure and velocity was established. Then the parameters in the Forrestal's formula were fitted to have a convenient and effective prediction of the penetration depth. Results showed that both the spherical and cylindrical cavity expansion models can accurately predict the depth of penetration when the initial velocity is lower than 800 m/s. However, the prediction accuracy decreases with the increasing of the initial velocity and diameters of the projectiles. Based on our results, it can be concluded that when the initial velocity is higher than the critical velocity, the cylindrical cavity expansion model performs better than the spherical cavity expansion model in predicting the penetration depth, while when the initial velocity is lower than the critical velocity the conclusion is quite the contrary. This work provides a basic principle for selecting the spherical or cylindrical cavity expansion model to predict the penetration depth of concrete targets.

  2. Accuracy of depolarization and delay spread predictions using advanced ray-based modeling in indoor scenarios

    Directory of Open Access Journals (Sweden)

    Mani Francesco

    2011-01-01

    Full Text Available Abstract This article investigates the prediction accuracy of an advanced deterministic propagation model in terms of channel depolarization and frequency selectivity for indoor wireless propagation. In addition to specular reflection and diffraction, the developed ray tracing tool considers penetration through dielectric blocks and/or diffuse scattering mechanisms. The sensitivity and prediction accuracy analysis is based on two measurement campaigns carried out in a warehouse and an office building. It is shown that the implementation of diffuse scattering into RT significantly increases the accuracy of the cross-polar discrimination prediction, whereas the delay-spread prediction is only marginally improved.

  3. Modeling and prediction of extraction profile for microwave-assisted extraction based on absorbed microwave energy.

    Science.gov (United States)

    Chan, Chung-Hung; Yusoff, Rozita; Ngoh, Gek-Cheng

    2013-09-01

    A modeling technique based on absorbed microwave energy was proposed to model microwave-assisted extraction (MAE) of antioxidant compounds from cocoa (Theobroma cacao L.) leaves. By adapting suitable extraction model at the basis of microwave energy absorbed during extraction, the model can be developed to predict extraction profile of MAE at various microwave irradiation power (100-600 W) and solvent loading (100-300 ml). Verification with experimental data confirmed that the prediction was accurate in capturing the extraction profile of MAE (R-square value greater than 0.87). Besides, the predicted yields from the model showed good agreement with the experimental results with less than 10% deviation observed. Furthermore, suitable extraction times to ensure high extraction yield at various MAE conditions can be estimated based on absorbed microwave energy. The estimation is feasible as more than 85% of active compounds can be extracted when compared with the conventional extraction technique. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Spatial downscaling of soil prediction models based on weighted generalized additive models in smallholder farm settings.

    Science.gov (United States)

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D

    2017-09-11

    Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communications of the digital soil mapping information between diverse audiences become problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate the coarse spatial resolution soil exchangeable potassium (Kex) and soil total nitrogen (TN) base map into fine spatial resolution soil downscaled map using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine spatial resolution spectral indices in the downscaling process, the soil downscaled maps not only conserve the spatial information of coarse spatial resolution soil maps but also depict the spatial details of soil properties at fine spatial resolution. The results of this study demonstrated that the difference between the fine spatial resolution downscaled maps and fine spatial resolution base maps is smaller than the difference between coarse spatial resolution base maps and fine spatial resolution base maps. The appropriate and economical strategy to promote the DSM technique in smallholder farms is to develop the relatively coarse spatial resolution soil prediction maps or utilize available coarse spatial resolution soil maps at the regional scale and to disaggregate these maps to the fine spatial resolution downscaled soil maps at farm scale.

  5. Enhanced Voltage Control of VSC-HVDC Connected Offshore Wind Farms Based on Model Predictive Control

    DEFF Research Database (Denmark)

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei

    2018-01-01

    This paper proposes an enhanced voltage control strategy (EVCS) based on model predictive control (MPC) for voltage source converter based high voltage direct current (VSCHVDC) connected offshore wind farms (OWFs). In the proposed MPC based EVCS, all wind turbine generators (WTGs) as well...... as the wind farm side VSC are optimally coordinated to keep voltages within the feasible range and reduce system power losses. Considering the high ratio of the OWF collector system, the effects of active power outputs of WTGs on voltage control are also taken into consideration. The predictive model of VSC...

  6. Research on a Novel Kernel Based Grey Prediction Model and Its Applications

    Directory of Open Access Journals (Sweden)

    Xin Ma

    2016-01-01

    Full Text Available The discrete grey prediction models have attracted considerable research interest due to their effectiveness in improving the modelling accuracy of the traditional grey prediction models. The autoregressive GM(1,1) model, abbreviated as ARGM(1,1), is a novel discrete grey model which is easy to use and accurate in prediction of approximate nonhomogeneous exponential time series. However, the ARGM(1,1) is essentially a linear model; thus, its applicability is still limited. In this paper a novel kernel based ARGM(1,1) model is proposed, abbreviated as KARGM(1,1). The KARGM(1,1) has a nonlinear function which can be expressed by a kernel function using the kernel method, and its modelling procedures are presented in detail. Two case studies of predicting the monthly gas well production are carried out with the real world production data. The results of the KARGM(1,1) model are compared to the existing discrete univariate grey prediction models, including ARGM(1,1), NDGM(1,1,k), DGM(1,1), and NGBMOP, and it is shown that the KARGM(1,1) outperforms the other four models.
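
    As background for the grey-model family discussed above, the sketch below implements the classic GM(1,1) model (accumulated generating operation, least-squares estimation of the development and driving coefficients, and the time-response function) on a short synthetic series. The kernelised KARGM(1,1) variant itself is not reproduced, and the series is invented.

```python
import numpy as np

def gm11_fit_predict(x0, horizon=3):
    """Classic GM(1,1) grey model: fit on series x0, return fitted values and forecasts."""
    n = len(x0)
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development / driving coefficients
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)]) # inverse accumulation
    return x0_hat[:n], x0_hat[n:]

# Synthetic, roughly exponential monthly production series (illustrative only).
series = np.array([102.0, 98.5, 96.0, 92.8, 90.1, 87.9, 85.2, 83.0])
fitted, forecast = gm11_fit_predict(series, horizon=3)
print("fitted  :", np.round(fitted, 1))
print("forecast:", np.round(forecast, 1))
```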

  7. A state-based probabilistic model for tumor respiratory motion prediction

    International Nuclear Information System (INIS)

    Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu Huanmei

    2010-01-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more
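
    The state-based idea can be illustrated with a simplified sketch: k-means clusters a synthetic breathing trace into three states, a first-order (fully observed) Markov chain stands in for the paper's hidden Markov model, and each state's mean velocity extrapolates the position across a system latency. The trace, state count, and latency values are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Synthetic breathing trace (tumour position in mm, ~4 s period, 25 Hz sampling).
t = np.arange(0, 120, 0.04)
pos = 5.0 * np.cos(2 * np.pi * t / 4.0) ** 2 + rng.normal(0, 0.15, t.size)
vel = np.gradient(pos, t)

# Step 1: k-means on (position, velocity) -> discrete breathing "states"
# (a stand-in for the exhale / inhale / end-of-exhale states in the paper).
feats = np.column_stack([pos, vel])
states = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)

# Step 2: first-order Markov chain over states (a fully observed simplification
# of the paper's hidden Markov model), plus per-state mean velocity.
K = 3
trans = np.ones((K, K))                       # Laplace-smoothed transition counts
for a, b in zip(states[:-1], states[1:]):
    trans[a, b] += 1
trans /= trans.sum(axis=1, keepdims=True)
state_vel = np.array([vel[states == k].mean() for k in range(K)])

# Step 3: predict ahead by `lag` samples to compensate a system latency.
def predict(current_state, current_pos, lag, dt=0.04):
    s, p = current_state, current_pos
    for _ in range(lag):
        s = int(np.argmax(trans[s]))          # most likely next state
        p += state_vel[s] * dt                # advance with that state's velocity
    return p

lag = 5                                       # 200 ms latency at 25 Hz
pred = [predict(states[i], pos[i], lag) for i in range(len(pos) - lag)]
rmse = np.sqrt(np.mean((np.array(pred) - pos[lag:]) ** 2))
print(f"RMS error at {lag * 40} ms latency: {rmse:.2f} mm")
```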

  8. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    Science.gov (United States)

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

    Tracking degradation of mechanical components is very critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods when enough run-to-failure condition monitoring data can be used have been fully researched, but for some high reliability components, it is very difficult to collect run-to-failure condition monitoring data, i.e., from normal to failure. Only a certain number of condition indicators in certain period can be used to estimate RUL. In addition, some existing prediction methods have problems which block RUL estimation due to poor extrapolability. The predicted value converges to a certain constant or fluctuates in certain range. Moreover, the fluctuant condition features also have bad effects on prediction. In order to solve these dilemmas, this paper proposes a RUL prediction model based on neural network with dynamic windows. This model mainly consists of three steps: window size determination by increasing rate, change point detection and rolling prediction. The proposed method has two dominant strengths. One is that the proposed approach does not need to assume the degradation trajectory is subject to a certain distribution. The other is it can adapt to variation of degradation indicators which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated by real field data and simulation data. PMID:25806873

  9. Swarm Intelligence-Based Hybrid Models for Short-Term Power Load Prediction

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2014-01-01

    Full Text Available Swarm intelligence (SI) is widely and successfully applied in the engineering field to solve practical optimization problems because various hybrid models, which are based on the SI algorithm and statistical models, are developed to further improve the predictive abilities. In this paper, hybrid intelligent forecasting models based on the cuckoo search (CS) as well as the singular spectrum analysis (SSA), time series, and machine learning methods are proposed to conduct short-term power load prediction. The forecasting performance of the proposed models is augmented by a rolling multistep strategy over the prediction horizon. The test results are representative of the out-performance of the SSA and CS in tuning the seasonal autoregressive integrated moving average (SARIMA) and support vector regression (SVR) in improving load forecasting, which indicates that both the SSA-based data denoising and SI-based intelligent optimization strategy can effectively improve the model’s predictive performance. Additionally, the proposed CS-SSA-SARIMA and CS-SSA-SVR models provide very impressive forecasting results, demonstrating their strong robustness and universal forecasting capacities in terms of short-term power load prediction 24 hours in advance.

  10. A hybrid model for dissolved oxygen prediction in aquaculture based on multi-scale features

    Directory of Open Access Journals (Sweden)

    Chen Li

    2018-03-01

    Full Text Available To increase prediction accuracy of dissolved oxygen (DO) in aquaculture, a hybrid model based on multi-scale features using ensemble empirical mode decomposition (EEMD) is proposed. Firstly, original DO datasets are decomposed by EEMD and we get several components. Secondly, these components are used to reconstruct four terms including high frequency term, intermediate frequency term, low frequency term and trend term. Thirdly, according to the characteristics of high and intermediate frequency terms, which fluctuate violently, the least squares support vector machine (LSSVR) is used to predict the two terms. The fluctuation of low frequency term is gentle and periodic, so it can be modeled by BP neural network with an optimal mind evolutionary computation (MEC-BP). Then, the trend term is predicted using grey model (GM) because it is nearly linear. Finally, the prediction values of DO datasets are calculated by the sum of the forecasting values of all terms. The experimental results demonstrate that our hybrid model outperforms EEMD-ELM (extreme learning machine based on EEMD), EEMD-BP and MEC-BP models based on the mean absolute error (MAE), mean absolute percentage error (MAPE), mean square error (MSE) and root mean square error (RMSE). Our hybrid model is proven to be an effective approach to predict aquaculture DO.

  11. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    Science.gov (United States)

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    Background Physicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions.. Objective The aim is to summarize and review published studies describing computer-based approaches for predicting patients’ future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research. Methods The method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts then analyzing full-texts, and (2) documenting by extracting publication details and information on context, predictors, missing data, modeling algorithm, outcome, and evaluation methods into a matrix table, synthesizing data, and reporting results. Results After duplicate removal, 1339 articles were screened in abstracts and titles and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods. Conclusions Interest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing. In addition to raw/coded health

  12. Improving stability of prediction models based on correlated omics data by using network approaches.

    Directory of Open Access Journals (Sweden)

    Renaud Tissier

    Full Text Available Building prediction models based on complex omics datasets such as transcriptomics, proteomics, metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model and application of these methods yields unstable results. We propose a novel strategy for model selection where the obtained models also perform well in terms of overall predictability. Several three step approaches are considered, where the steps are 1) network construction, 2) clustering to empirically derive modules or pathways, and 3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally we illustrate the advantages of our approach by application of the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell lines pharmacogenomics dataset.
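
    A minimal sketch of the three-step strategy on synthetic correlated data follows: build a correlation-based distance, cut a hierarchical clustering into modules, and feed module summaries into a penalised regression (module means plus ridge are used here as a simple stand-in for group-specific penalisation). The module structure and sample sizes are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(10)

# Synthetic correlated omics data: 5 latent modules, 40 features each, 150 samples.
n, modules, per_module = 150, 5, 40
latent = rng.normal(size=(n, modules))
X = np.repeat(latent, per_module, axis=1) + 0.6 * rng.normal(size=(n, modules * per_module))
y = latent[:, 0] - 0.7 * latent[:, 2] + rng.normal(0, 0.5, n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Steps 1-2: correlation network + hierarchical clustering to recover modules.
dist = 1.0 - np.abs(np.corrcoef(X_tr, rowvar=False))
condensed = dist[np.triu_indices_from(dist, k=1)]
labels = fcluster(linkage(condensed, method="average"), t=modules, criterion="maxclust")

# Step 3: incorporate the grouping by summarising each module (mean feature)
# before fitting a penalised regression.
def module_means(X, labels):
    return np.column_stack([X[:, labels == k].mean(axis=1) for k in np.unique(labels)])

plain = RidgeCV().fit(X_tr, y_tr)
grouped = RidgeCV().fit(module_means(X_tr, labels), y_tr)

print("ridge on all features R2:", round(r2_score(y_te, plain.predict(X_te)), 3))
print("ridge on module means R2:", round(r2_score(y_te, grouped.predict(module_means(X_te, labels))), 3))
```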

  13. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

    The values of Q=(fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., transport model based on empirical confinement scaling, dimensionless scaling technique, and theory-based transport models are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with plasma current of 15 MA and plasma density 15% below the Greenwald value is 3.6 s with one technical standard deviation of ±14%. These data are translated into a Q interval of [7-13] at the auxiliary heating power P aux = 40 MW and [7-28] at the minimum heating power satisfying a good confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)

  14. Data Analytics Based Dual-Optimized Adaptive Model Predictive Control for the Power Plant Boiler

    Directory of Open Access Journals (Sweden)

    Zhenhao Tang

    2017-01-01

    Full Text Available To control the furnace temperature of a power plant boiler precisely, a dual-optimized adaptive model predictive control (DoAMPC) method is designed based on the data analytics. In the proposed DoAMPC, an accurate predictive model is constructed adaptively by the hybrid algorithm of the least squares support vector machine and differential evolution method. Then, an optimization problem is constructed based on the predictive model and many constraint conditions. To control the boiler furnace temperature, the differential evolution method is utilized to decide the control variables by solving the optimization problem. The proposed method can adapt to the time-varying situation by updating the sample data. The experimental results based on practical data illustrate that the DoAMPC can control the boiler furnace temperature with errors of less than 1.5% which can meet the requirements of the real production process.
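
    The sketch below illustrates the data-driven predictive control idea on a toy furnace model: a kernel ridge regressor (standing in for the LSSVM) is fitted to input-output samples, and scipy's differential evolution selects the next control inputs by minimising a predicted tracking cost with a move penalty. The plant, bounds, and weights are all assumptions, not the paper's boiler data or constraints.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from scipy.optimize import differential_evolution

rng = np.random.default_rng(6)

# Synthetic plant: furnace temperature responds to fuel and air-flow setpoints.
def plant(u_fuel, u_air):
    return 850 + 45 * np.tanh(u_fuel - 1.0) + 15 * np.log1p(u_air) + rng.normal(0, 2)

U = rng.uniform([0.2, 0.5], [2.0, 4.0], size=(300, 2))
T = np.array([plant(*u) for u in U])

# Data-driven predictive model (kernel ridge as an LSSVM-style stand-in).
model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5).fit(U, T)

# One-step "MPC": choose control inputs whose predicted temperature tracks
# the setpoint while penalising large moves (weights are illustrative).
setpoint, u_prev = 880.0, np.array([1.0, 2.0])

def cost(u):
    t_pred = model.predict(u.reshape(1, -1))[0]
    return (t_pred - setpoint) ** 2 + 5.0 * np.sum((u - u_prev) ** 2)

result = differential_evolution(cost, bounds=[(0.2, 2.0), (0.5, 4.0)], seed=0, tol=1e-6)
print("optimal (fuel, air):", np.round(result.x, 3),
      "predicted T:", round(model.predict(result.x.reshape(1, -1))[0], 1))
```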

  15. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Methods of accurate modeling and power capability predicting for ultracapacitors are of great significance in management and application of lithium-ion battery/ultracapacitor hybrid energy storage system. To overcome the simulation error coming from constant capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, where the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulating results under different temperatures, and the effectiveness of the designed observer is proved by various test conditions. Additionally, the power capability prediction results of different time scales and temperatures are compared, to study their effects on ultracapacitor's power capability.

  16. Validation of water sorption-based clay prediction models for calcareous soils

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Razzaghi, Fatemeh; Moosavi, Ali

    2017-01-01

    on prediction accuracy. The soils had clay content ranging from 9 to 61% and CaCO3 from 24 to 97%. The three water sorption models considered showed a reasonably fair prediction of the clay content from water sorption at 28% relative humidity (RMSE and ME values ranging from 10.6 to 12.1 and −8.1 to −4......Soil particle size distribution (PSD), particularly the active clay fraction, mediates soil engineering, agronomic and environmental functions. The tedious and costly nature of traditional methods of determining PSD prompted the development of water sorption-based models for determining the clay...... fraction. The applicability of such models to semi-arid soils with significant amounts of calcium carbonate and/or gypsum is unknown. The objective of this study was to validate three water sorption-based clay prediction models for 30 calcareous soils from Iran and identify the effect of CaCO3...

  17. Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics.

    Science.gov (United States)

    Zhang, Liping; Wang, Li; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2017-03-04

    Echinococcosis, which can seriously harm human health and animal husbandry production, has become an endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model in Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)₄ model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model has a higher performance ability, not only for model fitting, but also for forecasting. Furthermore, considering the stability and the modeling precision in the long run, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of identifying the future tendency. The number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before eventually witnessing a slow decline, until it finally ends.

  18. Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics

    Directory of Open Access Journals (Sweden)

    Liping Zhang

    2017-03-01

    Full Text Available Echinococcosis, which can seriously harm human health and animal husbandry production, has become an endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model in Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)4 model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model has a higher performance ability, not only for model fitting, but also for forecasting. Furthermore, considering the stability and the modeling precision in the long run, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of identifying the future tendency. The number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before eventually witnessing a slow decline, until it finally ends.

  19. Model-free prediction and regression a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  20. On the comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid

    Science.gov (United States)

    Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.

    2017-03-01

    In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely, multi-scenario, tree-based, and chance-constrained model predictive control is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as main equipment. The real experimental results show significant differences from the plant components, mainly in terms of use of energy, for each implemented technique. Effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria when selecting an appropriate stochastic predictive controller.

  1. Validation of Energy Expenditure Prediction Models Using Real-Time Shoe-Based Motion Detectors.

    Science.gov (United States)

    Lin, Shih-Yun; Lai, Ying-Chih; Hsia, Chi-Chun; Su, Pei-Fang; Chang, Chih-Han

    2017-09-01

    This study aimed to verify and compare the accuracy of energy expenditure (EE) prediction models using shoe-based motion detectors with embedded accelerometers. Three physical activity (PA) datasets (unclassified, recognition, and intensity segmentation) were used to develop three prediction models. A multiple classification flow and these models were used to estimate EE. The "unclassified" dataset was defined as the data without PA recognition, the "recognition" as the data classified with PA recognition, and the "intensity segmentation" as the data with intensity segmentation. The three datasets contained accelerometer signals (quantified as signal magnitude area (SMA)) and net heart rate (HR net ). The accuracy of these models was assessed according to the deviation between physically measured EE and model-estimated EE. The variance between physically measured EE and model-estimated EE expressed by simple linear regressions was increased by 63% and 13% using SMA and HR net , respectively. The accuracy of the EE predicted from accelerometer signals is influenced by the different activities that exhibit different count-EE relationships within the same prediction model. The recognition model provides a better estimation and lower variability of EE compared with the unclassified and intensity segmentation models. The proposed shoe-based motion detectors can improve the accuracy of EE estimation and has great potential to be used to manage everyday exercise in real time.

  2. Genomic prediction based on data from three layer lines using non-linear regression models.

    Science.gov (United States)

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional

  3. Not just the norm: exemplar-based models also predict face aftereffects.

    Science.gov (United States)

    Ross, David A; Deroche, Mickael; Palmeri, Thomas J

    2014-02-01

    The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted toward a face with attributes opposite to those of the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here, we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation.

  4. Nonparametric Tree-Based Predictive Modeling of Storm Outages on an Electric Distribution Network.

    Science.gov (United States)

    He, Jichao; Wanik, David W; Hartman, Brian M; Anagnostou, Emmanouil N; Astitha, Marina; Frediani, Maria E B

    2017-03-01

    This article compares two nonparametric tree-based models, quantile regression forests (QRF) and Bayesian additive regression trees (BART), for predicting storm outages on an electric distribution network in Connecticut, USA. We evaluated point estimates and prediction intervals of outage predictions for both models using high-resolution weather, infrastructure, and land use data for 89 storm events (including hurricanes, blizzards, and thunderstorms). We found that spatially BART predicted more accurate point estimates than QRF. However, QRF produced better prediction intervals for high spatial resolutions (2-km grid cells and towns), while BART predictions aggregated to coarser resolutions (divisions and service territory) more effectively. We also found that the predictive accuracy was dependent on the season (e.g., tree-leaf condition, storm characteristics), and that the predictions were most accurate for winter storms. Given the merits of each individual model, we suggest that BART and QRF be implemented together to show the complete picture of a storm's potential impact on the electric distribution network, which would allow for a utility to make better decisions about allocating prestorm resources. © 2016 Society for Risk Analysis.

  5. Predicting adsorptive removal of chlorophenol from aqueous solution using artificial intelligence based modeling approaches.

    Science.gov (United States)

    Singh, Kunwar P; Gupta, Shikha; Ojha, Priyanka; Rai, Premanjali

    2013-04-01

    The research aims to develop artificial intelligence (AI)-based model to predict the adsorptive removal of 2-chlorophenol (CP) in aqueous solution by coconut shell carbon (CSC) using four operational variables (pH of solution, adsorbate concentration, temperature, and contact time), and to investigate their effects on the adsorption process. Accordingly, based on a factorial design, 640 batch experiments were conducted. Nonlinearities in experimental data were checked using Brock-Dechert-Scheimkman (BDS) statistics. Five nonlinear models were constructed to predict the adsorptive removal of CP in aqueous solution by CSC using four variables as input. Performances of the constructed models were evaluated and compared using statistical criteria. BDS statistics revealed strong nonlinearity in experimental data. Performance of all the models constructed here was satisfactory. Radial basis function network (RBFN) and multilayer perceptron network (MLPN) models performed better than generalized regression neural network, support vector machines, and gene expression programming models. Sensitivity analysis revealed that the contact time had highest effect on adsorption followed by the solution pH, temperature, and CP concentration. The study concluded that all the models constructed here were capable of capturing the nonlinearity in data. A better generalization and predictive performance of RBFN and MLPN models suggested that these can be used to predict the adsorption of CP in aqueous solution using CSC.

  6. Assessing the model transferability for prediction of transcription factor binding sites based on chromatin accessibility.

    Science.gov (United States)

    Liu, Sheng; Zibetti, Cristina; Wan, Jun; Wang, Guohua; Blackshaw, Seth; Qian, Jiang

    2017-07-27

    Computational prediction of transcription factor (TF) binding sites in different cell types is challenging. Recent technology development allows us to determine the genome-wide chromatin accessibility in various cellular and developmental contexts. The chromatin accessibility profiles provide useful information in prediction of TF binding events in various physiological conditions. Furthermore, ChIP-Seq analysis was used to determine genome-wide binding sites for a range of different TFs in multiple cell types. Integration of these two types of genomic information can improve the prediction of TF binding events. We assessed to what extent a model built upon on other TFs and/or other cell types could be used to predict the binding sites of TFs of interest. A random forest model was built using a set of cell type-independent features such as specific sequences recognized by the TFs and evolutionary conservation, as well as cell type-specific features derived from chromatin accessibility data. Our analysis suggested that the models learned from other TFs and/or cell lines performed almost as well as the model learned from the target TF in the cell type of interest. Interestingly, models based on multiple TFs performed better than single-TF models. Finally, we proposed a universal model, BPAC, which was generated using ChIP-Seq data from multiple TFs in various cell types. Integrating chromatin accessibility information with sequence information improves prediction of TF binding.The prediction of TF binding is transferable across TFs and/or cell lines suggesting there are a set of universal "rules". A computational tool was developed to predict TF binding sites based on the universal "rules".

  7. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    Science.gov (United States)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

    Due to the health impacts caused by exposures to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become popular as an important topic in atmospheric and environmental research today. The knowledge on the dynamics and complexity of air pollutants behavior has made artificial intelligence models as a useful tool for a more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using combination of Support Vector Machine (SVM) as predictor and Partial Least Square (PLS) as a data selection tool based on the measured values of CO concentrations. The CO concentrations of Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years measured data. Results demonstrated that both models have good prediction ability; however the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistic estimators including relative mean errors, root mean squared errors and the mean absolute relative error have been employed to compare performances of the models. It has been concluded that the errors decrease after size reduction and coefficients of determination increase from 56 to 81% for SVM model to 65-85% for hybrid PLS-SVM model respectively. Also it was found that the hybrid PLS-SVM model required lower computational time than SVM model as expected, hence supporting the more accurate and faster prediction ability of hybrid PLS-SVM model.
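
    A hedged sketch of the hybrid idea on synthetic data is given below: partial least squares compresses the predictors and a support vector regressor is trained on the PLS scores, with a plain SVR on the raw inputs for comparison. The predictor set, component count, and SVR settings are illustrative, not the Tehran CO dataset or the paper's tuning.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(9)

# Synthetic hourly predictors (meteorology, traffic proxies, lagged CO) and CO target.
n, p = 2000, 15
X = rng.normal(size=(n, p))
y = 2.0 + X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] * X[:, 3] + rng.normal(0, 0.3, n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Plain SVM regression on all predictors.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale")).fit(X_tr, y_tr)

# Hybrid PLS-SVM: PLS compresses the predictors, SVR models the reduced scores.
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
svr_pls = SVR(kernel="rbf", C=10.0, gamma="scale").fit(pls.transform(X_tr), y_tr)

print("SVM alone R2:", round(r2_score(y_te, svr.predict(X_te)), 3))
print("PLS-SVM   R2:", round(r2_score(y_te, svr_pls.predict(pls.transform(X_te))), 3))
```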

  8. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling.

    Science.gov (United States)

    Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variable correlation analysis assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the predictions under-estimate the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing the surgical plan influence the soft tissue prediction, and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.

  9. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud

    2017-01-01

    monitoring, fault detection and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution......The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficient as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation...

  10. Composition-Based Prediction of Temperature-Dependent Thermophysical Food Properties: Reevaluating Component Groups and Prediction Models.

    Science.gov (United States)

    Phinney, David Martin; Frelka, John C; Heldman, Dennis Ray

    2017-01-01

    Prediction of temperature-dependent thermophysical properties (thermal conductivity, density, specific heat, and thermal diffusivity) is an important component of process design for food manufacturing. Current models for prediction of thermophysical properties of foods are based on the composition, specifically, fat, carbohydrate, protein, fiber, water, and ash contents, all of which change with temperature. The objectives of this investigation were to reevaluate and improve the prediction expressions for thermophysical properties. Previously published data were analyzed over the temperature range from 10 to 150 °C. These data were analyzed to create a series of relationships between the thermophysical properties and temperature for each food component, as well as to identify the dependence of the thermophysical properties on more specific structural properties of the fats, carbohydrates, and proteins. Results from this investigation revealed that the relationships between the thermophysical properties of the major constituents of foods and temperature can be statistically described by linear expressions, in contrast to the current polynomial models. Links between variability in thermophysical properties and structural properties were observed. Relationships for several thermophysical properties based on more specific constituents have been identified. Distinctions between simple sugars (fructose, glucose, and lactose) and complex carbohydrates (starch, pectin, and cellulose) have been proposed. The relationships between the thermophysical properties and proteins revealed a potential correlation with the molecular weight of the protein. The significance of relating variability in constituent thermophysical properties with structural properties--such as molecular mass--could significantly improve composition-based prediction models and, consequently, the effectiveness of process design. © 2016 Institute of Food Technologists®.

  11. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    Science.gov (United States)

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabin, the operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. The Chinese virtual human body model is built by CATIA software, which will be used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training sample and validation sample, the GEP algorithm is used to obtain the best fitting function between the joint angles and the operating comfort; then, operating comfort can be predicted quantitatively. The operating comfort prediction result for the human-machine interface layout of a driller control room shows that the operating comfort prediction model based on GEP is fast and efficient, has a good prediction effect, and can improve design efficiency.

  12. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    Full Text Available The dynamic deployment technology of the virtual machine is one of the current cloud computing research focuses. Traditional methods mainly act after the degradation of service performance and therefore usually lag. To solve this problem, a new prediction model based on CPU utilization is constructed in this paper. The new CPU-utilization prediction model provides a reference for the VM dynamic deployment process, allowing the deployment to be completed before service performance degrades. This method not only ensures the quality of services but also improves server performance and resource utilization. The new prediction method of CPU utilization based on the ARIMA-BP neural network mainly includes four parts: preprocess the collected data, build the predictive ARIMA-BP neural network model, correct the nonlinear residuals of the time series with the BP prediction algorithm, and obtain the prediction results by analyzing the above data comprehensively.
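
    A minimal sketch of the ARIMA-plus-BP residual-correction structure described above is shown below, assuming the usual hybrid layout in which ARIMA captures the linear component of the CPU-utilization series and a small multilayer perceptron (a BP network) models the nonlinear residuals from their lagged values; the series and lag order are illustrative.

```python
# Hybrid forecast = ARIMA forecast + BP-network correction of the residual.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
cpu = 50 + 10 * np.sin(np.arange(300) / 10) + rng.normal(scale=2, size=300)  # synthetic CPU %

arima = ARIMA(cpu, order=(2, 0, 1)).fit()
resid = arima.resid

# lag-3 embedding of the residuals for the BP correction network
lags = 3
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
y = resid[lags:]
bp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

hybrid_forecast = arima.forecast(1)[0] + bp.predict(resid[-lags:].reshape(1, -1))[0]
print("one-step hybrid forecast:", hybrid_forecast)
```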

  13. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    Science.gov (United States)

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

  14. On Feature Relevance in Image-Based Prediction Models: An Empirical Study

    DEFF Research Database (Denmark)

    Konukoglu, E.; Ganz, Melanie; Van Leemput, Koen

    2013-01-01

    Determining disease-related variations of the anatomy and function is an important step in better understanding diseases and developing early diagnostic systems. In particular, image-based multivariate prediction models and the “relevant features” they produce are attracting attention from the co...

  15. Stabilizing model predictive control of a gantry crane based on flexible set-membership constraints

    NARCIS (Netherlands)

    Iles, Sandor; Lazar, M.; Kolonic, Fetah; Jadranko, Matusko

    2015-01-01

    This paper presents a stabilizing distributed model predictive control of a gantry crane taking into account the variation of cable length. The proposed algorithm is based on the off-line computation of a sequence of 1-step controllable sets and a condition that enables flexible convergence towards

  16. A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.

    Science.gov (United States)

    Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao

    2016-07-01

    Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive values. To develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records. We conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression. Variables with p value CRC prediction models which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R2 of 0.21 with a NRI of 47.6 %. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81) with a McFadden's R2 of 0.27 and a NRI of 60.7 %. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
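
    For illustration only, a laboratory-based risk model of the kind described above can be sketched as a logistic regression on routine blood counts evaluated by AUC; the synthetic data and coefficients below are assumptions and have no connection to the study's UK primary-care dataset.

```python
# Logistic regression on routine lab values, evaluated by AUC (all data synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 4000
labs = np.column_stack([
    rng.normal(42, 4, n),     # hematocrit (%)
    rng.normal(90, 6, n),     # MCV (fL)
    rng.normal(2.0, 0.6, n),  # lymphocytes (10^9/L)
    rng.normal(2.5, 1.0, n),  # neutrophil-lymphocyte ratio
])
# hypothetical label-generating rule, only to create a learnable signal
logit = -4 - 0.15 * (labs[:, 0] - 42) + 0.4 * (labs[:, 3] - 2.5)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(labs, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```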

  17. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Directory of Open Access Journals (Sweden)

    Lei Jia

    Full Text Available The thermostability of protein point mutations is a common concern in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants' experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy change calculations from Rosetta, structural information of the point mutations, as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression are used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods were discussed. The Rosetta calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.

  18. Researches of fruit quality prediction model based on near infrared spectrum

    Science.gov (United States)

    Shen, Yulin; Li, Lian

    2018-04-01

    With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruits; therefore, the measurement of fruit internal quality is increasingly imperative. In general, nondestructive analysis of soluble solid content (SSC) and total acid content (TAC) is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim at establishing a novel fruit internal quality prediction model based on SSC and TAC for Near Infrared Spectrum. Firstly, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP adaboost strong classifier, PCA + ELM and PCA + LS_SVM classifier are designed and implemented respectively. Then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Thirdly, we obtain the optimal models by comparing 15 kinds of prediction models based on the theory of a multi-classifier competition mechanism; specifically, non-parametric estimation is introduced to measure the effectiveness of the proposed models, with the reliability and variance of the nonparametric estimate of each prediction model used to evaluate the prediction result, while the estimated value and confidence interval serve as a reference. The experimental results demonstrate that this approach can better achieve the optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two optimal models obtained from nonparametric estimation; empirical testing indicates that the proposed method can provide more accurate and effective results than other forecasting methods.

  19. Probability-based collaborative filtering model for predicting gene-disease associations.

    Science.gov (United States)

    Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan

    2017-12-28

    Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene-disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned models. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with the results of four state-of-the-art approaches. The results show that PCFM performs better than other advanced approaches. The PCFM model can be leveraged for predictions of disease genes, especially for new human genes or diseases with no known relationships.

  20. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    Directory of Open Access Journals (Sweden)

    Li Deng

    2015-01-01

    Full Text Available In view of the evaluation and decision-making problem of human-machine interface layout design for cabin, the operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. The Chinese virtual human body model is built by CATIA software, which will be used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training sample and validation sample, the GEP algorithm is used to obtain the best fitting function between the joint angles and the operating comfort; then, operating comfort can be predicted quantitatively. The operating comfort prediction result for the human-machine interface layout of a driller control room shows that the operating comfort prediction model based on GEP is fast and efficient, has a good prediction effect, and can improve design efficiency.

  1. Linear Model-Based Predictive Control of the LHC 1.8 K Cryogenic Loop

    CERN Document Server

    Blanco-Viñuela, E; De Prada-Moraga, C

    1999-01-01

    The LHC accelerator will employ 1800 superconducting magnets (for guidance and focusing of the particle beams) in a pressurized superfluid helium bath at 1.9 K. This temperature is a severely constrained control parameter in order to avoid the transition from the superconducting to the normal state. Cryogenic processes are difficult to regulate due to their highly non-linear physical parameters (heat capacity, thermal conductance, etc.) and undesirable peculiarities like non self-regulating process, inverse response and variable dead time. To reduce the requirements on either temperature sensor or cryogenic system performance, various control strategies have been investigated on a reduced-scale LHC prototype built at CERN (String Test). Model Based Predictive Control (MBPC) is a regulation algorithm based on the explicit use of a process model to forecast the plant output over a certain prediction horizon. This predicted controlled variable is used in an on-line optimization procedure that minimizes an approp...

  2. Fast integration-based prediction bands for ordinary differential equation models.

    Science.gov (United States)

    Hass, Helge; Kreutz, Clemens; Timmer, Jens; Kaschek, Daniel

    2016-04-15

    To gain a deeper understanding of biological processes and their relevance in disease, mathematical models are built upon experimental data. Uncertainty in the data leads to uncertainties of the model's parameters and in turn to uncertainties of predictions. Mechanistic dynamic models of biochemical networks are frequently based on nonlinear differential equation systems and feature a large number of parameters, sparse observations of the model components and lack of information in the available data. Due to the curse of dimensionality, classical and sampling approaches propagating parameter uncertainties to predictions are hardly feasible and insufficient. However, for experimental design and to discriminate between competing models, prediction and confidence bands are essential. To circumvent the hurdles of the former methods, an approach to calculate a profile likelihood on arbitrary observations for a specific time point has been introduced, which provides accurate confidence and prediction intervals for nonlinear models and is computationally feasible for high-dimensional models. In this article, reliable and smooth point-wise prediction and confidence bands to assess the model's uncertainty on the whole time-course are achieved via explicit integration with elaborate correction mechanisms. The corresponding system of ordinary differential equations is derived and tested on three established models for cellular signalling. An efficiency analysis is performed to illustrate the computational benefit compared with repeated profile likelihood calculations at multiple time points. The integration framework and the examples used in this article are provided with the software package Data2Dynamics, which is based on MATLAB and freely available at http://www.data2dynamics.org helge.hass@fdm.uni-freiburg.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e

  3. A Network-Based Approach to Modeling and Predicting Product Coconsideration Relations

    Directory of Open Access Journals (Sweden)

    Zhenghui Sha

    2018-01-01

    Full Text Available Understanding customer preferences in consideration decisions is critical to choice modeling in engineering design. While the existing literature has shown that exogenous effects (e.g., product and customer attributes) are deciding factors in customers' consideration decisions, it is not clear how endogenous effects (e.g., the intercompetition among products) would influence such decisions. This paper presents a network-based approach based on Exponential Random Graph Models to study customers' consideration behaviors in engineering design. Our proposed approach is capable of modeling the endogenous effects among products through various network structures (e.g., stars and triangles) besides the exogenous effects, and of predicting whether two products would be considered together. To assess the proposed model, we compare it against a dyadic network model that only considers exogenous effects. Using buyer survey data from the China auto market in 2013 and 2014, we evaluate the goodness of fit and the predictive power of the two models. The results show that our model has a better fit and predictive accuracy than the dyadic network model. This underscores the importance of the endogenous effects on customers' consideration decisions. The insights gained from this research help explain how endogenous effects interact with exogenous effects in affecting customers' decision-making.

  4. Activity Prediction of Schiff Base Compounds using Improved QSAR Models of Cinnamaldehyde Analogues and Derivatives

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2015-10-01

    Full Text Available In past work, QSAR (quantitative structure-activity relationship) models of cinnamaldehyde analogues and derivatives (CADs) have been used to predict the activities of new chemicals based on their mass concentrations, but these approaches are not without shortcomings. Therefore, molar concentrations were used instead of mass concentrations to determine antifungal activity. New QSAR models of CADs against Aspergillus niger and Penicillium citrinum were established, and the molecular design of new CADs was performed. The antifungal properties of the designed CADs were tested, and the experimental Log AR values were in agreement with the predicted Log AR values. The results indicate that the improved QSAR models are more reliable and can be effectively used for CADs molecular design and prediction of the activity of CADs. These findings provide new insight into the development and utilization of cinnamaldehyde compounds.

  5. Development of Demonstrably Predictive Models for Emissions from Alternative Fuels Based Aircraft Engines

    Science.gov (United States)

    2017-05-01

    Final report for SERDP Project WP-2151, "Development of Demonstrably Predictive Models for Emissions from Alternative Fuels Based Aircraft Engines"; only reference-list fragments and report front matter were extracted for this record, and no abstract text is available.

  6. Composite control for raymond mill based on model predictive control and disturbance observer

    Directory of Open Access Journals (Sweden)

    Dan Niu

    2016-03-01

    Full Text Available In the Raymond mill grinding process, precise control of the operating load is vital for high product quality. However, strong external disturbances, such as variations of ore size and ore hardness, usually cause great performance degradation, and it is not easy to keep the current of the Raymond mill constant. Several control strategies have been proposed; however, most of them (such as proportional–integral–derivative and model predictive control) reject disturbances only through feedback regulation, which may lead to poor control performance in the presence of strong disturbances. To improve disturbance rejection, a control method based on model predictive control and a disturbance observer is put forward in this article. The scheme employs the disturbance observer as feedforward compensation and the model predictive control controller as feedback regulation. The test results illustrate that, compared with the model predictive control method, the proposed disturbance observer–model predictive control method obtains significant superiority in disturbance rejection, such as shorter settling time and smaller peak overshoot under strong disturbances.

  7. A deep learning-based multi-model ensemble method for cancer prediction.

    Science.gov (United States)

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
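
    The ensemble idea above can be sketched as follows: several base classifiers are fitted to expression-like features and a small neural network is trained on their predicted probabilities. This is a simplified stand-in (base-learner probabilities are taken on the training data rather than via cross-validated stacking), and all data are synthetic.

```python
# Multi-model ensemble: five base classifiers, ensembled by a small neural network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=50, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bases = [RandomForestClassifier(random_state=0), GradientBoostingClassifier(random_state=0),
         LogisticRegression(max_iter=1000), SVC(probability=True), KNeighborsClassifier()]
P_tr = np.column_stack([m.fit(X_tr, y_tr).predict_proba(X_tr)[:, 1] for m in bases])
P_te = np.column_stack([m.predict_proba(X_te)[:, 1] for m in bases])

ensemble = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(P_tr, y_tr)
print("ensemble accuracy:", ensemble.score(P_te, y_te))
```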

  8. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to raise the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
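
    A plain particle swarm search over SVM hyper-parameters, in the spirit of the NAPSO-SVM scheme above but without the natural-selection and simulated-annealing refinements, might look like the sketch below; the data, parameter ranges and swarm settings are illustrative assumptions.

```python
# Plain PSO over (log10 C, log10 gamma) for an SVR, scored by cross-validated MSE.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)   # synthetic "dynamic error" signal

def fitness(p):                                    # p = (log10 C, log10 gamma)
    svr = SVR(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(svr, X, y, cv=3, scoring="neg_mean_squared_error").mean()

n, dims = 12, 2
pos = rng.uniform(-2, 2, size=(n, dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(15):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -2, 2)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (C, gamma):", 10 ** gbest)
```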

  9. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    Directory of Open Access Journals (Sweden)

    Su Yang

    Full Text Available Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors into an 1-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms the traditional methods the inputs of which are confined as the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks.

  10. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    Science.gov (United States)

    Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie

    2015-01-01

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors into an 1-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms the traditional methods the inputs of which are confined as the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks.

  11. Research on prediction of agricultural machinery total power based on grey model optimized by genetic algorithm

    Science.gov (United States)

    Xie, Yan; Li, Mu; Zhou, Jin; Zheng, Chang-zheng

    2009-07-01

    Agricultural machinery total power is an important index to reflect and evaluate the level of agricultural mechanization. It is the power source of agricultural production and a main factor in enhancing comprehensive agricultural production capacity, expanding production scale and increasing farmers' income. Its demand is affected by natural, economic, technological, social and other "grey" factors. Therefore, grey system theory can be used to analyze the development of agricultural machinery total power. A method based on a genetic algorithm optimizing the grey modeling process is introduced in this paper. This method makes full use of the advantages of the grey prediction model and of the global optimization characteristics of the genetic algorithm, so the prediction model is more accurate. Based on data from a province, the GM (1, 1) model for predicting agricultural machinery total power was given using grey system theory and the genetic algorithm. The result indicates that the model can be used as an effective tool for predicting agricultural machinery total power.
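
    The grey prediction step itself reduces to the classical GM(1,1) recursion, sketched below without the genetic-algorithm tuning described in the record; the input series of total agricultural machinery power is made up for illustration.

```python
# Plain GM(1,1): accumulate the series, fit the grey differential equation by least
# squares, then restore forecasts by differencing the fitted accumulated series.
import numpy as np

x0 = np.array([321.0, 345.2, 370.1, 398.4, 426.9, 458.3])   # hypothetical totals (10^4 kW)

x1 = np.cumsum(x0)                                           # accumulated generating operation
z1 = 0.5 * (x1[1:] + x1[:-1])                                # background values
B = np.column_stack([-z1, np.ones_like(z1)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]             # develop coefficient and grey input

def predict(k):
    """Restored value at position k (0-based); k = len(x0) gives the next period."""
    x1_hat = (x0[0] - b / a) * np.exp(-a * np.arange(k + 1)) + b / a
    return x1_hat[0] if k == 0 else x1_hat[k] - x1_hat[k - 1]

print("next-period forecast:", predict(len(x0)))
```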

  12. Knowledge-based artificial neural network model to predict the properties of alpha+ beta titanium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Banu, P. S. Noori; Rani, S. Devaki [Dept. of Metallurgical Engineering, Jawaharlal Nehru Technological University, Hyderabad (India)]

    2016-08-15

    In view of emerging applications of alpha+beta titanium alloys in aerospace and defense, we have aimed to develop a Back propagation neural network (BPNN) model capable of predicting the properties of these alloys as functions of alloy composition and/or thermomechanical processing parameters. The optimized BPNN model architecture was based on the sigmoid transfer function and has one hidden layer with ten nodes. The BPNN model showed excellent predictability of five properties: Tensile strength (r: 0.96), yield strength (r: 0.93), beta transus (r: 0.96), specific heat capacity (r: 1.00) and density (r: 0.99). The developed BPNN model was in agreement with the experimental data in demonstrating the individual effects of alloying elements in modulating the above properties. This model can serve as the platform for the design and development of new alpha+beta titanium alloys in order to attain desired strength, density and specific heat capacity.

  13. A Validation of Subchannel Based CHF Prediction Model for Rod Bundles

    International Nuclear Information System (INIS)

    Hwang, Dae-Hyun; Kim, Seong-Jin

    2015-01-01

    A large CHF data base was procured from various sources, which included square and non-square lattice test bundles. CHF prediction accuracy was evaluated for various models including the CHF lookup table method, empirical correlations, and phenomenological DNB models. The parametric effect of the mass velocity and unheated wall has been investigated from the experimental results and incorporated into the development of a local parameter CHF correlation applicable to APWR conditions. According to the CHF design criterion, the CHF should not occur at the hottest rod in the reactor core during normal operation and anticipated operational occurrences with at least a 95% probability at a 95% confidence level. This is accomplished by assuring that the minimum DNBR (Departure from Nucleate Boiling Ratio) in the reactor core is greater than the limit DNBR, which accounts for the accuracy of the CHF prediction model. The limit DNBR can be determined from the inverse of the lower tolerance limit of M/P that is evaluated from the measured-to-predicted CHF ratios for the relevant CHF data base. It is important to evaluate the adequacy of the CHF prediction model for application to the actual reactor core conditions. Validation of the CHF prediction model provides the degree of accuracy inferred from the comparison of solution and data. To achieve a required accuracy for the CHF prediction model, it may be necessary to calibrate the model parameters by employing the validation results. If the accuracy of the model is acceptable, then it is applied to the real complex system with the inferred accuracy of the model. In a conventional approach, the accuracy of the CHF prediction model was evaluated from the M/P statistics for the relevant CHF data base, obtained by comparing the nominal values of the predicted and measured CHFs. The experimental uncertainty for the CHF data was not considered in this approach to determine the limit DNBR. When a subchannel based CHF prediction model

  14. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.

  15. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their results might be subjective for a particular set of circumstances, and therefore it is not easy to quantify reliability. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted, built around the statistical analysis method as the most readily applied prediction approach. The failure rate models that were applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with a general purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the system that we have developed is expected to contribute to enhancing the reliability of electronic components

  16. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    Science.gov (United States)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that ANN point predictions lack reliability since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the 2nd stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval and (iii) minimum width of the prediction interval. The method is illustrated using a real world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived
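
    As a simplified stand-in for the two-stage GA calibration described above, a bootstrap ensemble of small neural networks can be trained and the spread of its member forecasts used as an approximate prediction interval; the hydrologic inputs and target below are synthetic.

```python
# Bootstrap ensemble of MLPs; the 2.5-97.5 percentile band acts as a prediction interval.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, size=(400, 3))                      # e.g. lagged rainfall/flow inputs
y = 5 * X[:, 0] + np.sqrt(X[:, 1]) + rng.normal(scale=2, size=400)

preds = []
for seed in range(20):                                     # 20 ensemble members
    idx = rng.integers(0, len(X), len(X))                  # bootstrap resample
    m = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=seed)
    m.fit(X[idx], y[idx])
    preds.append(m.predict(X))
preds = np.array(preds)

lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)
coverage = np.mean((y >= lower) & (y <= upper))
print("mean interval width:", (upper - lower).mean(), "coverage:", coverage)
```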

  17. Predictive sensor based x-ray calibration using a physical model

    International Nuclear Information System (INIS)

    Fuente, Matias de la; Lutz, Peter; Wirtz, Dieter C.; Radermacher, Klaus

    2007-01-01

    Many computer assisted surgery systems are based on intraoperative x-ray images. To achieve reliable and accurate results these images have to be calibrated concerning geometric distortions, which can be distinguished between constant distortions and distortions caused by magnetic fields. Instead of using an intraoperative calibration phantom that has to be visible within each image resulting in overlaying markers, the presented approach directly takes advantage of the physical background of the distortions. Based on a computed physical model of an image intensifier and a magnetic field sensor, an online compensation of distortions can be achieved without the need of an intraoperative calibration phantom. The model has to be adapted once to each specific image intensifier through calibration, which is based on an optimization algorithm systematically altering the physical model parameters, until a minimal error is reached. Once calibrated, the model is able to predict the distortions caused by the measured magnetic field vector and build an appropriate dewarping function. The time needed for model calibration is not yet optimized and takes up to 4 h on a 3 GHz CPU. In contrast, the time needed for distortion correction is less than 1 s and therefore absolutely acceptable for intraoperative use. First evaluations showed that by using the model based dewarping algorithm the distortions of an XRII with a 21 cm FOV could be significantly reduced. The model was able to predict and compensate distortions by approximately 80% to a remaining error of 0.45 mm (max) (0.19 mm rms)

  18. Predictive Accuracy of a Cardiovascular Disease Risk Prediction Model in Rural South India – A Community Based Retrospective Cohort Study

    Directory of Open Access Journals (Sweden)

    Farah N Fathima

    2015-03-01

    Full Text Available Background: Identification of individuals at risk of developing cardiovascular diseases by risk stratification is the first step in primary prevention. Aims & Objectives: To assess the five year risk of developing a cardiovascular event from retrospective data and to assess the predictive accuracy of the non laboratory based National Health and Nutrition Examination Survey (NHANES risk prediction model among individuals in a rural South Indian population. Materials & Methods: A community based retrospective cohort study was conducted in three villages where risk stratification was done for all eligible adults aged between 35-74 years at the time of initial assessment using the NHANES risk prediction charts. Household visits were made after a period of five years by trained doctors to determine cardiovascular outcomes. Results: 521 people fulfilled the eligibility criteria of whom 486 (93.3% could be traced after five years. 56.8% were in low risk, 36.6% were in moderate risk and 6.6% were in high risk categories. 29 persons (5.97% had had cardiovascular events over the last five years of which 24 events (82.7% were nonfatal and five (17.25% were fatal. The mean age of the people who developed cardiovascular events was 57.24 ± 9.09 years. The odds ratios for the three levels of risk showed a linear trend with the odds ratios for the moderate risk and high risk category being 1.35 and 1.94 respectively with the low risk category as baseline. Conclusion: The non laboratory based NHANES charts did not accurately predict the occurrence of cardiovascular events in any of the risk categories.

  19. Gas Emission Prediction Model of Coal Mine Based on CSBP Algorithm

    Directory of Open Access Journals (Sweden)

    Xiong Yan

    2016-01-01

    Full Text Available In view of the nonlinear characteristics of gas emission in a coal working face, a prediction method is proposed based on a cuckoo search algorithm optimized BP neural network (CSBP). In the CSBP algorithm, the cuckoo search is adopted to optimize the weight and threshold parameters of the BP network and obtain the globally optimal solutions. Furthermore, the twelve main factors affecting the gas emission in the coal working face are taken as input vectors of the CSBP algorithm and the gas emission is taken as the output vector, and then the prediction model of the BP neural network with optimal parameters is established. The results show that the CSBP algorithm has better generalization ability and higher prediction accuracy, and can be utilized effectively in the prediction of coal mine gas emission.

  20. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

    Full Text Available The prediction and risk classification of collapse is an important issue in the process of highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the influence indexes of the collapse activity and extracted the nine main indexes affecting collapse activity as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employ postearthquake collapse data in relation to construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for analysis. The results were analyzed using the back substitution estimation method, showing high accuracy and no errors, and were the same as the prediction result of uncertainty measure. Results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
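
    The distance-discriminant step can be sketched as assigning a new sample to the risk class whose centre is nearest in Mahalanobis distance, as below; the two classes, four indexes and all values are illustrative and not the paper's nine-index dataset.

```python
# Mahalanobis distance discriminant: pick the class with the smallest distance to its centre.
import numpy as np

rng = np.random.default_rng(6)
classes = {0: rng.normal(0.0, 1.0, size=(40, 4)),   # low-risk training samples (synthetic)
           1: rng.normal(2.0, 1.0, size=(40, 4))}   # high-risk training samples (synthetic)

def mahalanobis_label(x):
    dists = {}
    for label, data in classes.items():
        mu = data.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
        d = x - mu
        dists[label] = float(d @ cov_inv @ d)       # squared Mahalanobis distance
    return min(dists, key=dists.get)

print("predicted class:", mahalanobis_label(np.array([1.8, 2.1, 1.9, 2.2])))
```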

  1. Energy saving and prediction modeling of petrochemical industries: A novel ELM based on FAHP

    International Nuclear Information System (INIS)

    Geng, ZhiQiang; Qin, Lin; Han, YongMing; Zhu, QunXiong

    2017-01-01

    Extreme learning machine (ELM), which is a simple single-hidden-layer feed-forward neural network with fast implementation, has been widely applied in many engineering fields. However, it is difficult to enhance the modeling ability of extreme learning when handling high-dimensional noisy data. Therefore, a predictive modeling method based on the ELM integrated with fuzzy C-means clustering and the analytic hierarchy process (FAHP), termed FAHP-ELM, is proposed. The fuzzy C-means algorithm is used to cluster the input attributes of the high-dimensional data. The analytic hierarchy process (AHP) based on entropy weights is proposed to filter the redundant information and extract characteristic components. Then, the fused data is used as the input of the ELM. Compared with the back-propagation (BP) neural network and the ELM, the proposed model has better performance in terms of speed of convergence, generalization and modeling accuracy on University of California Irvine (UCI) benchmark datasets. Finally, the proposed method was applied to build the energy saving and predictive model of the purified terephthalic acid (PTA) solvent system and the ethylene production system. The experimental results demonstrated the validity of the proposed method. Meanwhile, it could enhance the efficiency of energy utilization and achieve energy conservation and emission reduction. - Highlights: • The ELM integrated FAHP approach is proposed. • The FAHP-ELM prediction model is effectively verified through UCI datasets. • The energy saving and prediction model of petrochemical industries is obtained. • The method is efficient in improvement of energy efficiency and emission reduction.

  2. Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models

    Directory of Open Access Journals (Sweden)

    Yang beibei Ji

    2014-01-01

    Full Text Available Accurate prediction of incident duration is not only important information for a Traffic Incident Management System, but also an effective input for travel time prediction. In this paper, hazard based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads' STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best fitting distributions are drawn for both clearance and arrival time for 3 types of incident: crash, stationary vehicle, and hazard. The results show that Gamma, Log-logistic, and Weibull are the best fit for crash, stationary vehicle, and hazard incidents, respectively. The significant impact factors are identified for crash clearance time and arrival time. The quantitative influences for crash and hazard incidents are presented for both clearance and arrival times. The model accuracy is analyzed at the end.
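
    The distribution-selection step described above can be sketched by fitting candidate duration distributions to (synthetic) clearance times by maximum likelihood and comparing them by log-likelihood or AIC; SciPy's fisk distribution is used here as the log-logistic.

```python
# Fit candidate duration distributions and compare goodness of fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
clearance = rng.gamma(shape=2.0, scale=20.0, size=500)     # synthetic clearance times (minutes)

candidates = {"gamma": stats.gamma, "log-logistic": stats.fisk, "weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(clearance, floc=0)                   # fix location at zero
    ll = dist.logpdf(clearance, *params).sum()
    aic = 2 * (len(params) - 1) - 2 * ll                   # loc is fixed, not estimated
    print(f"{name:12s} logL={ll:8.1f} AIC={aic:8.1f}")
```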

  3. Learning-based Nonlinear Model Predictive Control to Improve Vision-based Mobile Robot Path Tracking

    Science.gov (United States)

    2015-07-01

    (Abstract not available; the extracted fragment defines the report's MPC cost function.) The cost function is J(u) = (x_d - x)^T Q_x (x_d - x) + u^T R u, where Q_x ∈ R^(K n_x × K n_x) is positive semi-definite, R and u are as previously defined, x_d = (x_{d,k+1}, ..., x_{d,k+K}) is the sequence of desired states, x = (x_{k+1}, ..., x_{k+K}) is the sequence of predicted states, and K is the prediction horizon. [Figure 5: Definition of the robot velocities, v_k and ω_k, and three pose variables.]

  4. A network security situation prediction model based on wavelet neural network with optimized parameters

    Directory of Open Access Journals (Sweden)

    Haibo Zhang

    2016-08-01

    Full Text Available Security incidents in networks are sudden and uncertain, and it is very hard to precisely predict the network security situation with traditional methods. In order to improve the prediction accuracy of the network security situation, we build a network security situation prediction model based on a Wavelet Neural Network (WNN) with parameters optimized by an Improved Niche Genetic Algorithm (INGA). The proposed model adopts the WNN, which has strong nonlinear ability and fault-tolerance performance. Also, the parameters of the WNN are optimized through an adaptive genetic algorithm (GA) so that the WNN searches more effectively. Considering the problem that the adaptive GA converges slowly and easily falls into premature convergence, we introduce a novel niche technology with a dynamic fuzzy clustering and elimination mechanism to solve the premature convergence of the GA. Our final simulation results show that the proposed INGA-WNN prediction model is more reliable and effective, and it achieves faster convergence speed and higher prediction accuracy than the Genetic Algorithm-Wavelet Neural Network (GA-WNN), the Genetic Algorithm-Back Propagation Neural Network (GA-BPNN) and the WNN.

  5. Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation

    Science.gov (United States)

    Ekin Aydin, Boran; Rutten, Martine

    2016-04-01

    Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, due to water loss in open water systems by seepage, leakage and evaporation, a mismatch between the model and the real system will be created. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of the water level in open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydro-dynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we also applied a well-known disturbance-modeling offset-free control scheme to the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance-modeling offset-free control scheme.

  6. Extended state observer based fuzzy model predictive control for ultra-supercritical boiler-turbine unit

    International Nuclear Information System (INIS)

    Zhang, Fan; Wu, Xiao; Shen, Jiong

    2017-01-01

    Highlights: • A novel ESOFMPC is proposed based on the combination of ESO and stable MPC. • The improved ESO can overcome unknown disturbances on any channel of MIMO system. • Nonlinearity and disturbance of boiler-turbine unit can be handled simultaneously. - Abstract: The regulation of ultra-supercritical (USC) boiler-turbine unit in large-scale power plants is vulnerable to various unknown disturbances, meanwhile, the internal nonlinearity makes it a challenging task for wide range load tracking. To overcome these two issues simultaneously, an extended state observer based fuzzy model predictive control is proposed for the USC boiler-turbine unit. Firstly, the fuzzy model of a 1000-MW coal-fired USC boiler-turbine unit is established through the nonlinearity analysis. Then a fuzzy stable model predictive controller is devised on the fuzzy model using output cost function for the purpose of wide range load tracking. An improved linear extended state observer, which can estimate plant behavior variations and unknown disturbances regardless of the direct feedthrough characteristic of the system, is synthesized with the predictive controller to enhance its disturbance rejection property. Closed-loop stability of the overall control system is guaranteed. Simulation results on a 1000-MW USC boiler-turbine unit model demonstrate the effectiveness of the proposed approach.

  7. A new solar power output prediction based on hybrid forecast engine and decomposition model.

    Science.gov (United States)

    Zhang, Weijiang; Dang, Hongshe; Simoes, Rolando

    2018-06-12

    Given the growing trend of photovoltaic (PV) energy as a clean energy source in electrical networks and its uncertain nature, PV energy prediction has been studied by researchers in recent decades. This problem directly affects operation in the power network, and due to the high volatility of this signal, an accurate prediction model is demanded. A new prediction model based on the Hilbert Huang transform (HHT) and the integration of improved empirical mode decomposition (IEMD) with feature selection and a forecast engine is presented in this paper. The proposed approach is divided into three main sections. In the first section, the signal is decomposed by the proposed IEMD as an accurate decomposition tool. To increase the accuracy of the proposed method, a new interpolation method has been used instead of cubic spline curve (CSC) fitting in EMD. Then the obtained output is entered into the new feature selection procedure to choose the best candidate inputs. Finally, the signal is predicted by a hybrid forecast engine composed of support vector regression (SVR) based on an intelligent algorithm. The effectiveness of the proposed approach has been verified over a number of real-world engineering test cases in comparison with other well-known models. The obtained results prove the validity of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
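
    A minimal sketch of the decompose-then-forecast idea described above is given below, assuming the PyEMD package is available: plain empirical mode decomposition stands in for the proposed IEMD, an untuned SVR stands in for the intelligently tuned forecast engine, and the PV-like signal is synthetic.

```python
import numpy as np
from PyEMD import EMD                 # assumed dependency (the "EMD-signal" package)
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.arange(500)
# Synthetic PV-like signal: daily cycle plus a faster component and noise.
signal = (np.sin(2 * np.pi * t / 24) + 0.3 * np.sin(2 * np.pi * t / 6)
          + 0.1 * rng.standard_normal(t.size))

imfs = EMD()(signal)                  # intrinsic mode functions plus residue

def forecast_component(c, lag=24):
    """One-step-ahead forecast of a single component with an SVR on lagged values."""
    X = np.array([c[i:i + lag] for i in range(len(c) - lag)])
    y = c[lag:]
    model = SVR(kernel="rbf", C=10.0).fit(X, y)
    return model.predict(c[-lag:].reshape(1, -1))[0]

# The forecast of the original signal is the sum of the component forecasts.
prediction = sum(forecast_component(imf) for imf in imfs)
print(prediction)
```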

  8. Soil-pipe interaction modeling for pipe behavior prediction with super learning based methods

    Science.gov (United States)

    Shi, Fang; Peng, Xiang; Liu, Huan; Hu, Yafei; Liu, Zheng; Li, Eric

    2018-03-01

    Underground pipelines are subject to severe distress from the surrounding expansive soil. To investigate the structural response of water mains to varying soil movements, field data, including pipe wall strains, in situ soil water content, soil pressure and temperature, were collected. Research on monitoring data analysis has been reported, but the relationship between soil properties and pipe deformation has not been well interpreted. To characterize the relationship between soil properties and pipe deformation, this paper presents a super learning based approach combining feature selection algorithms to predict the structural behavior of water mains in different soil environments. Furthermore, an automatic variable selection method, i.e. the recursive feature elimination algorithm, was used to identify the critical predictors contributing to the pipe deformations. To investigate the adaptability of super learning to different predictive models, this research applied super learning based methods to three different datasets. The predictive performance was evaluated by R-squared, root-mean-square error and mean absolute error. Based on the prediction performance evaluation, the superiority of super learning was validated and demonstrated by predicting three types of pipe deformations accurately. In addition, a comprehensive understanding of the water mains' working environments becomes possible.
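
    The following sketch illustrates the general recipe of recursive feature elimination followed by a stacked ("super learner" style) ensemble using scikit-learn; the synthetic data, base learners and selected feature count are assumptions, not the soil-pipe datasets or learner library of the paper.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for soil/environment predictors and a pipe strain response.
X, y = make_regression(n_samples=400, n_features=12, n_informative=6, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Recursive feature elimination to keep the most informative predictors,
# followed by a stacked ensemble as a simple super-learner analogue.
model = make_pipeline(
    RFE(estimator=LinearRegression(), n_features_to_select=6),
    StackingRegressor(
        estimators=[("rf", RandomForestRegressor(random_state=0)), ("ridge", Ridge())],
        final_estimator=LinearRegression(),
    ),
)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred),
      "RMSE:", mean_squared_error(y_te, pred) ** 0.5,
      "MAE:", mean_absolute_error(y_te, pred))
```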

  9. Deep Belief Network Based Hybrid Model for Building Energy Consumption Prediction

    Directory of Open Access Journals (Sweden)

    Chengdong Li

    2018-01-01

    Full Text Available To enhance the prediction performance for building energy consumption, this paper presents a modified deep belief network (DBN) based hybrid model. The proposed hybrid model combines the outputs from the DBN model with the energy-consuming pattern to yield the final prediction results. The energy-consuming pattern in this study represents the periodicity property of building energy consumption and can be extracted from the observed historical energy consumption data. The residual data generated by removing the energy-consuming pattern from the original data are utilized to train the modified DBN model. The training of the modified DBN includes two steps, the first of which adopts the contrastive divergence (CD) algorithm to optimize the hidden parameters in a pre-training way, while the second determines the output weighting vector by the least squares method. The proposed hybrid model is applied to two kinds of building energy consumption data sets that have different energy-consuming patterns (daily periodicity and weekly periodicity). In order to examine the advantages of the proposed model, four popular artificial intelligence methods, namely the backward propagation neural network (BPNN), the generalized radial basis function neural network (GRBFNN), the extreme learning machine (ELM), and the support vector regressor (SVR), are chosen as the comparative approaches. Experimental results demonstrate that the proposed DBN based hybrid model has the best performance compared with the comparative techniques. It is also worth noting that all the predictors constructed by utilizing the energy-consuming patterns perform better than those designed only with the original data. This verifies the usefulness of incorporating the energy-consuming patterns. The proposed approach can also be extended and applied to some other similar prediction problems that have periodicity patterns, e.g., traffic flow forecasting and electricity consumption prediction.
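
    A minimal sketch of the pattern-plus-residual idea follows: the daily energy-consuming pattern is taken as the average profile per hour of day, a plain MLP regressor stands in for the modified DBN, and the hourly load series is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)                              # 60 synthetic days of hourly data
daily_shape = 10 + 5 * np.sin(2 * np.pi * (hours % 24) / 24)
load = daily_shape + 0.5 * rng.standard_normal(hours.size)

# 1) Extract the energy-consuming pattern: average profile per hour of day.
pattern = np.array([load[hours % 24 == h].mean() for h in range(24)])

# 2) Train a model (MLP as a stand-in for the modified DBN) on the residuals.
residual = load - pattern[hours % 24]
lag = 24
X = np.array([residual[i:i + lag] for i in range(len(residual) - lag)])
y = residual[lag:]
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)

# 3) Final prediction = predicted residual + periodic pattern for the next hour.
next_hour = hours[-1] + 1
prediction = model.predict(residual[-lag:].reshape(1, -1))[0] + pattern[next_hour % 24]
print(prediction)
```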

  10. Trait-based representation of biological nitrification: Model development, testing, and predicted community composition

    Directory of Open Access Journals (Sweden)

    Nick eBouskill

    2012-10-01

    Full Text Available Trait-based microbial models show clear promise as tools to represent the diversity and activity of microorganisms across ecosystem gradients. These models parameterize specific traits that determine the relative fitness of an ‘organism’ in a given environment, and represent the complexity of biological systems across temporal and spatial scales. In this study we introduce a microbial community trait-based modeling framework (MicroTrait) focused on nitrification (MicroTrait-N) that represents the ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA) and nitrite oxidizing bacteria (NOB) using traits related to enzyme kinetics and physiological properties. We used this model to predict nitrifier diversity, ammonia (NH3) oxidation rates and nitrous oxide (N2O) production across pH, temperature and substrate gradients. Predicted nitrifier diversity was predominantly determined by temperature and substrate availability, the latter was strongly influenced by pH. The model predicted that transient N2O production rates are maximized by a decoupling of the AOB and NOB communities, resulting in an accumulation and detoxification of nitrite to N2O by AOB. However, cumulative N2O production (over six-month simulations) is maximized in a system where the relationship between AOB and NOB is maintained. When the reactions uncouple, the AOB become unstable and biomass declines rapidly, resulting in decreased NH3 oxidation and N2O production. We evaluated this model against site-level chemical datasets from the interior of Alaska and accurately simulated NH3 oxidation rates and the relative ratio of AOA:AOB biomass. The predicted community structure and activity indicate that (a) parameterization of a small number of traits may be sufficient to broadly characterize nitrifying community structure and (b) changing decadal trends in climate and edaphic conditions could impact nitrification rates in ways that are not captured by extant biogeochemical models.

  11. Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model.

    Science.gov (United States)

    Xianfang, Wang; Junmei, Wang; Xiaolei, Wang; Yue, Zhang

    2017-01-01

    The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To solve the problem of information redundancy existing in current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure the significance level of each feature for the result, and attributes with smaller F values are filtered out in a rough selection. Secondly, the degree of redundancy is calculated by the Pearson Correlation Coefficient, and a threshold is set to filter attributes with weak independence to obtain the refined feature set. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. The prediction model will be accessible free of charge at our web server.
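
    The two-stage selection idea (ANOVA F-value rough selection, Pearson-correlation redundancy refinement) followed by an SVM can be sketched as below with scikit-learn; the synthetic data, the number of features kept and the 0.9 correlation threshold are placeholders rather than the settings used for the conotoxin data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=50, n_informative=10, random_state=0)

# Rough selection: keep the features with the largest ANOVA F-values.
rough = SelectKBest(score_func=f_classif, k=20).fit(X, y)
X_rough = rough.transform(X)

# Refinement: drop features that are highly correlated (redundant) with an
# already-kept feature; the 0.9 threshold is an arbitrary placeholder.
corr = np.corrcoef(X_rough, rowvar=False)
keep = []
for j in range(X_rough.shape[1]):
    if all(abs(corr[j, k]) < 0.9 for k in keep):
        keep.append(j)
X_sel = X_rough[:, keep]

# SVM classification on the selected features.
scores = cross_val_score(SVC(kernel="rbf", C=1.0), X_sel, y, cv=5)
print("features kept:", len(keep), "accuracy:", scores.mean())
```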

  12. QoS prediction for web services based on user-trust propagation model

    Science.gov (United States)

    Thinh, Le-Van; Tu, Truong-Dinh

    2017-10-01

    Web service providers and users play an important online role; however, the rapidly growing number of service providers and users creates many web services with similar functionality. This is an active area of research, and researchers seek to propose solutions that recommend the best services to users. Collaborative filtering (CF) algorithms are widely used in recommendation systems, although these are less effective for cold-start users. Recently, some recommender systems have been developed based on social network models, and the results show that social network models have better performance in terms of CF, especially for cold-start users. However, most social network-based recommendations do not consider the user's mood. This is a hidden source of information, and it is very useful in improving prediction efficiency. In this paper, we introduce a new model called User-Trust Propagation (UTP). The model uses a combination of trust and the mood of users to predict the QoS value, and matrix factorisation (MF) is used to train the model. The experimental results show that the proposed model gives better accuracy than other models, especially for the cold-start problem.

  13. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    Science.gov (United States)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems are performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear ones such as tools based on statistical learning theory, have been extensively used. The common difficulty for all methods is the determination of sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series combining both the wavelet transform and a nonlinear model. The present model employs the merits of the wavelet transform and a nonlinear time series model. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are simulated by the nonlinear model. The hybrid methodology is formulated in a manner to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand. Prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results conclude that the wavelet-based time series model can be utilized for simulating and forecasting hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
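
    A rough sketch of the wavelet-decomposition hybrid follows, assuming PyWavelets: each sub-series is reconstructed from one group of wavelet coefficients and forecast with a generic nonlinear regressor (SVR), and the hydrologic series is synthetic.

```python
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(2)
t = np.arange(512)
# Synthetic "flow" series with two periodic components and noise.
flow = (np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 12)
        + 0.2 * rng.standard_normal(t.size))

# Multilevel wavelet decomposition of the series into coefficient groups.
coeffs = pywt.wavedec(flow, "db4", level=3)

def component(i):
    """Reconstruct the i-th sub-series by zeroing all other coefficient arrays."""
    parts = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    return pywt.waverec(parts, "db4")[: len(flow)]

def forecast(series, lag=16):
    """One-step-ahead forecast of one sub-series with a nonlinear (SVR) model."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    model = SVR().fit(X, y)
    return model.predict(series[-lag:].reshape(1, -1))[0]

# Forecast of the original series = sum of the component forecasts.
prediction = sum(forecast(component(i)) for i in range(len(coeffs)))
print(prediction)
```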

  14. Combined Prediction Model of Death Toll for Road Traffic Accidents Based on Independent and Dependent Variables

    Directory of Open Access Journals (Sweden)

    Feng Zhong-xiang

    2014-01-01

    Full Text Available In order to build a combined model which can capture the variation of death toll data for road traffic accidents, reflect the influence of multiple factors on traffic accidents, and improve prediction accuracy, the Verhulst model was built based on the death toll of road traffic accidents in China from 2002 to 2011; car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors to build the death toll multivariate linear regression model. Then the two models were combined into a combined prediction model with weight coefficients. The Shapley value method was applied to calculate the weight coefficients by assessing the contributions of the individual models. Finally, the combined model was used to recalculate the death toll from 2002 to 2011, and the combined model was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data but also quantify the degree of influence of each influencing factor on the death toll, and it had high accuracy as well as strong practicability.
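
    The combination step can be sketched as below; for brevity the weights come from a simple inverse-error rule rather than the Shapley value computation used in the paper, and all numbers are placeholders.

```python
import numpy as np

# Placeholder observed series and fitted values of the two individual models
# (arbitrary numbers, not the Chinese road-accident statistics used in the paper).
observed   = np.array([109.0, 104.0, 107.0, 99.0, 89.0, 82.0, 73.0, 68.0, 65.0, 62.0])
verhulst   = np.array([111.0, 106.0, 104.0, 97.0, 91.0, 83.0, 75.0, 69.0, 64.0, 61.0])
regression = np.array([108.0, 103.0, 108.0, 100.0, 88.0, 80.0, 72.0, 67.0, 66.0, 63.0])

# Weight each model by the inverse of its mean absolute error; this is a simple
# stand-in for the Shapley-value contribution assessment described in the abstract.
errors  = np.array([np.abs(observed - verhulst).mean(), np.abs(observed - regression).mean()])
weights = (1.0 / errors) / (1.0 / errors).sum()

combined = weights[0] * verhulst + weights[1] * regression
print("weights:", weights.round(3), "combined MAE:", np.abs(observed - combined).mean().round(3))
```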

  16. Sparse Power-Law Network Model for Reliable Statistical Predictions Based on Sampled Data

    Directory of Open Access Journals (Sweden)

    Alexander P. Kartun-Giles

    2018-04-01

    Full Text Available A projective network model is a model that enables predictions to be made based on a subsample of the network data, with the predictions remaining unchanged if a larger sample is taken into consideration. An exchangeable model is a model that does not depend on the order in which nodes are sampled. Despite a large variety of non-equilibrium (growing) and equilibrium (static) sparse complex network models that are widely used in network science, how to reconcile sparseness (constant average degree) with the desired statistical properties of projectivity and exchangeability is currently an outstanding scientific problem. Here we propose a network process with hidden variables which is projective and can generate sparse power-law networks. Despite the model not being exchangeable, it can be closely related to exchangeable uncorrelated networks as indicated by its information theory characterization and its network entropy. The use of the proposed network process as a null model is here tested on real data, indicating that the model offers a promising avenue for statistical network modelling.

  17. An IL28B genotype-based clinical prediction model for treatment of chronic hepatitis C.

    Directory of Open Access Journals (Sweden)

    Thomas R O'Brien

    Full Text Available Genetic variation in IL28B and other factors are associated with sustained virological response (SVR) after pegylated-interferon/ribavirin treatment for chronic hepatitis C (CHC). Using data from the HALT-C Trial, we developed a model to predict a patient's probability of SVR based on IL28B genotype and clinical variables. HALT-C enrolled patients with advanced CHC who had failed previous interferon-based treatment. Subjects were re-treated with pegylated-interferon/ribavirin during trial lead-in. We used step-wise logistic regression to calculate adjusted odds ratios (aOR) and create the predictive model. Leave-one-out cross-validation was used to predict a priori probabilities of SVR and determine the area under the receiver operator characteristic curve (AUC). Among 646 HCV genotype 1-infected European American patients, 14.2% achieved SVR. The IL28B rs12979860-CC genotype was the strongest predictor of SVR (aOR, 7.56). Patients with a predicted probability of SVR >10% (43.3% of subjects) had an SVR rate of 27.9% and accounted for 84.8% of subjects actually achieving SVR. To verify that consideration of both IL28B genotype and clinical variables is required for treatment decisions, we calculated AUC values from published data for the IDEAL Study. A clinical prediction model based on IL28B genotype and clinical variables can yield useful individualized predictions of the probability of treatment success that could increase SVR rates and decrease the frequency of futile treatment among patients with CHC.

  18. Prediction of recombinant protein overexpression in Escherichia coli using a machine learning based model (RPOLP).

    Science.gov (United States)

    Habibi, Narjeskhatoon; Norouzi, Alireza; Mohd Hashim, Siti Z; Shamsir, Mohd Shahir; Samian, Razip

    2015-11-01

    Recombinant protein overexpression, an important biotechnological process, is governed by complex biological rules which are mostly unknown, and is in need of an intelligent algorithm to avoid resource-intensive, lab-based trial and error experiments for determining the expression level of the recombinant protein. The purpose of this study is to propose a predictive model to estimate the level of recombinant protein overexpression, for the first time in the literature, using a machine learning approach based on the sequence, expression vector, and expression host. The expression host was confined to Escherichia coli, which is the most popular bacterial host for overexpressing recombinant proteins. To provide a handle on the problem, the overexpression level was categorized as low, medium and high. A set of features which were likely to affect the overexpression level was generated based on known facts (e.g. gene length) and knowledge gathered from the related literature. Then, a representative sub-set of the features generated in the previous step was determined using feature selection techniques. Finally a predictive model was developed using a random forest classifier, which was able to adequately classify the multi-class, imbalanced, small dataset constructed. The results showed that the predictive model provided a promising accuracy of 80% on average in estimating the overexpression level of a recombinant protein. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model

    Science.gov (United States)

    Morin, Mario A.; Ficarazzo, Francesco

    2006-04-01

    Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters, and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of the size distribution of rock fragments have been developed. In this study, a blast fragmentation Monte Carlo-based simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of explosives, and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
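
    A minimal sketch of Monte Carlo sampling around the Kuznetsov mean fragment size and the Rosin-Rammler size distribution is shown below; the input distributions and the simplified uniformity index are illustrative assumptions, not the calibrated simulator of the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 10_000

# Illustrative input distributions (assumed, not from the quarry in the paper).
A   = rng.uniform(6, 8, n_sim)        # rock factor
V0  = rng.normal(120, 10, n_sim)      # rock volume broken per hole [m^3]
Q   = rng.normal(90, 5, n_sim)        # explosive mass per hole [kg]
RWS = 100.0                           # relative weight strength (ANFO reference)
n_u = rng.uniform(1.2, 1.8, n_sim)    # Rosin-Rammler uniformity index (simplified)

# Kuznetsov mean fragment size [cm] and Rosin-Rammler fraction passing 50 cm.
x50 = A * (V0 / Q) ** 0.8 * Q ** (1 / 6) * (115.0 / RWS) ** (19 / 20)
passing_50cm = 1 - np.exp(-np.log(2) * (50.0 / x50) ** n_u)

print("mean X50 [cm]:", x50.mean())
print("P10-P90 of fraction passing 50 cm:", np.percentile(passing_50cm, [10, 90]))
```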

  20. Blinded prospective evaluation of computer-based mechanistic schizophrenia disease model for predicting drug response.

    Directory of Open Access Journals (Sweden)

    Hugo Geerts

    Full Text Available The tremendous advances in understanding the neurobiological circuits involved in schizophrenia have not translated into more effective treatments. An alternative strategy is to use a recently published 'Quantitative Systems Pharmacology' computer-based mechanistic disease model of cortical/subcortical and striatal circuits based upon preclinical physiology, human pathology and pharmacology. The physiology of 27 relevant dopamine, serotonin, acetylcholine, norepinephrine, gamma-aminobutyric acid (GABA) and glutamate-mediated targets is calibrated using retrospective clinical data on 24 different antipsychotics. The model was challenged to predict quantitatively, in a blinded fashion, the clinical outcome of two experimental antipsychotic drugs; JNJ37822681, a highly selective low-affinity dopamine D(2) antagonist, and ocaperidone, a very high affinity dopamine D(2) antagonist, using only pharmacology and human positron emission tomography (PET) imaging data. The model correctly predicted the lower performance of JNJ37822681 on the positive and negative syndrome scale (PANSS) total score and the higher extra-pyramidal symptom (EPS) liability compared to olanzapine, and the relative performance of ocaperidone against olanzapine, but did not predict the absolute PANSS total score outcome and EPS liability for ocaperidone, possibly due to placebo responses and EPS assessment methods. Because of its virtual nature, this modeling approach can support central nervous system research and development by accounting for unique human drug properties, such as human metabolites, exposure, genotypes and off-target effects, and can be a helpful tool for drug discovery and development.

  1. Prediction of Dissolved Gas Concentrations in Transformer Oil Based on the KPCA-FFOA-GRNN Model

    Directory of Open Access Journals (Sweden)

    Jun Lin

    2018-01-01

    Full Text Available The purpose of analyzing the dissolved gas in transformer oil is to determine the transformer's operating status, and it is an important basis for fault diagnosis. Accurate prediction of the concentration of dissolved gas in oil can provide an important reference for the evaluation of the state of the transformer. A combined prediction model is proposed based on kernel principal component analysis (KPCA) and a generalized regression neural network (GRNN), using an improved fruit fly optimization algorithm (FFOA) to select the smoothing factor. Firstly, based on the idea of using the dissolved gas ratios of the oil to diagnose transformer faults, gas concentration ratios are also used as characteristic parameters. Secondly, the main parameters are selected from the feature parameters using the KPCA method, and the GRNN is then used to predict the gas concentration in the transformer oil. In the training process of the network, the FFOA is used to select the smoothing factor of the neural network. Through a concrete example, it is shown that the method proposed in this paper has better data fitting ability and more accurate prediction ability compared with the support vector machine (SVM) and grey model (GM) methods.

  2. Advanced Emergency Braking Control Based on a Nonlinear Model Predictive Algorithm for Intelligent Vehicles

    Directory of Open Access Journals (Sweden)

    Ronghui Zhang

    2017-05-01

    Full Text Available Focusing on safety and comfort, and with the overall aim of the comprehensive improvement of a vision-based intelligent vehicle, a novel Advanced Emergency Braking System (AEBS) is proposed based on a nonlinear model predictive algorithm. Considering the nonlinearities of vehicle dynamics, a vision-based longitudinal vehicle dynamics model is established. On account of the nonlinear coupling characteristics of the driver, surroundings, and vehicle itself, a hierarchical control structure is proposed to decouple and coordinate the system. To avoid or reduce the collision risk between the intelligent vehicle and collision objects, a coordinated cost function of tracking safety, comfort, and fuel economy is formulated. Based on the terminal constraints of stable tracking, a multi-objective optimization controller is proposed using the theory of non-linear model predictive control. To quickly and precisely track the control target in a finite time, an electronic brake controller for the AEBS is designed based on the Nonsingular Fast Terminal Sliding Mode (NFTSM) control theory. To validate the performance and advantages of the proposed algorithm, simulations are implemented. According to the simulation results, the proposed algorithm has better integrated performance in reducing the collision risk and improving the driving comfort and fuel economy of the smart car compared with the existing single AEBS.

  3. Large-scale ligand-based predictive modelling using support vector machines.

    Science.gov (United States)

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
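
    The comparison can be reproduced in spirit with scikit-learn, whose LinearSVR and SVR estimators wrap LIBLINEAR and libsvm respectively; the synthetic data below stand in for signature descriptors, and the timings are only indicative.

```python
import time
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR, LinearSVR

# Synthetic, moderately high-dimensional stand-in for molecular signature descriptors.
X, y = make_regression(n_samples=5_000, n_features=200, noise=0.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("LinearSVR (LIBLINEAR backend)", LinearSVR(max_iter=5000)),
                    ("SVR with RBF kernel (libsvm backend)", SVR(kernel="rbf"))]:
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    print(name,
          "R2:", round(r2_score(y_te, model.predict(X_te)), 3),
          "fit time [s]:", round(time.perf_counter() - t0, 1))
```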

  4. Prediction of Proper Temperatures for the Hot Stamping Process Based on the Kinetics Models

    Science.gov (United States)

    Samadian, P.; Parsa, M. H.; Mirzadeh, H.

    2015-02-01

    Nowadays, the application of kinetics models for predicting microstructures of steels subjected to thermo-mechanical treatments has increased to minimize direct experimentation, which is costly and time consuming. In the current work, the final microstructures of AISI 4140 steel sheets after the hot stamping process were predicted using the Kirkaldy and Li kinetics models combined with new thermodynamically based models in order for the determination of the appropriate process temperatures. In this way, the effect of deformation during hot stamping on the Ae3, Acm, and Ae1 temperatures was considered, and then the equilibrium volume fractions of phases at different temperatures were calculated. Moreover, the ferrite transformation rate equations of the Kirkaldy and Li models were modified by a term proposed by Åkerström to consider the influence of plastic deformation. Results showed that the modified Kirkaldy model is satisfactory for the determination of appropriate austenitization temperatures for the hot stamping process of AISI 4140 steel sheets because of agreeable microstructure predictions in comparison with the experimental observations.

  5. Bridge Deterioration Prediction Model Based On Hybrid Markov-System Dynamic

    Directory of Open Access Journals (Sweden)

    Widodo Soetjipto Jojok

    2017-01-01

    Full Text Available Sudden bridge failures tend to be increasing in Indonesia. To mitigate this condition, Indonesia's Bridge Management System (I-BMS) has been applied to continuously monitor the condition of bridges. However, I-BMS only implements visual inspection for the maintenance priority of bridge structure components instead of the bridge structure system. This paper proposes a new bridge failure prediction model based on a hybrid Markov-System Dynamic (MSD) approach. System dynamics are used to represent the correlation among bridge structure components, while a Markov chain is used to calculate the temporal probability of bridge failure. Data on around 235 bridges in Indonesia were collected from the Directorate of Bridge of the Ministry of Public Works and Housing for calculating the transition probabilities of the model. To validate the model, a medium span concrete bridge was used as a case study. The result shows that the proposed model can accurately predict the bridge condition. Besides predicting the probability of bridge failure, this model can also be used as an early warning system for bridge monitoring activity.
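
    The Markov part of the model can be sketched as a condition-state distribution propagated through an annual transition matrix; the matrix below is illustrative and not the one calibrated from the Indonesian bridge data.

```python
import numpy as np

# Condition states 1 (good) .. 5 (failed); illustrative annual transition matrix
# (rows sum to 1), not the probabilities estimated in the paper.
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00],
    [0.00, 0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.00, 0.70, 0.30],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # bridge starts in the best state
for year in range(1, 31):
    state = state @ P                          # temporal probability update
    if year % 10 == 0:
        print(f"year {year}: P(failure) = {state[-1]:.3f}")
```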

  6. Support vector regression model based predictive control of water level of U-tube steam generators

    Energy Technology Data Exchange (ETDEWEB)

    Kavaklioglu, Kadir, E-mail: kadir.kavaklioglu@pau.edu.tr

    2014-10-15

    Highlights: • Water level of U-tube steam generators was controlled in a model predictive fashion. • Models for steam generator water level were built using support vector regression. • Cost function minimization for future optimal controls was performed by using the steepest descent method. • The results indicated the feasibility of the proposed method. - Abstract: A predictive control algorithm using support vector regression based models was proposed for controlling the water level of U-tube steam generators of pressurized water reactors. Steam generator data were obtained using a transfer function model of U-tube steam generators. Support vector regression based models were built using a time series type model structure for five different operating powers. Feedwater flow controls were calculated by minimizing a cost function that includes the level error, the feedwater change and the mismatch between feedwater and steam flow rates. Proposed algorithm was applied for a scenario consisting of a level setpoint change and a steam flow disturbance. The results showed that steam generator level can be controlled at all powers effectively by the proposed method.

  7. Multivariate Autoregressive Model Based Heart Motion Prediction Approach for Beating Heart Surgery

    Directory of Open Access Journals (Sweden)

    Fan Liang

    2013-02-01

    Full Text Available A robotic tool can enable a surgeon to conduct off-pump coronary artery graft bypass surgery on a beating heart. The robotic tool actively alleviates the relative motion between the point of interest (POI) on the heart surface and the surgical tool and allows the surgeon to operate as if the heart were stationary. Since the beating heart's motion is relatively high-bandwidth, with nonlinear and nonstationary characteristics, it is difficult to follow. Thus, precise beating heart motion prediction is necessary for the tracking control procedure during the surgery. In the research presented here, we first observe that the Electrocardiography (ECG) signal contains the causal phase information on heart motion and non-stationary heart rate dynamic variations. Then, we investigate the relationship between the ECG signal and beating heart motion using Granger Causality Analysis, which describes the feasibility of the improved prediction of heart motion. Next, we propose a nonlinear time-varying multivariate vector autoregressive (MVAR) model based adaptive prediction method. In this model, the significant correlation between ECG and heart motion enables the improvement of the prediction of sharp changes in heart motion and the approximation of the motion with sufficient detail. Dual Kalman Filters (DKF) estimate the states and parameters of the model, respectively. Last, we evaluate the proposed algorithm through comparative experiments using two sets of collected in vivo data.

  8. Hyperspectral-based predictive modelling of grapevine water status in the Portuguese Douro wine region

    Science.gov (United States)

    Pôças, Isabel; Gonçalves, João; Costa, Patrícia Malva; Gonçalves, Igor; Pereira, Luís S.; Cunha, Mario

    2017-06-01

    In this study, hyperspectral reflectance (HySR) data derived from a handheld spectroradiometer were used to assess the water status of three grapevine cultivars in two sub-regions of the Douro wine region during two consecutive years. A large set of potential predictors derived from the HySR data were considered for modelling/predicting the predawn leaf water potential (Ψpd) through different statistical and machine learning techniques. Three HySR vegetation indices were selected as final predictors for the computation of the models, and the in-season time trend was removed from the data by using a time predictor. The vegetation indices selected were the Normalized Reflectance Index for the wavelengths 554 nm and 561 nm (NRI554;561), the water index (WI) for the wavelengths 900 nm and 970 nm, and the D1 index, which is associated with the rate of reflectance increase in the wavelengths of 706 nm and 730 nm. These vegetation indices covered the green, red edge and near infrared domains of the electromagnetic spectrum. A large set of state-of-the-art statistical and machine-learning modelling techniques were tested. Predictive modelling techniques based on the generalized boosted model (GBM), bagged multivariate adaptive regression splines (B-MARS), generalized additive model (GAM), and Bayesian regularized neural networks (BRNN) showed the best performance for predicting Ψpd, with an average determination coefficient (R2) ranging between 0.78 and 0.80 and RMSE varying between 0.11 and 0.12 MPa. When cultivar Touriga Nacional was used for training the models and the cultivars Touriga Franca and Tinta Barroca for testing (independent validation), the models' performance was good, particularly for GBM (R2 = 0.85; RMSE = 0.09 MPa). Additionally, the comparison of Ψpd observed and predicted showed an equitable dispersion of data from the various cultivars. The results achieved show a good potential of these predictive models based on vegetation indices to support the assessment of grapevine water status.
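
    A simplified sketch of the index-based regression is given below: the three vegetation indices are computed from (synthetic) reflectance values and fed, together with a time predictor, to a gradient boosting regressor. The D1 index is approximated here by a red-edge slope between 706 and 730 nm, which is an assumption, and the Ψpd target is simulated.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 200
# Synthetic reflectance at the wavelengths used by the indices (values are made up).
refl = pd.DataFrame({w: rng.uniform(0.05, 0.6, n) for w in [554, 561, 706, 730, 900, 970]})

features = pd.DataFrame({
    "NRI_554_561": (refl[554] - refl[561]) / (refl[554] + refl[561]),  # normalized reflectance index
    "WI": refl[900] / refl[970],                                       # water index
    "D1": (refl[730] - refl[706]) / (730 - 706),                       # assumed red-edge slope proxy
    "doy": rng.integers(150, 240, n),                                  # time predictor (day of year)
})
# Simulated predawn leaf water potential target [MPa], loosely tied to one index.
psi_pd = -0.2 - 2.0 * features["NRI_554_561"] + 0.05 * rng.standard_normal(n)

gbm = GradientBoostingRegressor(random_state=0)
print("CV R2:", cross_val_score(gbm, features, psi_pd, cv=5, scoring="r2").mean())
```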

  9. Prediction Model of Machining Failure Trend Based on Large Data Analysis

    Science.gov (United States)

    Li, Jirong

    2017-12-01

    Mechanical machining has high complexity, strong coupling and many control factors, and the machining process is prone to failure. In order to improve the accuracy of fault detection for large mechanical equipment, research on fault trend prediction requires a machining fault trend prediction model based on fault data. Genetic-algorithm-based K-means clustering is used to process the machining data, and features reflecting the correlation dimension of faults are extracted. The spectral characteristics of abnormal vibration during the machining of complex mechanical parts are analyzed, and the abnormal vibration is characterized by multi-component spectral decomposition and Hilbert-based empirical mode decomposition. Based on the extracted features and the decomposition results, an intelligent expert system database is established and combined with big data analysis methods to realize machining fault trend prediction. The simulation results show that this method of fault trend prediction achieves good accuracy in mechanical machining, provides accurate judgment of faults in the machining process, and has good application value for analysis and fault diagnosis in the machining process.

  10. A Genomics-Based Model for Prediction of Severe Bioprosthetic Mitral Valve Calcification.

    Science.gov (United States)

    Ponasenko, Anastasia V; Khutornaya, Maria V; Kutikhin, Anton G; Rutkovskaya, Natalia V; Tsepokina, Anna V; Kondyukova, Natalia V; Yuzhalin, Arseniy E; Barbarash, Leonid S

    2016-08-31

    Severe bioprosthetic mitral valve calcification is a significant problem in cardiovascular surgery. Unfortunately, clinical markers did not demonstrate efficacy in prediction of severe bioprosthetic mitral valve calcification. Here, we examined whether a genomics-based approach is efficient in predicting the risk of severe bioprosthetic mitral valve calcification. A total of 124 consecutive Russian patients who underwent mitral valve replacement surgery were recruited. We investigated the associations of the inherited variation in innate immunity, lipid metabolism and calcium metabolism genes with severe bioprosthetic mitral valve calcification. Genotyping was conducted utilizing the TaqMan assay. Eight gene polymorphisms were significantly associated with severe bioprosthetic mitral valve calcification and were therefore included into stepwise logistic regression which identified male gender, the T/T genotype of the rs3775073 polymorphism within the TLR6 gene, the C/T genotype of the rs2229238 polymorphism within the IL6R gene, and the A/A genotype of the rs10455872 polymorphism within the LPA gene as independent predictors of severe bioprosthetic mitral valve calcification. The developed genomics-based model had fair predictive value with area under the receiver operating characteristic (ROC) curve of 0.73. In conclusion, our genomics-based approach is efficient for the prediction of severe bioprosthetic mitral valve calcification.
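
    The final modelling step can be sketched as a logistic regression on gender and one-hot-encoded genotypes evaluated with ROC AUC; the data below are randomly generated stand-ins (so the AUC will be near chance), not the patient cohort.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 124
data = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "TLR6_rs3775073": rng.choice(["C/C", "C/T", "T/T"], n),
    "IL6R_rs2229238": rng.choice(["C/C", "C/T", "T/T"], n),
    "LPA_rs10455872": rng.choice(["A/A", "A/G", "G/G"], n),
})
calcification = rng.integers(0, 2, n)            # random stand-in outcome

# One-hot encode the genotypes and fit a logistic regression predictor.
X = pd.get_dummies(data, columns=["TLR6_rs3775073", "IL6R_rs2229238", "LPA_rs10455872"])
X_tr, X_te, y_tr, y_te = train_test_split(X, calcification, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```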

  11. Economic Model Predictive Control for Hot Water Based Heating Systems in Smart Buildings

    DEFF Research Database (Denmark)

    Awadelrahman, M. A. Ahmed; Zong, Yi; Li, Hongwei

    2017-01-01

    This paper presents a study to optimize the heating energy costs in a residential building with varying electricity price signals based on an Economic Model Predictive Controller (EMPC). The investigated heating system consists of an air source heat pump (ASHP) incorporated with a hot water tank...... as active Thermal Energy Storage (TES), where two optimization problems are integrated together to optimize both the ASHP electricity consumption and the building heating consumption utilizing a heat dynamic model of the building. The results show that the proposed EMPC can save the energy cost by load shifting.

  12. Nonlinear Model Predictive Control of a Cable-Robot-Based Motion Simulator

    DEFF Research Database (Denmark)

    Katliar, Mikhail; Fischer, Joerg; Frison, Gianluca

    2017-01-01

    In this paper we present the implementation of a model-predictive controller (MPC) for real-time control of a cable-robot-based motion simulator. The controller computes control inputs such that a desired acceleration and angular velocity at a defined point in simulator's cabin are tracked while...... satisfying constraints imposed by working space and allowed cable forces of the robot. In order to fully use the simulator capabilities, we propose an approach that includes the motion platform actuation in the MPC model. The tracking performance and computation time of the algorithm are investigated...

  13. Coordinated Voltage Control of a Wind Farm based on Model Predictive Control

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Guo, Qinglai

    2016-01-01

    This paper presents an autonomous wind farm voltage controller based on Model Predictive Control (MPC). The reactive power compensation and voltage regulation devices of the wind farm include Static Var Compensators (SVCs), Static Var Generators (SVGs), Wind Turbine Generators (WTGs) and On...... are calculated based on an analytical method to improve the computation efficiency and overcome the convergence problem. Two control modes are designed for both voltage violated and normal operation conditions. A wind farm with 20 wind turbines was used to conduct case studies to verify the proposed coordinated voltage control scheme.

  14. A Bayesian network model for predicting type 2 diabetes risk based on electronic health records

    Science.gov (United States)

    Xie, Jiang; Liu, Yan; Zeng, Xu; Zhang, Wu; Mei, Zhen

    2017-07-01

    An extensive, in-depth study of diabetes risk factors (DBRF) is of crucial importance to prevent (or reduce) the chance of suffering from type 2 diabetes (T2D). Accumulation of electronic health records (EHRs) makes it possible to build nonlinear relationships between risk factors and diabetes. However, the current DBRF researches mainly focus on qualitative analyses, and the inconformity of physical examination items makes the risk factors likely to be lost, which drives us to study the novel machine learning approach for risk model development. In this paper, we use Bayesian networks (BNs) to analyze the relationship between physical examination information and T2D, and to quantify the link between risk factors and T2D. Furthermore, with the quantitative analyses of DBRF, we adopt EHR and propose a machine learning approach based on BNs to predict the risk of T2D. The experiments demonstrate that our approach can lead to better predictive performance than the classical risk model.

  15. Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.

    Science.gov (United States)

    Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei

    2016-02-01

    A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.

  16. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
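
    For reference, a sketch of the Ogata-Banks solution used as the physically-based data model is given below: one-dimensional advection-dispersion under steady flow. The velocity, dispersion coefficient and distance are placeholders, not site parameters.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """1-D advective-dispersive concentration for steady flow (Ogata-Banks solution)."""
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

# Placeholder parameters: observation point 5 m from the source, velocity 0.1 m/day,
# dispersion coefficient 0.05 m^2/day, unit source concentration.
t = np.linspace(1, 200, 5)            # days
print(ogata_banks(x=5.0, t=t, v=0.1, D=0.05))
```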

  17. Micromechanics-based damage model for failure prediction in cold forming

    Energy Technology Data Exchange (ETDEWEB)

    Lu, X.Z.; Chan, L.C., E-mail: lc.chan@polyu.edu.hk

    2017-04-06

    The purpose of this study was to develop a micromechanics-based damage (micro-damage) model that was concerned with the evolution of micro-voids for failure prediction in cold forming. Typical stainless steel SS316L was selected as the specimen material, and the nonlinear isotropic hardening rule was extended to describe the large deformation of the specimen undergoing cold forming. A micro-focus high-resolution X-ray computed tomography (CT) system was employed to trace and measure the micro-voids inside the specimen directly. Three-dimensional (3D) representative volume element (RVE) models with different sizes and spatial locations were reconstructed from the processed CT images of the specimen, and the average size and volume fraction of micro-voids (VFMV) for the specimen were determined via statistical analysis. Subsequently, the micro-damage model was compiled as a user-defined material subroutine into the finite element (FE) package ABAQUS. The stress-strain responses and damage evolutions of SS316L specimens under tensile and compressive deformations at different strain rates were predicted and further verified experimentally. It was concluded that the proposed micro-damage model is convincing for failure prediction in cold forming of the SS316L material.

  18. Research of Coal Resources Reserves Prediction Based on GM (1, 1) Model

    Science.gov (United States)

    Xiao, Jiancheng

    2018-01-01

    To forecast China's coal reserves, this paper uses GM (1, 1) grey forecasting theory to establish a grey forecasting model of China's coal reserves based on data from 2002 to 2009, and obtains the trend of coal resource reserves under the current economic and social development situation. A residual test model is also established, so the prediction model is more accurate. The results show that China's coal reserves can ensure production for at least 300 more years of use. The results are similar to mainstream forecast results and are in line with objective reality.
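
    A standard GM (1, 1) fit-and-forecast routine can be sketched as below; the short series of "reserves" is a placeholder, not the official coal reserve statistics used in the paper.

```python
import numpy as np

def gm11(x0, n_forecast=3):
    """Fit a GM(1,1) grey model to series x0 and forecast n_forecast further values."""
    x1 = np.cumsum(x0)                                  # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development coefficient, grey input
    k = np.arange(len(x0) + n_forecast)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response of the AGO series
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)]) # inverse AGO
    return x0_hat[len(x0):]

# Placeholder "reserve" series for 2002-2009 (arbitrary units, not real statistics).
reserves = np.array([114.5, 116.0, 118.2, 120.1, 121.9, 123.4, 125.0, 126.3])
print(gm11(reserves, n_forecast=3))
```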

  19. A residual life prediction model based on the generalized σ -N curved surface

    Directory of Open Access Journals (Sweden)

    Zongwen AN

    2016-06-01

    Full Text Available In order to investigate the change rule of the residual life of a structure under random repeated load, firstly, starting from the statistical meaning of random repeated load, the joint probability density function of maximum stress and minimum stress is derived based on the characteristics of order statistics (maximum order statistic and minimum order statistic); then, based on the equation of the generalized σ-N curved surface, considering the influence of the number of load cycles on fatigue life, a relationship among minimum stress, maximum stress and residual life, that is, the σmin(n)-σmax(n)-Nr(n) curved surface model, is established; finally, the validity of the proposed model is demonstrated by a practical case. The result shows that the proposed model can reflect the influence of maximum stress and minimum stress on the residual life of a structure under random repeated load, which can provide a theoretical basis for life prediction and reliability assessment of structures.

  20. Model-Based Prediction of Pulsed Eddy Current Testing Signals from Stratified Conductive Structures

    International Nuclear Information System (INIS)

    Zhang, Jian Hai; Song, Sung Jin; Kim, Woong Ji; Kim, Hak Joon; Chung, Jong Duk

    2011-01-01

    Excitation and propagation of the electromagnetic field of a cylindrical coil above an arbitrary number of conductive plates for pulsed eddy current testing (PECT) are very complex problems due to their complicated physical properties. In this paper, analytical modeling of PECT is established by Fourier series based on the truncated region eigenfunction expansion (TREE) method for a single air-cored coil above stratified conductive structures (SCS) to investigate their integrity. From the presented expression of PECT, the coil impedance due to the SCS is calculated based on an analytical approach using the generalized reflection coefficient in series form. Then multilayered structures manufactured from non-ferromagnetic (STS301L) and ferromagnetic materials (SS400) are investigated with the developed PECT model. The good predictive capability of the analytical PECT model not only contributes to the development of an efficient solver but can also be applied to optimize the conditions of the experimental setup in PECT.

  1. An RES-Based Model for Risk Assessment and Prediction of Backbreak in Bench Blasting

    Science.gov (United States)

    Faramarzi, F.; Ebrahimi Farsangi, M. A.; Mansouri, H.

    2013-07-01

    Most blasting operations are associated with various forms of energy loss, emerging as environmental side effects of rock blasting, such as flyrock, vibration, airblast, and backbreak. Backbreak is an adverse phenomenon in rock blasting operations, which imposes risk and increases operation expenses because of safety reduction due to the instability of walls, poor fragmentation, and uneven burden in subsequent blasts. In this paper, based on the basic concepts of a rock engineering systems (RES) approach, a new model for the prediction of backbreak and the risk associated with a blast is presented. The newly suggested model involves 16 effective parameters on backbreak due to blasting, while retaining simplicity as well. The data for 30 blasts, carried out at Sungun copper mine, western Iran, were used to predict backbreak and the level of risk corresponding to each blast by the RES-based model. The results obtained were compared with the backbreak measured for each blast, which showed that the level of risk achieved is consistent with the backbreak measured. The maximum level of risk [vulnerability index (VI) = 60] was associated with blast No. 2, for which the corresponding average backbreak was the highest achieved (9.25 m). Also, for blasts with levels of risk under 40, the minimum average backbreaks (<4 m) were observed. Furthermore, to evaluate the model performance for backbreak prediction, the coefficient of correlation (R2) and root mean square error (RMSE) of the model were calculated (R2 = 0.8; RMSE = 1.07), indicating the good performance of the model.

  2. Enabling Persistent Autonomy for Underwater Gliders with Ocean Model Predictions and Terrain Based Navigation

    Directory of Open Access Journals (Sweden)

    Andrew eStuntz

    2016-04-01

    Full Text Available Effective study of ocean processes requires sampling over the duration of long (weeks to months) oscillation patterns. Such sampling requires persistent, autonomous underwater vehicles that have a similarly long deployment duration. The spatiotemporal dynamics of the ocean environment, coupled with limited communication capabilities, make navigation and localization difficult, especially in coastal regions where the majority of interesting phenomena occur. In this paper, we consider the combination of two methods for reducing navigation and localization error; a predictive approach based on ocean model predictions and a prior information approach derived from terrain-based navigation. The motivation for this work is not only for real-time state estimation, but also for accurately reconstructing the actual path that the vehicle traversed to contextualize the gathered data, with respect to the science question at hand. We present an application for the practical use of priors and predictions for large-scale ocean sampling. This combined approach builds upon previous works by the authors, and accurately localizes the traversed path of an underwater glider over long-duration ocean deployments. The proposed method takes advantage of the reliable, short-term predictions of an ocean model, and the utility of priors used in terrain-based navigation over areas of significant bathymetric relief to bound uncertainty error in dead-reckoning navigation. This method improves upon our previously published works by (1) demonstrating the utility of our terrain-based navigation method with multiple field trials, and (2) presenting a hybrid algorithm that combines both approaches to bound navigational error and uncertainty for long-term deployments of underwater vehicles. We demonstrate the approach by examining data from actual field trials with autonomous underwater gliders, and demonstrate an ability to estimate geographical location of an underwater glider to 2

  3. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    Science.gov (United States)

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient’s pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (“SBM”) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology or “QCP”), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient’s physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient’s condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model estimates are seen to disagree while the

  4. Short-Term Bus Passenger Demand Prediction Based on Time Series Model and Interactive Multiple Model Approach

    Directory of Open Access Journals (Sweden)

    Rui Xue

    2015-01-01

    Full Text Available Although bus passenger demand prediction has attracted increased attention during recent years, limited research has been conducted in the context of short-term passenger demand forecasting. This paper proposes an interactive multiple model (IMM) filter algorithm-based model to predict short-term passenger demand. After being aggregated into 15 min intervals, passenger demand data collected from a busy bus route over four months were used to generate time series. Considering that passenger demand exhibits various characteristics at different time scales, three time series were developed, named weekly, daily, and 15 min time series. After the correlation, periodicity, and stationarity analyses, time series models were constructed. In particular, the heteroscedasticity of the time series was explored to achieve better prediction performance. Finally, the IMM filter algorithm was applied to combine the individual forecasting models and dynamically predict passenger demand for the next interval. Different error indices were adopted for the analyses of individual and hybrid models. The performance comparison indicates that hybrid model forecasts are superior to individual ones in accuracy. Findings of this study are of theoretical and practical significance in bus scheduling.

  5. Discrete Model Predictive Control-Based Maximum Power Point Tracking for PV Systems: Overview and Evaluation

    DEFF Research Database (Denmark)

    Lashab, Abderezak; Sera, Dezso; Guerrero, Josep M.

    2018-01-01

    The main objective of this work is to provide an overview and evaluation of discrete model predictive control (MPC) based maximum power point tracking (MPPT) for PV systems. A large number of MPC based MPPT methods have been recently introduced in the literature with very promising performance, however......, an in-depth investigation and comparison of these methods have not been carried out yet. Therefore, this paper has set out to provide an in-depth analysis and evaluation of MPC based MPPT methods applied to various common power converter topologies. The performance of MPC based MPPT is directly linked...... with the converter topology, and it is also affected by the accurate determination of the converter parameters; hence, sensitivity to converter parameter variations is also investigated. The static and dynamic performance of the trackers are assessed according to the EN 50530 standard, using detailed simulation models...

  6. Predicting commuter flows in spatial networks using a radiation model based on temporal ranges

    Science.gov (United States)

    Ren, Yihui; Ercsey-Ravasz, Mária; Wang, Pu; González, Marta C.; Toroczkai, Zoltán

    2014-11-01

    Understanding network flows such as commuter traffic in large transportation networks is an ongoing challenge due to the complex nature of the transportation infrastructure and human mobility. Here we show a first-principles based method for traffic prediction using a cost-based generalization of the radiation model for human mobility, coupled with a cost-minimizing algorithm for efficient distribution of the mobility fluxes through the network. Using US census and highway traffic data, we show that traffic can efficiently and accurately be computed from a range-limited, network betweenness type calculation. The model based on travel time costs captures the log-normal distribution of the traffic and attains a high Pearson correlation coefficient (0.75) when compared with real traffic. Because of its principled nature, this method can inform many applications related to human mobility driven flows in spatial networks, ranging from transportation, through urban planning to mitigation of the effects of catastrophic events.
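    The classic parameter-free radiation model on which this cost-based generalization builds has a well-known closed form; a minimal sketch of that basic flux formula (not the authors' range-limited, travel-time-cost variant) follows:

```python
def radiation_flux(T_i, m_i, n_j, s_ij):
    """Expected commuter flux from origin i to destination j under the
    classic radiation model, where
      T_i  : total number of commuters leaving origin i
      m_i  : population of origin i
      n_j  : population of destination j
      s_ij : population inside the circle of radius r_ij centered on i,
             excluding the populations of i and j themselves.
    """
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

# toy example: an origin of 100k people sending 40k commuters
print(radiation_flux(T_i=40_000, m_i=100_000, n_j=50_000, s_ij=200_000))
```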

  7. Alignment and prediction of cis-regulatory modules based on a probabilistic model of evolution.

    Directory of Open Access Journals (Sweden)

    Xin He

    2009-03-01

    Full Text Available Cross-species comparison has emerged as a powerful paradigm for predicting cis-regulatory modules (CRMs) and understanding their evolution. The comparison requires reliable sequence alignment, which remains a challenging task for less conserved noncoding sequences. Furthermore, the existing models of DNA sequence evolution generally do not explicitly treat the special properties of CRM sequences. To address these limitations, we propose a model of CRM evolution that captures different modes of evolution of functional transcription factor binding sites (TFBSs) and the background sequences. A particularly novel aspect of our work is a probabilistic model of gains and losses of TFBSs, a process being recognized as an important part of regulatory sequence evolution. We present a computational framework that uses this model to solve the problems of CRM alignment and prediction. Our alignment method is similar to existing methods of statistical alignment but uses the conserved binding sites to improve alignment. Our CRM prediction method deals with the inherent uncertainties of binding site annotations and sequence alignment in a probabilistic framework. In simulated as well as real data, we demonstrate that our program is able to improve both alignment and prediction of CRM sequences over several state-of-the-art methods. Finally, we used alignments produced by our program to study binding site conservation in genome-wide binding data of key transcription factors in the Drosophila blastoderm, with two intriguing results: (i) the factor-bound sequences are under strong evolutionary constraints even if their neighboring genes are not expressed in the blastoderm and (ii) binding sites in distal bound sequences (relative to transcription start sites) tend to be more conserved than those in proximal regions. Our approach is implemented as software, EMMA (Evolutionary Model-based cis-regulatory Module Analysis), ready to be applied in a broad biological context.

  8. Model-based prediction of nephropathia epidemica outbreaks based on climatological and vegetation data and bank vole population dynamics.

    Science.gov (United States)

    Haredasht, S Amirpour; Taylor, C J; Maes, P; Verstraeten, W W; Clement, J; Barrios, M; Lagrou, K; Van Ranst, M; Coppin, P; Berckmans, D; Aerts, J-M

    2013-11-01

    Wildlife-originated zoonotic diseases in general are a major contributor to emerging infectious diseases. Hantaviruses more specifically cause thousands of human disease cases annually worldwide, while understanding and predicting human hantavirus epidemics pose numerous unsolved challenges. Nephropathia epidemica (NE) is a human infection caused by Puumala virus, which is naturally carried and shed by bank voles (Myodes glareolus). The objective of this study was to develop a method that allows model-based prediction of the occurrence of NE epidemics 3 months ahead. Two data sets were utilized to develop and test the models. These data sets were concerned with NE cases in Finland and Belgium. In this study, we selected the most relevant inputs from all the available data for use in a dynamic linear regression (DLR) model. The number of NE cases in Finland was modelled using data from 1996 to 2008. The NE cases were predicted based on the time series data of average monthly air temperature (°C) and bank voles' trapping index using a DLR model. The bank voles' trapping index data were interpolated using a related dynamic harmonic regression model (DHR). Here, the DLR and DHR models used time-varying parameters. Both the DHR and DLR models were based on a unified state-space estimation framework. For the Belgium case, no time series of the bank voles' population dynamics were available. Several studies, however, have suggested that the population of bank voles is related to the variation in seed production of beech and oak trees in Northern Europe. Therefore, the NE occurrence pattern in Belgium was predicted based on a DLR model by using remotely sensed phenology parameters of broad-leaved forests, together with the oak and beech seed categories and average monthly air temperature (°C) using data from 2001 to 2009. Our results suggest that even without any knowledge about hantavirus dynamics in the host population, the time variation in NE outbreaks in Finland

  9. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  10. A control method for agricultural greenhouses heating based on computational fluid dynamics and energy prediction model

    International Nuclear Information System (INIS)

    Chen, Jiaoliao; Xu, Fang; Tan, Dapeng; Shen, Zheng; Zhang, Libin; Ai, Qinglin

    2015-01-01

    Highlights: • A novel control method for the heating greenhouse with SWSHPS is proposed. • CFD is employed to predict the priorities of FCU loops for thermal performance. • EPM acts as an on-line tool to predict the total energy demand of the greenhouse. • The CFD–EPM-based method can save energy and improve control accuracy. • The energy savings potential is between 8.7% and 15.1%. - Abstract: As energy heating is one of the main production costs, many efforts have been made to reduce the energy consumption of agricultural greenhouses. Herein, a novel control method of greenhouse heating using computational fluid dynamics (CFD) and an energy prediction model (EPM) is proposed for energy savings and system performance. Based on the low-Reynolds number k–ε turbulence principle, a CFD model of the heating greenhouse is developed, applying the discrete ordinates model for the radiative heat transfers and a porous medium approach for plants, considering the plants' sensible and latent heat exchanges. The CFD simulations have been validated, and used to analyze the greenhouse thermal performance and the priority of fan coil unit (FCU) loops under the various heating conditions. According to the heating efficiency and temperature uniformity, the priorities of each FCU loop can be predicted to generate a database with priorities for the control system. EPM is built up based on the thermal balance, and used to predict and optimize the energy demand of the greenhouse online. Combined with the priorities of FCU loops from CFD simulations offline, we have developed the CFD–EPM-based heating control system of the greenhouse with a surface water source heat pump system (SWSHPS). Compared with the conventional multi-zone independent control (CMIC) method, the energy savings potential is between 8.7% and 15.1%, and the control temperature deviation is decreased to between 0.1 °C and 0.6 °C in the investigated greenhouse. These results show the CFD–EPM-based method can improve system

  11. Recent developments in predictive uncertainty assessment based on the model conditional processor approach

    Directory of Open Access Journals (Sweden)

    G. Coccia

    2011-10-01

    Full Text Available The work aims at discussing the role of predictive uncertainty in flood forecasting and flood emergency management, its relevance to improve the decision making process and the techniques to be used for its assessment.

    Real time flood forecasting requires taking into account predictive uncertainty for a number of reasons. Deterministic hydrological/hydraulic forecasts give useful information about real future events, but their predictions cannot, as is usually done in practice, be taken and used as real future occurrences; rather, they should be used as pseudo-measurements of future occurrences in order to reduce the uncertainty of decision makers. Predictive Uncertainty (PU) is in fact defined as the probability of occurrence of a future value of a predictand (such as water level, discharge or water volume) conditional upon prior observations and knowledge as well as on all the information we can obtain on that specific future value from model forecasts. When dealing with commensurable quantities, as in the case of floods, PU must be quantified in terms of a probability distribution function which will be used by the emergency managers in their decision process in order to improve the quality and reliability of their decisions.

    After introducing the concept of PU, the presently available processors are introduced and discussed in terms of their benefits and limitations. In this work the Model Conditional Processor (MCP) has been extended to the possibility of using two joint Truncated Normal Distributions (TNDs), in order to improve adaptation to low and high flows.

    The paper concludes by showing the results of the application of the MCP on two case studies, the Po river in Italy and the Baron Fork river, OK, USA. In the Po river case the data provided by the Civil Protection of the Emilia Romagna region have been used to implement an operational example, where the predicted variable is the observed water level. In the Baron Fork River

  12. Validation of a Methodology to Predict Micro-Vibrations Based on Finite Element Model Approach

    Science.gov (United States)

    Soula, Laurent; Rathband, Ian; Laduree, Gregory

    2014-06-01

    This paper presents the second part of the ESA R&D study called "METhodology for Analysis of structure-borne MICro-vibrations" (METAMIC). After defining an integrated analysis and test methodology to help predict micro-vibrations [1], a full-scale validation test campaign has been carried out. It is based on a bread-board representative of a typical spacecraft (S/C) platform, consisting of a versatile structure made of aluminium sandwich panels equipped with different disturbance sources and a dummy payload made of a silicon carbide (SiC) bench. The bread-board has been instrumented with a large set of sensitive accelerometers and tests have been performed including background noise measurement, modal characterization and micro-vibration tests. The results provided responses to the perturbation coming from a reaction wheel or cryo-cooler compressors, operated independently and then simultaneously with different operation modes. Using consistent modelling and associated experimental characterization techniques, a correlation status has been assessed by comparing test results with predictions based on the FEM approach. Very good results have been achieved, particularly for the case of a wheel in sweeping rate operation, with test results over-predicted within a reasonable margin of less than a factor of two. Some limitations of the methodology have also been identified for sources operating at a fixed rate or coming with a small number of dominant harmonics, and recommendations have been issued in order to deal with model uncertainties and stay conservative.

  13. Does folic acid supplementation prevent or promote colorectal cancer? Results from model-based predictions.

    Science.gov (United States)

    Luebeck, E Georg; Moolgavkar, Suresh H; Liu, Amy Y; Boynton, Alanna; Ulrich, Cornelia M

    2008-06-01

    Folate is essential for nucleotide synthesis, DNA replication, and methyl group supply. Low-folate status has been associated with increased risks of several cancer types, suggesting a chemopreventive role of folate. However, recent findings on giving folic acid to patients with a history of colorectal polyps raise concerns about the efficacy and safety of folate supplementation and the long-term health effects of folate fortification. Results suggest that undetected precursor lesions may progress under folic acid supplementation, consistent with the role of folate in nucleotide synthesis and cell proliferation. To better understand the possible trade-offs between the protective effects due to decreased mutation rates and possibly concomitant detrimental effects of folic acid due to increased cell proliferation, we used a biologically based mathematical model of colorectal carcinogenesis. We predict changes in cancer risk based on timing of treatment start and the potential effect of folic acid on cell proliferation and mutation rates. Changes in colorectal cancer risk in response to folic acid supplementation are likely a complex function of treatment start, duration, and effect on cell proliferation and mutation rates. Predicted colorectal cancer incidence rates under supplementation are mostly higher than rates without folic acid supplementation unless supplementation is initiated early in life (before age 20 years). To the extent to which this model predicts reality, it indicates that the effect on cancer risk when starting folic acid supplementation late in life is small, yet mostly detrimental. Experimental studies are needed to provide direct evidence for this dual role of folate in colorectal cancer and to validate and improve the model predictions.

  14. Analysis of direct contact membrane distillation based on a lumped-parameter dynamic predictive model

    KAUST Repository

    Karam, Ayman M.

    2016-10-03

    Membrane distillation (MD) is an emerging technology that has a great potential for sustainable water desalination. In order to pave the way for successful commercialization of MD-based water desalination techniques, adequate and accurate dynamical models of the process are essential. This paper presents the predictive capabilities of a lumped-parameter dynamic model for direct contact membrane distillation (DCMD) and discusses the results under a wide range of steady-state and dynamic conditions. Unlike previous studies, the proposed model captures the time response of the spatial temperature distribution along the flow direction. It also directly solves for the local temperatures at the membrane interfaces, which makes it possible to accurately model and calculate local flux values along with other intrinsic variables of great influence on the process, like the temperature polarization coefficient (TPC). The proposed model is based on energy and mass conservation principles and an analogy between thermal and electrical systems. Experimental data were collected to validate the steady-state and dynamic responses of the model. The obtained results show great agreement with the experimental data. The paper discusses the results of several simulations under various conditions to optimize the DCMD process efficiency and analyze its response. This demonstrates some potential applications of the proposed model to carry out scale up and design studies. © 2016

  15. Analysis of direct contact membrane distillation based on a lumped-parameter dynamic predictive model

    KAUST Repository

    Karam, Ayman M.; Alsaadi, Ahmad Salem; Ghaffour, NorEddine; Laleg-Kirati, Taous-Meriem

    2016-01-01

    Membrane distillation (MD) is an emerging technology that has a great potential for sustainable water desalination. In order to pave the way for successful commercialization of MD-based water desalination techniques, adequate and accurate dynamical models of the process are essential. This paper presents the predictive capabilities of a lumped-parameter dynamic model for direct contact membrane distillation (DCMD) and discusses the results under a wide range of steady-state and dynamic conditions. Unlike previous studies, the proposed model captures the time response of the spatial temperature distribution along the flow direction. It also directly solves for the local temperatures at the membrane interfaces, which makes it possible to accurately model and calculate local flux values along with other intrinsic variables of great influence on the process, like the temperature polarization coefficient (TPC). The proposed model is based on energy and mass conservation principles and an analogy between thermal and electrical systems. Experimental data were collected to validate the steady-state and dynamic responses of the model. The obtained results show great agreement with the experimental data. The paper discusses the results of several simulations under various conditions to optimize the DCMD process efficiency and analyze its response. This demonstrates some potential applications of the proposed model to carry out scale up and design studies. © 2016

  16. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    Acronyms: CR (cultural resource), CRM (cultural resource management), CRPM (Cultural Resource Predictive Modeling), DoD (Department of Defense), ESTCP (Environmental...). ...resource management (CRM) legal obligations under NEPA and the NHPA, military installations need to demonstrate that CRM decisions are based on objective... maxim "one size does not fit all," and demonstrate that DoD installations have many different CRM needs that can and should be met through a variety

  17. Prediction of power ramp defects - development of a physically based model and evaluation of existing criteria

    International Nuclear Information System (INIS)

    Notley, M.J.F.; Kohn, E.

    2001-01-01

    Power-ramp induced fuel failure is not a problem in the present CANDU reactors. The current empirical correlations that define probability of failure do not agree with one another and do not allow extrapolation outside the database. A new methodology, based on physical processes, is presented and compared to data. The methodology calculates the pre-ramp sheath stress and the incremental stress during the ramp, and predicts whether or not a defect will occur based on a failure threshold stress. The proposed model confirms the deductions made by daSilva from an empirical 'fit' to data from the 1988 PNGS power ramp failure incident. It is recommended that daSilva's correlation be used as the reference for OPG (Ontario Power Generation) power reactor fuel, and that extrapolation be performed using the new model. (author)

  18. Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Man Hong

    2013-01-01

    Full Text Available To precisely track the reactor temperature over the entire working condition, the constrained Hammerstein-Wiener model describing nonlinear chemical processes such as in the continuous stirred tank reactor (CSTR) is proposed. A predictive control algorithm based on the Kalman filter for constrained Hammerstein-Wiener systems is designed. An output feedback control law regarding the linear subsystem is derived by state observation. The amount of reaction heat produced and its influence on the output are evaluated by the Kalman filter. The observation and evaluation results are calculated by the multistep predictive approach. Actual control variables are computed by solving the constrained finite-horizon optimal control problem in a receding-horizon manner. The simulation example of the CSTR tester shows the effectiveness and feasibility of the proposed algorithm.
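    The abstract does not give the filter equations; as a generic illustration of the Kalman filter recursion used for state observation of the linear subsystem, a minimal sketch follows (the matrices A, B, C, Q, R below are placeholders, not the paper's CSTR model):

```python
import numpy as np

def kf_step(x, P, u, y, A, B, C, Q, R):
    """One predict/update cycle of a discrete-time Kalman filter for the
    linear subsystem x_{k+1} = A x_k + B u_k + w,  y_k = C x_k + v."""
    # predict
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # update with the new measurement
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# toy 2-state system
A = np.array([[1.0, 0.1], [0.0, 0.9]]); B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]]); Q = 1e-3 * np.eye(2); R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, u=np.array([1.0]), y=np.array([0.05]), A=A, B=B, C=C, Q=Q, R=R)
print(x)
```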

  19. Prediction models for solitary pulmonary nodules based on curvelet textural features and clinical parameters.

    Science.gov (United States)

    Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua

    2013-01-01

    Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs) which are hard to diagnose using the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistical regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. All in all, it was found that using curvelet-based textural features after dimensionality reduction and using clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
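    A minimal sketch of the PCA-then-SVM part of such a pipeline using scikit-learn; the feature matrix below is a synthetic placeholder, not the study's curvelet texture features or clinical predictors:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))          # placeholder features (rows: nodules)
y = rng.integers(0, 2, size=120)        # placeholder benign/malignant labels

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=12),   # 12 principal components, as in the abstract
                    SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=10).mean())   # 10-fold cross validation
```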

  20. Logic-based models in systems biology: a predictive and parameter-free network analysis method.

    Science.gov (United States)

    Wynn, Michelle L; Consul, Nikita; Merajver, Sofia D; Schnell, Santiago

    2012-11-01

    Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network's dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples.
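    As a minimal illustration of what a discrete logic-based (Boolean) model looks like in practice, here is a hypothetical three-node toggle network (not one of the paper's biological examples); each node's next state is a Boolean function of the current states, and iterating the update map reveals steady states and cycles without any kinetic parameters:

```python
# Synchronous update of a toy Boolean network: C represses A, A activates B,
# and C requires both A and B. No rate constants are needed.
def step(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": not c,
        "B": a,
        "C": a and b,
    }

state = {"A": True, "B": False, "C": False}
for t in range(6):
    print(t, state)
    state = step(state)
```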

  1. Logic-based models in systems biology: a predictive and parameter-free network analysis method†

    Science.gov (United States)

    Wynn, Michelle L.; Consul, Nikita; Merajver, Sofia D.

    2012-01-01

    Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network’s dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples. PMID:23072820

  2. Miedema model based methodology to predict amorphous-forming-composition range in binary and ternary systems

    Energy Technology Data Exchange (ETDEWEB)

    Das, N., E-mail: nirupamd@barc.gov.in [Materials Science Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India); Mittra, J. [Materials Science Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India); Murty, B.S. [Department of Metallurgical and Materials Engineering, IIT Madras, Chennai 600 036 (India); Pabi, S.K. [Department of Metallurgical and Materials Engineering, IIT Kharagpur, Kharagpur 721 302 (India); Kulkarni, U.D.; Dey, G.K. [Materials Science Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India)

    2013-02-15

    Highlights: • A methodology was proposed to predict amorphous forming compositions (AFCs). • For AFCs, the chemical contribution to the enthalpy of mixing is proportional to the enthalpy of the amorphous phase. • Accuracy in the prediction of the AFC-range was noticed in the Al-Ni-Ti system. • Mechanical alloying (MA) results of Al-Ni-Ti followed the predicted AFC-range. • Earlier MA results of Al-Ni-Ti also conformed to the predicted AFC-range. - Abstract: From the earlier works on the prediction of the amorphous forming composition range (AFCR) using the Miedema-based model, and also from mechanical alloying experiments, it has been observed that all amorphous forming compositions of a given alloy system fall within a linear band when the chemical contribution to the enthalpy of the solid solution (ΔH^ss) is plotted against the enthalpy of mixing in the amorphous phase (ΔH^amor). On the basis of this observation, a methodology has been proposed in this article to identify the AFCR of a ternary system that is likely to be more precise than what can be obtained using the ΔH^amor - ΔH^ss < 0 criterion. MA experiments on various compositions of the Al-Ni-Ti system, producing amorphous, crystalline, and mixtures of amorphous plus crystalline phases, have been carried out and the phases have been characterized using X-ray diffraction and transmission electron microscopy techniques. Data from the present MA experiments and, also, from the literature have been used to validate the proposed approach. Also, the proximity of compositions producing a mixture of amorphous and crystalline phases to the boundary of the AFCR in the Al-Ni-Ti ternary has been found useful to validate the effectiveness of the prediction.

  3. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variables provided by the model can be given a physically meaningful interpretation.
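    A minimal sketch of how an Extra-Trees ensemble can be used for regression and for ranking input-variable importance, using scikit-learn's ExtraTreesRegressor on synthetic data (not the Marina or Canning River datasets):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                     # e.g. lagged rainfall/flow inputs
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=500)

model = ExtraTreesRegressor(n_estimators=200, random_state=1).fit(X, y)
# impurity-based relative importance of each input variable
print(model.feature_importances_.round(3))
```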

  4. Predicting speech intelligibility in adverse conditions: evaluation of the speech-based envelope power spectrum model

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2011-01-01

    conditions by comparing predictions to measured data from [Kjems et al. (2009). J. Acoust. Soc. Am. 126 (3), 1415-1426] where speech is mixed with four different interferers, including speech-shaped noise, bottle noise, car noise, and cafe noise. The model accounts well for the differences in intelligibility......The speech-based envelope power spectrum model (sEPSM) [Jørgensen and Dau (2011). J. Acoust. Soc. Am., 130 (3), 1475–1487] estimates the envelope signal-to-noise ratio (SNRenv) of distorted speech and accurately describes the speech recognition thresholds (SRT) for normal-hearing listeners...... observed for the different interferers. None of the standardized models successfully describe these data....

  5. Model predictive control for a smart solar tank based on weather and consumption forecasts

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Bacher, Peder; Perers, Bengt

    2012-01-01

    In this work the heat dynamics of a storage tank were modelled on the basis of data and maximum likelihood methods. The resulting grey-box model was used for Economic Model Predictive Control (MPC) of the energy in the tank. The control objective was to balance the energy from a solar collector...... and the heat consumption in a residential house. The storage tank provides heat in periods where there is low solar radiation and stores heat when there is surplus solar heat. The forecasts of consumption patterns were based on data obtained from meters in a group of single-family houses in Denmark. The tank...... can also be heated by electric heating elements if necessary, but the electricity costs of operating these heating elements should be minimized. Consequently, the heating elements should be used in periods with cheap electricity. It is proposed to integrate a price-sensitive control to enable...
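    The abstract does not give the optimization formulation; as a hedged sketch of the economic MPC idea — minimize the electricity cost of the heating elements subject to a simple tank energy balance and capacity constraints — the following uses synthetic price, solar and consumption forecasts and a plain linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Assumed/synthetic horizon data: electricity price, forecast solar heat gain
# and forecast heat consumption per step, plus tank energy limits (arbitrary units).
N = 24
price = np.r_[np.full(8, 0.3), np.full(8, 0.8), np.full(8, 0.3)]   # cheap night hours
solar = np.r_[np.zeros(6), np.full(12, 1.5), np.zeros(6)]
demand = np.full(N, 1.0)
E0, Emin, Emax, umax = 5.0, 2.0, 20.0, 3.0

# Decision variables: electric heating u_t >= 0. Tank energy evolves as
# E_t = E0 + cumsum(u + solar - demand); keep it inside [Emin, Emax].
L = np.tril(np.ones((N, N)))                  # cumulative-sum operator
base = E0 + L @ (solar - demand)
A_ub = np.vstack([L, -L])                     # E <= Emax and -E <= -Emin
b_ub = np.concatenate([Emax - base, base - Emin])
res = linprog(c=price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, umax)] * N)
print(res.x.round(2))                         # heating is pushed into cheap hours
```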

  6. Structured prediction models for RNN based sequence labeling in clinical text.

    Science.gov (United States)

    Jagannatha, Abhyuday N; Yu, Hong

    2016-11-01

    Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves extraction of medical entities such as medication, indication, and side-effects from Electronic Health Record narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work, we experimented with various CRF-based structured learning models combined with recurrent neural networks. We extend the previously studied LSTM-CRF models with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities.

  7. Temperature prediction model of asphalt pavement in cold regions based on an improved BP neural network

    International Nuclear Information System (INIS)

    Xu, Bo; Dan, Han-Cheng; Li, Liang

    2017-01-01

    Highlights: • Pavement temperature prediction model is presented with improved BP neural network. • Dynamic and static methods are presented to predict pavement temperature. • Pavement temperature can be excellently predicted in next 3 h. - Abstract: Ice cover on pavement threatens traffic safety, and pavement temperature is the main factor used to determine whether the wet pavement is icy or not. In this paper, a temperature prediction model of the pavement in winter is established by introducing an improved Back Propagation (BP) neural network model. Before the application of the BP neural network model, many efforts were made to eliminate chaos and determine the regularity of temperature on the pavement surface (e.g., analyze the regularity of diurnal and monthly variations of pavement temperature). New dynamic and static prediction methods are presented by improving the algorithms to intelligently overcome the prediction inaccuracy at the change point of daily temperature. Furthermore, some scenarios have been compared for different dates and road sections to verify the reliability of the prediction model. According to the analysis results, the daily pavement temperatures can be accurately predicted for the next 3 h from the time of prediction by combining the dynamic and static prediction methods. The presented method in this paper can provide technical references for temperature prediction of the pavement and the development of an early-warning system for icy pavements in cold regions.
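    As a rough stand-in for the improved BP neural network (the specific improvements cannot be reproduced from the abstract), a plain feed-forward regressor can be trained on meteorological features to predict the pavement surface temperature a few hours ahead; the features and target below are synthetic placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in features: air temperature, solar radiation, hour of day,
# and current surface temperature; target is surface temperature 3 h ahead.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
y = 0.6 * X[:, 0] + 0.3 * X[:, 3] + 0.1 * rng.normal(size=1000)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=2)
net.fit(X[:800], y[:800])
print(net.score(X[800:], y[800:]))    # R^2 on held-out data
```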

  8. Passivity-based model predictive control for mobile vehicle motion planning

    CERN Document Server

    Tahirovic, Adnan

    2013-01-01

    Passivity-based Model Predictive Control for Mobile Vehicle Navigation represents a complete theoretical approach to the adoption of passivity-based model predictive control (MPC) for autonomous vehicle navigation in both indoor and outdoor environments. The brief also introduces analysis of the worst-case scenario that might occur during the task execution. Some of the questions answered in the text include: • how to use an MPC optimization framework for the mobile vehicle navigation approach; • how to guarantee safe task completion even in complex environments including obstacle avoidance and sideslip and rollover avoidance; and  • what to expect in the worst-case scenario in which the roughness of the terrain leads the algorithm to generate the longest possible path to the goal. The passivity-based MPC approach provides a framework in which a wide range of complex vehicles can be accommodated to obtain a safer and more realizable tool during the path-planning stage. During task execution, the optimi...

  9. Predicting seizure by modeling synaptic plasticity based on EEG signals - a case study of inherited epilepsy

    Science.gov (United States)

    Zhang, Honghui; Su, Jianzhong; Wang, Qingyun; Liu, Yueming; Good, Levi; Pascual, Juan M.

    2018-03-01

    This paper explores the internal dynamical mechanisms of epileptic seizures through quantitative modeling based on full brain electroencephalogram (EEG) signals. Our goal is to provide seizure prediction and facilitate treatment for epileptic patients. Motivated by an earlier mathematical model with incorporated synaptic plasticity, we studied the nonlinear dynamics of inherited seizures through a differential equation model. First, driven by a set of clinical inherited electroencephalogram data recorded from a patient with diagnosed Glucose Transporter Deficiency, we developed a dynamic seizure model based on a system of ordinary differential equations. The model was reduced in complexity after considering and removing redundancy of each EEG channel. Then we verified that the proposed model produces qualitatively relevant behavior which matches the basic experimental observations of inherited seizure, including synchronization index and frequency. Meanwhile, the rationality of the connectivity structure hypothesis in the modeling process was verified. Further, through varying the threshold condition and excitation strength of synaptic plasticity, we elucidated the effect of synaptic plasticity on our seizure model. Results suggest that synaptic plasticity has a great effect on the duration of seizure activities, which supports the plausibility of therapeutic interventions for seizure control.

  10. Robust self-triggered model predictive control for constrained discrete-time LTI systems based on homothetic tubes

    NARCIS (Netherlands)

    Aydiner, E.; Brunner, F.D.; Heemels, W.P.M.H.; Allgower, F.

    2015-01-01

    In this paper we present a robust self-triggered model predictive control (MPC) scheme for discrete-time linear time-invariant systems subject to input and state constraints and additive disturbances. In self-triggered model predictive control, at every sampling instant an optimization problem based

  11. Application of GIS based data driven evidential belief function model to predict groundwater potential zonation

    Science.gov (United States)

    Nampak, Haleh; Pradhan, Biswajeet; Manap, Mohammad Abd

    2014-05-01

    The objective of this paper is to exploit the potential application of an evidential belief function (EBF) model for spatial prediction of groundwater productivity in the Langat basin area, Malaysia, using geographic information system (GIS) techniques. About 125 groundwater yield data were collected from well locations. Subsequently, the groundwater yield was divided into high (⩾11 m3/h) and low yields. The high-yield wells were divided into a training dataset of 70% (42 wells) for training the model, and the remaining 30% (18 wells) were used for validation purposes. To perform cross validation, the frequency ratio (FR) approach was applied to the remaining groundwater wells with low yield to show the spatial correlation with the low potential zones of groundwater productivity. A total of twelve groundwater conditioning factors that affect the storage of groundwater occurrences were derived from various data sources such as satellite-based imagery, topographic maps and associated databases. Those twelve groundwater conditioning factors are elevation, slope, curvature, stream power index (SPI), topographic wetness index (TWI), drainage density, lithology, lineament density, land use, normalized difference vegetation index (NDVI), soil and rainfall. Subsequently, the Dempster-Shafer theory of evidence model was applied to prepare the groundwater potential map. Finally, the groundwater potential map derived from the belief map was validated using the testing data. Furthermore, to compare the performance of the EBF result, a logistic regression (LR) model was applied. The success-rate and prediction-rate curves were computed to estimate the efficiency of the employed EBF model compared to the LR method. The validation results demonstrated that the success-rates for the EBF and LR methods were 83% and 82%, respectively. The area under the curve for the prediction-rate of the EBF and LR methods was calculated as 78% and 72%, respectively. The outputs achieved from the current research proved the efficiency of EBF in groundwater

  12. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    Energy Technology Data Exchange (ETDEWEB)

    Panebianco, Stefano; Lemaître, Jean-Francois; Sida, Jean-Luc [CEA Centre de Saclay, Gif-sur-Ivette (France); Dubray, Noëel [CEA, DAM, DIF, Arpajon (France); Goriely, Stephane [Institut d' Astronomie et d' Astrophisique, Universite Libre de Bruxelles, Brussels (Belgium)

    2014-07-01

    Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed. (author)

  13. Predictive statistical modelling of cadmium content in durum wheat grain based on soil parameters.

    Science.gov (United States)

    Viala, Yoann; Laurette, Julien; Denaix, Laurence; Gourdain, Emmanuelle; Méléard, Benoit; Nguyen, Christophe; Schneider, André; Sappin-Didier, Valérie

    2017-09-01

    Regulatory limits on cadmium (Cd) content in food products are tending to become stricter, especially in cereals, which are a major contributor to dietary intake of Cd by humans. This is of particular importance for durum wheat, which accumulates more Cd than bread wheat. The contamination of durum wheat grain by Cd depends not only on the genotype but also to a large extent on soil Cd availability. Assessing the phytoavailability of Cd for durum wheat is thus crucial, and appropriate methods are required. For this purpose, we propose a statistical model to predict Cd accumulation in durum wheat grain based on soil geochemical properties related to Cd availability in French agricultural soils with low Cd contents and neutral to alkaline pH (soils commonly used to grow durum wheat). The best model is based on the concentration of total Cd in the soil solution, the pH of a soil CaCl2 extract, the cation exchange capacity (CEC), and the content of manganese oxides (Tamm's extraction) in the soil. The model variables suggest a major influence of the cadmium buffering power of the soil and of Cd speciation in solution. The model successfully explains 88% of the Cd variability in grains, with a prediction error in wheat grain generally below 0.02 mg Cd kg-1. Monte Carlo cross-validation indicated that model accuracy will suffice for the European Community project to reduce the regulatory limit from 0.2 to 0.15 mg Cd kg-1 grain, but not for the intermediate step at 0.175 mg Cd kg-1. The model will help farmers assess the risk that the Cd content of their durum wheat grain will exceed regulatory limits, and help food safety authorities test different regulatory thresholds to find a trade-off between food safety and the negative impact that too strict a regulation could have on farmers.

  14. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Science.gov (United States)

    Xie, Weihong; Yu, Yang

    2017-01-01

    Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinic requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at normal state, the nonlinearity of the heart motion with slow time-variant changing dominates the beating process. When an arrhythmia occurs, the irregularity mode, the fast uncertainties with random patterns become the leading factor of the heart motion. We deal with prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employed the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinguished datasets. The test results indicate that the new proposed approach reduces prediction errors significantly. PMID:29124062
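    The IMM mode-probability update and fusion step can be sketched compactly; the transition matrix, likelihood values and per-mode estimates below are illustrative placeholders, not the paper's heart-motion models or datasets:

```python
import numpy as np

def imm_combine(mu_prev, Pi, likelihoods, x_models):
    """One mode-probability update of an IMM estimator.

    mu_prev     : (M,) prior mode probabilities
    Pi          : (M, M) Markov mode-transition matrix (rows sum to 1)
    likelihoods : (M,) likelihood of the new measurement under each model
    x_models    : (M, n) state estimate produced by each mode-matched filter
    Returns the updated mode probabilities and the fused state estimate.
    """
    c = Pi.T @ mu_prev                     # predicted mode probabilities
    mu = likelihoods * c
    mu /= mu.sum()                         # normalized posterior mode probabilities
    x_fused = mu @ x_models                # probability-weighted combination
    return mu, x_fused

# toy example with two behavior modes (normal beat vs. arrhythmia)
Pi = np.array([[0.95, 0.05], [0.10, 0.90]])
mu, x = imm_combine(np.array([0.8, 0.2]), Pi,
                    likelihoods=np.array([0.1, 2.0]),
                    x_models=np.array([[1.0, 0.0], [1.4, 0.3]]))
print(mu, x)
```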

  15. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Directory of Open Access Journals (Sweden)

    Fan Liang

    2017-01-01

    Full Text Available Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinic requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at normal state, the nonlinearity of the heart motion with slow time-variant changing dominates the beating process. When an arrhythmia occurs, the irregularity mode, the fast uncertainties with random patterns become the leading factor of the heart motion. We deal with prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employed the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinguished datasets. The test results indicate that the new proposed approach reduces prediction errors significantly.

  16. A Genetic Algorithm Based Support Vector Machine Model for Blood-Brain Barrier Penetration Prediction

    Directory of Open Access Journals (Sweden)

    Daqing Zhang

    2015-01-01

    Full Text Available Blood-brain barrier (BBB) is a highly complex physical barrier determining what substances are allowed to enter the brain. Support vector machine (SVM) is a kernel-based machine learning method that is widely used in QSAR studies. For a successful SVM model, the kernel parameters for SVM and feature subset selection are the most important factors affecting prediction accuracy. In most studies, they are treated as two independent problems, but it has been proven that they could affect each other. We designed and implemented a genetic algorithm (GA) to optimize kernel parameters and feature subset selection for SVM regression and applied it to the BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Therefore, optimizing both the SVM parameters and the feature subset simultaneously with a genetic algorithm is a better approach than other methods that treat the two problems separately. Analysis of our log BB model suggests that carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play important roles in BBB penetration. Among those properties relevant to BBB penetration, lipophilicity could enhance the BBB penetration while all the others are negatively correlated with BBB penetration.
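    A hedged sketch of the GA/SVM idea: a tiny mutation-only genetic algorithm searching log-scaled (C, gamma) for an SVM regressor, with cross-validated R^2 as the fitness. The descriptors and logBB values are synthetic placeholders, and the paper's GA additionally selects feature subsets and likely uses crossover:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 10))                              # placeholder descriptors
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=150)    # placeholder logBB values

def fitness(ind):
    C, gamma = 10.0 ** ind                       # decode log-scaled genes
    return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=5).mean()

pop = rng.uniform(-3, 3, size=(20, 2))           # initial population of (logC, log gamma)
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]      # selection: keep the best half
    offspring = parents[rng.integers(0, 10, size=10)] + rng.normal(0, 0.3, size=(10, 2))
    pop = np.clip(np.vstack([parents, offspring]), -3, 3)   # elitism + Gaussian mutation

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best C, gamma:", 10.0 ** best)
```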

  17. A radar-based hydrological model for flash flood prediction in the dry regions of Israel

    Science.gov (United States)

    Ronen, Alon; Peleg, Nadav; Morin, Efrat

    2014-05-01

    Flash floods are floods which follow shortly after rainfall events, and are among the most destructive natural disasters that strike people and infrastructures in humid and arid regions alike. Using a hydrological model for the prediction of flash floods in gauged and ungauged basins can help mitigate the risk and damage they cause. The sparsity of rain gauges in arid regions requires the use of radar measurements in order to get reliable quantitative precipitation estimations (QPE). While many hydrological models use radar data, only a handful do so in dry climate. This research presents a robust radar-based hydro-meteorological model built specifically for dry climate. Using this model we examine the governing factors of flash floods in the arid and semi-arid regions of Israel in particular and in dry regions in general. The hydrological model built is a semi-distributed, physically-based model, which represents the main hydrological processes in the area, namely infiltration, flow routing and transmission losses. Three infiltration functions were examined - Initial & Constant, SCS-CN and Green&Ampt. The parameters for each function were found by calibration based on 53 flood events in three catchments, and validation was performed using 55 flood events in six catchments. QPE were obtained from a C-band weather radar and adjusted using a weighted multiple regression method based on a rain gauge network. Antecedent moisture conditions were calculated using a daily recharge assessment model (DREAM). We found that the SCS-CN infiltration function performed better than the other two, with reasonable agreement between calculated and measured peak discharge. Effects of storm characteristics were studied using synthetic storms from a high resolution weather generator (HiReS-WG), and showed a strong correlation between storm speed, storm direction and rain depth over desert soils to flood volume and peak discharge.

  18. Prospective assessment of dosimetric/physiologic-based models for predicting radiation pneumonitis

    International Nuclear Information System (INIS)

    Kocak, Zafer; Borst, Gerben R.; Zeng Jing; Zhou Sumin; Hollis, Donna R.; Zhang Junan; Evans, Elizabeth S.; Folz, Rodney J.; Wong, Terrence; Kahn, Daniel; Belderbos, Jose S.A.; Lebesque, Joos V.; Marks, Lawrence B.

    2007-01-01

    Purpose: Clinical and 3D dosimetric parameters are associated with symptomatic radiation pneumonitis rates in retrospective studies. Such parameters include: mean lung dose (MLD), radiation (RT) dose to perfused lung (via SPECT), and pre-RT lung function. Based on prior publications, we defined pre-RT criteria hypothesized to be predictive for later development of pneumonitis. We herein prospectively test the predictive abilities of these dosimetric/functional parameters on 2 cohorts of patients from Duke and Netherlands Cancer Institute (NKI). Methods and Materials: For the Duke cohort, 55 eligible patients treated between 1999 and 2005 on a prospective IRB-approved study to monitor RT-induced lung injury were analyzed. A similar group of patients treated at the NKI between 1996 and 2002 were identified. Patients believed to be at high and low risk for pneumonitis were defined based on: (1) MLD; (2) OpRP (sum of predicted perfusion reduction based on regional dose-response curve); and (3) pre-RT DLCO. All doses reflected tissue density heterogeneity. The rates of grade ≥2 pneumonitis in the 'presumed' high and low risk groups were compared using Fisher's exact test. Results: In the Duke group, pneumonitis rates in patients prospectively deemed to be at 'high' vs. 'low' risk are 7 of 20 and 9 of 35, respectively; p = 0.33 one-tailed Fisher's. Similarly, comparable rates for the NKI group are 4 of 21 and 6 of 44, respectively, p = 0.41 one-tailed Fisher's. Conclusion: The prospective model appears unable to accurately segregate patients into high vs. low risk groups. However, considered retrospectively, these data are consistent with prior studies suggesting that dosimetric (e.g., MLD) and functional (e.g., PFTs or SPECT) parameters are predictive for RT-induced pneumonitis. Additional work is needed to better identify, and prospectively assess, predictors of RT-induced lung injury

  19. Spectral Neugebauer-based color halftone prediction model accounting for paper fluorescence.

    Science.gov (United States)

    Hersch, Roger David

    2014-08-20

    We present a spectral model for predicting the fluorescent emission and the total reflectance of color halftones printed on optically brightened paper. By relying on extended Neugebauer models, the proposed model accounts for the attenuation by the ink halftones of both the incident exciting light in the UV wavelength range and the emerging fluorescent emission in the visible wavelength range. The total reflectance is predicted by adding the predicted fluorescent emission relative to the incident light and the pure reflectance predicted with an ink-spreading enhanced Yule-Nielsen modified Neugebauer reflectance prediction model. The predicted fluorescent emission spectrum as a function of the amounts of cyan, magenta, and yellow inks is very accurate. It can be useful to paper and ink manufacturers who would like to study in detail the contribution of the fluorescent brighteners and the attenuation of the fluorescent emission by ink halftones.
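    The pure-reflectance part of such a model, the Yule-Nielsen modified spectral Neugebauer (YNSN) prediction, has a well-known closed form; a minimal sketch without the fluorescence and ink-spreading terms described in the abstract:

```python
import numpy as np

def yn_neugebauer(a, R_primaries, n=2.0):
    """Yule-Nielsen modified spectral Neugebauer reflectance prediction.

    a            : (k,) fractional area coverages of the Neugebauer primaries
    R_primaries  : (k, nbands) measured reflectance spectra of the primaries
    n            : Yule-Nielsen value accounting for optical dot gain
    """
    return (a @ R_primaries ** (1.0 / n)) ** n

# toy example: single-ink halftone (paper + solid ink primaries, 3 bands)
R = np.array([[0.90, 0.88, 0.85],    # bare paper
              [0.10, 0.30, 0.60]])   # solid ink
print(yn_neugebauer(np.array([0.6, 0.4]), R, n=2.0))
```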

  20. Combined prediction model for supply risk in nuclear power equipment manufacturing industry based on support vector machine and decision tree

    International Nuclear Information System (INIS)

    Shi Chunsheng; Meng Dapeng

    2011-01-01

    The prediction index for supply risk is developed based on factor identification for the nuclear equipment manufacturing industry. The supply risk prediction model is established using support vector machines and decision trees, based on an investigation of 3 important nuclear power equipment manufacturing enterprises and 60 suppliers. A final case study demonstrates that the combined model is better than the single prediction model and confirms the feasibility and reliability of this model, which provides a method to evaluate suppliers and measure supply risk. (authors)

  1. A model-based prognostic approach to predict interconnect failure using impedance analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Dae Il; Yoon, Jeong Ah [Dept. of System Design and Control Engineering. Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2016-10-15

    The reliability of electronic assemblies is largely affected by the health of interconnects, such as solder joints, which provide mechanical, electrical and thermal connections between circuit components. During field lifecycle conditions, interconnects are often subjected to a DC open circuit, one of the most common interconnect failure modes, due to cracking. An interconnect damaged by cracking is sometimes extremely hard to detect when it is a part of a daisy-chain structure, neighboring other healthy interconnects that have not yet cracked. This cracked interconnect may seem to provide a good electrical contact due to the compressive load applied by the neighboring healthy interconnects, but it can cause the occasional loss of electrical continuity under operational and environmental loading conditions in field applications. Thus, cracked interconnects can lead to the intermittent failure of electronic assemblies and eventually to permanent failure of the product or the system. This paper introduces a model-based prognostic approach to quantitatively detect and predict interconnect failure using impedance analysis and particle filtering. Impedance analysis was previously reported as a sensitive means of detecting incipient changes at the surface of interconnects, such as cracking, based on the continuous monitoring of RF impedance. To predict the time to failure, particle filtering was used as a prognostic approach using the Paris model to address the fatigue crack growth. To validate this approach, mechanical fatigue tests were conducted with continuous monitoring of RF impedance while degrading the solder joints under test due to fatigue cracking. The test results showed the RF impedance consistently increased as the solder joints were degraded due to the growth of cracks, and particle filtering predicted the time to failure of the interconnects similarly to their actual times to failure based on the early sensitivity of RF impedance.

  2. Prediction of earth rotation parameters based on improved weighted least squares and autoregressive model

    Directory of Open Access Journals (Sweden)

    Sun Zhangzhen

    2012-08-01

    Full Text Available In this paper, an improved weighted least squares (WLS) method, together with an autoregressive (AR) model, is proposed to improve the prediction accuracy of earth rotation parameters (ERP). Four weighting schemes are developed and the optimal power e for determining the weight elements is studied. The results show that the improved WLS-AR model can improve ERP prediction accuracy effectively, and that different weighting schemes should be chosen for different ERP prediction intervals.
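
    As a rough illustration of the two-stage idea above (a weighted least-squares fit of the deterministic part of an ERP-like series, followed by an AR model on the residuals), the sketch below uses a trend-plus-annual-harmonic design matrix, a power-law weighting scheme with exponent e, and synthetic data; all of these are illustrative assumptions rather than the paper's actual schemes.

    # Hedged sketch of a WLS + AR prediction scheme for an ERP-like series.
    # Design matrix, weighting scheme and data are illustrative assumptions.
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    rng = np.random.default_rng(0)
    t = np.arange(1000.0)  # days
    series = (0.1 + 1e-4 * t + 0.02 * np.sin(2 * np.pi * t / 365.25)
              + 0.005 * rng.standard_normal(t.size))

    # Weighted least squares: later observations get larger weights w_i = i**e.
    e = 1.0  # weighting exponent (tuning parameter)
    w = (np.arange(t.size) + 1.0) ** e
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / 365.25),
                         np.cos(2 * np.pi * t / 365.25)])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * series, rcond=None)
    residuals = series - X @ beta

    # AR model on the WLS residuals; forecast = deterministic part + AR part.
    ar = AutoReg(residuals, lags=10).fit()
    horizon = 30
    t_fut = np.arange(t.size, t.size + horizon, dtype=float)
    X_fut = np.column_stack([np.ones_like(t_fut), t_fut,
                             np.sin(2 * np.pi * t_fut / 365.25),
                             np.cos(2 * np.pi * t_fut / 365.25)])
    forecast = X_fut @ beta + ar.predict(start=t.size, end=t.size + horizon - 1)
    print(forecast[:5])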

  3. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    2008-06-01

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.

  4. Models based on ultraviolet spectroscopy, polyphenols, oligosaccharides and polysaccharides for prediction of wine astringency.

    Science.gov (United States)

    Boulet, Jean-Claude; Trarieux, Corinne; Souquet, Jean-Marc; Ducasse, Maris-Agnés; Caillé, Soline; Samson, Alain; Williams, Pascale; Doco, Thierry; Cheynier, Véronique

    2016-01-01

    Astringency elicited by tannins is usually assessed by tasting. Alternative methods involving tannin precipitation have been proposed, but they remain time-consuming. Our goal was to propose a faster method and investigate the links between wine composition and astringency. Red wines covering a wide range of astringency intensities, assessed by sensory analysis, were selected. Prediction models based on multiple linear regression (MLR) were built using UV spectrophotometry (190-400 nm) and chemical analysis (enological analysis, polyphenols, oligosaccharides and polysaccharides). Astringency intensity was strongly correlated (R(2) = 0.825) with tannin precipitation by bovine serum albumin (BSA). Wine absorbances at 230 nm (A230) proved more suitable for astringency prediction (R(2) = 0.705) than A280 (R(2) = 0.56) or tannin concentration estimated by phloroglucinolysis (R(2) = 0.59). Three-variable models built with A230, oligosaccharides and polysaccharides presented high R(2) and low cross-validation errors. These models confirmed that polysaccharides decrease astringency perception and indicated a positive relationship between oligosaccharides and astringency. Copyright © 2015 Elsevier Ltd. All rights reserved.
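
    A minimal sketch of the kind of three-variable multiple linear regression described above (A230, oligosaccharides and polysaccharides as predictors of sensory astringency), with leave-one-out cross-validation standing in for the reported cross-validation error; the wine data below are random placeholders, not the study's measurements.

    # Hedged sketch: three-variable MLR for astringency with leave-one-out CV.
    # The data are random placeholders, not the published wine dataset.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(1)
    n_wines = 30
    X = np.column_stack([
        rng.uniform(20, 60, n_wines),    # A230 (absorbance, arbitrary units)
        rng.uniform(0.2, 1.5, n_wines),  # oligosaccharides (g/L)
        rng.uniform(0.3, 1.2, n_wines),  # polysaccharides (g/L)
    ])
    astringency = 0.05 * X[:, 0] + 1.0 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 0.3, n_wines)

    model = LinearRegression().fit(X, astringency)
    loo_mse = -cross_val_score(LinearRegression(), X, astringency,
                               cv=LeaveOneOut(), scoring="neg_mean_squared_error").mean()
    print("R2 =", round(model.score(X, astringency), 3),
          "LOO-RMSE =", round(float(np.sqrt(loo_mse)), 3))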

  5. Multilevel binomial logistic prediction model for malignant pulmonary nodules based on texture features of CT image

    International Nuclear Information System (INIS)

    Wang Huan; Guo Xiuhua; Jia Zhongwei; Li Hongkai; Liang Zhigang; Li Kuncheng; He Qian

    2010-01-01

    Purpose: To introduce a multilevel binomial logistic prediction model-based computer-aided diagnostic (CAD) method for the diagnosis of small solitary pulmonary nodules (SPNs) that combines patient characteristics with textural features of the CT image. Materials and methods: Fourteen gray level co-occurrence matrix textural features were obtained from 2171 benign and malignant small solitary pulmonary nodules belonging to 185 patients. A multilevel binomial logistic model was applied to gain initial insights. Results: Five texture features (Inertia, Entropy, Correlation, Difference-mean, and Sum-Entropy) and patient age showed an aggregating character at the patient level and were statistically different (P < 0.05) between benign and malignant small solitary pulmonary nodules. Conclusion: Some gray level co-occurrence matrix textural features are efficient descriptive features of the CT image of small solitary pulmonary nodules and can, to some extent, benefit the diagnosis of early-stage lung cancer when combined with patient-level characteristics.

  6. A voxel-based finite element model for the prediction of bladder deformation

    Energy Technology Data Exchange (ETDEWEB)

    Xiangfei, Chai; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan [Radiation Oncology Department, Academic Medical Center, University of Amsterdam, 1105 AZ Amsterdam (Netherlands); Radiation Oncology Department, Netherlands Cancer Institute, 1066 CX Amsterdam (Netherlands); Radiation Oncology Department, Academic Medical Center, University of Amsterdam, 1105 AZ Amsterdam (Netherlands)

    2012-01-15

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to

  7. A voxel-based finite element model for the prediction of bladder deformation

    International Nuclear Information System (INIS)

    Chai Xiangfei; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan

    2012-01-01

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to

  8. Predictive modelling for shelf life determination of nutricereal based fermented baby food.

    Science.gov (United States)

    Rasane, Prasad; Jha, Alok; Sharma, Nitya

    2015-08-01

    A shelf life model based on storage temperatures was developed for a nutricereal based fermented baby food formulation. The formulated baby food samples were packaged and stored at 10, 25, 37 and 45 °C for a test storage period of 180 days. A shelf life study was conducted using consumer and semi-trained panels, along with chemical analysis (moisture and acidity). The chemical parameters (moisture and titratable acidity) were found inadequate in determining the shelf life of the formulated product. Weibull hazard analysis was used to determine the shelf life of the product based on sensory evaluation. Considering 25 and 50 % rejection probability, the shelf life of the baby food formulation was predicted to be 98 and 322 days, 84 and 271 days, 71 and 221 days and 58 and 171 days for the samples stored at 10, 25, 37 and 45 °C, respectively. A shelf life equation was proposed using the rejection times obtained from the consumer study. Finally, the formulated baby food samples were subjected to microbial analysis for the predicted shelf life period and were found microbiologically safe for consumption during the storage period of 360 days.
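
    The rejection-time figures quoted above follow directly from a Weibull hazard model once its shape and scale are known: the storage time at which a fraction p of consumers reject the product is t(p) = scale × (−ln(1 − p))^(1/shape). The sketch below shows that calculation for one storage temperature with hypothetical parameters, not the study's fitted values.

    # Hedged sketch: shelf life at a given rejection probability from a Weibull
    # hazard model, t(p) = scale * (-ln(1 - p)) ** (1 / shape).
    # Shape and scale are hypothetical, not the study's fitted values.
    import math

    def weibull_shelf_life(p_reject: float, shape: float, scale: float) -> float:
        """Storage time at which a fraction p_reject of consumers reject the product."""
        return scale * (-math.log(1.0 - p_reject)) ** (1.0 / shape)

    shape, scale = 1.8, 450.0  # hypothetical Weibull parameters at 10 °C storage
    for p in (0.25, 0.50):
        print(f"{int(p * 100)}% rejection probability: "
              f"{weibull_shelf_life(p, shape, scale):.0f} days")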

  9. Crystal plasticity-based modeling for predicting anisotropic behaviour and formability of metallic materials

    International Nuclear Information System (INIS)

    Pham, Son; Jeong, Youngung; Creuziger, Adam; Iadicola, Mark; Foecke, Tim; Rollett, Anthony

    2016-01-01

    Metallic materials often exhibit anisotropic behaviour under complex load paths because of changes in microstructure, e.g., dislocations and crystallographic texture. In this study, we present the development of a constitutive model based on dislocations, point defects and texture in order to predict the anisotropic response under complex load paths. In detail, dislocation/solute atom interactions were considered to account for strain aging and static recovery. A hardening matrix based on the interaction of dislocations was built to represent the cross-hardening of different slip systems. Clear differentiation between forward and backward slip directions of dislocations was made to describe back stresses during path changes. In addition, we included dynamic recovery in order to better account for large plastic deformation. The model is validated against experimental data for AA5754-O with path changes, e.g., Figure 1 [1]. Another effort is to include microstructure in forming predictions with a minimal increase in computational time, which enables comprehensive investigations of the influence of texture-induced anisotropy on formability [2]. These improvements were applied to predict the forming limits of various BCC textures, such as the γ, ρ, α, η and ϵ fibers and a random (R) texture. These simulations demonstrate that the crystallographic texture has significant (both positive and negative) effects on the forming limit diagrams (Figure 2). For example, the γ fiber texture, which is often sought through thermo-mechanical processing due to its high r-value, had the highest forming limit in the balanced biaxial strain path but the lowest forming limit under the plane strain path among the textures under consideration. (paper)

  10. Inference for multivariate regression model based on multiply imputed synthetic data generated via posterior predictive sampling

    Science.gov (United States)

    Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.

    2017-06-01

    The recent popularity of the use of synthetic data as a Statistical Disclosure Control technique has enabled the development of several methods of generating and analyzing such data, but these almost always rely on asymptotic distributions and are consequently not adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via Posterior Predictive Sampling. Since it is based on exact distributions, this procedure may even be used with small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.

  11. Predicting the Water Level Fluctuation in an Alpine Lake Using Physically Based, Artificial Neural Network, and Time Series Forecasting Models

    Directory of Open Access Journals (Sweden)

    Chih-Chieh Young

    2015-01-01

    Full Text Available Accurate prediction of water level fluctuation is important in lake management due to its significant impacts in various aspects. This study utilizes four model approaches to predict water levels in the Yuan-Yang Lake (YYL) in Taiwan: a three-dimensional hydrodynamic model, an artificial neural network (ANN) model (back propagation neural network, BPNN), a time series forecasting (autoregressive moving average with exogenous inputs, ARMAX) model, and a combined hydrodynamic and ANN model. Particularly, the black-box ANN model and physically based hydrodynamic model are coupled to more accurately predict water level fluctuation. Hourly water level data (a total of 7296 observations) was collected for model calibration (training) and validation. Three statistical indicators (mean absolute error, root mean square error, and coefficient of correlation) were adopted to evaluate model performances. Overall, the results demonstrate that the hydrodynamic model can satisfactorily predict hourly water level changes during the calibration stage but not for the validation stage. The ANN and ARMAX models better predict the water level than the hydrodynamic model does. Meanwhile, the results from an ANN model are superior to those by the ARMAX model in both training and validation phases. The novel proposed concept using a three-dimensional hydrodynamic model in conjunction with an ANN model has clearly shown the improved prediction accuracy for the water level fluctuation.
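
    The three statistical indicators used above for model comparison are simple to compute; a short sketch (with placeholder observed/predicted series standing in for the hourly water levels) is given below.

    # Hedged sketch: the three indicators used above (mean absolute error, root
    # mean square error, correlation coefficient) for observed vs. predicted levels.
    import numpy as np

    def evaluate(observed: np.ndarray, predicted: np.ndarray) -> dict:
        error = predicted - observed
        return {
            "MAE": float(np.mean(np.abs(error))),
            "RMSE": float(np.sqrt(np.mean(error ** 2))),
            "R": float(np.corrcoef(observed, predicted)[0, 1]),
        }

    # Placeholder series standing in for hourly water levels (m).
    obs = np.array([1.20, 1.22, 1.25, 1.24, 1.28, 1.31])
    pred = np.array([1.19, 1.23, 1.24, 1.26, 1.27, 1.30])
    print(evaluate(obs, pred))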

  12. Reliability-based economic model predictive control for generalised flow-based networks including actuators' health-aware capabilities

    Directory of Open Access Journals (Sweden)

    Grosso Juan M.

    2016-09-01

    Full Text Available This paper proposes a reliability-based economic model predictive control (MPC) strategy for the management of generalised flow-based networks, integrating some ideas on network service reliability, dynamic safety stock planning, and degradation of equipment health. The proposed strategy is based on a single-layer economic optimisation problem with dynamic constraints, which includes two enhancements with respect to existing approaches. The first enhancement considers chance-constraint programming to compute an optimal inventory replenishment policy based on a desired risk acceptability level, leading to dynamical allocation of safety stocks in flow-based networks to satisfy non-stationary flow demands. The second enhancement computes a smart distribution of the control effort and maximises actuators' availability by estimating their degradation and reliability. The proposed approach is illustrated with an application to water transport networks, using the Barcelona network as the case study.

  13. Study on Apparent Kinetic Prediction Model of the Smelting Reduction Based on the Time-Series

    Directory of Open Access Journals (Sweden)

    Guo-feng Fan

    2012-01-01

    Full Text Available A series of direct smelting reduction experiments has been carried out with high-phosphorus iron ore of different basicities using a thermogravimetric analyzer. The derivative thermogravimetric (DTG) data have been obtained from the experiments. The one-step-forward local weighted linear (LWL) method, one of the most suitable ways of predicting chaotic time series because it focuses on the errors, is used to predict DTG. Meanwhile, empirical mode decomposition-autoregressive (EMD-AR) modelling, a data mining technique in signal processing, is also used to predict DTG. The results show that (1) EMD-AR(4) is the most appropriate and its error is smaller than that of the former; (2) the root mean square error (RMSE) has decreased by about two-thirds; and (3) the normalized root mean square error (NMSE) has decreased by an order of magnitude. Finally, the EMD-AR method has been improved by golden section weighting, making its error smaller than before. Therefore, the improved EMD-AR model is a promising alternative for predicting the apparent reaction rate (DTG). The analytical results provide an important reference in the field of industrial control.

  14. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  15. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Directory of Open Access Journals (Sweden)

    S. Raia

    2014-03-01

    Full Text Available Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are based on deterministic laws. These models extend spatially the static stability models adopted in geotechnical engineering, and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the operation of the existing models lies in the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of rainfall-induced shallow landslides. For this purpose, we have modified the transient rainfall infiltration and grid-based regional slope-stability analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a probabilistic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. The outputs
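
    The probabilistic element described above amounts to re-evaluating an infinite-slope factor of safety with input parameters drawn from probability distributions; a much-simplified, single-cell sketch is shown below. The infinite-slope formula is standard, but the distributions and parameter values are illustrative assumptions, not TRIGRS-P inputs.

    # Hedged sketch: Monte Carlo sampling of an infinite-slope factor of safety
    # for one grid cell. Distributions and values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(42)
    n_samples = 10_000

    beta = np.radians(35.0)   # slope angle
    z = 1.5                   # depth of the potential failure surface (m)
    gamma = 19.0e3            # soil unit weight (N/m^3)
    u = 5.0e3                 # pore-water pressure at depth z (Pa)

    # Randomly sampled material properties (choice of distributions is assumed).
    cohesion = rng.normal(5.0e3, 1.0e3, n_samples)         # Pa
    phi = np.radians(rng.uniform(28.0, 36.0, n_samples))   # friction angle

    fs = (cohesion + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)) / (
        gamma * z * np.sin(beta) * np.cos(beta))
    print("probability of failure, P(FS < 1) =", float(np.mean(fs < 1.0)))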

  16. Performance of a process-based hydrodynamic model in predicting shoreline change

    Science.gov (United States)

    Safak, I.; Warner, J. C.; List, J. H.

    2012-12-01

    Shoreline change is controlled by a complex combination of processes that include waves, currents, sediment characteristics and availability, geologic framework, human interventions, and sea level rise. A comprehensive data set of shoreline position (14 shorelines between 1978-2002) along the continuous and relatively non-interrupted North Carolina Coast from Oregon Inlet to Cape Hatteras (65 km) reveals a spatial pattern of alternating erosion and accretion, with an erosional average shoreline change rate of -1.6 m/yr and up to -8 m/yr in some locations. This data set gives a unique opportunity to study long-term shoreline change in an area hit by frequent storm events while relatively uninfluenced by human interventions and the effects of tidal inlets. Accurate predictions of long-term shoreline change may require a model that accurately resolves surf zone processes and sediment transport patterns. Conventional methods for predicting shoreline change such as one-line models and regression of shoreline positions have been designed for computational efficiency. These methods, however, not only have several underlying restrictions (validity for small angle of wave approach, assuming bottom contours and shoreline to be parallel, depth of closure, etc.) but also their empirical estimates of sediment transport rates in the surf zone have been shown to vary greatly from the calculations of process-based hydrodynamic models. We focus on hind-casting long-term shoreline change using components of the process-based, three-dimensional coupled-ocean-atmosphere-wave-sediment transport modeling system (COAWST). COAWST is forced with historical predictions of atmospheric and oceanographic data from public-domain global models. Through a method of coupled concurrent grid-refinement approach in COAWST, the finest grid with resolution of O(10 m) that covers the surf zone along the section of interest is forced at its spatial boundaries with waves and currents computed on the grids

  17. Prediction of Sliding Friction Coefficient Based on a Novel Hybrid Molecular-Mechanical Model.

    Science.gov (United States)

    Zhang, Xiaogang; Zhang, Yali; Wang, Jianmei; Sheng, Chenxing; Li, Zhixiong

    2018-08-01

    Sliding friction is a complex phenomenon which arises from the mechanical and molecular interactions of asperities when examined at the microscale. To reveal and further understand the effects of the microscale mechanical and molecular components of the friction coefficient on overall frictional behavior, a hybrid molecular-mechanical model is developed to investigate the effects of the main factors, including different loads and surface roughness values, on the sliding friction coefficient in a boundary lubrication condition. Numerical modelling was conducted using a deterministic contact model and based on the molecular-mechanical theory of friction. In the contact model, with given external loads and surface topographies, the pressure distribution, real contact area, and elastic/plastic deformation of each single asperity contact were calculated. The asperity friction coefficient was then predicted as the sum of the mechanical and molecular components of the friction coefficient. The mechanical component was mainly determined by the contact width and elastic/plastic deformation, and the molecular component was estimated as a function of the contact area and interfacial shear stress. Numerical results were compared with experimental results and a good agreement was obtained. The model was then used to predict friction coefficients in different operating and surface conditions. Numerical results explain why the applied load has a minimal effect on the friction coefficients. They also provide insight into the effect of surface roughness on the mechanical and molecular components of the friction coefficient. It is revealed that the mechanical component dominates the friction coefficient when the surface roughness is large (Rq > 0.2 μm), while the friction coefficient is mainly determined by the molecular component when the surface is relatively smooth (Rq < 0.2 μm). Furthermore, optimal roughness values for minimizing the friction coefficient are recommended.
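
    The additive decomposition described above can be written as μ = μ_mech + μ_mol, with the molecular part scaling with the real contact area and interfacial shear stress; a toy single-asperity sketch under those assumptions is given below. The constants and the form of the mechanical (ploughing-like) term are placeholders, not the paper's calibrated model.

    # Hedged sketch: additive mechanical + molecular friction components for a
    # single asperity contact, mu = mu_mech + mu_mol. Constants are placeholders.
    import math

    def molecular_component(real_contact_area: float, shear_stress: float, load: float) -> float:
        """Molecular part: interfacial shear over the real contact area, per unit load."""
        return shear_stress * real_contact_area / load

    def mechanical_component(contact_half_width: float, asperity_radius: float) -> float:
        """Mechanical (ploughing-like) part, taken here to scale with a/R."""
        return (2.0 / (3.0 * math.pi)) * (contact_half_width / asperity_radius)

    load = 0.5              # N, normal load on the asperity
    area = 4.0e-10          # m^2, real contact area
    tau = 2.5e8             # Pa, interfacial shear stress
    a, R = 5.0e-6, 50.0e-6  # m, contact half-width and asperity radius

    mu = mechanical_component(a, R) + molecular_component(area, tau, load)
    print(f"predicted asperity friction coefficient: {mu:.3f}")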

  18. explICU: A web-based visualization and predictive modeling toolkit for mortality in intensive care patients.

    Science.gov (United States)

    Chen, Robert; Kumar, Vikas; Fitch, Natalie; Jagadish, Jitesh; Lifan Zhang; Dunn, William; Duen Horng Chau

    2015-01-01

    Preventing mortality in intensive care units (ICUs) has been a top priority in American hospitals. Predictive modeling has been shown to be effective in prediction of mortality based upon data from patients' past medical histories from electronic health records (EHRs). Furthermore, visualization of timeline events is imperative in the ICU setting in order to quickly identify trends in patient histories that may lead to mortality. With the increasing adoption of EHRs, a wealth of medical data is becoming increasingly available for secondary uses such as data exploration and predictive modeling. While data exploration and predictive modeling are useful for finding risk factors in ICU patients, the process is time consuming and requires a high level of computer programming ability. We propose explICU, a web service that hosts EHR data, displays timelines of patient events based upon user-specified preferences, performs predictive modeling in the back end, and displays results to the user via intuitive, interactive visualizations.

  19. Model predictive control of an air suspension system with damping multi-mode switching damper based on hybrid model

    Science.gov (United States)

    Sun, Xiaoqiang; Yuan, Chaochun; Cai, Yingfeng; Wang, Shaohua; Chen, Long

    2017-09-01

    This paper presents the hybrid modeling and the model predictive control of an air suspension system with damping multi-mode switching damper. Unlike traditional damper with continuously adjustable damping, in this study, a new damper with four discrete damping modes is applied to vehicle semi-active air suspension. The new damper can achieve different damping modes by just controlling the on-off statuses of two solenoid valves, which makes its damping adjustment more efficient and more reliable. However, since the damping mode switching induces different modes of operation, the air suspension system with the new damper poses challenging hybrid control problem. To model both the continuous/discrete dynamics and the switching between different damping modes, the framework of mixed logical dynamical (MLD) systems is used to establish the system hybrid model. Based on the resulting hybrid dynamical model, the system control problem is recast as a model predictive control (MPC) problem, which allows us to optimize the switching sequences of the damping modes by taking into account the suspension performance requirements. Numerical simulations results demonstrate the efficacy of the proposed control method finally.

  20. Robust entry guidance using linear covariance-based model predictive control

    Directory of Open Access Journals (Sweden)

    Jianjun Luo

    2017-02-01

    Full Text Available For atmospheric entry vehicles, guidance design can be accomplished by solving an optimization problem using optimal control theories. However, traditional design methods generally focus on the nominal performance and do not include considerations of robustness in the design process. This paper proposes a linear covariance-based model predictive control method for robust entry guidance design. Firstly, linear covariance analysis is employed to directly incorporate the robustness into the guidance design. The closed-loop covariance with the feedback-updated control command is initially formulated to provide the expected errors of the nominal state variables in the presence of uncertainties. Then, the closed-loop covariance is innovatively used as a component of the cost function to guarantee robustness and reduce the sensitivity to uncertainties. After that, model predictive control is used to solve the optimization problem, and the control commands (bank angles) are calculated. Finally, a series of simulations for different missions has been completed to demonstrate the high performance in precision and the robustness with respect to initial perturbations as well as uncertainties in the entry process. The 3σ confidence region results in the presence of uncertainties show that the robustness of the guidance has been improved, and the errors of the state variables are decreased by approximately 35%.

  1. Score-based prediction of genomic islands in prokaryotic genomes using hidden Markov models

    Directory of Open Access Journals (Sweden)

    Surovcik Katharina

    2006-03-01

    Full Text Available Abstract Background Horizontal gene transfer (HGT) is considered a strong evolutionary force shaping the content of microbial genomes in a substantial manner. It is the difference in speed enabling the rapid adaptation to changing environmental demands that distinguishes HGT from gene genesis, duplications or mutations. For a precise characterization, algorithms are needed that identify transfer events with high reliability. Frequently, the transferred pieces of DNA have a considerable length, comprise several genes and are called genomic islands (GIs) or, more specifically, pathogenicity or symbiotic islands. Results We have implemented the program SIGI-HMM that predicts GIs and the putative donor of each individual alien gene. It is based on the analysis of codon usage (CU) of each individual gene of a genome under study. The CU of each gene is compared against a carefully selected set of CU tables representing microbial donors or highly expressed genes. Multiple tests are used to identify putatively alien genes, to predict putative donors and to mask putatively highly expressed genes. Thus, we determine the states and emission probabilities of an inhomogeneous hidden Markov model working at the gene level. For the transition probabilities, we draw upon classical test theory with the intention of integrating a sensitivity controller in a consistent manner. SIGI-HMM was written in JAVA and is publicly available. It accepts as input any file created according to the EMBL format. It generates output in the common GFF format readable by genome browsers. Benchmark tests showed that the output of SIGI-HMM is in agreement with known findings. Its predictions were both consistent with annotated GIs and with predictions generated by different methods. Conclusion SIGI-HMM is a sensitive tool for the identification of GIs in microbial genomes. It allows users to interactively analyze genomes in detail and to generate or to test hypotheses about the origin of acquired
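
    The core input of the approach above is per-gene codon usage compared against reference CU tables; a small sketch of that comparison (relative codon frequencies plus a simple log-likelihood-style score against a donor table) is given below, with a made-up sequence and table. This is not the SIGI-HMM scoring scheme itself, only an illustration of the kind of quantity it works with.

    # Hedged sketch: per-gene codon usage and a simple log score against a
    # reference codon-usage table (sequence and table are made up).
    import math
    from collections import Counter

    def codons(cds: str) -> list:
        """Split a coding sequence into complete codons."""
        return [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]

    def codon_usage(cds: str) -> dict:
        """Relative frequency of each codon in the coding sequence."""
        counts = Counter(codons(cds))
        total = sum(counts.values())
        return {c: n / total for c, n in counts.items()}

    def log_score(cds: str, reference: dict, floor: float = 1e-4) -> float:
        """Sum of log reference frequencies over the gene's codons."""
        return sum(math.log(reference.get(c, floor)) for c in codons(cds))

    gene = "ATGGCTGCTAAAGCTGGTTAA"  # toy coding sequence
    donor_table = {"ATG": 0.02, "GCT": 0.03, "AAA": 0.04, "GGT": 0.02, "TAA": 0.01}
    print(codon_usage(gene))
    print("donor score:", round(log_score(gene, donor_table), 2))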

  2. Predictive Modeling of Mechanical Properties of Welded Joints Based on Dynamic Fuzzy RBF Neural Network

    Directory of Open Access Journals (Sweden)

    ZHANG Yongzhi

    2016-10-01

    Full Text Available A dynamic fuzzy RBF neural network model was built to predict the mechanical properties of welded joints, with the purpose of overcoming the shortcomings of static neural networks in structural identification, dynamic sample training, and learning algorithms. The structure and parameters of the model are no longer fixed in advance but are adjusted adaptively during training, which makes the model suitable for learning from dynamic sample data; the learning algorithm introduces hierarchical learning and a fuzzy rule pruning strategy to accelerate training and make the model more compact. The model was simulated using TIG welding test data for TC4 titanium alloy of three thicknesses and different processes. The results show that the model has high prediction accuracy, is suitable for predicting the mechanical properties of welded joints, and opens up a new way for on-line control of the welding process.

  3. Predictive Modeling of Fast-Curing Thermosets in Nozzle-Based Extrusion

    Science.gov (United States)

    Xie, Jingjin; Randolph, Robert; Simmons, Gary; Hull, Patrick V.; Mazzeo, Aaron D.

    2017-01-01

    This work presents an approach to modeling the dynamic spreading and curing behavior of thermosets in nozzle-based extrusions. Thermosets cover a wide range of materials, some of which permit low-temperature processing with subsequent high-temperature and high-strength working properties. Extruding thermosets may overcome the limited working temperatures and strengths of conventional thermoplastic materials used in additive manufacturing. This project aims to produce technology for the fabrication of thermoset-based structures leveraging advances made in nozzle-based extrusion, such as fused deposition modeling (FDM), material jetting, and direct writing. Understanding the synergistic interactions between spreading and fast curing of extruded thermosetting materials will provide essential insights for applications that require accurate dimensional controls, such as additive manufacturing [1], [2] and centrifugal coating/forming [3]. Two types of thermally curing thermosets -- one being a soft silicone (Ecoflex 0050) and the other being a toughened epoxy (G/Flex) -- served as the test materials in this work to obtain models for cure kinetics and viscosity. The developed models align with extensive measurements made with differential scanning calorimetry (DSC) and rheology. DSC monitors the change in the heat of reaction, which reflects the rate and degree of cure at different crosslinking stages. Rheology measures the change in complex viscosity, shear moduli, yield stress, and other properties dictated by chemical composition. By combining DSC and rheological measurements, it is possible to establish a set of models profiling the cure kinetics and chemorheology without prior knowledge of chemical composition, which is usually necessary for sophisticated mechanistic modeling. In this work, we conducted both isothermal and dynamic measurements with both DSC and rheology. With the developed models, numerical simulations yielded predictions of diameter and height of
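
    A common way to express the DSC-derived cure kinetics mentioned above is an autocatalytic rate law, dα/dt = A·exp(−Ea/RT)·α^m·(1−α)^n; the sketch below integrates such a law for an isothermal cure. The kinetic parameters are made up for illustration and are not the fitted values for the silicone or epoxy named above.

    # Hedged sketch: isothermal cure of a thermoset with an autocatalytic rate law,
    # d(alpha)/dt = A * exp(-Ea / (R * T)) * alpha**m * (1 - alpha)**n.
    # All kinetic parameters are illustrative, not the fitted resin values.
    import numpy as np
    from scipy.integrate import solve_ivp

    A, Ea = 1.0e7, 60.0e3  # pre-exponential factor (1/s), activation energy (J/mol)
    m, n = 0.5, 1.5        # autocatalytic exponents
    R = 8.314              # gas constant, J/(mol K)
    T = 353.15             # isothermal cure temperature (K)

    def cure_rate(t, alpha):
        a = np.clip(alpha, 1e-6, 1.0)  # avoid the alpha = 0 singularity
        return A * np.exp(-Ea / (R * T)) * a**m * (1.0 - a)**n

    sol = solve_ivp(cure_rate, (0.0, 3600.0), [1e-3], t_eval=np.linspace(0, 3600, 7))
    for t, a in zip(sol.t, sol.y[0]):
        print(f"t = {t:5.0f} s, degree of cure = {a:.2f}")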

  4. Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.

    Science.gov (United States)

    Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L

    2017-10-01

    The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram is a standardized and commonly available tool used to estimate prognosis in postanoxic patients. The identification of pathological EEG patterns with poor prognosis relies, however, primarily on visual EEG scoring by experts. We introduced a model-based approach of EEG analysis (state space model) that allows for an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented a model-based approach (state space analysis) to quantify EEG background variability independent from visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: lower mean velocity in temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome and correlated with EEG patterns such as generalized periodic discharges. Model-based quantitative EEG analysis (state space analysis) provides a novel, complementary marker for prognosis in postanoxic encephalopathy. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    Science.gov (United States)

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  6. Model-based predictive control scheme for cost optimization and balancing services for supermarket refrigeration Systems

    NARCIS (Netherlands)

    Weerts, H.H.M.; Shafiei, S.E.; Stoustrup, J.; Izadi-Zamanabadi, R.; Boje, E.; Xia, X.

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate the regulatory power services as well as energy cost optimization of such systems in the smart grid. Nonlinear dynamics existing in large-scale refrigeration plants challenge the predictive

  7. Model-based chatter stability prediction and detection for the turning of a flexible workpiece

    Science.gov (United States)

    Lu, Kaibo; Lian, Zisheng; Gu, Fengshou; Liu, Hunju

    2018-02-01

    Machining long slender workpieces still presents a technical challenge on the shop floor due to their low stiffness and damping. Regenerative chatter is a major hindrance in machining processes, reducing the geometric accuracy and dynamic stability of the cutting system. This study has been motivated by the fact that chatter occurrence is generally related to the cutting position in straight turning of slender workpieces, which has seldom been investigated comprehensively in the literature. In the present paper, a predictive chatter model of turning a tailstock-supported slender workpiece considering the cutting position change during machining is explored. Based on linear stability analysis and the stiffness distribution at different cutting positions along the workpiece, the effect of the cutting tool movement along the length of the workpiece on chatter stability is studied. As a result, an entire stability chart for a single cutting pass is constructed. Through this stability chart the critical cutting condition and the chatter onset location along the workpiece in a turning operation can be estimated. The difference between the predicted tool locations and the experimental results was within 9% at high speed cutting. Also, on the basis of the predictive model, the dynamic behavior during chatter can be inferred: when chatter arises at some cutting location it will continue for a period of time until another specified location is reached. The experimental observation is in good agreement with the theoretical inference. With respect to chatter detection, besides the delay strategy and overlap processing technique, a relative threshold algorithm is proposed to detect chatter by comparing the spectrum and variance of the acquired acceleration signals with a reference saved during stable cutting. The chatter monitoring method has shown reliability for various machining conditions.
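
    A relative-threshold check of the kind described above can be sketched as comparing the variance and spectral content of the current acceleration window with a reference recorded during stable cutting; the threshold factors and the synthetic signals below are placeholders, not the tuned values of the paper.

    # Hedged sketch: relative-threshold chatter check against a stable-cutting
    # reference window. Threshold factors and synthetic signals are placeholders.
    import numpy as np

    def chatter_flag(window, reference, var_factor=3.0, spec_factor=3.0):
        """Flag chatter if variance or peak spectral amplitude exceeds the
        stable-cutting reference by the given factors."""
        var_ratio = np.var(window) / np.var(reference)
        spec_now = np.abs(np.fft.rfft(window * np.hanning(len(window))))
        spec_ref = np.abs(np.fft.rfft(reference * np.hanning(len(reference))))
        return var_ratio > var_factor or spec_now.max() / spec_ref.max() > spec_factor

    rng = np.random.default_rng(3)
    t = np.arange(2048) / 10_000                           # 10 kHz sampling assumed
    stable = 0.1 * rng.standard_normal(t.size)             # stable-cutting reference
    chatter = stable + 0.8 * np.sin(2 * np.pi * 1200 * t)  # window with a chatter tone
    print("stable window flagged: ", chatter_flag(stable, stable))
    print("chatter window flagged:", chatter_flag(chatter, stable))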

  8. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    Directory of Open Access Journals (Sweden)

    Ching-Hsue Cheng

    2018-01-01

    Full Text Available The issue of financial distress prediction plays an important and challenging research topic in the financial field. Currently, there have been many methods for predicting firm bankruptcy and financial crisis, including the artificial intelligence and the traditional statistical methods, and the past studies have shown that the prediction result of the artificial intelligence method is better than the traditional statistical method. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed the nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages including the following: (i) the proposed model is different from the previous models lacking the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress for providing references to the investors and decision makers. The result shows that the proposed method is better than the listing classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  9. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    Science.gov (United States)

    2018-01-01

    The issue of financial distress prediction plays an important and challenging research topic in the financial field. Currently, there have been many methods for predicting firm bankruptcy and financial crisis, including the artificial intelligence and the traditional statistical methods, and the past studies have shown that the prediction result of the artificial intelligence method is better than the traditional statistical method. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed the nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages including the following: (i) the proposed model is different from the previous models lacking the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress for providing references to the investors and decision makers. The result shows that the proposed method is better than the listing classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies. PMID:29765399

  10. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress.

    Science.gov (United States)

    Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He

    2018-01-01

    The issue of financial distress prediction plays an important and challenging research topic in the financial field. Currently, there have been many methods for predicting firm bankruptcy and financial crisis, including the artificial intelligence and the traditional statistical methods, and the past studies have shown that the prediction result of the artificial intelligence method is better than the traditional statistical method. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed the nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages including the following: (i) the proposed model is different from the previous models lacking the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress for providing references to the investors and decision makers. The result shows that the proposed method is better than the listing classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  11. Construction of risk prediction model of type 2 diabetes mellitus based on logistic regression

    Directory of Open Access Journals (Sweden)

    Li Jian

    2017-01-01

    Full Text Available Objective: to construct a multi-factor prediction model for the individual risk of T2DM, and to explore new ideas for early warning, prevention, and personalized health services for T2DM. Methods: logistic regression techniques were used to screen the risk factors for T2DM and to construct the risk prediction model. Results: Male risk prediction model: logit(P) = BMI × 0.735 + vegetables × (−0.671) + age × 0.838 + diastolic pressure × 0.296 + physical activity × (−2.287) + sleep × (−0.009) + smoking × 0.214; Female risk prediction model: logit(P) = BMI × 1.979 + vegetables × (−0.292) + age × 1.355 + diastolic pressure × 0.522 + physical activity × (−2.287) + sleep × (−0.010). The area under the ROC curve for males was 0.83, with sensitivity 0.72 and specificity 0.86; the area under the ROC curve for females was 0.84, with sensitivity 0.75 and specificity 0.90. Conclusion: The model data come from a nested case-control study; the risk prediction model was established using well-established logistic regression techniques and shows high predictive sensitivity, specificity, and stability.
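
    Given the regression equations above, an individual's predicted risk follows from the logistic link P = 1 / (1 + exp(−logit)); the sketch below applies the male equation to a hypothetical subject. How each predictor is coded (units, binary indicators) and the intercept are not stated in the abstract, so they are assumptions here.

    # Hedged sketch: individual T2DM risk from the male logistic equation above,
    # P = 1 / (1 + exp(-logit)). Predictor coding and the intercept are assumptions.
    import math

    MALE_COEFFICIENTS = {
        "BMI": 0.735, "vegetables": -0.671, "age": 0.838,
        "diastolic_pressure": 0.296, "physical_activity": -2.287,
        "sleep": -0.009, "smoking": 0.214,
    }

    def t2dm_risk(predictors: dict, coefficients: dict, intercept: float = 0.0) -> float:
        logit = intercept + sum(coefficients[k] * predictors[k] for k in coefficients)
        return 1.0 / (1.0 + math.exp(-logit))

    # Hypothetical subject with predictors coded as 0/1 risk levels (an assumption).
    subject = {"BMI": 1, "vegetables": 0, "age": 1, "diastolic_pressure": 1,
               "physical_activity": 0, "sleep": 1, "smoking": 1}
    print(f"predicted risk: {t2dm_risk(subject, MALE_COEFFICIENTS):.2f}")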

  12. A prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2, based on simple clinical parameters.

    Science.gov (United States)

    Koeneman, Margot M; van Lint, Freyja H M; van Kuijk, Sander M J; Smits, Luc J M; Kooreman, Loes F S; Kruitwagen, Roy F P M; Kruse, Arnold J

    2017-01-01

    This study aims to develop a prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2 (CIN 2) lesions based on simple clinicopathological parameters. The study was conducted at Maastricht University Medical Center, the Netherlands. The prediction model was developed in a retrospective cohort of 129 women with a histologic diagnosis of CIN 2 who were managed by watchful waiting for 6 to 24 months. Five potential predictors for spontaneous regression were selected based on the literature and expert opinion and were analyzed in a multivariable logistic regression model, followed by backward stepwise deletion based on the Wald test. The prediction model was internally validated by the bootstrapping method. Discriminative capacity and accuracy were tested by assessing the area under the receiver operating characteristic curve (AUC) and a calibration plot. Disease regression within 24 months was seen in 91 (71%) of 129 patients. A prediction model was developed including the following variables: smoking, Papanicolaou test outcome before the CIN 2 diagnosis, concomitant CIN 1 diagnosis in the same biopsy, and more than 1 biopsy containing CIN 2. Not smoking, Papanicolaou class predictive of disease regression. The AUC was 69.2% (95% confidence interval, 58.5%-79.9%), indicating a moderate discriminative ability of the model. The calibration plot indicated good calibration of the predicted probabilities. This prediction model for spontaneous regression of CIN 2 may aid physicians in the personalized management of these lesions. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Improving predictive power of physically based rainfall-induced shallow landslide models: a probablistic approach

    Science.gov (United States)

    Raia, S.; Alvioli, M.; Rossi, M.; Baum, R.L.; Godt, J.W.; Guzzetti, F.

    2013-01-01

    Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are deterministic. These models extend spatially the static stability models adopted in geotechnical engineering and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the existing models is the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of shallow rainfall-induced landslides. For the purpose, we have modified the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a stochastic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. The outputs of several model runs obtained varying the input parameters

  14. Soil erosion model predictions using parent material/soil texture-based parameters compared to using site-specific parameters

    Science.gov (United States)

    R. B. Foltz; W. J. Elliot; N. S. Wagenbrenner

    2011-01-01

    Forested areas disturbed by access roads produce large amounts of sediment. One method to predict erosion and, hence, manage forest roads is the use of physically based soil erosion models. A perceived advantage of a physically based model is that it can be parameterized at one location and applied at another location with similar soil texture or geological parent...

  15. A study on model fidelity for model predictive control-based obstacle avoidance in high-speed autonomous ground vehicles

    Science.gov (United States)

    Liu, Jiechao; Jayakumar, Paramsothy; Stein, Jeffrey L.; Ersal, Tulga

    2016-11-01

    This paper investigates the level of model fidelity needed in order for a model predictive control (MPC)-based obstacle avoidance algorithm to be able to safely and quickly avoid obstacles even when the vehicle is close to its dynamic limits. The context of this work is large autonomous ground vehicles that manoeuvre at high speed within unknown, unstructured, flat environments and have significant vehicle dynamics-related constraints. Five different representations of vehicle dynamics models are considered: four variations of the two degrees-of-freedom (DoF) representation as lower fidelity models and a fourteen DoF representation with combined-slip Magic Formula tyre model as a higher fidelity model. It is concluded that the two DoF representation that accounts for tyre nonlinearities and longitudinal load transfer is necessary for the MPC-based obstacle avoidance algorithm in order to operate the vehicle at its limits within an environment that includes large obstacles. For less challenging environments, however, the two DoF representation with linear tyre model and constant axle loads is sufficient.
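
    For illustration, a sketch of the linear two-DoF single-track (bicycle) model that the lower-fidelity representations build on; all parameters are invented, and the nonlinear tyre and load-transfer extensions discussed in the paper are not included:

```python
# Hedged sketch of a linear two-DoF (lateral velocity / yaw rate) single-track
# vehicle model, the kind of low-fidelity prediction model compared in the study.
import numpy as np

def two_dof_derivatives(state, delta, U=20.0, m=2500.0, Iz=4500.0,
                        a=1.4, b=1.6, Caf=8e4, Car=9e4):
    """state = [v, r]: lateral velocity [m/s], yaw rate [rad/s]; delta: steer angle [rad]."""
    v, r = state
    alpha_f = (v + a * r) / U - delta   # front slip angle (small-angle approximation)
    alpha_r = (v - b * r) / U           # rear slip angle
    Fyf, Fyr = -Caf * alpha_f, -Car * alpha_r
    dv = (Fyf + Fyr) / m - U * r
    dr = (a * Fyf - b * Fyr) / Iz
    return np.array([dv, dr])

# Simple forward-Euler rollout, as an MPC prediction model might do internally.
state, dt = np.array([0.0, 0.0]), 0.01
for k in range(200):
    state = state + dt * two_dof_derivatives(state, delta=0.05)
print("lateral velocity, yaw rate after 2 s:", state)
```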

  16. Combined Active and Reactive Power Control of Wind Farms based on Model Predictive Control

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Wang, Jianhui

    2017-01-01

    This paper proposes a combined wind farm controller based on Model Predictive Control (MPC). Compared with the conventional decoupled active and reactive power control, the proposed control scheme considers the significant impact of active power on voltage variations due to the low X/R ratio...... of wind farm collector systems. The voltage control is improved. Besides, by coordination of active and reactive power, the Var capacity is optimized to prevent potential failures due to Var shortage, especially when the wind farm operates close to its full load. An analytical method is used to calculate...... the sensitivity coefficients to improve the computation efficiency and overcome the convergence problem. Two control modes are designed for both normal and emergency conditions. A wind farm with 20 wind turbines was used to verify the proposed combined control scheme....

  17. Disturbance observer based model predictive control for accurate atmospheric entry of spacecraft

    Science.gov (United States)

    Wu, Chao; Yang, Jun; Li, Shihua; Li, Qi; Guo, Lei

    2018-05-01

    Facing the complex aerodynamic environment of Mars atmosphere, a composite atmospheric entry trajectory tracking strategy is investigated in this paper. External disturbances, initial states uncertainties and aerodynamic parameters uncertainties are the main problems. The composite strategy is designed to solve these problems and improve the accuracy of Mars atmospheric entry. This strategy includes a model predictive control for optimized trajectory tracking performance, as well as a disturbance observer based feedforward compensation for external disturbances and uncertainties attenuation. 500-run Monte Carlo simulations show that the proposed composite control scheme achieves more precise Mars atmospheric entry (3.8 km parachute deployment point distribution error) than the baseline control scheme (8.4 km) and integral control scheme (5.8 km).

  18. Automated Text Analysis Based on Skip-Gram Model for Food Evaluation in Predicting Consumer Acceptance.

    Science.gov (United States)

    Kim, Augustine Yongwhi; Ha, Jin Gwan; Choi, Hoduk; Moon, Hyeonjoon

    2018-01-01

    The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers' online reviews. Several studies in food sensory evaluation have been presented for consumer acceptance. However, these studies require a taste-descriptive word lexicon and are not suitable for analyzing a large number of evaluators to predict consumer acceptance. In this paper, an automated text analysis method for food evaluation is presented to analyze and compare two recently introduced jjampong ramen types (mixed seafood noodles). To avoid building a sensory word lexicon, consumers' reviews are collected from SNS. Then, by training a word embedding model with the acquired reviews, the words in the large amount of review text are converted into vectors. Based on these words represented as vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by a comparison with the results from an actual consumer preference taste evaluation.
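
    A hedged sketch of the skip-gram training step using gensim (not the authors' pipeline); the toy review sentences are invented placeholders:

```python
# Hedged sketch: train word embeddings on tokenized review sentences and query
# words related to a taste term. gensim >= 4 API assumed.
from gensim.models import Word2Vec

reviews = [
    ["this", "jjampong", "ramen", "is", "spicy", "and", "rich"],
    ["the", "broth", "smells", "like", "fresh", "seafood"],
    ["too", "salty", "but", "the", "noodles", "are", "chewy"],
]

model = Word2Vec(sentences=reviews, vector_size=50, window=3,
                 min_count=1, sg=1, epochs=50)   # sg=1 selects the skip-gram model

print(model.wv.most_similar("spicy", topn=3))
```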

  19. Model Predictive Control-based gait pattern generation for wearable exoskeletons.

    Science.gov (United States)

    Wang, Letian; van Asseldonk, Edwin H F; van der Kooij, Herman

    2011-01-01

    This paper introduces a new method for controlling wearable exoskeletons that do not need predefined joint trajectories. Instead, it only needs basic gait descriptors such as step length, swing duration, and walking speed. End point Model Predictive Control (MPC) is used to generate the online joint trajectories based on these gait parameters. Real-time ability and control performance of the method during the swing phase of gait cycle is studied in this paper. Experiments are performed by helping a human subject swing his leg with different patterns in the LOPES gait trainer. Results show that the method is able to assist subjects to make steps with different step length and step duration without predefined joint trajectories and is fast enough for real-time implementation. Future study of the method will focus on controlling the exoskeletons in the entire gait cycle. © 2011 IEEE

  20. Iterated non-linear model predictive control based on tubes and contractive constraints.

    Science.gov (United States)

    Murillo, M; Sánchez, G; Giovanini, L

    2016-05-01

    This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamic around a given trajectory. A linear time varying model is obtained and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding-tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter type unmanned aerial vehicle. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    Science.gov (United States)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control (AGC) is a key technology to maintain real-time power generation and load balance, and to ensure the quality of power supply. Power grids require each power generation unit to have satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly-used method to calculate these indices is based on particular data samples from AGC responses and will lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between performance indices and load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.

  2. Automated Text Analysis Based on Skip-Gram Model for Food Evaluation in Predicting Consumer Acceptance

    Directory of Open Access Journals (Sweden)

    Augustine Yongwhi Kim

    2018-01-01

    Full Text Available The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers’ online reviews. Several studies in food sensory evaluation have been presented for consumer acceptance. However, these studies require a taste-descriptive word lexicon and are not suitable for analyzing a large number of evaluators to predict consumer acceptance. In this paper, an automated text analysis method for food evaluation is presented to analyze and compare two recently introduced jjampong ramen types (mixed seafood noodles). To avoid building a sensory word lexicon, consumers’ reviews are collected from SNS. Then, by training a word embedding model with the acquired reviews, the words in the large amount of review text are converted into vectors. Based on these words represented as vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by a comparison with the results from an actual consumer preference taste evaluation.

  3. Virtual-view PSNR prediction based on a depth distortion tolerance model and support vector machine.

    Science.gov (United States)

    Chen, Fen; Chen, Jiali; Peng, Zongju; Jiang, Gangyi; Yu, Mei; Chen, Hua; Jiao, Renzhi

    2017-10-20

    Quality prediction of virtual-views is important for free viewpoint video systems, and can be used as feedback to improve the performance of depth video coding and virtual-view rendering. In this paper, an efficient virtual-view peak signal to noise ratio (PSNR) prediction method is proposed. First, the effect of depth distortion on virtual-view quality is analyzed in detail, and a depth distortion tolerance (DDT) model that determines the DDT range is presented. Next, the DDT model is used to predict the virtual-view quality. Finally, a support vector machine (SVM) is utilized to train and obtain the virtual-view quality prediction model. Experimental results show that the Spearman's rank correlation coefficient and root mean square error between the actual PSNR and the predicted PSNR by DDT model are 0.8750 and 0.6137 on average, and by the SVM prediction model are 0.9109 and 0.5831. The computational complexity of the SVM method is lower than the DDT model and the state-of-the-art methods.
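
    A hedged sketch of the final step, training a support vector regressor on depth-distortion features to predict PSNR; the features and synthetic data are illustrative assumptions, not the paper's feature set:

```python
# Hedged sketch: SVM regression of virtual-view PSNR from simple features.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# e.g. [mean depth distortion, fraction of pixels outside the DDT range] (assumed features)
X = rng.uniform(0, 1, size=(200, 2))
psnr = 40 - 12 * X[:, 1] - 4 * X[:, 0] + rng.normal(0, 0.5, 200)  # synthetic target [dB]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:150], psnr[:150])

pred = model.predict(X[150:])
rmse = np.sqrt(np.mean((pred - psnr[150:]) ** 2))
print("hold-out RMSE [dB]:", round(rmse, 3))
```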

  4. Solar energy prediction and verification using operational model forecasts and ground-based solar measurements

    International Nuclear Information System (INIS)

    Kosmopoulos, P.G.; Kazadzis, S.; Lagouvardos, K.; Kotroni, V.; Bais, A.

    2015-01-01

    The present study focuses on predictions of solar energy and their verification, using ground-based solar measurements from the Hellenic Network for Solar Energy and the National Observatory of Athens network, as well as solar radiation operational forecasts provided by the MM5 mesoscale model. The evaluation was carried out independently for the different networks, for two forecast horizons (1 and 2 days ahead), for the seasons of the year, for varying solar elevation, for the indicative energy potential of the area, and for four classes of cloud cover based on the calculated clearness index (k_t): CS (clear sky), SC (scattered clouds), BC (broken clouds) and OC (overcast). The seasonal dependence presented relative root mean square error (rRMSE) values ranging from 15% (summer) to 60% (winter), while the solar elevation dependence revealed a high effectiveness and reliability near local noon (rRMSE ∼30%). An increase of the errors with cloudiness was also observed. For CS with mean GHI (global horizontal irradiance) ∼ 650 W/m² the errors are 8%, for SC 20%, and for BC and OC the errors were greater (>40%) but correspond to much lower radiation levels (<120 W/m²) and consequently lower energy potential impact. The total energy potential for each ground station ranges from 1.5 to 1.9 MWh/m², while the mean monthly forecast error was found to be consistently below 10%. - Highlights: • Long term measurements at different atmospheric cases are needed for energy forecasting model evaluations. • The total energy potential at the Greek sites presented ranges from 1.5 to 1.9 MWh/m². • Mean monthly energy forecast errors are within 10% for all cases analyzed. • Cloud presence results in an additional forecast error that varies with the cloud cover.
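
    A small sketch of the verification metric, computing relative RMSE of forecast GHI stratified by clearness-index class; the class thresholds and the synthetic data are assumptions for illustration:

```python
# Hedged sketch: rRMSE of forecast GHI by clearness-index class.
import numpy as np

def rrmse(forecast, measured):
    """Relative RMSE in percent of the mean measured irradiance."""
    return 100.0 * np.sqrt(np.mean((forecast - measured) ** 2)) / np.mean(measured)

def sky_class(kt):
    """Map clearness index k_t to the four classes used in the study (thresholds assumed)."""
    if kt >= 0.7:
        return "CS"   # clear sky
    if kt >= 0.5:
        return "SC"   # scattered clouds
    if kt >= 0.3:
        return "BC"   # broken clouds
    return "OC"       # overcast

rng = np.random.default_rng(2)
measured = rng.uniform(50, 900, 1000)                 # W/m^2
forecast = measured * rng.normal(1.0, 0.2, 1000)
kt = rng.uniform(0.1, 0.9, 1000)

classes = np.array([sky_class(k) for k in kt])
for c in ("CS", "SC", "BC", "OC"):
    sel = classes == c
    print(c, f"rRMSE = {rrmse(forecast[sel], measured[sel]):.1f}%")
```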

  5. Case studies of extended model-based flood forecasting: prediction of dike strength and flood impacts

    Science.gov (United States)

    Stuparu, Dana; Bachmann, Daniel; Bogaard, Tom; Twigt, Daniel; Verkade, Jan; de Bruijn, Karin; de Leeuw, Annemargreet

    2017-04-01

    Flood forecasts, warning and emergency response are important components in flood risk management. Most flood forecasting systems use models to translate weather predictions to forecasted discharges or water levels. However, this information is often not sufficient for real time decisions. A sound understanding of the reliability of embankments and flood dynamics is needed to react timely and reduce the negative effects of the flood. Where are the weak points in the dike system? When, how much and where the water will flow? When and where is the greatest impact expected? Model-based flood impact forecasting tries to answer these questions by adding new dimensions to the existing forecasting systems by providing forecasted information about: (a) the dike strength during the event (reliability), (b) the flood extent in case of an overflow or a dike failure (flood spread) and (c) the assets at risk (impacts). This work presents three study-cases in which such a set-up is applied. Special features are highlighted. Forecasting of dike strength. The first study-case focusses on the forecast of dike strength in the Netherlands for the river Rhine branches Waal, Nederrijn and IJssel. A so-called reliability transformation is used to translate the predicted water levels at selected dike sections into failure probabilities during a flood event. The reliability of a dike section is defined by fragility curves - a summary of the dike strength conditional to the water level. The reliability information enhances the emergency management and inspections of embankments. Ensemble forecasting. The second study-case shows the setup of a flood impact forecasting system in Dumfries, Scotland. The existing forecasting system is extended with a 2D flood spreading model in combination with the Delft-FIAT impact model. Ensemble forecasts are used to make use of the uncertainty in the precipitation forecasts, which is useful to quantify the certainty of a forecasted flood event. From global

  6. Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices

    Science.gov (United States)

    Michaelides, Stylianos

    Flip Chip on Board (FCOB) and Chip-Scale Packages (CSPs) are relatively new technologies that are being increasingly used in the electronic packaging industry. Compared to the more widely used face-up wirebonding and TAB technologies, flip-chips and most CSPs provide the shortest possible leads, lower inductance, higher frequency, better noise control, higher density, greater input/output (I/O), smaller device footprint and lower profile. However, due to the short history and due to the introduction of several new electronic materials, designs, and processing conditions, very limited work has been done to understand the role of material, geometry, and processing parameters on the reliability of flip-chip devices. Also, with the ever-increasing complexity of semiconductor packages and with the continued reduction in time to market, it is too costly to wait until the later stages of design and testing to discover that the reliability is not satisfactory. The objective of the research is to develop integrated process-reliability models that will take into consideration the mechanics of assembly processes to be able to determine the reliability of face-down devices under thermal cycling and long-term temperature dwelling. The models incorporate the time and temperature-dependent constitutive behavior of various materials in the assembly to be able to predict failure modes such as die cracking and solder cracking. In addition, the models account for process-induced defects and macro-micro features of the assembly. Creep-fatigue and continuum-damage mechanics models for the solder interconnects and fracture-mechanics models for the die have been used to determine the reliability of the devices. The results predicted by the models have been successfully validated against experimental data. The validated models have been used to develop qualification and test procedures for implantable medical devices. In addition, the research has helped develop innovative face

  7. Nonlinear Model Predictive Control for Solid Oxide Fuel Cell System Based On Wiener Model

    OpenAIRE

    T. H. Lee; J. H. Park; S. M. Lee; S. C. Lee

    2010-01-01

    In this paper, we consider Wiener nonlinear model for solid oxide fuel cell (SOFC). The Wiener model of the SOFC consists of a linear dynamic block and a static output non-linearity followed by the block, in which linear part is approximated by state-space model and the nonlinear part is identified by a polynomial form. To control the SOFC system, we have to consider various view points such as operating conditions, another constraint conditions, change of load current and so on. A change of ...

  8. Climate-based models for pulsed resources improve predictability of consumer population dynamics: outbreaks of house mice in forest ecosystems.

    Directory of Open Access Journals (Sweden)

    E Penelope Holland

    Full Text Available Accurate predictions of the timing and magnitude of consumer responses to episodic seeding events (masts are important for understanding ecosystem dynamics and for managing outbreaks of invasive species generated by masts. While models relating consumer populations to resource fluctuations have been developed successfully for a range of natural and modified ecosystems, a critical gap that needs addressing is better prediction of resource pulses. A recent model used change in summer temperature from one year to the next (ΔT for predicting masts for forest and grassland plants in New Zealand. We extend this climate-based method in the framework of a model for consumer-resource dynamics to predict invasive house mouse (Mus musculus outbreaks in forest ecosystems. Compared with previous mast models based on absolute temperature, the ΔT method for predicting masts resulted in an improved model for mouse population dynamics. There was also a threshold effect of ΔT on the likelihood of an outbreak occurring. The improved climate-based method for predicting resource pulses and consumer responses provides a straightforward rule of thumb for determining, with one year's advance warning, whether management intervention might be required in invaded ecosystems. The approach could be applied to consumer-resource systems worldwide where climatic variables are used to model the size and duration of resource pulses, and may have particular relevance for ecosystems where global change scenarios predict increased variability in climatic events.

  9. Chemical structure-based predictive model for methanogenic anaerobic biodegradation potential.

    Science.gov (United States)

    Meylan, William; Boethling, Robert; Aronson, Dallas; Howard, Philip; Tunkel, Jay

    2007-09-01

    Many screening-level models exist for predicting aerobic biodegradation potential from chemical structure, but anaerobic biodegradation generally has been ignored by modelers. We used a fragment contribution approach to develop a model for predicting biodegradation potential under methanogenic anaerobic conditions. The new model has 37 fragments (substructures) and classifies a substance as either fast or slow, relative to the potential to be biodegraded in the "serum bottle" anaerobic biodegradation screening test (Organization for Economic Cooperation and Development Guideline 311). The model correctly classified 90, 77, and 91% of the chemicals in the training set (n = 169) and two independent validation sets (n = 35 and 23), respectively. Accuracy of predictions of fast and slow degradation was equal for training-set chemicals, but fast-degradation predictions were less accurate than slow-degradation predictions for the validation sets. Analysis of the signs of the fragment coefficients for this and the other (aerobic) Biowin models suggests that in the context of simple group contribution models, the majority of positive and negative structural influences on ultimate degradation are the same for aerobic and methanogenic anaerobic biodegradation.
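
    A minimal sketch of how a fragment-contribution classifier of this kind operates; the fragments, coefficients, and threshold below are invented for illustration and are not the published model values:

```python
# Hedged sketch: each substructure has a fitted coefficient, and the sum over
# fragment counts (plus an intercept) is thresholded into fast/slow degradation.
coefficients = {
    "ester": +0.45,
    "aromatic_ring": -0.30,
    "long_alkyl_chain": +0.20,
    "quaternary_carbon": -0.55,
}
intercept = 0.10

def predict_anaerobic(fragment_counts, threshold=0.5):
    """Return 'fast' or 'slow' from a dict of substructure counts."""
    score = intercept + sum(coefficients.get(f, 0.0) * n
                            for f, n in fragment_counts.items())
    return "fast" if score >= threshold else "slow"

print(predict_anaerobic({"ester": 2, "aromatic_ring": 1}))               # -> fast
print(predict_anaerobic({"aromatic_ring": 2, "quaternary_carbon": 1}))   # -> slow
```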

  10. A Predictive Model of Permeability for Fractal-Based Rough Rock Fractures during Shear

    Science.gov (United States)

    Huang, Na; Jiang, Yujing; Liu, Richeng; Li, Bo; Zhang, Zhenyu

    This study investigates the roles of fracture roughness, normal stress and shear displacement on the fluid flow characteristics through three-dimensional (3D) self-affine fractal rock fractures, whose surfaces are generated using the modified successive random additions (SRA) algorithm. A series of numerical shear-flow tests under different normal stresses were conducted on rough rock fractures to calculate the evolutions of fracture aperture and permeability. The results show that the rough surfaces of fractal-based fractures can be described using the scaling parameter Hurst exponent (H), in which H = 3 - Df, where Df is the fractal dimension of 3D single fractures. The joint roughness coefficient (JRC) distribution of fracture profiles follows a Gauss function with a negative linear relationship between H and average JRC. The frequency curves of aperture distributions change from sharp to flat with increasing shear displacement, indicating a more anisotropic and heterogeneous flow pattern. Both the mean aperture and permeability of fracture increase with the increment of surface roughness and decrement of normal stress. At the beginning of shear, the permeability increases remarkably and then gradually becomes steady. A predictive model of permeability using the mean mechanical aperture is proposed and the validity is verified by comparisons with the experimental results reported in literature. The proposed model provides a simple method to approximate permeability of fractal-based rough rock fractures during shear using fracture aperture distribution that can be easily obtained from digitized fracture surface information.

  11. Model predictive control-based scheduler for repetitive discrete event systems with capacity constraints

    Directory of Open Access Journals (Sweden)

    Hiroyuki Goto

    2013-07-01

    Full Text Available A model predictive control-based scheduler for a class of discrete event systems is designed and developed. We focus on repetitive, multiple-input, multiple-output, and directed acyclic graph structured systems on which capacity constraints can be imposed. The target system’s behaviour is described by linear equations in max-plus algebra, referred to as the state-space representation. Assuming that the system’s performance can be improved by paying additional cost, we adjust the system parameters and determine control inputs for which the reference output signals can be observed. The main contribution of this research is twofold: (1) for systems with capacity constraints, we derived an output prediction equation as a function of the adjustable variables in recursive form; (2) regarding the construct for the system’s representation, we improved the structure to accomplish general operations which are essential for adjusting the system parameters. The result of numerical simulation in a later section demonstrates the effectiveness of the developed controller.
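
    A hedged sketch of the max-plus state-space recursion that underlies such schedulers; the matrices are small invented processing and transport times, not the system in the paper:

```python
# Hedged sketch: x(k) = A (x) x(k-1) (+) B (x) u(k), y(k) = C (x) x(k),
# where (+) is max and (x) is +, i.e. the max-plus algebra.
import numpy as np

EPS = -np.inf  # the max-plus "zero"

def mp_mul(A, B):
    """Max-plus matrix product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

A = np.array([[5.0, EPS], [3.0, 6.0]])   # internal processing times (illustrative)
B = np.array([[0.0], [2.0]])             # release of raw material to each stage
C = np.array([[EPS, 1.0]])               # output (shipping) time

x = np.array([[0.0], [0.0]])             # initial firing times
for k, u in enumerate([0.0, 4.0, 9.0], start=1):
    x = np.maximum(mp_mul(A, x), mp_mul(B, np.array([[u]])))
    y = mp_mul(C, x)
    print(f"event {k}: output time y = {y.ravel()[0]:.1f}")
```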

  12. Prediction of allosteric sites on protein surfaces with an elastic-network-model-based thermodynamic method.

    Science.gov (United States)

    Su, Ji Guo; Qi, Li Sheng; Li, Chun Hua; Zhu, Yan Ying; Du, Hui Jing; Hou, Yan Xue; Hao, Rui; Wang, Ji Hua

    2014-08-01

    Allostery is a rapid and efficient way in many biological processes to regulate protein functions, where binding of an effector at the allosteric site alters the activity and function at a distant active site. Allosteric regulation of protein biological functions provides a promising strategy for novel drug design. However, how to effectively identify the allosteric sites remains one of the major challenges for allosteric drug design. In the present work, a thermodynamic method based on the elastic network model was proposed to predict the allosteric sites on the protein surface. In our method, the thermodynamic coupling between the allosteric and active sites was considered, and then the allosteric sites were identified as those where the binding of an effector molecule induces a large change in the binding free energy of the protein with its ligand. Using the proposed method, two proteins, i.e., the 70 kD heat shock protein (Hsp70) and GluA2 alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA) receptor, were studied and the allosteric sites on the protein surface were successfully identified. The predicted results are consistent with the available experimental data, which indicates that our method is a simple yet effective approach for the identification of allosteric sites on proteins.

  13. Programmable Nucleic Acid Based Polygons with Controlled Neuroimmunomodulatory Properties for Predictive QSAR Modeling.

    Science.gov (United States)

    Johnson, Morgan Brittany; Halman, Justin R; Satterwhite, Emily; Zakharov, Alexey V; Bui, My N; Benkato, Kheiria; Goldsworthy, Victoria; Kim, Taejin; Hong, Enping; Dobrovolskaia, Marina A; Khisamutdinov, Emil F; Marriott, Ian; Afonin, Kirill A

    2017-11-01

    In the past few years, the study of therapeutic RNA nanotechnology has expanded tremendously to encompass a large group of interdisciplinary sciences. It is now evident that rationally designed programmable RNA nanostructures offer unique advantages in addressing contemporary therapeutic challenges such as distinguishing target cell types and ameliorating disease. However, to maximize the therapeutic benefit of these nanostructures, it is essential to understand the immunostimulatory aptitude of such tools and identify potential complications. This paper presents a set of 16 nanoparticle platforms that are highly configurable. These novel nucleic acid based polygonal platforms are programmed for controllable self-assembly from RNA and/or DNA strands via canonical Watson-Crick interactions. It is demonstrated that the immunostimulatory properties of these particular designs can be tuned to elicit the desired immune response or lack thereof. To advance the current understanding of the nanoparticle properties that contribute to the observed immunomodulatory activity and establish corresponding designing principles, quantitative structure-activity relationship modeling is conducted. The results demonstrate that molecular weight, together with melting temperature and half-life, strongly predicts the observed immunomodulatory activity. This framework provides the fundamental guidelines necessary for the development of a new library of nanoparticles with predictable immunomodulatory activity. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A Novel Adaptive Conditional Probability-Based Predicting Model for User’s Personality Traits

    Directory of Open Access Journals (Sweden)

    Mengmeng Wang

    2015-01-01

    Full Text Available With the pervasive increase in social media use, the explosion of users’ generated data provides a potentially very rich source of information, which plays an important role in helping online researchers understand user’s behaviors deeply. Since user’s personality traits are the driving force of user’s behaviors, hence, in this paper, along with social network features, we first extract linguistic features, emotional statistical features, and topic features from user’s Facebook status updates, followed by quantifying importance of features via Kendall correlation coefficient. And then, on the basis of weighted features and dynamic updated thresholds of personality traits, we deploy a novel adaptive conditional probability-based predicting model which considers prior knowledge of correlations between user’s personality traits to predict user’s Big Five personality traits. In the experimental work, we explore the existence of correlations between user’s personality traits which provides a better theoretical support for our proposed method. Moreover, on the same Facebook dataset, compared to other methods, our method can achieve an F1-measure of 80.6% when taking into account correlations between user’s personality traits, and there is an impressive improvement of 5.8% over other approaches.

  15. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  16. Predictive modelling-based design and experiments for synthesis and spinning of bioinspired silk fibres

    Science.gov (United States)

    Gronau, Greta; Jacobsen, Matthew M.; Huang, Wenwen; Rizzo, Daniel J.; Li, David; Staii, Cristian; Pugno, Nicola M.; Wong, Joyce Y.; Kaplan, David L.; Buehler, Markus J.

    2016-01-01

    Scalable computational modelling tools are required to guide the rational design of complex hierarchical materials with predictable functions. Here, we utilize mesoscopic modelling, integrated with genetic block copolymer synthesis and bioinspired spinning process, to demonstrate de novo materials design that incorporates chemistry, processing and material characterization. We find that intermediate hydrophobic/hydrophilic block ratios observed in natural spider silks and longer chain lengths lead to outstanding silk fibre formation. This design by nature is based on the optimal combination of protein solubility, self-assembled aggregate size and polymer network topology. The original homogeneous network structure becomes heterogeneous after spinning, enhancing the anisotropic network connectivity along the shear flow direction. Extending beyond the classical polymer theory, with insights from the percolation network model, we illustrate the direct proportionality between network conductance and fibre Young's modulus. This integrated approach provides a general path towards de novo functional network materials with enhanced mechanical properties and beyond (optical, electrical or thermal) as we have experimentally verified. PMID:26017575

  17. Predictive modelling-based design and experiments for synthesis and spinning of bioinspired silk fibres.

    Science.gov (United States)

    Lin, Shangchao; Ryu, Seunghwa; Tokareva, Olena; Gronau, Greta; Jacobsen, Matthew M; Huang, Wenwen; Rizzo, Daniel J; Li, David; Staii, Cristian; Pugno, Nicola M; Wong, Joyce Y; Kaplan, David L; Buehler, Markus J

    2015-05-28

    Scalable computational modelling tools are required to guide the rational design of complex hierarchical materials with predictable functions. Here, we utilize mesoscopic modelling, integrated with genetic block copolymer synthesis and bioinspired spinning process, to demonstrate de novo materials design that incorporates chemistry, processing and material characterization. We find that intermediate hydrophobic/hydrophilic block ratios observed in natural spider silks and longer chain lengths lead to outstanding silk fibre formation. This design by nature is based on the optimal combination of protein solubility, self-assembled aggregate size and polymer network topology. The original homogeneous network structure becomes heterogeneous after spinning, enhancing the anisotropic network connectivity along the shear flow direction. Extending beyond the classical polymer theory, with insights from the percolation network model, we illustrate the direct proportionality between network conductance and fibre Young's modulus. This integrated approach provides a general path towards de novo functional network materials with enhanced mechanical properties and beyond (optical, electrical or thermal) as we have experimentally verified.

  18. Predicting drought propagation within peat layers using a three dimensionally explicit voxel based model

    Science.gov (United States)

    Condro, A. A.; Pawitan, H.; Risdiyanto, I.

    2018-05-01

    Peatlands are very vulnerable to widespread fires during dry seasons, due to the availability of aboveground fuel biomass on the surface and belowground fuel biomass in the sub-surface. Hence, understanding drought propagation within peat layers is crucial for disaster mitigation activities on peatlands. Using a three-dimensionally explicit voxel-based model of peatland hydrology, this study predicted time lags of about 1 month during La-Nina and 2.5 months during El-Nino between drought events occurring on the surface and drought propagation into the sub-surface peat layers. The study was carried out on a high-conservation-value area of an oil palm plantation in West Kalimantan. The validity of the model was evaluated and its applicability for disaster mitigation was discussed. The animations of simulated voxels are available at: goo.gl/HDRMYN (El-Nino 2015 episode) and goo.gl/g1sXPl (La-Nina 2016 episode). The model is available at: goo.gl/RiuMQz.

  19. A molecular-mechanics based finite element model for strength prediction of single wall carbon nanotubes

    International Nuclear Information System (INIS)

    Meo, M.; Rossi, M.

    2007-01-01

    The aim of this work was to develop a finite element model based on molecular mechanics to predict the ultimate strength and strain of single wall carbon nanotubes (SWCNT). The interactions between atoms were modelled by combining non-linear elastic and torsional elastic springs. In particular, this approach combines molecular mechanics with the finite element method without introducing any non-physical data on the interactions between the carbon atoms, i.e. without defining a C-C bond inertia moment or Young's modulus. Mechanical properties such as Young's modulus, ultimate strength and strain were calculated for several CNTs. Further, a stress-strain curve for large deformation (up to 70%) is reported for a zig-zag (9,0) nanotube. The results showed good agreement with the experimental and numerical results of several authors. A comparison of the mechanical properties of nanotubes with the same diameter but different chirality was carried out. Finally, the influence of defects on the strength and strain of a SWCNT was also evaluated. In particular, the stress-strain curve of a nanotube with a one-vacancy defect was evaluated and compared with that of a pristine one, showing a reduction of the ultimate strength and strain for the defective nanotube. The proposed FE model proved to be a reliable tool for simulating the mechanical behaviour of carbon nanotubes in both the linear and non-linear elastic regimes

  20. A Network-Based Model of Oncogenic Collaboration for Prediction of Drug Sensitivity

    Directory of Open Access Journals (Sweden)

    Ted G Laderas

    2015-12-01

    Full Text Available Tumorigenesis is a multi-step process, involving the acquisition of multiple oncogenic mutations that transform cells, resulting in systemic dysregulation that enables proliferation, among other cancer hallmarks. High throughput omics techniques are used in precision medicine, allowing identification of these mutations with the goal of identifying treatments that target them. However, the multiplicity of oncogenes required for transformation, known as oncogenic collaboration, makes assigning effective treatments difficult. Motivated by this observation, we propose a new type of oncogenic collaboration where mutations in genes that interact with an oncogene may contribute to its dysregulation, a new genomic feature that we term surrogate oncogenes. By mapping mutations to a protein/protein interaction network, we can determine significance of the observed distribution using permutation-based methods. For a panel of 38 breast cancer cell lines, we identified significant surrogate oncogenes in oncogenes such as BRCA1 and ESR1. In addition, using Random Forest Classifiers, we show that these significant surrogate oncogenes predict drug sensitivity for 74 drugs in the breast cancer cell lines with a mean error rate of 30.9%. Additionally, we show that surrogate oncogenes are predictive of survival in patients. The surrogate oncogene framework incorporates unique or rare mutations on an individual level. Our model has the potential for integrating patient-unique mutations in predicting drug-sensitivity, suggesting a potential new direction in precision medicine, as well as a new approach for drug development. Additionally, we show the prevalence of significant surrogate oncogenes in multiple cancers within the Cancer Genome Atlas, suggesting that surrogate oncogenes may be a useful genomic feature for guiding pancancer analyses and assigning therapies across many tissue types.
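
    A hedged sketch of the classification step, predicting drug sensitivity from binary surrogate-oncogene indicators with a random forest and its out-of-bag error; all data below are synthetic:

```python
# Hedged sketch (not the authors' pipeline): random forest on binary
# surrogate-oncogene status features for a small panel of cell lines.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_cell_lines, n_features = 38, 20
X = rng.integers(0, 2, size=(n_cell_lines, n_features))   # surrogate-oncogene status
base = X[:, 0] | X[:, 3]                                   # synthetic ground truth
noise = rng.random(n_cell_lines) < 0.2
y = np.where(noise, 1 - base, base)                        # sensitive / resistant labels

clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
print("out-of-bag error rate:", round(1 - clf.oob_score_, 3))
```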

  1. A discriminatory function for prediction of protein-DNA interactions based on alpha shape modeling.

    Science.gov (United States)

    Zhou, Weiqiang; Yan, Hong

    2010-10-15

    Protein-DNA interaction has significant importance in many biological processes. However, the underlying principle of the molecular recognition process is still largely unknown. As more high-resolution 3D structures of protein-DNA complex are becoming available, the surface characteristics of the complex become an important research topic. In our work, we apply an alpha shape model to represent the surface structure of the protein-DNA complex and developed an interface-atom curvature-dependent conditional probability discriminatory function for the prediction of protein-DNA interaction. The interface-atom curvature-dependent formalism captures atomic interaction details better than the atomic distance-based method. The proposed method provides good performance in discriminating the native structures from the docking decoy sets, and outperforms the distance-dependent formalism in terms of the z-score. Computer experiment results show that the curvature-dependent formalism with the optimal parameters can achieve a native z-score of -8.17 in discriminating the native structure from the highest surface-complementarity scored decoy set and a native z-score of -7.38 in discriminating the native structure from the lowest RMSD decoy set. The interface-atom curvature-dependent formalism can also be used to predict apo version of DNA-binding proteins. These results suggest that the interface-atom curvature-dependent formalism has a good prediction capability for protein-DNA interactions. The code and data sets are available for download on http://www.hy8.com/bioinformatics.htm kenandzhou@hotmail.com.

  2. Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model

    Science.gov (United States)

    Tang, Jingshi; Liu, Lin; Miao, Manqian

    Tiangong-1 is China's test module for future space station. It has gone through three successful rendezvous and dockings with Shenzhou spacecrafts from 2011 to 2013. For the long-term management and maintenance, the orbit sometimes needs to be predicted for a long period of time. As Tiangong-1 works in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid increase of the predicted orbit error. When the orbit is predicted for 10-20 days, the error in the a priori atmosphere model, if not properly corrected, could induce the semi-major axis error and the overall position error up to a few kilometers and several thousand kilometers respectively. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in the long-term orbit prediction, the observations are first accumulated. With sufficiently long period of observations, we are able to obtain a series of the diurnal mean densities. This series bears the recent variation of the atmosphere density and can be analyzed for various periods. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that the densities predicted with this approach can serve to increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700m and overall position errors better than 600km.

  3. Past, present and prospect of an Artificial Intelligence (AI) based model for sediment transport prediction

    Science.gov (United States)

    Afan, Haitham Abdulmohsin; El-shafie, Ahmed; Mohtar, Wan Hanna Melini Wan; Yaseen, Zaher Mundher

    2016-10-01

    An accurate model for sediment prediction is a priority for all hydrological researchers. Many conventional methods have shown an inability to achieve an accurate prediction of suspended sediment. These methods are unable to capture the behaviour of sediment transport in rivers due to the complexity, noise, non-stationarity, and dynamism of the sediment pattern. In the past two decades, Artificial Intelligence (AI) and computational approaches have become a remarkable tool for developing accurate models. These approaches are considered a powerful tool for solving any non-linear model, as they can deal easily with large amounts of data and sophisticated models. This paper is a review of all AI approaches that have been applied in sediment modelling. The current research focuses on the development of AI applications in sediment transport. In addition, the review identifies major challenges and opportunities for prospective research. Throughout the literature, such complementary models have proven superior to classical modelling.

  4. Genomic prediction based on data from three layer lines using non-linear regression models

    NARCIS (Netherlands)

    Huang, H.; Windig, J.J.; Vereijken, A.; Calus, M.P.L.

    2014-01-01

    Background - Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. Methods - In an attempt to alleviate

  5. Predicting the Risk of Attrition for Undergraduate Students with Time Based Modelling

    Science.gov (United States)

    Chai, Kevin E. K.; Gibson, David

    2015-01-01

    Improving student retention is an important and challenging problem for universities. This paper reports on the development of a student attrition model for predicting which first year students are most at-risk of leaving at various points in time during their first semester of study. The objective of developing such a model is to assist…

  6. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com; Gupta, Shikha

    2014-03-15

    Ensemble learning approach based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish quantitative structure–toxicity relationship (QSTR) for the prediction of toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using Tanimoto similarity index. Stochastic gradient boosting and bagging algorithms supplemented DTB and DTF models were constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was drawn to prediction ability and robustness of the models, investigated both in external and 10-fold cross validation processes. In complete data, optimal DTB and DTF models rendered accuracies of 98.90%, 98.83% in two-category and 98.14%, 98.14% in four-category toxicity classifications. Both the models further yielded classification accuracies of 100% in external toxicity data of T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R²) of 0.945, 0.944 between the measured and predicted toxicities with mean squared errors (MSEs) of 0.059, and 0.064 in complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R² and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting toxicity of new chemicals for regulatory purposes. These approaches provide a useful strategy and robust tools in the screening of ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the

  7. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    International Nuclear Information System (INIS)

    Singh, Kunwar P.; Gupta, Shikha

    2014-01-01

    Ensemble learning approach based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish quantitative structure–toxicity relationship (QSTR) for the prediction of toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using Tanimoto similarity index. Stochastic gradient boosting and bagging algorithms supplemented DTB and DTF models were constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was drawn to prediction ability and robustness of the models, investigated both in external and 10-fold cross validation processes. In complete data, optimal DTB and DTF models rendered accuracies of 98.90%, 98.83% in two-category and 98.14%, 98.14% in four-category toxicity classifications. Both the models further yielded classification accuracies of 100% in external toxicity data of T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R²) of 0.945, 0.944 between the measured and predicted toxicities with mean squared errors (MSEs) of 0.059, and 0.064 in complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R² and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting toxicity of new chemicals for regulatory purposes. These approaches provide a useful strategy and robust tools in the screening of ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the

  8. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  9. Modeling infection transmission in primate networks to predict centrality-based risk.

    Science.gov (United States)

    Romano, Valéria; Duboscq, Julie; Sarabian, Cécile; Thomas, Elodie; Sueur, Cédric; MacIntosh, Andrew J J

    2016-07-01

    Social structure can theoretically regulate disease risk by mediating exposure to pathogens via social proximity and contact. Investigating the role of central individuals within a network may help predict infectious agent transmission as well as implement disease control strategies, but little is known about such dynamics in real primate networks. We combined social network analysis and a modeling approach to better understand transmission of a theoretical infectious agent in wild Japanese macaques, highly social animals which form extended but highly differentiated social networks. We collected focal data from adult females living on the islands of Koshima and Yakushima, Japan. Individual identities as well as grooming networks were included in a Markov graph-based simulation. In this model, the probability that an individual will transmit an infectious agent depends on the strength of its relationships with other group members. Similarly, its probability of being infected depends on its relationships with already infected group members. We correlated: (i) the percentage of subjects infected during a latency-constrained epidemic; (ii) the mean latency to complete transmission; (iii) the probability that an individual is infected first among all group members; and (iv) each individual's mean rank in the chain of transmission with different individual network centralities (eigenvector, strength, betweenness). Our results support the hypothesis that more central individuals transmit infections in a shorter amount of time and to more subjects but also become infected more quickly than less central individuals. However, we also observed that the spread of infectious agents on the Yakushima network did not always differ from expectations of spread on random networks. Generalizations about the importance of observed social networks in pathogen flow should thus be made with caution, since individual characteristics in some real world networks appear less relevant than
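
    A minimal sketch of the simulation logic described, where the per-step transmission probability scales with grooming strength; the small weighted network and the transmission parameter are invented:

```python
# Hedged sketch: synchronous transmission simulation on a weighted grooming network.
import numpy as np

rng = np.random.default_rng(4)
# Symmetric grooming-strength matrix for 5 hypothetical females (0 = no relationship).
W = np.array([[0.0, 0.8, 0.1, 0.0, 0.3],
              [0.8, 0.0, 0.5, 0.2, 0.0],
              [0.1, 0.5, 0.0, 0.6, 0.1],
              [0.0, 0.2, 0.6, 0.0, 0.4],
              [0.3, 0.0, 0.1, 0.4, 0.0]])

def simulate(seed_individual, beta=0.5, max_steps=100):
    """Return the step at which each individual became infected (np.inf if never)."""
    n = W.shape[0]
    infected_at = np.full(n, np.inf)
    infected_at[seed_individual] = 0
    for step in range(1, max_steps + 1):
        currently = np.isfinite(infected_at)
        for j in np.where(~currently)[0]:
            # Probability of escaping infection from every infected partner this step.
            p_escape = np.prod(1.0 - beta * W[currently, j])
            if rng.random() > p_escape:
                infected_at[j] = step
        if np.isfinite(infected_at).all():
            break
    return infected_at

print(simulate(seed_individual=0))
```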

  10. Erosion prediction for alpine slopes: a symbiosis of remote sensing and a physical based erosion model

    Science.gov (United States)

    Kaiser, Andreas; Neugirg, Fabian; Haas, Florian; Schindewolf, Marcus; Schmidt, Jürgen

    2014-05-01

    While rainfall simulations are an established tool for quantifying soil detachment on cultivated areas in lowlands and low mountain ranges, they are rarely used on steep slopes in high mountain ranges. Still, this terrain represents productive sediment sources with high morphodynamics. A quantitative differentiation between gravitationally and fluvially relocated material poses a major challenge in understanding erosion on steep slopes: does solifluction as a result of melting in spring or heavy convective rainstorms during summer cause the essential erosion processes? This paper aims to answer this question by separating gravitational mass movement (solifluction, landslides, mudflow and needle ice) and runoff-induced detachment. First, simulated rainstorm experiments are used to assess the sediment production on bare soil on a strongly inclined plot (1 m², 42°) in the northern limestone Alps. Throughout the precipitation experiments, runoff and the related suspended sediments were quantified. In order to virtually enlarge the slope length to around 20 m, a runoff feeding device is additionally implemented. Soil physical parameters were derived from on-site sampling. The generated data is introduced to the physically based and catchment-scaled erosion model EROSION 3D to upscale plot size to small watershed conditions. Thus infiltration, runoff, detachment, transport and finally deposition can be predicted for single rainstorm events and storm sequences. Secondly, in order to separate gravitational mass movements and water erosion, a LiDAR and structure-from-motion based monitoring approach is carried out to produce high-resolution digital elevation models. A time series analysis of detachment and deposition from different points in time is implemented. Absolute volume losses are then compared to sediment losses calculated by the erosion model, as the latter only generates data that is connected to water-induced hillside erosion. This methodology will be applied in other watersheds

  11. A Prediction Model for ROS1-Rearranged Lung Adenocarcinomas based on Histologic Features

    OpenAIRE

    Zhou, Jianya; Zhao, Jing; Zheng, Jing; Kong, Mei; Sun, Ke; Wang, Bo; Chen, Xi; Ding, Wei; Zhou, Jianying

    2016-01-01

    Aims To identify the clinical and histological characteristics of ROS1-rearranged non-small-cell lung carcinomas (NSCLCs) and build a prediction model to prescreen suitable patients for molecular testing. Methods and Results We identified 27 cases of ROS1-rearranged lung adenocarcinomas in 1165 patients with NSCLCs confirmed by real-time PCR and FISH and performed univariate and multivariate analyses to identify predictive factors associated with ROS1 rearrangement and finally developed predi...

  12. Prediction of Land Use Change Based on Markov and GM(1,1) Models

    Directory of Open Access Journals (Sweden)

    SUN Yi-yang

    2016-05-01

    Full Text Available In order to explore the law of land use change in Laiwu City, Markov and GM(1,1) models were respectively employed to predict land use change in Laiwu from 2015 to 2050, after which the results were analyzed and discussed. The results showed that: (1) The variational trends of all kinds of land use change predicted by the two models were consistent, and the goodness of fit of the predicted values for corresponding years in the near future was high, illustrating that the near-future predictions are credible and the mid- to long-term trends can be used as a reference. (2) The cultivated land would remain almost unchanged from 2015 to 2020 and then gradually decrease in a small range from 2020 to 2050. The garden land, woodland and grassland would keep decreasing, with the grassland showing the largest decrease. The urban village and industrial and mining land and the transportation land would continuously increase, with the urban village and industrial and mining land showing the largest increase. The water and water conservancy facilities land and the other land would decrease in a very small range. It could be concluded that the near-future results predicted by the two models are credible and can provide a scientific basis for land use planning in Laiwu, while the method can provide a reference for the prediction of land use change.
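
    A hedged sketch of a basic GM(1,1) grey forecast, one of the two models named above; the input series is an invented land-area sequence, not the Laiwu data:

```python
# Hedged sketch of the GM(1,1) grey model: accumulate the series, fit the grey
# differential equation by least squares, and invert to obtain future values.
import numpy as np

def gm11_forecast(x0, n_ahead):
    """Fit GM(1,1) to series x0 and return n_ahead out-of-sample predictions."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # development / grey control coeffs
    k = np.arange(1, len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # response of the whitened equation
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # inverse accumulation
    return x0_hat[len(x0) - 1:]                          # the n_ahead future values

cultivated_land = [521, 518, 514, 509, 503, 498]         # illustrative annual values
print(gm11_forecast(cultivated_land, n_ahead=3))
```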

  13. A New Navigation Satellite Clock Bias Prediction Method Based on Modified Clock-bias Quadratic Polynomial Model

    Science.gov (United States)

    Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.

    2016-01-01

    In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposed a new SCB prediction model which takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic items to fit and extract the trend term and cyclic term of SCB; then, based on the characteristics of the fitting residuals, a time series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance compared with the quadratic polynomial model, grey model, and ARIMA model. In addition, the new method can also overcome the insufficiency of the ARIMA model in model recognition and order determination.
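
    A hedged sketch of the model structure described: a quadratic trend plus periodic terms fitted by least squares, with the residuals handed to an ARIMA model; the data, period, and orders are illustrative assumptions:

```python
# Hedged sketch: quadratic + periodic trend fit of synthetic clock bias, residuals
# modelled with a low-order ARIMA, both parts combined for prediction.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
t = np.arange(0, 96) * 900.0                       # 15-min epochs over one day [s]
period = 43_082.0                                  # ~ half the orbital period [s] (assumed)
scb = (2e-4 + 1e-9 * t + 1e-15 * t**2
       + 3e-10 * np.sin(2 * np.pi * t / period)
       + rng.normal(0, 5e-11, t.size))             # synthetic clock bias [s]

# Design matrix: 1, t, t^2, sin, cos  ->  trend term plus cyclic term.
A = np.column_stack([np.ones_like(t), t, t**2,
                     np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)])
coef, *_ = np.linalg.lstsq(A, scb, rcond=None)
resid = scb - A @ coef

arima = ARIMA(resid * 1e9, order=(1, 0, 1)).fit()  # residuals scaled to ns for fitting
t_new = t[-1] + 900.0 * np.arange(1, 13)           # next 3 hours
A_new = np.column_stack([np.ones_like(t_new), t_new, t_new**2,
                         np.sin(2 * np.pi * t_new / period),
                         np.cos(2 * np.pi * t_new / period)])
scb_pred = A_new @ coef + arima.forecast(steps=12) * 1e-9
print(scb_pred[:3])
```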

  14. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    Full Text Available Aimed at resolving the issues of the imbalance of resources and workloads at data centers and the overhead together with the high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy which is based on the cloud model time series workload prediction algorithm. By setting the upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host machine carrying out the task of the VM migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines promoting an improved utilization of resources in the entire data center.

  15. Probability Models Based on Soil Properties for Predicting Presence-Absence of Pythium in Soybean Roots.

    Science.gov (United States)

    Zitnick-Anderson, Kimberly K; Norland, Jack E; Del Río Mendoza, Luis E; Fortuna, Ann-Marie; Nelson, Berlin D

    2017-10-01

    Associations between soil properties and Pythium groups on soybean roots were investigated in 83 commercial soybean fields in North Dakota. A data set containing 2877 isolates of Pythium, which included 26 known spp. and 1 unknown sp., and 13 soil properties from each field was analyzed. A Pearson correlation analysis was performed with all soil properties to observe any significant correlation between properties. Hierarchical clustering, indicator spp., and multi-response permutation procedures were used to identify groups of Pythium. Logistic regression analysis using stepwise selection was employed to calculate probability models for presence of groups based on soil properties. Three major Pythium groups were identified and three soil properties were associated with these groups. Group 1, characterized by P. ultimum, was associated with zinc levels; as zinc increased, the probability of group 1 being present increased (α = 0.05). Pythium group 2, characterized by Pythium kashmirense and an unknown Pythium sp., was associated with cation exchange capacity (CEC) (α < 0.05); as CEC increased, these spp. increased. Group 3, characterized by Pythium heterothallicum and Pythium irregulare, was associated with CEC and calcium carbonate exchange (CCE); as CCE increased and CEC decreased, these spp. increased (α = 0.05). The regression models may have value in predicting pathogenic Pythium spp. in soybean fields in North Dakota and adjacent states.
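
    A hedged sketch of this kind of presence-absence model in Python (scikit-learn) is shown below; the soil-property columns and toy values are invented placeholders, not the study's data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # hypothetical soil-property measurements per field: zinc (ppm), CEC, CCE (%)
        X = np.array([[0.8, 18.2, 2.1],
                      [1.6, 25.4, 0.9],
                      [0.5, 30.1, 4.3],
                      [2.2, 15.7, 1.2],
                      [1.1, 22.0, 3.5],
                      [0.9, 27.8, 0.4]])
        y = np.array([0, 1, 0, 1, 1, 0])   # presence (1) / absence (0) of a Pythium group

        model = LogisticRegression().fit(X, y)
        new_field = np.array([[1.4, 20.5, 1.8]])
        print("probability of presence:", model.predict_proba(new_field)[0, 1])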

  16. Prediction of paraquat exposure and toxicity in clinically ill poisoned patients: a model based approach.

    Science.gov (United States)

    Wunnapuk, Klintean; Mohammed, Fahim; Gawarammana, Indika; Liu, Xin; Verbeeck, Roger K; Buckley, Nicholas A; Roberts, Michael S; Musuamba, Flora T

    2014-10-01

    Paraquat poisoning is a medical problem in many parts of Asia and the Pacific. The mortality rate is extremely high as there is no effective treatment. We analyzed data collected during an ongoing cohort study on self-poisoning and from a randomized controlled trial assessing the efficacy of immunosuppressive therapy in hospitalized paraquat-intoxicated patients. The aim of this analysis was to characterize the toxicokinetics and toxicodynamics of paraquat in this population. A non-linear mixed effects approach was used to perform a toxicokinetic/toxicodynamic population analysis in a cohort of 78 patients. The paraquat plasma concentrations were best fitted by a two compartment toxicokinetic structural model with first order absorption and first order elimination. Changes in renal function were used for the assessment of paraquat toxicodynamics. The estimates of toxicokinetic parameters for the apparent clearance, the apparent volume of distribution and elimination half-life were 1.17 l h⁻¹, 2.4 l kg⁻¹ and 87 h, respectively. Renal function, namely creatinine clearance, was the most significant covariate to explain between-patient variability in paraquat clearance. This model suggested that a reduction in paraquat clearance occurred within 24 to 48 h after poison ingestion, and afterwards the clearance was constant over time. The model estimated that a paraquat concentration of 429 μg l⁻¹ caused 50% of maximum renal toxicity. The immunosuppressive therapy tested during this study was associated with only 8% improvement of renal function. The developed models may be useful as prognostic tools to predict patient outcome based on patient characteristics on admission and to assess drug effectiveness during antidote drug development. © 2014 The British Pharmacological Society.
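
    Read as a standard Emax-type concentration-effect relationship (an assumption about the model form, using only the 50%-effect concentration quoted above), the renal-toxicity part could be sketched in Python as:

        import numpy as np

        def renal_toxicity_fraction(conc_ug_per_l, ec50=429.0):
            """Fraction of maximum renal toxicity at a paraquat plasma concentration,
            assuming a simple Emax model with the 50%-effect concentration quoted above."""
            conc = np.asarray(conc_ug_per_l, dtype=float)
            return conc / (ec50 + conc)

        print(renal_toxicity_fraction([100, 429, 2000]))   # ~0.19, 0.50, ~0.82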

  17. Ligand and structure-based classification models for Prediction of P-glycoprotein inhibitors

    DEFF Research Database (Denmark)

    Klepsch, Freya; Poongavanam, Vasanthanathan; Ecker, Gerhard Franz

    2014-01-01

    an algorithm based on Euclidean distance. Results show that random forest and SVM performed best for classification of P-gp inhibitors and non-inhibitors, correctly predicting 73/75 % of the external test set compounds. Classification based on the docking experiments using the scoring function Chem...

  18. A diffusivity model for predicting VOC diffusion in porous building materials based on fractal theory

    International Nuclear Information System (INIS)

    Liu, Yanfeng; Zhou, Xiaojun; Wang, Dengjia; Song, Cong; Liu, Jiaping

    2015-01-01

    Highlights: • Fractal theory is introduced into the prediction of VOC diffusion coefficient. • MSFC model of the diffusion coefficient is developed for porous building materials. • The MSFC model contains detailed pore structure parameters. • The accuracy of the MSFC model is verified by independent experiments. - Abstract: Most building materials are porous media, and the internal diffusion coefficients of such materials have an important influence on the emission characteristics of volatile organic compounds (VOCs). The pore structure of porous building materials has a significant impact on the diffusion coefficient. However, the complex structural characteristics bring great difficulties to the model development. The existing prediction models of the diffusion coefficient are flawed and need to be improved. Using scanning electron microscope (SEM) observations and mercury intrusion porosimetry (MIP) tests of typical porous building materials, this study developed a new diffusivity model: the multistage series-connection fractal capillary-bundle (MSFC) model. The model considers the variable-diameter capillaries formed by macropores connected in series as the main mass transfer paths, and the diameter distribution of the capillary bundles obeys a fractal power law in the cross section. In addition, the tortuosity of the macrocapillary segments with different diameters is obtained by the fractal theory. Mesopores serve as the connections between the macrocapillary segments rather than as the main mass transfer paths. The theoretical results obtained using the MSFC model yielded a highly accurate prediction of the diffusion coefficients and were in good agreement with the VOC concentration measurements in the environmental test chamber.

  19. Prediction of Combine Economic Life Based on Repair and Maintenance Costs Model

    Directory of Open Access Journals (Sweden)

    A Rohani

    2014-09-01

    Full Text Available Farm machinery managers often need to make complex economic decisions on machinery replacement. Repair and maintenance costs can have significant impacts on this economic decision. The farm manager must be able to predict farm machinery repair and maintenance costs. This study aimed to identify a regression model that can adequately represent the repair and maintenance costs in terms of machine age in cumulative hours of use. The regression model has the ability to predict the repair and maintenance costs for longer time periods. Therefore, it can be used for the estimation of the economic life. The study was conducted using field data collected from 11 John Deere 955 combine harvesters used in several western provinces of Iran. It was found that the power model has better performance for the prediction of combine repair and maintenance costs. The results showed that the optimum replacement age of the John Deere 955 combine was 54300 cumulative hours.
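
    A minimal Python sketch of fitting such a power model (cumulative repair cost as a power function of cumulative hours of use) via a log-log least-squares fit is shown below; the hour/cost pairs are invented for illustration:

        import numpy as np

        # hypothetical cumulative use (h) and accumulated repair & maintenance cost
        hours = np.array([2000, 5000, 10000, 20000, 40000], dtype=float)
        cost = np.array([1.2e3, 4.1e3, 1.1e4, 3.0e4, 8.5e4], dtype=float)

        # fit cost = a * hours**b  <=>  log(cost) = log(a) + b * log(hours)
        b, log_a = np.polyfit(np.log(hours), np.log(cost), 1)
        a = np.exp(log_a)

        predicted_cost = a * 54300.0 ** b
        print(f"cost = {a:.3g} * hours^{b:.2f}; predicted at 54300 h: {predicted_cost:.0f}")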

  20. To Set Up a Logistic Regression Prediction Model for Hepatotoxicity of Chinese Herbal Medicines Based on Traditional Chinese Medicine Theory

    Science.gov (United States)

    Liu, Hongjie; Li, Tianhao; Zhan, Sha; Pan, Meilan; Ma, Zhiguo; Li, Chenghua

    2016-01-01

    Aims. To establish a logistic regression (LR) prediction model for hepatotoxicity of Chinese herbal medicines (HMs) based on traditional Chinese medicine (TCM) theory and to provide a statistical basis for predicting hepatotoxicity of HMs. Methods. The correlations of hepatotoxic and nonhepatotoxic Chinese HMs with four properties, five flavors, and channel tropism were analyzed with the chi-square test for two-way unordered categorical data. The LR prediction model was established and the accuracy of the prediction by this model was evaluated. Results. The hepatotoxic and nonhepatotoxic Chinese HMs were related with four properties (p < 0.05). There were 12 variables in total from four properties and five flavors for the LR. Four variables, warm and neutral of the four properties and pungent and salty of five flavors, were selected to establish the LR prediction model, with the cutoff value being 0.204. Conclusions. Warm and neutral of the four properties and pungent and salty of five flavors were the variables to affect the hepatotoxicity. Based on such results, the established LR prediction model had some predictive power for hepatotoxicity of Chinese HMs. PMID:27656240

  1. Predictive Accuracy of the PanCan Lung Cancer Risk Prediction Model -External Validation based on CT from the Danish Lung Cancer Screening Trial

    DEFF Research Database (Denmark)

    Winkler Wille, Mathilde M.; van Riel, Sarah J.; Saghir, Zaigham

    2015-01-01

    Objectives: Lung cancer risk models should be externally validated to test generalizability and clinical usefulness. The Danish Lung Cancer Screening Trial (DLCST) is a population-based prospective cohort study, used to assess the discriminative performances of the PanCan models. Methods: From...... the DLCST database, 1,152 nodules from 718 participants were included. Parsimonious and full PanCan risk prediction models were applied to DLCST data, and also coefficients of the model were recalculated using DLCST data. Receiver operating characteristics (ROC) curves and area under the curve (AUC) were...... used to evaluate risk discrimination. Results: AUCs of 0.826–0.870 were found for DLCST data based on PanCan risk prediction models. In the DLCST, age and family history were significant predictors (p = 0.001 and p = 0.013). Female sex was not confirmed to be associated with higher risk of lung cancer...

  2. Prediction of transient maximum heat flux based on a simple liquid layer evaporation model

    International Nuclear Information System (INIS)

    Serizawa, A.; Kataoka, I.

    1981-01-01

    A model of liquid layer evaporation with considerable supply of liquid has been formulated to predict burnout characteristics (maximum heat flux, life, etc.) during an increase of the power. The analytical description of the model is built upon the visual and photographic observations of the boiling configuration at near peak heat flux reported by other investigators. The prediction compares very favourably with water data presently available. It is suggested from the work reported here that the maximum heat flux occurs because of a balance between the consumption of the liquid film on the heated surface and the supply of liquid. Thickness of the liquid film is also very important. (author)

  3. Evaluation of an ARPS-based canopy flow modeling system for use in future operational smoke prediction efforts

    Science.gov (United States)

    M. T. Kiefer; S. Zhong; W. E. Heilman; J. J. Charney; X. Bian

    2013-01-01

    Efforts to develop a canopy flow modeling system based on the Advanced Regional Prediction System (ARPS) model are discussed. The standard version of ARPS is modified to account for the effect of drag forces on mean and turbulent flow through a vegetation canopy, via production and sink terms in the momentum and subgrid-scale turbulent kinetic energy (TKE) equations....

  4. Adjoint-based model predictive control of wind farms : Beyond the quasi steady-state power maximization

    NARCIS (Netherlands)

    Vali, M.; Petrović, Vlaho; Boersma, S.; van Wingerden, J.W.; Kuhn, Martin; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    In this paper, we extend our closed-loop optimal control framework for wind farms to minimize wake-induced power losses. We develop an adjoint-based model predictive controller which employs a medium-fidelity 2D dynamic wind farm model. The wind turbine axial induction factors are considered here

  5. Geoelectrical parameter-based multivariate regression borehole yield model for predicting aquifer yield in managing groundwater resource sustainability

    Directory of Open Access Journals (Sweden)

    Kehinde Anthony Mogaji

    2016-07-01

    Full Text Available This study developed a GIS-based multivariate regression (MVR) yield rate prediction model of groundwater resource sustainability in the hard-rock geology terrain of southwestern Nigeria. This model can economically manage the aquifer yield rate potential predictions that are often overlooked in groundwater resources development. The proposed model relates the borehole yield rate inventory of the area to geoelectrically derived parameters. Three sets of borehole yield rate conditioning geoelectrically derived parameters—aquifer unit resistivity (ρ), aquifer unit thickness (D) and coefficient of anisotropy (λ)—were determined from the acquired and interpreted geophysical data. The extracted borehole yield rate values and the geoelectrically derived parameter values were regressed to develop the MVR relationship model by applying linear regression and GIS techniques. The sensitivity analysis results of the MVR model evaluated at P ⩽ 0.05 for the predictors ρ, D and λ provided values of 2.68 × 10−05, 2 × 10−02 and 2.09 × 10−06, respectively. The accuracy and predictive power tests conducted on the MVR model using the Theil inequality coefficient measurement approach, coupled with the sensitivity analysis results, confirmed the model yield rate estimation and prediction capability. The MVR borehole yield prediction model estimates were processed in a GIS environment to model an aquifer yield potential prediction map of the area. The information on the prediction map can serve as a scientific basis for predicting aquifer yield potential rates relevant in groundwater resources sustainability management. The developed MVR borehole yield rate prediction model provides a good alternative to other methods used for this purpose.
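
    A hedged Python sketch of the core regression step (borehole yield rate regressed on the three geoelectrical predictors) is given below; the numbers are placeholder values, not survey data:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # hypothetical geoelectrical predictors per borehole site:
        # aquifer unit resistivity (ohm-m), aquifer unit thickness (m), coefficient of anisotropy
        X = np.array([[310.0, 22.5, 1.12],
                      [150.0, 35.0, 1.30],
                      [480.0, 18.0, 1.05],
                      [220.0, 28.4, 1.21],
                      [390.0, 15.2, 1.40]])
        yield_rate = np.array([1.8, 3.2, 1.1, 2.6, 0.9])   # l/s, invented values

        mvr = LinearRegression().fit(X, yield_rate)
        print("coefficients:", mvr.coef_, "intercept:", mvr.intercept_)
        print("predicted yield:", mvr.predict([[260.0, 24.0, 1.18]]))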

  6. Analytical prediction of CHF by FIDAS code based on three-fluid and film-dryout model

    International Nuclear Information System (INIS)

    Sugawara, Satoru

    1990-01-01

    An analytical prediction model of critical heat flux (CHF) has been developed on the basis of a film dryout criterion due to droplet deposition and entrainment in annular mist flow. Critical heat flux in round tubes was analyzed by the Film Dryout Analysis Code in Subchannels (FIDAS), which is based on the three-fluid, three-field and newly developed film dryout model. Predictions by FIDAS were compared with the world-wide experimental data on CHF obtained in water and Freon for uniformly and non-uniformly heated tubes under vertical upward flow conditions. Furthermore, the CHF prediction capability of FIDAS was compared with those of other film dryout models for annular flow and Katto's CHF correlation. The predictions of FIDAS are in sufficient agreement with the experimental CHF data, and indicate better agreement than the other film dryout models and the empirical correlation of Katto. (author)

  7. Predictive model of nicotine dependence based on mental health indicators and self-concept

    Directory of Open Access Journals (Sweden)

    Hamid Kazemi Zahrani

    2014-12-01

    Full Text Available Background: The purpose of this research was to investigate the predictive power of anxiety, depression, stress and self-concept dimensions (Mental ability, job efficiency, physical attractiveness, social skills, and deficiencies and merits) as predictors of nicotine dependency among university students in Isfahan. Methods: In this correlational study, 110 male nicotine-dependent students at Isfahan University were selected by convenience sampling. All samples were assessed by Depression Anxiety Stress Scale (DASS), self-concept test and Nicotine Dependence Syndrome Scale. Data were analyzed by Pearson correlation and stepwise regression. Results: The result showed that anxiety had the highest strength to predict nicotine dependence. In addition, the self-concept and its dimensions predicted only 12% of the variance in nicotine dependence, which was not significant. Conclusion: Emotional processing variables involved in mental health play an important role in presenting a model to predict students’ dependence on nicotine more than identity variables such as different dimensions of self-concept.

  8. Multiscale modeling of interwoven Kevlar fibers based on random walk to predict yarn structural response

    Science.gov (United States)

    Recchia, Stephen

    Kevlar is the most common high-end plastic filament yarn used in body armor, tire reinforcement, and wear resistant applications. Kevlar is a trade name for an aramid fiber. These are fibers in which the chain molecules are highly oriented along the fiber axis, so the strength of the chemical bond can be exploited. The bulk material is extruded into filaments that are bound together into yarn, which may be chorded with other materials as in car tires, woven into a fabric, or layered in an epoxy to make composite panels. The high tensile strength to low weight ratio makes this material ideal for designs that decrease weight and inertia, such as automobile tires, body panels, and body armor. For designs that use Kevlar, increasing the strength, or tenacity, to weight ratio would improve performance or reduce cost of all products that are based on this material. This thesis computationally and experimentally investigates the tenacity and stiffness of Kevlar yarns with varying twist ratios. The test boundary conditions were replicated with a geometrically accurate finite element model, and a customized code that can reproduce tortuous filaments in a yarn was developed. The solid model geometry capturing filament tortuosity was implemented through a random walk method of axial geometry creation. A finite element analysis successfully recreated the yarn strength and stiffness dependency observed during the tests. The physics applied in the finite element model was reproduced in an analytical equation that was able to predict the failure strength and strain dependency of twist ratio. The analytical solution can be employed to optimize yarn design for high strength applications.

  9. A Comparison of Energy Consumption Prediction Models Based on Neural Networks of a Bioclimatic Building

    Directory of Open Access Journals (Sweden)

    Hamid R. Khosravani

    2016-01-01

    Full Text Available Energy consumption has been increasing steadily due to globalization and industrialization. Studies have shown that buildings are responsible for the biggest proportion of energy consumption; for example in European Union countries, energy consumption in buildings represents around 40% of the total energy consumption. In order to control energy consumption in buildings, different policies have been proposed, from utilizing bioclimatic architectures to the use of predictive models within control approaches. There are mainly three groups of predictive models including engineering, statistical and artificial intelligence models. Nowadays, artificial intelligence models such as neural networks and support vector machines have also been proposed because of their high potential capabilities of performing accurate nonlinear mappings between inputs and outputs in real environments which are not free of noise. The main objective of this paper is to compare a neural network model which was designed utilizing statistical and analytical methods, with a group of neural network models designed benefiting from a multi objective genetic algorithm. Moreover, the neural network models were compared to a naïve autoregressive baseline model. The models are intended to predict electric power demand at the Solar Energy Research Center (Centro de Investigación en Energía SOLar, or CIESOL in Spanish) bioclimatic building located at the University of Almeria, Spain. Experimental results show that the models obtained from the multi objective genetic algorithm (MOGA) perform comparably to the model obtained through a statistical and analytical approach, but they use only 0.8% of data samples and have lower model complexity.

  10. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    Science.gov (United States)

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Nonlinear model predictive control of a wave energy converter based on differential flatness parameterisation

    Science.gov (United States)

    Li, Guang

    2017-01-01

    This paper presents a fast constrained optimization approach, which is tailored for nonlinear model predictive control of wave energy converters (WEC). The advantage of this approach relies on its exploitation of the differential flatness of the WEC model. This can reduce the dimension of the resulting nonlinear programming problem (NLP) derived from the continuous constrained optimal control of WEC using pseudospectral method. The alleviation of computational burden using this approach helps to promote an economic implementation of nonlinear model predictive control strategy for WEC control problems. The method is applicable to nonlinear WEC models, nonconvex objective functions and nonlinear constraints, which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of this approach.

  12. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. Team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km; the techniques used are applicable for other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  13. An improved liquid film model to predict the CHF based on the influence of churn flow

    International Nuclear Information System (INIS)

    Wang, Ke; Bai, Bofeng; Ma, Weimin

    2014-01-01

    The critical heat flux (CHF) for boiling crisis is one of the most important parameters in thermal management and safe operation of many engineering systems. Traditionally, the liquid film flow model for the “dryout” mechanism gives good predictions in heated annular two-phase flow. However, the general assumption of an initial entrained fraction at the onset of annular flow lacks a reasonable physical interpretation. Since the droplets have great momentum and the length of churn flow is short, the droplets in churn flow have an inevitable effect on the downstream annular flow. To address this, we considered the effect of churn flow and developed the original liquid film flow model further for vertical upward flow by starting the calculation from the onset of churn flow rather than annular flow. The results indicated satisfactory predictions compared with the experimental data, and the developed model provided a better understanding of the effect of flow pattern on the CHF prediction. - Highlights: •The general assumption of initial entrained fraction is unreasonable. •The droplets in churn flow show an inevitable effect on downstream annular flow. •The original liquid film flow model for prediction of CHF was developed. •The integration process was modified to start from the onset of churn flow

  14. A Comparison of Three Models to Predict Liquidity Flows between Banks Based on Daily Payments Transactions

    NARCIS (Netherlands)

    Triepels, Ron; Daniels, Hennie

    2016-01-01

    The analysis of payment data has become an important task for operators and overseers of financial market infrastructures. Payment data provide an accurate description of how banks manage their liquidity over time. In this paper we compare three models to predict future liquidity flows from payment

  15. A Comparison of Three Models to Predict Liquidity Flows between Banks Based on Daily Payments Transactions

    NARCIS (Netherlands)

    R.J.M.A. Triepels (Ron); H.A.M. Daniels (Hennie)

    2016-01-01

    The analysis of payment data has become an important task for operators and overseers of financial market infrastructures. Payment data provide an accurate description of how banks manage their liquidity over time. In this paper we compare three models to predict future liquidity flows

  16. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

    Full Text Available Assessing the human placental barrier permeability of drugs is very important to guarantee drug safety during pregnancy. Quantitative structure–activity relationship (QSAR) method was used as an effective assessing tool for the placental transfer study of drugs, while in vitro human placental perfusion is the most widely used method. In this study, the partial least squares (PLS) variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by the clearance indices (CI). The model was subjected to internal validation by cross-validation and y-randomization and to external validation by predicting CI values of 19 compounds. It was shown that the model developed is robust and has a good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSP = 0.14). The mechanistic interpretation of the final model was given by the high variable importance in projection values of descriptors. Using PLS procedure, we can rapidly and effectively select optimal descriptors and thus construct a model with good stability and predictability. This analysis can provide an effective tool for the high-throughput screening of the placental barrier permeability of drugs.
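
    A small Python sketch of the modeling step (PLS regression between molecular descriptors and a permeability index, with cross-validated selection of the number of components) is shown below; the random matrices stand in for real descriptor and CI data:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(65, 620))          # placeholder descriptor matrix (65 compounds)
        y = rng.normal(size=65)                 # placeholder clearance indices (CI)

        # pick the number of latent components by cross-validated R^2
        scores = {n: cross_val_score(PLSRegression(n_components=n), X, y, cv=5).mean()
                  for n in range(1, 6)}
        best_n = max(scores, key=scores.get)

        pls = PLSRegression(n_components=best_n).fit(X, y)
        predicted_ci = pls.predict(X[:5])       # predictions for the first five compounds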

  17. Predictive modelling for startup and investor relationship based on crowdfunding platform data

    Science.gov (United States)

    Alamsyah, Andry; Buono Asto Nugroho, Tri

    2018-03-01

    A crowdfunding platform is a place where startups publicly show off their ideas in order to get their projects funded. Crowdfunding platforms such as Kickstarter are becoming popular today; they provide an efficient way for startups to get funded without liabilities and offer a variety of project categories to participate in. A safety procedure is in place to ensure a low-risk environment: the promoted project must reach its funding goal target, and if it fails to do so, no investment activity takes place. This motivates startups to promote and disseminate their project ideas more actively, and it also protects investors from losing money. The study objective is to predict the success of proposed projects and to map investor trends using a data mining framework. To achieve this objective, we propose 3 models. The first model predicts whether a project is going to be successful or fail using K-Nearest Neighbour (KNN). The second model predicts the number of successful projects using an Artificial Neural Network (ANN). The third model maps the trend of investors in investing in projects using the K-Means clustering algorithm. KNN gives 99.04% model accuracy, the best ANN configuration uses 16-14-1 neuron layers and a 0.2 learning rate, and K-Means gives 6 best-separated clusters. The results of these models can help startups or investors make decisions regarding startup investment.
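
    The classification and clustering parts of such a pipeline could look roughly like the Python sketch below; all feature names and values are invented, and the 16-14-1 ANN layout from the abstract is only mirrored in the MLP's hidden layers:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.neural_network import MLPRegressor
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        # placeholder project features: goal (USD), duration (days), backers, updates
        projects = rng.uniform([1e3, 10, 0, 0], [1e5, 60, 500, 20], size=(200, 4))
        funded = (projects[:, 2] > 150).astype(int)          # toy success label

        knn = KNeighborsClassifier(n_neighbors=5).fit(projects, funded)
        print("predicted success:", knn.predict(projects[:3]))

        # ANN with 16-14 hidden neurons regressing a count-like toy target
        monthly_successes = rng.poisson(30, size=200).astype(float)
        ann = MLPRegressor(hidden_layer_sizes=(16, 14), learning_rate_init=0.2,
                           max_iter=2000).fit(projects, monthly_successes)

        # map investor behaviour into 6 clusters, as in the abstract
        investors = rng.uniform(0, 1, size=(300, 3))         # toy investor features
        clusters = KMeans(n_clusters=6, n_init=10).fit_predict(investors)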

  18. Response surface and neural network based predictive models of cutting temperature in hard turning

    Directory of Open Access Journals (Sweden)

    Mozammel Mia

    2016-11-01

    Full Text Available The present study aimed to develop predictive models of the average tool-workpiece interface temperature in hard turning of AISI 1060 steels by coated carbide insert. The Response Surface Methodology (RSM) and Artificial Neural Network (ANN) were employed to predict the temperature in respect of cutting speed, feed rate and material hardness. The number and orientation of the experimental trials, conducted in both dry and high pressure coolant (HPC) environments, were planned using a full factorial design. The temperature was measured by using the tool-work thermocouple. In the RSM model, two quadratic equations of temperature were derived from experimental data. The analysis of variance (ANOVA) and mean absolute percentage error (MAPE) were performed to assess the adequacy of the models. In the ANN model, 80% of the data were used for training and 20% for testing. As with RSM, an error analysis was also conducted. The accuracy of the RSM and ANN models was found to be ⩾99%. The ANN models exhibit an error of ∼5% MAE for testing data. The regression coefficient was found to be greater than 99.9% for both dry and HPC. Both these models are acceptable, although the ANN model demonstrated a higher accuracy. These models, if employed, are expected to provide better control of the cutting temperature in turning of hardened steel.
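
    The RSM part essentially fits a quadratic polynomial in the three factors; a minimal Python sketch with invented data points is shown below:

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        # hypothetical trials: cutting speed (m/min), feed rate (mm/rev), hardness (HRC)
        X = np.array([[ 75, 0.10, 40], [ 75, 0.14, 48], [100, 0.10, 56],
                      [100, 0.14, 40], [125, 0.10, 48], [125, 0.14, 56],
                      [ 75, 0.12, 56], [100, 0.12, 40], [125, 0.12, 48]])
        temp = np.array([412, 455, 470, 448, 495, 532, 461, 439, 507.0])  # degC, invented

        # full quadratic response surface: linear, interaction and squared terms
        rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                            LinearRegression()).fit(X, temp)
        print("predicted temperature:", rsm.predict([[110, 0.12, 50]]))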

  19. Electric Vehicle Longitudinal Stability Control Based on a New Multimachine Nonlinear Model Predictive Direct Torque Control

    Directory of Open Access Journals (Sweden)

    M’hamed Sekour

    2017-01-01

    Full Text Available In order to improve the driving performance and the stability of electric vehicles (EVs), a new multimachine robust control, which realizes the acceleration slip regulation (ASR) and antilock braking system (ABS) functions, based on nonlinear model predictive (NMP) direct torque control (DTC), is proposed for four permanent magnet synchronous in-wheel motors. The in-wheel motor provides more possibilities of wheel control. One of its advantages is that it has low response time and almost instantaneous torque generation. Moreover, it can be independently controlled, enhancing the limits of vehicular control. For an EV equipped with four in-wheel electric motors, an advanced control may be envisaged. Taking advantage of the fast and accurate torque of in-wheel electric motors which is directly transmitted to the wheels, a new approach for longitudinal control realized by ASR and ABS is presented in this paper. In order to achieve a high-performance torque control for EVs, the NMP-DTC strategy is proposed. It uses the fuzzy logic control technique that determines online the accurate values of the weighting factors and generates the optimal switching states that optimize the EV drives’ decision. The simulation results built in Matlab/Simulink indicate that the EV can achieve high-performance vehicle longitudinal stability control.

  20. Road traffic noise prediction model for heterogeneous traffic based on ASJ-RTN Model 2008 with consideration of horn

    Science.gov (United States)

    Hustim, M.; Arifin, Z.; Aly, S. H.; Ramli, M. I.; Zakaria, R.; Liputo, A.

    2018-04-01

    This research aimed to predict the noise produced by traffic in the road network of Makassar City using the ASJ-RTN Model 2008 with the horn sound taken into account. Observations were taken at 37 survey points on the roadside. The observations were conducted at 06.00 - 18.00 and 06.00 - 21.00, with motorcycles (MC), light vehicles (LV) and heavy vehicles (HV) as research objects. The observed data were traffic volume, vehicle speed, number of horns and traffic noise measured using a Sound Level Meter Tenmars TM-103. The research results indicate that the noise prediction model accounting for the horn sound produces an average noise level of 78.5 dB, with a Pearson correlation of 0.95 and an RMSE of 0.87. Therefore, the ASJ-RTN Model 2008 prediction model accounting for the horn sound can be considered sufficiently good for predicting the noise level.

  1. NRFixer: Sentiment Based Model for Predicting the Fixability of Non-Reproducible Bugs

    Directory of Open Access Journals (Sweden)

    Anjali Goyal

    2017-08-01

    Full Text Available Software maintenance is an essential step in software development life cycle. Nowadays, software companies spend approximately 45% of total cost in maintenance activities. Large software projects maintain bug repositories to collect, organize and resolve bug reports. Sometimes it is difficult to reproduce the reported bug with the information present in a bug report and thus this bug is marked with resolution non-reproducible (NR). When NR bugs are reconsidered, a few of them might get fixed (NR-to-fix), leaving the others with the same resolution (NR). To analyse the behaviour of developers towards NR-to-fix and NR bugs, the sentiment analysis of NR bug report textual contents has been conducted. The sentiment analysis of bug reports shows that NR bugs' sentiments incline towards more negativity than reproducible bugs. Also, there is a noticeable opinion drift found in the sentiments of NR-to-fix bug reports. Observations driven from this analysis were an inspiration to develop a model that can judge the fixability of NR bugs. Thus a framework, NRFixer, which predicts the probability of NR bug fixation, is proposed. NRFixer was evaluated with two dimensions. The first dimension considers meta-fields of bug reports (model-1) and the other dimension additionally incorporates the sentiments (model-2) of developers for prediction. Both models were compared using various machine learning classifiers (Zero-R, naive Bayes, J48, random tree and random forest). The bug reports of Firefox and Eclipse projects were used to test NRFixer. In Firefox and Eclipse projects, J48 and Naive Bayes classifiers achieve the best prediction accuracy, respectively. It was observed that the inclusion of sentiments in the prediction model shows a rise in the prediction accuracy ranging from 2 to 5% for various classifiers.
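
    A rough Python sketch of the second NRFixer dimension (bug-report meta-fields plus a sentiment score fed to a naive Bayes classifier) is given below; all field names and values are invented placeholders:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # toy feature rows: [num_comments, num_cc, severity_level, sentiment_score]
        X = np.array([[ 3, 1, 2, -0.6],
                      [12, 4, 3,  0.1],
                      [ 1, 0, 1, -0.8],
                      [ 8, 2, 2,  0.3],
                      [ 5, 1, 3, -0.2],
                      [15, 6, 4,  0.5]])
        y = np.array([0, 1, 0, 1, 0, 1])   # 1 = NR bug eventually fixed, 0 = stays NR

        clf = GaussianNB().fit(X, y)
        new_bug = np.array([[7, 2, 3, -0.1]])
        print("probability the NR bug gets fixed:", clf.predict_proba(new_bug)[0, 1])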

  2. Development and validation of a prediction model for long-term sickness absence based on occupational health survey variables.

    Science.gov (United States)

    Roelen, Corné; Thorsen, Sannie; Heymans, Martijn; Twisk, Jos; Bültmann, Ute; Bjørner, Jakob

    2018-01-01

    The purpose of this study is to develop and validate a prediction model for identifying employees at increased risk of long-term sickness absence (LTSA), by using variables commonly measured in occupational health surveys. Based on the literature, 15 predictor variables were retrieved from the DAnish National working Environment Survey (DANES) and included in a model predicting incident LTSA (≥4 consecutive weeks) during 1-year follow-up in a sample of 4000 DANES participants. The 15-predictor model was reduced by backward stepwise statistical techniques and then validated in a sample of 2524 DANES participants, not included in the development sample. Identification of employees at increased LTSA risk was investigated by receiver operating characteristic (ROC) analysis; the area-under-the-ROC-curve (AUC) reflected discrimination between employees with and without LTSA during follow-up. The 15-predictor model was reduced to a 9-predictor model including age, gender, education, self-rated health, mental health, prior LTSA, work ability, emotional job demands, and recognition by the management. Discrimination by the 9-predictor model was significant (AUC = 0.68; 95% CI 0.61-0.76), but not practically useful. A prediction model based on occupational health survey variables identified employees with an increased LTSA risk, but should be further developed into a practically useful tool to predict the risk of LTSA in the general working population. Implications for rehabilitation: Long-term sickness absence risk predictions would enable healthcare providers to refer high-risk employees to rehabilitation programs aimed at preventing or reducing work disability. A prediction model based on health survey variables discriminates between employees at high and low risk of long-term sickness absence, but discrimination was not practically useful. Health survey variables provide insufficient information to determine long-term sickness absence risk profiles. There is a need for
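
    A hedged sketch of the validation step (fit a model on survey predictors in a development sample, then check discrimination on a separate validation sample with the area under the ROC curve) might look like this in Python; the data are random placeholders and a logistic regression is used as a stand-in, since the paper's exact model form is not restated here:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        # placeholder survey predictors for a development and a validation sample
        X_dev, X_val = rng.normal(size=(4000, 9)), rng.normal(size=(2524, 9))
        y_dev = rng.binomial(1, 0.08, size=4000)      # incident LTSA during follow-up (toy)
        y_val = rng.binomial(1, 0.08, size=2524)

        model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
        auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"validation AUC: {auc:.2f}")           # the paper reports 0.68 for its model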

  3. An empirical model to predict road dust emissions based on pavement and traffic characteristics.

    Science.gov (United States)

    Padoan, Elio; Ajmone-Marsan, Franco; Querol, Xavier; Amato, Fulvio

    2018-06-01

    The relative impact of non-exhaust sources (i.e. road dust, tire wear, road wear and brake wear particles) on urban air quality is increasing. Among them, road dust resuspension has generally the highest impact on PM concentrations but its spatio-temporal variability has been rarely studied and modeled. Some recent studies attempted to observe and describe the time-variability but, as it is driven by traffic and meteorology, uncertainty remains on the seasonality of emissions. The knowledge gap on spatial variability is much wider, as several factors have been pointed out as responsible for road dust build-up: pavement characteristics, traffic intensity and speed, fleet composition, proximity to traffic lights, but also the presence of external sources. However, no parameterization is available as a function of these variables. We investigated mobile road dust smaller than 10 μm (MF10) in two cities with different climatic and traffic conditions (Barcelona and Turin), to explore MF10 seasonal variability and the relationship between MF10 and site characteristics (pavement macrotexture, traffic intensity and proximity to braking zone). Moreover, we provide the first estimates of emission factors in the Po Valley both in summer and winter conditions. Our results showed a good inverse relationship between MF10 and macro-texture, traffic intensity and distance from the nearest braking zone. We also found a clear seasonal effect of road dust emissions, with higher emission in summer, likely due to the lower pavement moisture. These results allowed building a simple empirical model, predicting maximal dust loadings and, consequently, emission potential, based on the aforementioned data. This model will need to be scaled for meteorological effect, using methods accounting for weather and pavement moisture. This can significantly improve bottom-up emission inventory for spatial allocation of emissions and air quality management, to select those roads with higher emissions.

  4. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    Science.gov (United States)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.

  5. A meteo-hydrological prediction system based on a multi-model approach for precipitation forecasting

    Directory of Open Access Journals (Sweden)

    S. Davolio

    2008-02-01

    Full Text Available The precipitation forecasted by a numerical weather prediction model, even at high resolution, suffers from errors which can be considerable at the scales of interest for hydrological purposes. In the present study, a fraction of the uncertainty related to meteorological prediction is taken into account by implementing a multi-model forecasting approach, aimed at providing multiple precipitation scenarios driving the same hydrological model. Therefore, the estimation of that uncertainty associated with the quantitative precipitation forecast (QPF), conveyed by the multi-model ensemble, can be exploited by the hydrological model, propagating the error into the hydrological forecast.

    The proposed meteo-hydrological forecasting system is implemented and tested in a real-time configuration for several episodes of intense precipitation affecting the Reno river basin, a medium-sized basin located in northern Italy (Apennines. These episodes are associated with flood events of different intensity and are representative of different meteorological configurations responsible for severe weather affecting northern Apennines.

    The simulation results show that the coupled system is promising in the prediction of discharge peaks (both in terms of amount and timing for warning purposes. The ensemble hydrological forecasts provide a range of possible flood scenarios that proved to be useful for the support of civil protection authorities in their decision.

  6. New mechanistically based model for predicting reduction of biosolids waste by ozonation of return activated sludge

    International Nuclear Information System (INIS)

    Isazadeh, Siavash; Feng, Min; Urbina Rivas, Luis Enrique; Frigon, Dominic

    2014-01-01

    Highlights: • Biomass inactivation followed an exponential decay with increasing ozone doses. • From pure cultures, inactivation did not result in significant COD solubilization. • Ozone dose inactivation thresholds resulted from floc structure modifications. • Modeling description of biomass inactivation during RAS-ozonation was improved. • Model best describing inactivation resulted in best performance predictions. - Abstract: Two pilot-scale activated sludge reactors were operated for 98 days to provide the necessary data to develop and validate a new mathematical model predicting the reduction of biosolids production by ozonation of the return activated sludge (RAS). Three ozone doses were tested during the study. In addition to the pilot-scale study, laboratory-scale experiments were conducted with mixed liquor suspended solids and with pure cultures to parameterize the biomass inactivation process during exposure to ozone. The experiments revealed that biomass inactivation occurred even at the lowest doses, but that it was not associated with extensive COD solubilization. For validation, the model was used to simulate the temporal dynamics of the pilot-scale operational data. Increasing the description accuracy of the inactivation process improved the precision of the model in predicting the operational data

  7. New mechanistically based model for predicting reduction of biosolids waste by ozonation of return activated sludge

    Energy Technology Data Exchange (ETDEWEB)

    Isazadeh, Siavash; Feng, Min; Urbina Rivas, Luis Enrique; Frigon, Dominic, E-mail: dominic.frigon@mcgill.ca

    2014-04-01

    Highlights: • Biomass inactivation followed an exponential decay with increasing ozone doses. • From pure cultures, inactivation did not result in significant COD solubilization. • Ozone dose inactivation thresholds resulted from floc structure modifications. • Modeling description of biomass inactivation during RAS-ozonation was improved. • Model best describing inactivation resulted in best performance predictions. - Abstract: Two pilot-scale activated sludge reactors were operated for 98 days to provide the necessary data to develop and validate a new mathematical model predicting the reduction of biosolids production by ozonation of the return activated sludge (RAS). Three ozone doses were tested during the study. In addition to the pilot-scale study, laboratory-scale experiments were conducted with mixed liquor suspended solids and with pure cultures to parameterize the biomass inactivation process during exposure to ozone. The experiments revealed that biomass inactivation occurred even at the lowest doses, but that it was not associated with extensive COD solubilization. For validation, the model was used to simulate the temporal dynamics of the pilot-scale operational data. Increasing the description accuracy of the inactivation process improved the precision of the model in predicting the operational data.

  8. New mechanistically based model for predicting reduction of biosolids waste by ozonation of return activated sludge.

    Science.gov (United States)

    Isazadeh, Siavash; Feng, Min; Urbina Rivas, Luis Enrique; Frigon, Dominic

    2014-04-15

    Two pilot-scale activated sludge reactors were operated for 98 days to provide the necessary data to develop and validate a new mathematical model predicting the reduction of biosolids production by ozonation of the return activated sludge (RAS). Three ozone doses were tested during the study. In addition to the pilot-scale study, laboratory-scale experiments were conducted with mixed liquor suspended solids and with pure cultures to parameterize the biomass inactivation process during exposure to ozone. The experiments revealed that biomass inactivation occurred even at the lowest doses, but that it was not associated with extensive COD solubilization. For validation, the model was used to simulate the temporal dynamics of the pilot-scale operational data. Increasing the description accuracy of the inactivation process improved the precision of the model in predicting the operational data. Copyright © 2014 Elsevier B.V. All rights reserved.
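
    The highlighted exponential-decay behaviour of biomass inactivation with ozone dose can be sketched with a simple fit in Python; the dose/viable-fraction pairs below are illustrative, not measurements from the study:

        import numpy as np

        # hypothetical ozone doses (mg O3 / g TSS) and surviving active-biomass fractions
        dose = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
        viable_fraction = np.array([1.00, 0.72, 0.55, 0.28, 0.09])

        # fit viable_fraction ~ exp(-k * dose) by linear regression on the log scale
        k = -np.polyfit(dose, np.log(viable_fraction), 1)[0]
        print(f"inactivation rate constant k = {k:.3f} per unit dose")
        print("predicted viable fraction at dose 30:", np.exp(-k * 30.0))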

  9. A New Predictive Model Based on the ABC Optimized Multivariate Adaptive Regression Splines Approach for Predicting the Remaining Useful Life in Aircraft Engines

    Directory of Open Access Journals (Sweden)

    Paulino José García Nieto

    2016-05-01

    Full Text Available Remaining useful life (RUL) estimation is considered as one of the most central points in the prognostics and health management (PHM). The present paper describes a nonlinear hybrid ABC–MARS-based model for the prediction of the remaining useful life of aircraft engines. Indeed, it is well-known that an accurate RUL estimation allows failure prevention in a more controllable way so that the effective maintenance can be carried out in appropriate time to correct impending faults. The proposed hybrid model combines multivariate adaptive regression splines (MARS), which have been successfully adopted for regression problems, with the artificial bee colony (ABC) technique. This optimization technique involves parameter setting in the MARS training procedure, which significantly influences the regression accuracy. However, its use in reliability applications has not yet been widely explored. Bearing this in mind, remaining useful life values have been predicted here by using the hybrid ABC–MARS-based model from the remaining measured parameters (input variables) for aircraft engines with success. A correlation coefficient equal to 0.92 was obtained when this hybrid ABC–MARS-based model was applied to experimental data. The agreement of this model with experimental data confirmed its good performance. The main advantage of this predictive model is that it does not require information about the previous operation states of the aircraft engine.

  10. Modeling and simulation of adaptive Neuro-fuzzy based intelligent system for predictive stabilization in structured overlay networks

    Directory of Open Access Journals (Sweden)

    Ramanpreet Kaur

    2017-02-01

    Full Text Available Intelligent prediction of neighboring node (k well-defined neighbors as specified by the DHT protocol) dynamism is helpful to improve the resilience and can reduce the overhead associated with topology maintenance of structured overlay networks. The dynamic behavior of overlay nodes depends on many factors such as the underlying user’s online behavior, geographical position, time of the day, day of the week etc. as reported in many applications. We can exploit these characteristics for efficient maintenance of structured overlay networks by implementing an intelligent predictive framework for setting stabilization parameters appropriately. Considering the fact that human driven behavior usually goes beyond intermittent availability patterns, we use a hybrid Neuro-fuzzy based predictor to enhance the accuracy of the predictions. In this paper, we discuss our predictive stabilization approach, implement Neuro-fuzzy based prediction in MATLAB simulation and apply this predictive stabilization model in a chord based overlay network using OverSim as a simulation tool. The MATLAB simulation results show that the behavior of neighboring nodes is predictable to a large extent, as indicated by the very small RMSE. The OverSim based simulation results also show significant improvements in the performance of the chord based overlay network in terms of lookup success ratio, lookup hop count and maintenance overhead as compared to the periodic stabilization approach.

  11. Development of an irrigation scheduling software based on model predicted crop water stress

    Science.gov (United States)

    Modern irrigation scheduling methods are generally based on sensor-monitored soil moisture regimes rather than crop water stress which is difficult to measure in real-time, but can be computed using agricultural system models. In this study, an irrigation scheduling software based on RZWQM2 model pr...

  12. Prediction of equibiaxial loading stress in collagen-based extracellular matrix using a three-dimensional unit cell model.

    Science.gov (United States)

    Susilo, Monica E; Bell, Brett J; Roeder, Blayne A; Voytik-Harbin, Sherry L; Kokini, Klod; Nauman, Eric A

    2013-03-01

    Mechanical signals are important factors in determining cell fate. Therefore, insights as to how mechanical signals are transferred between the cell and its surrounding three-dimensional collagen fibril network will provide a basis for designing the optimum extracellular matrix (ECM) microenvironment for tissue regeneration. Previously we described a cellular solid model to predict fibril microstructure-mechanical relationships of reconstituted collagen matrices due to unidirectional loads (Acta Biomater 2010;6:1471-86). The model consisted of representative volume elements made up of an interconnected network of flexible struts. The present study extends this work by adapting the model to account for microstructural anisotropy of the collagen fibrils and a biaxial loading environment. The model was calibrated based on uniaxial tensile data and used to predict the equibiaxial tensile stress-stretch relationship. Modifications to the model significantly improved its predictive capacity for equibiaxial loading data. With a comparable fibril length (model 5.9-8μm, measured 7.5μm) and appropriate fibril anisotropy the anisotropic model provides a better representation of the collagen fibril microstructure. Such models are important tools for tissue engineering because they facilitate prediction of microstructure-mechanical relationships for collagen matrices over a wide range of microstructures and provide a framework for predicting cell-ECM interactions. Copyright © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  13. Predicting effects of noncoding variants with deep learning-based sequence model.

    Science.gov (United States)

    Zhou, Jian; Troyanskaya, Olga G

    2015-10-01

    Identifying functional effects of noncoding variants is a major challenge in human genetics. To predict the noncoding-variant effects de novo from sequence, we developed a deep learning-based algorithmic framework, DeepSEA (http://deepsea.princeton.edu/), that directly learns a regulatory sequence code from large-scale chromatin-profiling data, enabling prediction of chromatin effects of sequence alterations with single-nucleotide sensitivity. We further used this capability to improve prioritization of functional variants including expression quantitative trait loci (eQTLs) and disease-associated variants.
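
    The core idea—learning chromatin effects directly from one-hot-encoded DNA sequence with a convolutional network—can be sketched roughly as below in Python (PyTorch, toy sizes; this is not the actual DeepSEA architecture):

        import torch
        import torch.nn as nn

        class TinySeqModel(nn.Module):
            """Toy convolutional model mapping one-hot DNA (4 x L) to chromatin-feature probabilities."""
            def __init__(self, seq_len=1000, n_targets=10):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv1d(4, 64, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
                    nn.Conv1d(64, 128, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
                )
                with torch.no_grad():
                    flat = self.conv(torch.zeros(1, 4, seq_len)).numel()
                self.head = nn.Sequential(nn.Flatten(), nn.Linear(flat, n_targets), nn.Sigmoid())

            def forward(self, x):
                return self.head(self.conv(x))

        # score the effect of a single-nucleotide change as the difference in predictions
        model = TinySeqModel()
        ref = torch.zeros(1, 4, 1000); ref[0, 0, :] = 1.0      # toy all-"A" reference sequence
        alt = ref.clone(); alt[0, :, 500] = torch.tensor([0., 1., 0., 0.])  # A->C at position 500
        effect = model(alt) - model(ref)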

  14. Model-Based Predictive Control Scheme for Cost Optimization and Balancing Services for Supermarket Refrigeration Systems

    DEFF Research Database (Denmark)

    Weerts, Hermanus H. M.; Shafiei, Seyed Ehsan; Stoustrup, Jakob

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate the regulatory power services as well as energy cost optimization of such systems in the smart grid. Nonlinear dynamics existing in large-scale refrigeration plants challenge the predictive...... control design. It is however shown that taking into account the knowledge of different time scales in the dynamical subsystems makes possible a linear formulation of a centralized predictive controller. A realistic scenario of regulatory power services in the smart grid is considered and formulated...... in the same objective as of cost optimization one. A simulation benchmark validated against real data and including significant dynamics of the system is employed to show the effectiveness of the proposed control scheme....

  15. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    Directory of Open Access Journals (Sweden)

    Qiang Shang

Full Text Available Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using the measured data of an expressway in Xiamen, China, and the SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and the single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from accuracy improvement, the proposed model is more robust.
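
    As a rough illustration of the two building blocks named in the abstract, the sketch below implements basic SSA denoising (embed, truncate the SVD, average the anti-diagonals) and a kernel ELM whose closed-form output weights coincide with kernel ridge regression. The window size, rank, kernel width, regularization constant, and synthetic flow series are all assumptions for illustration; the paper's phase space reconstruction and GSA parameter tuning are omitted.

```python
import numpy as np

def ssa_denoise(x, window=24, rank=3):
    """Singular spectrum analysis: embed, keep the leading components, average anti-diagonals."""
    x = np.asarray(x, dtype=float)
    n, k = len(x), len(x) - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])   # trajectory (Hankel) matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                  # rank-r reconstruction
    rec, cnt = np.zeros(n), np.zeros(n)
    for j in range(k):                                         # diagonal averaging back to 1-D
        rec[j:j + window] += Xr[:, j]
        cnt[j:j + window] += 1
    return rec / cnt

def rbf(A, B, gamma):
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def kelm_fit(X, y, C=100.0, gamma=0.5):
    """Kernel ELM output weights; the solution coincides with kernel ridge regression."""
    return np.linalg.solve(rbf(X, X, gamma) + np.eye(len(X)) / C, y)

def kelm_predict(X_train, beta, X_new, gamma=0.5):
    return rbf(X_new, X_train, gamma) @ beta

# Illustrative use: denoise a synthetic flow series, then predict from 4 lagged values.
flow = np.sin(np.linspace(0, 20, 300)) * 50 + 200 + np.random.randn(300) * 5
clean = ssa_denoise(flow)
clean = (clean - clean.mean()) / clean.std()        # scale before kernel modelling
lags = 4
X = np.column_stack([clean[i:len(clean) - lags + i] for i in range(lags)])
y = clean[lags:]
beta = kelm_fit(X[:-20], y[:-20])
pred = kelm_predict(X[:-20], beta, X[-20:])
```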

  16. Model predictive controller-based multi-model control system for longitudinal stability of distributed drive electric vehicle.

    Science.gov (United States)

    Shi, Ke; Yuan, Xiaofang; Liu, Liang

    2018-01-01

Distributed drive electric vehicles (DDEV) have been widely researched recently, and longitudinal stability is a very important research topic for them. Conventional wheel slip ratio control strategies are usually designed for one special operating mode, and optimal performance cannot be obtained as the DDEV works under various operating modes. In this paper, a novel model predictive controller-based multi-model control system (MPC-MMCS) is proposed to solve the longitudinal stability problem of the DDEV. Firstly, the operation state of the DDEV is summarized as three kinds of typical operating modes. A submodel set is established to accurately represent the state value of the corresponding operating mode. Secondly, the matching degree between the state of the actual DDEV and each submodel is analyzed. The matching degree is expressed as a weight coefficient and calculated by a modified recursive Bayes theorem. Thirdly, a nonlinear MPC is designed to achieve the optimal wheel slip ratio for each submodel. The optimal design of the MPC is realized by a parallel chaos optimization algorithm (PCOA) with computational accuracy and efficiency. Finally, the control output of the MPC-MMCS is computed as the weighted output of each MPC to achieve smooth switching between operating modes. The proposed MPC-MMCS is evaluated on an eight-degrees-of-freedom (8DOF) DDEV model simulation platform, and simulation results for different conditions show the benefits of the proposed control system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Prediction of the Pharmacokinetics, Pharmacodynamics, and Efficacy of a Monoclonal Antibody, Using a Physiologically Based Pharmacokinetic FcRn Model

    Science.gov (United States)

    Chetty, Manoranjenni; Li, Linzhong; Rose, Rachel; Machavaram, Krishna; Jamei, Masoud; Rostami-Hodjegan, Amin; Gardner, Iain

    2015-01-01

Although advantages of physiologically based pharmacokinetic models (PBPK) are now well established, PBPK models that are linked to pharmacodynamic (PD) models to predict pharmacokinetics (PK), PD, and efficacy of monoclonal antibodies (mAbs) in humans are uncommon. The aim of this study was to develop a PD model that could be linked to a physiologically based mechanistic FcRn model to predict PK, PD, and efficacy of efalizumab. The mechanistic FcRn model for mAbs with target-mediated drug disposition within the Simcyp population-based simulator was used to simulate the pharmacokinetic profiles for three different single doses and two multiple doses of efalizumab administered to virtual Caucasian healthy volunteers. The elimination of efalizumab was modeled with both a target-mediated component (specific) and catabolism in the endosome (non-specific). This model accounted for the binding between the neonatal Fc receptor (FcRn) and efalizumab (protective against elimination) and for changes in CD11a target concentration. An integrated response model was then developed to predict the changes in mean Psoriasis Area and Severity Index (PASI) scores that were measured in a clinical study as an efficacy marker for efalizumab treatment. PASI scores were approximated as continuous and following a first-order asymptotic progression model. The reported steady-state asymptote (Yss) and baseline score [Y(0)] were applied, and parameter estimation was used to determine the half-life of progression (Tp) of psoriasis. Results suggested that simulations using this model were able to recover the changes in PASI scores (indicating efficacy) observed during clinical studies. Simulations of both single-dose and multiple-dose efalizumab concentration-time profiles as well as suppression of CD11a concentrations recovered clinical data reasonably well. It can be concluded that the developed PBPK FcRn model linked to a PD model adequately predicted PK, PD, and efficacy of efalizumab.

  18. Improved prediction of residue flexibility by embedding optimized amino acid grouping into RSA-based linear models.

    Science.gov (United States)

    Zhang, Hua; Kurgan, Lukasz

    2014-12-01

Knowledge of protein flexibility is vital for deciphering the corresponding functional mechanisms. This knowledge would help, for instance, in improving computational drug design and refinement in homology-based modeling. We propose a new predictor of residue flexibility, expressed by B-factors, for protein chains, which uses local (in the chain) predicted (or native) relative solvent accessibility (RSA) and custom-derived amino acid (AA) alphabets. Our predictor is implemented as a two-stage linear regression model that uses an RSA-based space in a local sequence window in the first stage and a reduced AA pair-based space in the second stage as the inputs. This method has an easy-to-comprehend, explicit linear form in both stages. Particle swarm optimization was used to find an optimal reduced AA alphabet to simplify the input space and improve the prediction performance. The average correlation coefficients between the native and predicted B-factors measured on a large benchmark dataset are improved from 0.65 to 0.67 when using the native RSA values and from 0.55 to 0.57 when using the predicted RSA values. Blind tests that were performed on two independent datasets show consistent improvements in the average correlation coefficients by a modest value of 0.02 for both native and predicted RSA-based predictions.

  19. A diffusivity model for predicting VOC diffusion in porous building materials based on fractal theory.

    Science.gov (United States)

    Liu, Yanfeng; Zhou, Xiaojun; Wang, Dengjia; Song, Cong; Liu, Jiaping

    2015-12-15

Most building materials are porous media, and the internal diffusion coefficients of such materials have an important influence on the emission characteristics of volatile organic compounds (VOCs). The pore structure of porous building materials has a significant impact on the diffusion coefficient. However, the complex structural characteristics bring great difficulties to model development. The existing prediction models of the diffusion coefficient are flawed and need to be improved. Using scanning electron microscope (SEM) observations and mercury intrusion porosimetry (MIP) tests of typical porous building materials, this study developed a new diffusivity model: the multistage series-connection fractal capillary-bundle (MSFC) model. The model considers the variable-diameter capillaries formed by macropores connected in series as the main mass transfer paths, and the diameter distribution of the capillary bundles obeys a fractal power law in the cross section. In addition, the tortuosity of the macrocapillary segments with different diameters is obtained by fractal theory. Mesopores serve as the connections between the macrocapillary segments rather than as the main mass transfer paths. The theoretical results obtained using the MSFC model yielded a highly accurate prediction of the diffusion coefficients and were in good agreement with the VOC concentration measurements in the environmental test chamber. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Prediction Model and Principle of End-of-Life Threshold for Lithium Ion Batteries Based on Open Circuit Voltage Drifts

    International Nuclear Information System (INIS)

    Cui, Yingzhi; Yang, Jie; Du, Chunyu; Zuo, Pengjian; Gao, Yunzhi; Cheng, Xinqun; Ma, Yulin; Yin, Geping

    2017-01-01

    Highlights: •Open circuit voltage evolution over ageing of lithium ion batteries is deciphered. •The mechanism responsible for the end-of-life (EOL) threshold is elaborated. •A new prediction model of EOL threshold with improved accuracy is developed. •This EOL prediction model is promising for the applications in electric vehicles. -- Abstract: The end-of-life (EOL) of a lithium ion battery (LIB) is defined as the time point when the LIB can no longer provide sufficient power or energy to accomplish its intended function. Generally, the EOL occurs abruptly when the degradation of a LIB reaches the threshold. Therefore, current prediction methods of EOL by extrapolating the early degradation behavior often result in significant errors. To address this problem, this paper analyzes the reason for the EOL threshold of a LIB with shallow depth of discharge. It is found that the sudden appearance of EOL threshold results from the drift of open circuit voltage (OCV) at the end of both shallow depth and full discharges. Further, a new EOL threshold prediction model with highly improved accuracy is developed based on the OCV drifts and their evolution mechanism, which can effectively avoid the misjudgment of EOL threshold. The accuracy of this EOL threshold prediction model is verified by comparing with experimental results. The EOL threshold prediction model can be applied to other battery chemistry systems and its possible application in electric vehicles is finally discussed.

  1. Prediction model of energy consumption in Jiangsu Province based on constraint condition of carbon emission

    Science.gov (United States)

    Chang, Z. G.; Xue, T. T.; Chen, Y. J.; Chao, X. H.

    2017-11-01

In order to achieve the targets for energy conservation and the economic development goals of Jiangsu Province under the constraint of carbon emissions, this paper uses the grey GM(1,1) model to predict and optimize the consumption structure of the major energy sources (coal, oil, natural gas, etc.) in Jiangsu Province in the "13th Five-Year" period and the next seven years. The predictions meet the requirement of reducing China's carbon dioxide emissions per unit GDP by 50%. The results show that the proposed approach and model are effective. Finally, we put forward suggestions on energy saving and emission reduction, the adjustment of the energy structure, and the policy of coal consumption in Jiangsu Province.
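
    The abstract's data are not reproduced here, but the GM(1,1) procedure it relies on is standard: accumulate the series, fit the whitening equation by least squares, and difference the fitted cumulative series back. The sketch below assumes an invented consumption series purely for illustration.

```python
import numpy as np

def gm11_forecast(x0, horizon=3):
    """GM(1,1) grey model: accumulate, fit the whitening equation, then difference back."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # development coefficient and grey input
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0], x0_hat[1:] = x0[0], np.diff(x1_hat)    # inverse AGO gives fitted + forecast values
    return x0_hat

# Hypothetical annual coal-consumption figures (arbitrary units), purely illustrative.
coal = [265, 271, 276, 270, 268, 262]
print(gm11_forecast(coal, horizon=7)[-7:])
```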

  2. An Analytical Model for Fatigue Life Prediction Based on Fracture Mechanics and Crack Closure

    DEFF Research Database (Denmark)

    Ibsø, Jan Behrend; Agerskov, Henning

    1996-01-01

    test specimens are compared with fatigue life predictions using a fracture mechanics approach. In the calculation of the fatigue life, the influence of the welding residual stresses and crack closure on the fatigue crack growth is considered. A description of the crack closure model for analytical...... determination of the fatigue life is included. Furthermore, the results obtained in studies of the various parameters that have an influence on the fatigue life, are given. A very good agreement between experimental and analytical results is obtained, when the crack closure model is used in determination...... of the analytical fatigue lives. Both the analytical and experimental results obtained show that the Miner rule may give quite unconservative predictions of the fatigue life for the types of stochastic loading studied....

  4. Improved prediction of higher heating value of biomass using an artificial neural network model based on proximate analysis.

    Science.gov (United States)

    Uzun, Harun; Yıldız, Zeynep; Goldfarb, Jillian L; Ceylan, Selim

    2017-06-01

As biomass becomes more integrated into our energy feedstocks, the ability to predict its combustion enthalpy from routine data such as carbon, ash, and moisture content enables rapid decisions about utilization. The present work constructs a novel artificial neural network model with a 3-3-1 tangent sigmoid architecture to predict biomasses' higher heating values from only their proximate analyses, requiring minimal specificity compared to models based on elemental composition. The model presented has a considerably higher correlation coefficient (0.963) and lower root mean square error (0.375), mean absolute error (0.328), and mean bias error (0.010) than other models presented in the literature, which, at least when applied to the present data set, tend to under-predict the combustion enthalpy. Copyright © 2017 Elsevier Ltd. All rights reserved.
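
    A minimal scikit-learn sketch of a 3-3-1 tanh network mapping proximate analysis to higher heating value is shown below; the feature ranges, the synthetic target relation, and the example query are invented placeholders, not the study's data or coefficients.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder proximate-analysis data: [fixed carbon, volatile matter, ash] in wt% (dry basis).
rng = np.random.default_rng(0)
X = rng.uniform([10, 55, 1], [25, 85, 15], size=(60, 3))
# Synthetic target loosely mimicking an empirical HHV correlation (MJ/kg), for illustration only.
y = 0.35 * X[:, 0] + 0.19 * X[:, 1] - 0.02 * X[:, 2] + rng.normal(0, 0.3, 60)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",   # 3-3-1 tangent sigmoid network
                 solver="lbfgs", max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[18.0, 72.0, 6.0]]))   # predicted HHV for one hypothetical sample
```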

  5. A discriminant analysis prediction model of non-syndromic cleft lip with or without cleft palate based on risk factors.

    Science.gov (United States)

    Li, Huixia; Luo, Miyang; Luo, Jiayou; Zheng, Jianfei; Zeng, Rong; Du, Qiyun; Fang, Junqun; Ouyang, Na

    2016-11-23

A risk prediction model of non-syndromic cleft lip with or without cleft palate (NSCL/P) was established by a discriminant analysis to predict the individual risk of NSCL/P in pregnant women. A hospital-based case-control study was conducted with 113 cases of NSCL/P and 226 controls without NSCL/P. The cases and the controls were obtained from 52 birth defects' surveillance hospitals in Hunan Province, China. A questionnaire was administered in person to collect the variables relevant to NSCL/P by face-to-face interviews. Logistic regression models were used to analyze the influencing factors of NSCL/P, and a stepwise Fisher discriminant analysis was subsequently used to construct the prediction model. In the univariate analysis, 13 influencing factors were related to NSCL/P, of which the following 8 influencing factors as predictors determined the discriminant prediction model: family income, maternal occupational hazards exposure, premarital medical examination, housing renovation, milk/soymilk intake in the first trimester of pregnancy, paternal occupational hazards exposure, paternal strong tea drinking, and family history of NSCL/P. The model was statistically significant (lambda = 0.772, chi-square = 86.044, df = 8, P < 0.001). Self-verification showed that 83.8% of the participants were correctly predicted to be NSCL/P cases or controls, with a sensitivity of 74.3% and a specificity of 88.5%. The area under the receiver operating characteristic curve (AUC) was 0.846. The prediction model that was established using the risk factors of NSCL/P can be useful for predicting the risk of NSCL/P. Further research is needed to improve the model, and confirm the validity and reliability of the model.
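
    A hedged sketch of the modelling step: a Fisher (linear) discriminant fitted to binary risk-factor indicators for a case-control sample of the same size as in the study, reporting self-verification sensitivity, specificity, and AUC. The factor prevalences below are hypothetical, not the study's estimates, and the stepwise predictor selection is omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(11)
n_case, n_ctrl = 113, 226
# Hypothetical prevalences of the eight binary risk-factor indicators in cases vs controls.
p_case = np.array([0.50, 0.40, 0.30, 0.45, 0.35, 0.30, 0.25, 0.20])
p_ctrl = np.array([0.30, 0.20, 0.50, 0.25, 0.55, 0.15, 0.10, 0.05])
X = np.vstack([rng.random((n_case, 8)) < p_case,
               rng.random((n_ctrl, 8)) < p_ctrl]).astype(float)
y = np.array([1] * n_case + [0] * n_ctrl)

lda = LinearDiscriminantAnalysis().fit(X, y)        # Fisher discriminant on the risk factors
pred = lda.predict(X)                               # self-verification on the same sample
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"sensitivity={tp/(tp+fn):.3f}  specificity={tn/(tn+fp):.3f}  "
      f"AUC={roc_auc_score(y, lda.decision_function(X)):.3f}")
```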

  6. Using Cutting-Edge Tree-Based Stochastic Models to Predict Credit Risk

    Directory of Open Access Journals (Sweden)

    Khaled Halteh

    2018-05-01

Full Text Available Credit risk is a critical issue that affects banks and companies on a global scale. Possessing the ability to accurately predict the level of credit risk has the potential to help the lender and borrower. This is achieved by alleviating the number of loans provided to borrowers with poor financial health, thereby reducing the number of failed businesses, and, in effect, preventing economies from collapsing. This paper uses state-of-the-art stochastic models, namely decision trees, random forests, and stochastic gradient boosting, to add to the current literature on credit-risk modelling. The Australian mining industry has been selected to test our methodology. Mining in Australia generates around $138 billion annually, making up more than half of the total goods and services. This paper uses publicly-available financial data from 750 risky and non-risky Australian mining companies as variables in our models. Our results indicate that stochastic gradient boosting was the superior model at correctly classifying the good and bad credit-rated companies within the mining sector. Our model showed that ‘Property, Plant, & Equipment (PPE) turnover’, ‘Invested Capital Turnover’, and ‘Price over Earnings Ratio (PER)’ were the variables with the best explanatory power pertaining to predicting credit risk in the Australian mining sector.
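
    A brief sketch of the model family the paper compares is given below, using a synthetic stand-in for the financial-ratio data (the mining dataset is not reproduced). Cross-validated AUC compares random forest and gradient boosting, and feature importances give the kind of variable ranking reported in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for financial ratios of risky vs non-risky firms.
X, y = make_classification(n_samples=750, n_features=12, n_informative=5, random_state=1)

for name, clf in [("random forest", RandomForestClassifier(n_estimators=300, random_state=1)),
                  ("gradient boosting", GradientBoostingClassifier(random_state=1))]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")

# Variable ranking analogous to the paper's findings (use your own feature names in practice).
gb = GradientBoostingClassifier(random_state=1).fit(X, y)
print(sorted(enumerate(gb.feature_importances_), key=lambda t: -t[1])[:3])
```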

  7. A Simple Physics-Based Model Predicts Oil Production from Thousands of Horizontal Wells in Shales

    KAUST Repository

    Patzek, Tadeusz

    2017-10-18

Over the last six years, crude oil production from shales and ultra-deep GOM in the United States has accounted for most of the net increase of global oil production. Therefore, it is important to have a good predictive model of oil production and ultimate recovery in shale wells. Here we introduce a simple model of producing oil and solution gas from the horizontal hydrofractured wells. This model is consistent with the basic physics and geometry of the extraction process. We then apply our model to thousands of wells in the Eagle Ford shale. Given well geometry, we obtain a one-dimensional nonlinear pressure diffusion equation that governs flow of mostly oil and solution gas. In principle, solutions of this equation depend on many parameters, but in practice and within a given oil shale, all but three can be fixed at typical values, leading to a nonlinear diffusion problem we linearize and solve exactly with a scaling

  8. Prediction of interindividual variation in drug plasma levels in vivo from individual enzyme kinetic data and physiologically based pharmacokinetic modeling

    NARCIS (Netherlands)

    Bogaards, J.J.P.; Hissink, E.M.; Briggs, M.; Weaver, R.; Jochemsen, R.; Jackson, P.; Bertrand, M.; Bladeren, P. van

    2000-01-01

    A strategy is presented to predict interindividual variation in drug plasma levels in vivo by the use of physiologically based pharmacokinetic modeling and human in vitro metabolic parameters, obtained through the combined use of microsomes containing single cytochrome P450 enzymes and a human liver

  9. Predictions on the Development Dimensions of Provincial Tourism Discipline Based on the Artificial Neural Network BP Model

    Science.gov (United States)

    Yang, Yang; Hu, Jun; Lv, Yingchun; Zhang, Mu

    2013-01-01

    As the tourism industry has gradually become the strategic mainstay industry of the national economy, the scope of the tourism discipline has developed rigorously. This paper makes a predictive study on the development of the scope of Guangdong provincial tourism discipline based on the artificial neural network BP model in order to find out how…

  10. The Relevance Voxel Machine (RVoxM): A Self-Tuning Bayesian Model for Informative Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2012-01-01

    This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially...

  11. Development and validation of a prediction model for long-term sickness absence based on occupational health survey variables

    NARCIS (Netherlands)

    Roelen, Corne; Thorsen, Sannie; Heymans, Martijn; Twisk, Jos; Bultmann, Ute; Bjorner, Jakob

    2018-01-01

    Purpose: The purpose of this study is to develop and validate a prediction model for identifying employees at increased risk of long-term sickness absence (LTSA), by using variables commonly measured in occupational health surveys. Materials and methods: Based on the literature, 15 predictor

  12. Empirical models based on the universal soil loss equation fail to predict sediment discharges from Chesapeake Bay catchments.

    Science.gov (United States)

    Boomer, Kathleen B; Weller, Donald E; Jordan, Thomas E

    2008-01-01

    The Universal Soil Loss Equation (USLE) and its derivatives are widely used for identifying watersheds with a high potential for degrading stream water quality. We compared sediment yields estimated from regional application of the USLE, the automated revised RUSLE2, and five sediment delivery ratio algorithms to measured annual average sediment delivery in 78 catchments of the Chesapeake Bay watershed. We did the same comparisons for another 23 catchments monitored by the USGS. Predictions exceeded observed sediment yields by more than 100% and were highly correlated with USLE erosion predictions (Pearson r range, 0.73-0.92; p USLE estimates (r = 0.87; p USLE model did not change the results. In ranked comparisons between observed and predicted sediment yields, the models failed to identify catchments with higher yields (r range, -0.28-0.00; p > 0.14). In a multiple regression analysis, soil erodibility, log (stream flow), basin shape (topographic relief ratio), the square-root transformed proportion of forest, and occurrence in the Appalachian Plateau province explained 55% of the observed variance in measured suspended sediment loads, but the model performed poorly (r(2) = 0.06) at predicting loads in the 23 USGS watersheds not used in fitting the model. The use of USLE or multiple regression models to predict sediment yields is not advisable despite their present widespread application. Integrated watershed models based on the USLE may also be unsuitable for making management decisions.

  13. Template-based and free modeling of I-TASSER and QUARK pipelines using predicted contact maps in CASP12.

    Science.gov (United States)

    Zhang, Chengxin; Mortuza, S M; He, Baoji; Wang, Yanting; Zhang, Yang

    2018-03-01

    We develop two complementary pipelines, "Zhang-Server" and "QUARK", based on I-TASSER and QUARK pipelines for template-based modeling (TBM) and free modeling (FM), and test them in the CASP12 experiment. The combination of I-TASSER and QUARK successfully folds three medium-size FM targets that have more than 150 residues, even though the interplay between the two pipelines still awaits further optimization. Newly developed sequence-based contact prediction by NeBcon plays a critical role to enhance the quality of models, particularly for FM targets, by the new pipelines. The inclusion of NeBcon predicted contacts as restraints in the QUARK simulations results in an average TM-score of 0.41 for the best in top five predicted models, which is 37% higher than that by the QUARK simulations without contacts. In particular, there are seven targets that are converted from non-foldable to foldable (TM-score >0.5) due to the use of contact restraints in the simulations. Another additional feature in the current pipelines is the local structure quality prediction by ResQ, which provides a robust residue-level modeling error estimation. Despite the success, significant challenges still remain in ab initio modeling of multi-domain proteins and folding of β-proteins with complicated topologies bound by long-range strand-strand interactions. Improvements on domain boundary and long-range contact prediction, as well as optimal use of the predicted contacts and multiple threading alignments, are critical to address these issues seen in the CASP12 experiment. © 2017 Wiley Periodicals, Inc.

  14. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J. D. (Prostat, Mesa, AZ); Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.

  15. Preventive Maintenance Interval Prediction: a Spare Parts Inventory Cost and Lost Earning Based Model

    Directory of Open Access Journals (Sweden)

    O. A. Adebimpe

    2015-06-01

    Full Text Available In this paper, some preventive maintenance parameters in manufacturing firms were identified and used to develop cost based functions in terms of machine preventive maintenance. The proposed cost based model considers system’s reliability, cost of keeping spare parts inventory and lost earnings in deriving optimal maintenance interval. A case of a manufacturing firm in Nigeria was observed and the data was used to evaluate the model.

  16. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers' training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
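
    The following is a minimal sketch of the winning model family described above: LASSO regression on training-load predictors augmented with quadratic terms, assessed by leave-one-out cross-validation. The predictor matrix, target times, and regularization strength are placeholders for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(122, 6))                                       # placeholder training-load variables
y = 13.0 + X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.2, 122)   # placeholder 3 km times (minutes)

model = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, include_bias=False),   # adds the quadratic terms
    Lasso(alpha=0.05, max_iter=50000),                  # sparsity drops weak predictors
)
mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                       scoring="neg_mean_squared_error").mean()
print(f"leave-one-out MSE: {mse:.3f}")
```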

  17. Early Prediction of Disease Progression in Small Cell Lung Cancer: Toward Model-Based Personalized Medicine in Oncology.

    Science.gov (United States)

    Buil-Bruna, Núria; Sahota, Tarjinder; López-Picazo, José-María; Moreno-Jiménez, Marta; Martín-Algarra, Salvador; Ribba, Benjamin; Trocóniz, Iñaki F

    2015-06-15

    Predictive biomarkers can play a key role in individualized disease monitoring. Unfortunately, the use of biomarkers in clinical settings has thus far been limited. We have previously shown that mechanism-based pharmacokinetic/pharmacodynamic modeling enables integration of nonvalidated biomarker data to provide predictive model-based biomarkers for response classification. The biomarker model we developed incorporates an underlying latent variable (disease) representing (unobserved) tumor size dynamics, which is assumed to drive biomarker production and to be influenced by exposure to treatment. Here, we show that by integrating CT scan data, the population model can be expanded to include patient outcome. Moreover, we show that in conjunction with routine medical monitoring data, the population model can support accurate individual predictions of outcome. Our combined model predicts that a change in disease of 29.2% (relative standard error 20%) between two consecutive CT scans (i.e., 6-8 weeks) gives a probability of disease progression of 50%. We apply this framework to an external dataset containing biomarker data from 22 small cell lung cancer patients (four patients progressing during follow-up). Using only data up until the end of treatment (a total of 137 lactate dehydrogenase and 77 neuron-specific enolase observations), the statistical framework prospectively identified 75% of the individuals as having a predictable outcome in follow-up visits. This included two of the four patients who eventually progressed. In all identified individuals, the model-predicted outcomes matched the observed outcomes. This framework allows at risk patients to be identified early and therapeutic intervention/monitoring to be adjusted individually, which may improve overall patient survival. ©2015 American Association for Cancer Research.

  18. A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John

    2017-01-01

    model parts separate. The controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion and the computational savings possible for SISO systems...

  19. Prediction of slope stability based on numerical modeling of stress–strain state of rocks

    Science.gov (United States)

    Kozhogulov, KCh; Nifadyev, VI; Usmanov, SF

    2018-03-01

    The paper presents the developed technique for the estimation of rock mass stability based on the finite element modeling of stress–strain state of rocks. The modeling results on the pit wall landslide as a flow of particles along a sloped surface are described.

  20. Evaluating crown fire rate of spread predictions from physics-based models

    Science.gov (United States)

    C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont

    2015-01-01

    Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...

  1. Predicting corrosion product transport in nuclear power stations using a solubility-based model for flow-accelerated corrosion

    International Nuclear Information System (INIS)

    Burrill, K.A.; Cheluget, E.L.

    1995-01-01

    A general model of solubility-driven flow-accelerated corrosion of carbon steel was derived based on the assumption that the solubilities of ferric oxyhydroxide and magnetite control the rate of film dissolution. This process involves the dissolution of an oxide film due to fast-flowing coolant unsaturated in iron. The soluble iron is produced by (i) the corrosion of base metal under a porous oxide film and (ii) the dissolution of the oxide film at the fluid-oxide film interface. The iron released at the pipe wall is transferred into the bulk flow by turbulent mass transfer. The model is suitable for calculating concentrations of dissolved iron in feedtrain lines. These iron levels were used to calculate sludge transport rates around the feedtrain. The model was used to predict sludge transport rates due to flow accelerated corrosion of major feedtrain piping in a CANDU reactor. The predictions of the model compare well with plant measurements

  2. Development and validation of a prediction model for long-term sickness absence based on occupational health survey variables

    DEFF Research Database (Denmark)

    Roelen, Corné; Thorsen, Sannie; Heymans, Martijn

    2018-01-01

    LTSA during follow-up. Results: The 15-predictor model was reduced to a 9-predictor model including age, gender, education, self-rated health, mental health, prior LTSA, work ability, emotional job demands, and recognition by the management. Discrimination by the 9-predictor model was significant (AUC...... population. Implications for rehabilitation Long-term sickness absence risk predictions would enable healthcare providers to refer high-risk employees to rehabilitation programs aimed at preventing or reducing work disability. A prediction model based on health survey variables discriminates between...... employees at high and low risk of long-term sickness absence, but discrimination was not practically useful. Health survey variables provide insufficient information to determine long-term sickness absence risk profiles. There is a need for new variables, based on the knowledge and experience...

  3. A prediction model-based algorithm for computer-assisted database screening of adverse drug reactions in the Netherlands.

    Science.gov (United States)

    Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P

    2018-02-01

    The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
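
    As a hedged sketch of the general recipe (not the authors' fitted model), the code below builds a logistic model over five predictors analogous to those listed in the abstract, computes the apparent AUC, and runs a rough bootstrap as a simple form of internal validation. All data and coefficients are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 5000
# Hypothetical predictors per drug-ADR association (names follow the abstract, values are fake):
# number of reports, disproportionality, HCP reports, MAH reports, Naranjo score.
X = np.column_stack([rng.poisson(5, n), rng.normal(0, 1, n), rng.integers(0, 2, n),
                     rng.integers(0, 2, n), rng.normal(3, 1, n)])
y = (rng.random(n) < 1 / (1 + np.exp(-(0.1 * X[:, 0] + 0.8 * X[:, 1] - 1)))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Refit on bootstrap resamples and score on the original data (a rough internal-validation check).
boot = []
for _ in range(100):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    boot.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
print(f"apparent AUC = {auc:.3f}, bootstrap mean AUC = {np.mean(boot):.3f}")
```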

  4. A soil-based model to predict radionuclide transfer in a soil-plant system

    International Nuclear Information System (INIS)

    Roig, M.; Vidal, M.; Tent, J.; Rauret, G.; Roca, M.C.; Vallejo, V.R.

    1998-01-01

The aim of this work was to check if the main soil parameters predefined as ruling soil-plant transfer were sufficient to predict a relative scale of radionuclide mobility in mineral soils. Two agricultural soils, two radionuclides (85Sr and 134Cs), and two crops (lettuce and pea) were used in these experiments following radioactive aerosol deposition simulating the conditions of a site some distance away from the center of a nuclear accident, for which condensed deposition would be the more significant contribution. The available fraction of these radionuclides was estimated in these soils from experiments in which various reagents were tested and several experimental conditions were compared. As a general conclusion, the soil parameters seemed to be sufficient for prediction purposes, although the model should be improved through the consideration of physiological aspects, especially those depending on the plant selectivity according to the composition of the soil solution.

  5. M5 model tree based predictive modeling of road accidents on non-urban sections of highways in India.

    Science.gov (United States)

    Singh, Gyanendra; Sachdeva, S N; Pal, Mahesh

    2016-11-01

This work examines the application of the M5 model tree and the conventionally used fixed/random effect negative binomial (FENB/RENB) regression models for accident prediction on non-urban sections of highways in Haryana (India). Road accident data for a period of 2-6 years on different sections of 8 National and State Highways in Haryana were collected from police records. Data related to road geometry, traffic and road environment related variables were collected through field studies. A total of two hundred and twenty-two data points were gathered by dividing the highways into sections with certain uniform geometric characteristics. For prediction of accident frequencies using fifteen input parameters, two modeling approaches were used: FENB/RENB regression and the M5 model tree. Results suggest that both models perform comparably well in terms of correlation coefficient and root mean square error values. The M5 model tree provides simple linear equations that are easy to interpret and provide better insight, indicating that this approach can effectively be used as an alternative to the RENB approach if the sole purpose is to predict motor vehicle crashes. Sensitivity analysis using the M5 model tree also suggests that its results reflect the physical conditions. Both models clearly indicate that to improve safety on Indian highways, minor accesses to the highways need to be properly designed and controlled, the service roads need to be made functional, and the dispersion of speeds needs to be brought down. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Prediction of bakery products nutritive value based on mathematical modeling of biochemical reactions

    Directory of Open Access Journals (Sweden)

    E. I. Ponomareva

    2013-01-01

Full Text Available This research is devoted to identifying changes in the chemical composition of whole-grain wheat bread during baking and to forecasting the nutritive value of bakery products by mathematical modeling of biochemical transformations. The resulting model captures the composition invariants, accounts for the rates of biochemical reactions during baking, and allows virtual experiments to be conducted for developing new types of bread for various categories of the population, including athletes. The proposed way of modeling biochemical transformations during heat treatment makes it possible to predict the nutritive value of bakery products without spending funds on raw materials or large experimental programs, which saves material resources at the development stage of new bakery products and offers the possibility of increased production efficiency.

  7. Real-time prediction of respiratory motion based on a local dynamic model in an augmented space.

    Science.gov (United States)

    Hong, S-M; Jung, B-H; Ruan, D

    2011-03-21

    Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and is much less data-satiate than the typical adaptive semiparametric or nonparametric method. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively

  8. Model predictive control of the solid oxide fuel cell stack temperature with models based on experimental data

    Science.gov (United States)

    Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari

    2015-03-01

    Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method and the models utilized in this work are ARX-type (autoregressive with extra input), multiple input-multiple output, polynomial models that were identified from experimental data obtained from experiments with a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and the maximum temperature need to be controlled and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
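
    The controller described above rests on ARX models identified from experimental data; the sketch below shows only that identification step (a least-squares fit of a single-input ARX model and a one-step-ahead prediction) on a synthetic input-output record. The model orders, simulated dynamics, and noise level are assumptions; the GPC optimization and constraint handling are not shown.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of an ARX model y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta[:na], theta[na:]                      # AR coefficients, input coefficients

def arx_predict(y_hist, u_hist, a, b):
    """One-step-ahead prediction from the most recent outputs and inputs."""
    return a @ y_hist[-len(a):][::-1] + b @ u_hist[-len(b):][::-1]

# Illustrative data: a temperature-like output responding to a manipulated input (synthetic).
rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 1.4 * y[k - 1] - 0.45 * y[k - 2] + 0.3 * u[k - 1] + 0.1 * u[k - 2] + rng.normal(0, 0.01)

a, b = fit_arx(y, u)
print(a, b, arx_predict(y, u, a, b))
```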

  9. Research on the Wire Network Signal Prediction Based on the Improved NNARX Model

    Science.gov (United States)

    Zhang, Zipeng; Fan, Tao; Wang, Shuqing

It is difficult to accurately obtain the wire network signal of the power system's high-voltage transmission lines during monitoring and repair. To solve this problem, the signal measured in a remote substation or laboratory is employed to make multipoint predictions to obtain the needed data. However, the power grid frequency signal obtained in this way is delayed. To address this, an improved NNARX network that can predict the frequency signal based on multipoint data collected by remote substation PMUs is described in this paper. As the error surface of the NNARX network is rather complicated, the Levenberg-Marquardt (L-M) algorithm is used to train the network. The simulation results show that the NNARX network has good prediction performance, which provides accurate real-time data for field testing and maintenance.

  10. Antioxidant-capacity-based models for the prediction of acrylamide reduction by flavonoids.

    Science.gov (United States)

    Cheng, Jun; Chen, Xinyu; Zhao, Sheng; Zhang, Yu

    2015-02-01

    The aim of this study was to investigate the applicability of artificial neural network (ANN) and multiple linear regression (MLR) models for the estimation of acrylamide reduction by flavonoids, using multiple antioxidant capacities of Maillard reaction products as variables via a microwave food processing workstation. The addition of selected flavonoids could effectively reduce acrylamide formation, which may be closely related to the number of phenolic hydroxyl groups of flavonoids (R: 0.735-0.951, Pcapacity (ΔTEAC) measured by DPPH (R(2)=0.833), ABTS (R(2)=0.860) or FRAP (R(2)=0.824) assay. Both ANN and MLR models could effectively serve as predictive tools for estimating the reduction of acrylamide affected by flavonoids. The current predictive model study provides a low-cost and easy-to-use approach to the estimation of rates at which acrylamide is degraded, while avoiding tedious sample pretreatment procedures and advanced instrumental analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Implementation of a phenomenological DNB prediction model based on macroscale boiling flow processes in PWR fuel bundles

    International Nuclear Information System (INIS)

    Mohitpour, Maryam; Jahanfarnia, Gholamreza; Shams, Mehrzad

    2014-01-01

    Highlights: • A numerical framework was developed to mechanistically predict DNB in PWR bundles. • The DNB evaluation module was incorporated into the two-phase flow solver module. • Three-dimensional two-fluid model was the basis of two-phase flow solver module. • Liquid sublayer dryout model was adapted as CHF-triggering mechanism in DNB module. • Ability of DNB modeling approach was studied based on PSBT DNB tests in rod bundle. - Abstract: In this study, a numerical framework, comprising of a two-phase flow subchannel solver module and a Departure from Nucleate Boiling (DNB) evaluation module, was developed to mechanistically predict DNB in rod bundles of Pressurized Water Reactor (PWR). In this regard, the liquid sublayer dryout model was adapted as the Critical Heat Flux (CHF) triggering mechanism to reduce the dependency of the model on empirical correlations in the DNB evaluation module. To predict local flow boiling processes, a three-dimensional two-fluid formalism coupled with heat conduction was selected as the basic tool for the development of the two-phase flow subchannel analysis solver. Evaluation of the DNB modeling approach was performed against OECD/NRC NUPEC PWR Bundle tests (PSBT Benchmark) which supplied an extensive database for the development of truly mechanistic and consistent models for boiling transition and CHF. The results of the analyses demonstrated the need for additional assessment of the subcooled boiling model and the bulk condensation model implemented in the two-phase flow solver module. The proposed model slightly under-predicts the DNB power in comparison with the ones obtained from steady-state benchmark measurements. However, this prediction is acceptable compared with other codes. Another point about the DNB prediction model is that it has a conservative behavior. Examination of the axial and radial position of the first detected DNB using code-to-code comparisons on the basis of PSBT data indicated that the our

  12. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    Science.gov (United States)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

The Grey-Markov forecasting model is a combination of the grey prediction model and a Markov chain, which shows obvious optimization effects for data sequences characterized by non-stationarity and volatility. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjective real numbers, which directly affects the accuracy of the forecast values. To seek a solution, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibilities of the research values falling in each state, which reflect the preference degrees for the different states in an objective way. On the other hand, background value optimization is applied in the traditional grey model to generate better-fitting data. By this means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparing it with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
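
    A GM(1,1) sketch appears earlier in this listing; the fragment below adds only a conventional Markov correction layer: classify relative residuals into states, estimate an empirical transition matrix, and adjust the next grey forecast by the expected residual of the predicted state. Fixed state bounds are used here for simplicity; the paper's central-point triangular whitenization weights and background-value optimization are not reproduced, and the example series is invented.

```python
import numpy as np

def markov_correction(actual, fitted, bounds=(-0.10, -0.03, 0.03, 0.10)):
    """Expected next relative residual from a Markov chain over residual states."""
    rel = (np.asarray(actual) - np.asarray(fitted)) / np.asarray(actual)
    edges = np.array(bounds)
    states = np.digitize(rel, edges)                    # state index (0..len(bounds)) per step
    n_states = len(edges) + 1
    P = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):      # empirical transition counts
        P[s, s_next] += 1
    P = P / np.maximum(P.sum(axis=1, keepdims=True), 1)
    centres = np.concatenate([[edges[0] - 0.03],        # representative residual per state
                              (edges[:-1] + edges[1:]) / 2,
                              [edges[-1] + 0.03]])
    return P[states[-1]] @ centres                      # expected relative residual next step

# Invented series: observed values, GM(1,1)-style fitted values, and the next grey forecast.
actual = np.array([500, 532, 515, 560, 548, 575, 590, 570], dtype=float)
fitted = np.array([520, 515, 530, 540, 560, 555, 570, 590], dtype=float)
grey_next = 600.0
print(grey_next / (1 - markov_correction(actual, fitted)))   # Markov-corrected forecast
```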

  13. SRMDAP: SimRank and Density-Based Clustering Recommender Model for miRNA-Disease Association Prediction

    Directory of Open Access Journals (Sweden)

    Xiaoying Li

    2018-01-01

    Full Text Available Aberrant expression of microRNAs (miRNAs can be applied for the diagnosis, prognosis, and treatment of human diseases. Identifying the relationship between miRNA and human disease is important to further investigate the pathogenesis of human diseases. However, experimental identification of the associations between diseases and miRNAs is time-consuming and expensive. Computational methods are efficient approaches to determine the potential associations between diseases and miRNAs. This paper presents a new computational method based on the SimRank and density-based clustering recommender model for miRNA-disease associations prediction (SRMDAP. The AUC of 0.8838 based on leave-one-out cross-validation and case studies suggested the excellent performance of the SRMDAP in predicting miRNA-disease associations. SRMDAP could also predict diseases without any related miRNAs and miRNAs without any related diseases.

  14. A Gaussian mixture copula model based localized Gaussian process regression approach for long-term wind speed prediction

    International Nuclear Information System (INIS)

    Yu, Jie; Chen, Kuilin; Mori, Junichi; Rashid, Mudassir M.

    2013-01-01

    Optimizing wind power generation and controlling the operation of wind turbines to efficiently harness the renewable wind energy is a challenging task due to the intermittency and unpredictable nature of wind speed, which has significant influence on wind power production. A new approach for long-term wind speed forecasting is developed in this study by integrating GMCM (Gaussian mixture copula model) and localized GPR (Gaussian process regression). The time series of wind speed is first classified into multiple non-Gaussian components through the Gaussian mixture copula model and then Bayesian inference strategy is employed to incorporate the various non-Gaussian components using the posterior probabilities. Further, the localized Gaussian process regression models corresponding to different non-Gaussian components are built to characterize the stochastic uncertainty and non-stationary seasonality of the wind speed data. The various localized GPR models are integrated through the posterior probabilities as the weightings so that a global predictive model is developed for the prediction of wind speed. The proposed GMCM–GPR approach is demonstrated using wind speed data from various wind farm locations and compared against the GMCM-based ARIMA (auto-regressive integrated moving average) and SVR (support vector regression) methods. In contrast to GMCM–ARIMA and GMCM–SVR methods, the proposed GMCM–GPR model is able to well characterize the multi-seasonality and uncertainty of wind speed series for accurate long-term prediction. - Highlights: • A novel predictive modeling method is proposed for long-term wind speed forecasting. • Gaussian mixture copula model is estimated to characterize the multi-seasonality. • Localized Gaussian process regression models can deal with the random uncertainty. • Multiple GPR models are integrated through Bayesian inference strategy. • The proposed approach shows higher prediction accuracy and reliability
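
    The sketch below substitutes an ordinary Gaussian mixture for the Gaussian mixture copula (a simplification) to assign wind-speed regimes, fits one Gaussian process regressor per component, and blends their predictions with the posterior responsibilities, mirroring the weighting idea in the abstract. The synthetic series, number of components, lag count, and kernels are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
t = np.arange(600, dtype=float)
wind = 8 + 3 * np.sin(2 * np.pi * t / 144) + rng.gamma(2.0, 0.8, len(t))   # synthetic speeds (m/s)

lags = 6
X = np.column_stack([wind[i:len(wind) - lags + i] for i in range(lags)])
y = wind[lags:]

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)   # regime assignment (simplified)
labels = gmm.predict(X)

gprs = []
for c in range(3):                                             # one localized GP per component
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gprs.append(gpr.fit(X[labels == c], y[labels == c]))

# Global prediction: blend the localized GPs with the posterior responsibilities as weights.
X_new = X[-5:]
w = gmm.predict_proba(X_new)
pred = sum(w[:, c] * gprs[c].predict(X_new) for c in range(3))
print(pred)
```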

  15. Modeling and Prediction of Coal Ash Fusion Temperature based on BP Neural Network

    Directory of Open Access Journals (Sweden)

    Miao Suzhen

    2016-01-01

Full Text Available Coal ash is the residue generated from the combustion of coal. The ash fusion temperature (AFT) of coal gives detailed information on the suitability of a coal source for gasification procedures, and specifically on the extent to which ash agglomeration or clinkering is likely to occur within the gasifier. To investigate the contribution of oxides in coal ash to AFT, data on coal ash chemical compositions and softening temperature (ST) in different regions of China were collected in this work and a BP neural network model was established by the XD-APC PLATFORM. In the BP model, the inputs were the ash compositions and the output was the ST. In addition, the ash fusion temperature prediction model was obtained from industrial data and the model was generalized with different industrial data. Compared to empirical formulas, the BP neural network obtained better results. By different tests, the best result and the best configurations for the model were obtained: the number of hidden layer nodes of the BP network was set to three, the component contents (SiO2, Al2O3, Fe2O3, CaO, MgO) were used as inputs and ST was used as the output of the model.

  16. Automatic Offline Formulation of Robust Model Predictive Control Based on Linear Matrix Inequalities Method

    Directory of Open Access Journals (Sweden)

    Longge Zhang

    2013-01-01

Full Text Available Two automatic robust model predictive control strategies are presented for uncertain polytopic linear plants with input and output constraints. A sequence of nested, geometrically proportioned, asymptotically stable ellipsoids and controllers is first constructed offline. Then, in the first strategy, the feedback controllers are automatically selected online with the receding horizon. Finally, a modified automatic offline robust MPC approach is constructed to improve the closed-loop system's performance. The newly proposed strategies not only reduce conservatism but also decrease the online computation. Numerical examples are given to illustrate their effectiveness.

  17. Boundary-layer transition prediction using a simplified correlation-based model

    Directory of Open Access Journals (Sweden)

    Xia Chenchao

    2016-02-01

Full Text Available This paper describes a simplified transition model based on the recently developed correlation-based γ-Reθt transition model. The transport equation of the transition momentum thickness Reynolds number is eliminated for simplicity, and a new transition length function and critical Reynolds number correlation are proposed. The new model is implemented into an in-house computational fluid dynamics (CFD) code and validated for low and high-speed flow cases, including the zero pressure flat plate, airfoils, hypersonic flat plate and double wedge. Comparisons between the simulation results and experimental data show that the boundary-layer transition phenomena can be reasonably illustrated by the new model, which gives rise to significant improvements over the fully laminar and fully turbulent results. Moreover, the new model has comparable features of accuracy and applicability when compared with the original γ-Reθt model. In the meantime, the newly proposed model takes only one transport equation of the intermittency factor and requires fewer correlations, which simplifies the original model greatly. Further studies, especially on separation-induced transition flows, are required for the improvement of the new model.

  18. A predictive ligand-based Bayesian model for human drug-induced liver injury.

    Science.gov (United States)

    Ekins, Sean; Williams, Antony J; Xu, Jinghai J

    2010-12-01

    Drug-induced liver injury (DILI) is one of the most important reasons for drug development failure at both preapproval and postapproval stages. There has been increased interest in developing predictive in vivo, in vitro, and in silico models to identify compounds that cause idiosyncratic hepatotoxicity. In the current study, we applied a machine learning approach, namely a Bayesian modeling method with extended connectivity fingerprints and other interpretable descriptors. The model that was developed and internally validated (using a training set of 295 compounds) was then applied to a test set (237 compounds), large relative to the training set, for external validation. The resulting concordance of 60%, sensitivity of 56%, and specificity of 67% were comparable to results for internal validation. The Bayesian model with extended connectivity functional class fingerprints of maximum diameter 6 (ECFC_6) and interpretable descriptors suggested several substructures that are chemically reactive and may also be important for DILI-causing compounds, e.g., ketones, diols, and α-methyl styrene type structures. Using Smiles Arbitrary Target Specification (SMARTS) filters published by several pharmaceutical companies, we evaluated whether such reactive substructures could be readily detected by any of the published filters. It was apparent that the most stringent filters used in this study, such as the Abbott alerts, which capture thiol traps and other compounds, may be of use in identifying DILI-causing compounds (sensitivity 67%). A significant outcome of the present study is that we provide predictions for many compounds that cause DILI by using the knowledge we have available from previous studies. These computational models may represent cost-effective selection criteria before in vitro or in vivo experimental studies.
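    The pipeline described (circular fingerprints plus a naive Bayesian classifier) can be sketched as follows. This is a minimal illustration assuming RDKit and scikit-learn are available; it uses bit-vector Morgan fingerprints of radius 3 as a stand-in for the count-based ECFC_6 descriptors, and the SMILES strings and labels are hypothetical toy values, not the study's dataset.

      import numpy as np
      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem
      from sklearn.naive_bayes import BernoulliNB

      def fingerprint(smiles, radius=3, n_bits=2048):
          # Circular (Morgan) fingerprint; diameter 6 corresponds to radius 3
          mol = Chem.MolFromSmiles(smiles)
          fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
          arr = np.zeros((n_bits,), dtype=int)
          DataStructs.ConvertToNumpyArray(fp, arr)
          return arr

      # Hypothetical toy training set: SMILES with DILI labels (1 = hepatotoxic)
      train_smiles = ["CC(=O)Nc1ccc(O)cc1", "CC(C)Cc1ccc(cc1)C(C)C(=O)O", "c1ccccc1", "CCO"]
      train_labels = [1, 0, 0, 0]

      X = np.array([fingerprint(s) for s in train_smiles])
      clf = BernoulliNB().fit(X, train_labels)

      # Predicted probability that a query compound is DILI-positive
      print(clf.predict_proba([fingerprint("CC(=O)Oc1ccccc1C(=O)O")])[0, 1])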

  19. Predictive accuracy of the PanCan lung cancer risk prediction model - external validation based on CT from the Danish Lung Cancer Screening Trial

    International Nuclear Information System (INIS)

    Winkler Wille, Mathilde M.; Dirksen, Asger; Riel, Sarah J. van; Jacobs, Colin; Scholten, Ernst T.; Ginneken, Bram van; Saghir, Zaigham; Pedersen, Jesper Holst; Hohwue Thomsen, Laura; Skovgaard, Lene T.

    2015-01-01

    Lung cancer risk models should be externally validated to test generalizability and clinical usefulness. The Danish Lung Cancer Screening Trial (DLCST) is a population-based prospective cohort study, used to assess the discriminative performances of the PanCan models. From the DLCST database, 1,152 nodules from 718 participants were included. Parsimonious and full PanCan risk prediction models were applied to DLCST data, and also coefficients of the model were recalculated using DLCST data. Receiver operating characteristics (ROC) curves and area under the curve (AUC) were used to evaluate risk discrimination. AUCs of 0.826-0.870 were found for DLCST data based on PanCan risk prediction models. In the DLCST, age and family history were significant predictors (p = 0.001 and p = 0.013). Female sex was not confirmed to be associated with higher risk of lung cancer; in fact opposing effects of sex were observed in the two cohorts. Thus, female sex appeared to lower the risk (p = 0.047 and p = 0.040) in the DLCST. High risk discrimination was validated in the DLCST cohort, mainly determined by nodule size. Age and family history of lung cancer were significant predictors and could be included in the parsimonious model. Sex appears to be a less useful predictor. (orig.)

  20. Predictive accuracy of the PanCan lung cancer risk prediction model - external validation based on CT from the Danish Lung Cancer Screening Trial

    Energy Technology Data Exchange (ETDEWEB)

    Winkler Wille, Mathilde M.; Dirksen, Asger [Gentofte Hospital, Department of Respiratory Medicine, Hellerup (Denmark); Riel, Sarah J. van; Jacobs, Colin; Scholten, Ernst T.; Ginneken, Bram van [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Saghir, Zaigham [Herlev Hospital, Department of Respiratory Medicine, Herlev (Denmark); Pedersen, Jesper Holst [Copenhagen University Hospital, Department of Thoracic Surgery, Rigshospitalet, Koebenhavn Oe (Denmark); Hohwue Thomsen, Laura [Hvidovre Hospital, Department of Respiratory Medicine, Hvidovre (Denmark); Skovgaard, Lene T. [University of Copenhagen, Department of Biostatistics, Koebenhavn Oe (Denmark)

    2015-10-15

    Lung cancer risk models should be externally validated to test generalizability and clinical usefulness. The Danish Lung Cancer Screening Trial (DLCST) is a population-based prospective cohort study, used to assess the discriminative performances of the PanCan models. From the DLCST database, 1,152 nodules from 718 participants were included. Parsimonious and full PanCan risk prediction models were applied to DLCST data, and also coefficients of the model were recalculated using DLCST data. Receiver operating characteristics (ROC) curves and area under the curve (AUC) were used to evaluate risk discrimination. AUCs of 0.826-0.870 were found for DLCST data based on PanCan risk prediction models. In the DLCST, age and family history were significant predictors (p = 0.001 and p = 0.013). Female sex was not confirmed to be associated with higher risk of lung cancer; in fact opposing effects of sex were observed in the two cohorts. Thus, female sex appeared to lower the risk (p = 0.047 and p = 0.040) in the DLCST. High risk discrimination was validated in the DLCST cohort, mainly determined by nodule size. Age and family history of lung cancer were significant predictors and could be included in the parsimonious model. Sex appears to be a less useful predictor. (orig.)

  1. MLP based models to predict PM10, O3 concentrations, in Sines industrial area

    Science.gov (United States)

    Durao, R.; Pereira, M. J.

    2012-04-01

    Sines is an important Portuguese industrial area located on the southwest coast of Portugal, with important protected natural areas nearby. The main economic activities are related to this industrial area, the deep-water port, and the petrochemical and thermo-electric industries. Nevertheless, tourism is also an important economic activity, especially in summer, with potential to grow. The aim of this study is to develop prediction models of pollutant concentration categories (e.g. low concentration and high concentration) in order to provide early warnings to the competent authorities responsible for air quality management. Knowing in advance that high pollutant concentrations will occur allows the implementation of mitigation actions and the release of precautionary alerts to the population. The regional air quality monitoring network consists of three monitoring stations where a set of pollutant concentrations is registered on a continuous basis. Among these, tropospheric ozone (O3) and particulate matter (PM10) stand out, due to the high concentrations occurring in the region and their adverse effects on human health. Moreover, the major industrial plants of the region monitor SO2, NO2 and particle emission flows at the principal chimneys (point sources), also on a continuous basis. Artificial neural networks (ANNs) were the methodology applied to predict next-day pollutant concentrations; owing to their structure, ANNs are able to capture the non-linear relationships between predictor variables. Hence the first step of this study was to apply multivariate exploratory techniques to select the best predictor variables. The classification tree methodology (CART) proved to be the most appropriate in this case. Results show that pollutant atmospheric concentrations are mainly dependent on industrial emissions and a complex combination of meteorological factors and the time of the year. In the second step, the Multi
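    A schematic version of the second step, an MLP classifier that turns the selected predictors into a next-day concentration category, is sketched below with scikit-learn. The predictor set and all numbers are hypothetical placeholders chosen only to illustrate the workflow, not the variables actually selected by the CART step in the study.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      # Hypothetical predictors: industrial SO2, NO2 and particle emission flows,
      # wind speed, temperature, day of year; target = next-day O3 category (0 = low, 1 = high)
      X = np.array([
          [120.0,  80.0, 15.0, 3.2, 22.5, 190],
          [300.0, 150.0, 40.0, 1.1, 30.0, 210],
          [ 90.0,  60.0, 10.0, 5.4, 18.0,  45],
          [250.0, 140.0, 35.0, 0.8, 28.5, 200],
      ])
      y = np.array([0, 1, 0, 1])

      clf = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0))
      clf.fit(X, y)
      print(clf.predict([[200.0, 100.0, 25.0, 2.0, 26.0, 195]]))   # 1 would trigger an early warning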

  2. State-Space Modeling and Performance Analysis of Variable-Speed Wind Turbine Based on a Model Predictive Control Approach

    Directory of Open Access Journals (Sweden)

    H. Bassi

    2017-04-01

    Full Text Available Advancements in wind energy technologies have led wind turbines from fixed-speed to variable-speed operation. This paper introduces an innovative version of a variable-speed wind turbine based on a model predictive control (MPC) approach. The proposed approach provides maximum power point tracking (MPPT), whose main objective is to capture the maximum wind energy in spite of the variable nature of the wind speed. The proposed MPC approach also reduces the constraints of the two main functional parts of the wind turbine: the full load and partial load segments. The pitch angle for full load and the rotating force for the partial load have been fixed concurrently in order to balance power generation as well as to reduce the operations of the pitch angle. A mathematical analysis of the proposed system using the state-space approach is introduced. The simulation results using MATLAB/SIMULINK show that the performance of the wind turbine with the MPC approach is improved compared to the traditional PID controller at both low and high wind speeds.

  3. An instantaneous spatiotemporal model to predict a bicyclist's Black Carbon exposure based on mobile noise measurements

    Science.gov (United States)

    Dekoninck, Luc; Botteldooren, Dick; Int Panis, Luc

    2013-11-01

    Several studies have shown that a significant amount of daily air pollution exposure, in particular Black Carbon (BC), is inhaled during trips. Assessing this contribution to exposure remains difficult because, on the one hand, local air pollution maps lack spatio-temporal resolution, while on the other hand direct measurement of particulate matter concentration remains expensive. This paper proposes to use in-traffic noise measurements in combination with geographical and meteorological information for predicting BC exposure during commuting trips. Mobile noise measurements are cheaper and easier to perform than mobile air pollution measurements and can easily be used in participatory sensing campaigns. The uniqueness of the proposed model lies in the choice of noise indicators that goes beyond the traditional overall A-weighted noise level used in previous work. Noise and BC exposures are both related to the traffic intensity, but also to traffic speed and traffic dynamics. Inspired by theoretical knowledge on the emission of noise and BC, the low-frequency engine-related noise and the difference between high-frequency and low-frequency noise, which indicates the traffic speed, are introduced in the model. In addition, it is shown that splitting BC into a local and a background component significantly improves the model. The coefficients of the proposed model are extracted from 200 commuter bicycle trips. The predicted average exposure over a single trip correlates with measurements with a Pearson coefficient of 0.78 using only four parameters: the low-frequency noise level, wind speed, the difference between high- and low-frequency noise, and a street canyon index expressing local air pollution dispersion properties.
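    The final model is essentially a four-predictor linear regression of trip-averaged BC exposure, which can be sketched as below. The predictor names follow the abstract (low-frequency noise level, wind speed, high-minus-low frequency noise difference, street canyon index), but every number is a hypothetical placeholder rather than a value from the 200 measured trips.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Columns: low-frequency noise (dB), wind speed (m/s),
      # high-minus-low frequency noise difference (dB), street canyon index
      X = np.array([
          [68.0, 2.1, -4.0, 0.8],
          [72.5, 1.0, -2.5, 1.2],
          [60.0, 4.5, -6.0, 0.3],
          [70.0, 1.8, -3.0, 1.0],
      ])
      y = np.array([4.8, 7.2, 2.1, 5.9])   # trip-averaged BC exposure (ug/m3), hypothetical

      model = LinearRegression().fit(X, y)
      print(model.coef_, model.intercept_)
      print(model.predict([[66.0, 3.0, -5.0, 0.6]]))   # predicted exposure for a new trip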

  4. Longitudinal connectome-based predictive modeling for REM sleep behavior disorder from structural brain connectivity

    Science.gov (United States)

    Giancardo, Luca; Ellmore, Timothy M.; Suescun, Jessika; Ocasio, Laura; Kamali, Arash; Riascos-Castaneda, Roy; Schiess, Mya C.

    2018-02-01

    Methods to identify neuroplasticity patterns in human brains are of the utmost importance in understanding and potentially treating neurodegenerative diseases. Parkinson disease (PD) research will greatly benefit and advance from the discovery of biomarkers to quantify brain changes in the early stages of the disease, a prodromal period when subjects show no obvious clinical symptoms. Diffusion tensor imaging (DTI) allows for an in-vivo estimation of the structural connectome inside the brain and may serve to quantify the degenerative process before the appearance of clinical symptoms. In this work, we introduce a novel strategy to compute longitudinal structural connectomes in the context of a whole-brain data-driven pipeline. In these initial tests, we show that our predictive models are able to distinguish controls from asymptomatic subjects at high risk of developing PD (REM sleep behavior disorder, RBD) with an area under the receiver operating characteristic curve of 0.90, using data from the Parkinson's Progression Markers Initiative. By analyzing the brain connections most relevant for the predictive ability of the best performing model, we find connections that are biologically relevant to the disease.

  5. Data-Reconciliation Based Fault-Tolerant Model Predictive Control for a Biomass Boiler

    Directory of Open Access Journals (Sweden)

    Palash Sarkar

    2017-02-01

    Full Text Available This paper presents a novel, effective method to handle critical sensor faults affecting a control system devised to operate a biomass boiler. In particular, the proposed method consists of integrating a data reconciliation algorithm in a model predictive control loop, so as to annihilate the effects of faults occurring in the sensor of the flue gas oxygen concentration, by feeding the controller with the reconciled measurements. Indeed, the oxygen content in flue gas is a key variable in the control of biomass boilers due to its close connections with both combustion efficiency and polluting emissions. The main benefit of including the data reconciliation algorithm in the loop as a fault-tolerant component, compared with applying standard fault-tolerant methods, is that controller reconfiguration is no longer required, since the original controller operates on the restored, reliable data. The integrated data reconciliation–model predictive control (MPC) strategy has been validated by running simulations on a specific type of biomass boiler—the KPA Unicon BioGrate boiler.
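    The data reconciliation step can be illustrated with the classical weighted least-squares formulation: measurements are adjusted as little as possible (weighted by their variances) so that known balance equations hold exactly, and the reconciled values are what the controller then sees. The boiler flows, variances and balance equation below are hypothetical and only indicate the mechanics, not the paper's specific implementation.

      import numpy as np

      def reconcile(y, Sigma, A):
          """Adjust measurements y (covariance Sigma) so that A @ x = 0 holds,
          minimising the covariance-weighted correction (closed-form solution)."""
          lam = np.linalg.solve(A @ Sigma @ A.T, A @ y)
          return y - Sigma @ A.T @ lam

      # Hypothetical balance: fuel flow + air flow - flue-gas flow = 0
      y = np.array([2.0, 18.5, 21.0])          # raw (possibly faulty) sensor readings
      Sigma = np.diag([0.01, 0.25, 0.50])      # measurement variances
      A = np.array([[1.0, 1.0, -1.0]])

      print(reconcile(y, Sigma, A))            # reconciled values fed to the controller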

  6. Method of critical power prediction based on film flow model coupled with subchannel analysis

    International Nuclear Information System (INIS)

    Tomiyama, Akio; Yokomizo, Osamu; Yoshimoto, Yuichiro; Sugawara, Satoshi.

    1988-01-01

    A new method was developed to predict critical powers for a wide variety of BWR fuel bundle designs. This method couples subchannel analysis with a liquid film flow model, instead of the conventional approach of coupling subchannel analysis with critical heat flux correlations. Flow and quality distributions in a bundle are estimated by the subchannel analysis. Using these distributions, film flow rates along the fuel rods are then calculated with the film flow model. Dryout is assumed to occur where one of the film flows disappears. This method is expected to give much better adaptability to variations in geometry, heat flux, flow rate and quality distributions than the conventional methods. In order to verify the method, critical power data under BWR conditions were analyzed. Measured and calculated critical powers agreed to within ±7%. Furthermore, critical power data for a tight-latticed bundle obtained by LeTourneau et al. were compared with critical powers calculated by the present method and two conventional methods: the CISE correlation, and subchannel analysis coupled with the CISE correlation. It was confirmed that the present method can predict critical powers more accurately than the conventional methods. (author)

  7. Warranty optimisation based on the prediction of costs to the manufacturer using neural network model and Monte Carlo simulation

    Science.gov (United States)

    Stamenkovic, Dragan D.; Popovic, Vladimir M.

    2015-02-01

    Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combination free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on the data from tests of light bulbs under various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in a Monte Carlo simulation for the prediction of the times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In such a way, the manufacturer can lower the costs and increase the profit.
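    The Monte Carlo part of such an analysis can be sketched as follows: times to failure are drawn from the reliability distribution predicted for the expected operating conditions, and each draw is mapped to a warranty cost under a combination free-replacement / pro-rata policy. The Weibull parameters, warranty lengths and unit cost are hypothetical, and renewals after a free replacement are ignored for brevity.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical Weibull reliability parameters (shape, scale in hours) for the bulb
      shape, scale = 1.8, 9000.0
      W1, W = 2000.0, 6000.0        # free-replacement period and total warranty length (hours)
      unit_cost = 5.0               # manufacturer's cost of one replacement bulb
      n_sim = 100_000

      t = rng.weibull(shape, n_sim) * scale          # simulated times to first failure
      cost = np.zeros(n_sim)
      cost[t <= W1] = unit_cost                      # free replacement inside W1
      prorata = (t > W1) & (t <= W)                  # pro-rata rebate between W1 and W
      cost[prorata] = unit_cost * (W - t[prorata]) / (W - W1)

      print("expected warranty cost per unit:", cost.mean())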

  8. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    Science.gov (United States)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy treatment using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

  9. Predictability and interpretability of hybrid link-level crash frequency models for urban arterials compared to cluster-based and general negative binomial regression models.

    Science.gov (United States)

    Najaf, Pooya; Duddu, Venkata R; Pulugurtha, Srinivas S

    2018-03-01

    Machine learning (ML) techniques have higher prediction accuracy compared to conventional statistical methods for crash frequency modelling. However, their black-box nature limits interpretability. The objective of this research is to combine both ML and statistical methods to develop hybrid link-level crash frequency models with high predictability and interpretability. For this purpose, the M5' model trees method (M5') is introduced and applied to classify the crash data and then calibrate a model for each homogeneous class. Data for 1134 and 345 randomly selected links on urban arterials in the city of Charlotte, North Carolina were used to develop and validate the models, respectively. The outputs from the hybrid approach are compared with the outputs from cluster-based negative binomial regression (NBR) and general NBR models. Findings indicate that M5' has high predictability and is very reliable for interpreting the role of different attributes on crash frequency compared to the other developed models.

  10. Combining process-based and correlative models improves predictions of climate change effects on Schistosoma mansoni transmission in eastern Africa

    Directory of Open Access Journals (Sweden)

    Anna-Sofie Stensgaard

    2016-03-01

    Full Text Available Currently, two broad types of approach for predicting the impact of climate change on vector-borne diseases can be distinguished: (i) empirical-statistical (correlative) approaches that use statistical models of relationships between vector and/or pathogen presence and environmental factors; and (ii) process-based (mechanistic) approaches that seek to simulate detailed biological or epidemiological processes that explicitly describe system behavior. Both have advantages and disadvantages, but it is generally acknowledged that both approaches have value in assessing the response of species in general to climate change. Here, we combine a previously developed dynamic, agent-based model of the temperature-sensitive stages of the Schistosoma mansoni and intermediate host snail lifecycles with a statistical model of snail habitat suitability for eastern Africa. Baseline model output compared to empirical prevalence data suggests that the combined model performs better than a temperature-driven model alone, and highlights the importance of including snail habitat suitability when modeling schistosomiasis risk. There was general agreement among models in predicting changes in risk, with 24-36% of the eastern Africa region predicted to experience an increase in risk of up to 20% as a result of increasing temperatures over the next 50 years. Conversely, the models predicted a general decrease in risk in 30-37% of the study area. The snail habitat suitability models also suggest that anthropogenically altered habitats play a vital role in the current distribution of the intermediate snail host, and hence we stress the importance of accounting for land use changes in models of future changes in schistosomiasis risk.

  11. Multi-gene genetic programming based predictive models for municipal solid waste gasification in a fluidized bed gasifier.

    Science.gov (United States)

    Pandey, Daya Shankar; Pan, Indranil; Das, Saptarshi; Leahy, James J; Kwapinski, Witold

    2015-03-01

    A multi-gene genetic programming technique is proposed as a new method to predict syngas yield and the lower heating value for municipal solid waste gasification in a fluidized bed gasifier. The study shows that the predicted outputs of the municipal solid waste gasification process are in good agreement with the experimental dataset and also generalise well to validation (untrained) data. Published experimental datasets are used for model training and validation purposes. The results show the effectiveness of the genetic programming technique for solving complex nonlinear regression problems. The multi-gene genetic programming model is also compared with a single-gene genetic programming model to show the relative merits and demerits of the technique. This study demonstrates that the genetic programming based data-driven modelling strategy can be a good candidate for developing models for other types of fuels as well. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
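    The flow-based idea, components with named in/out ports wired together and run in dependency order, can be illustrated without any workflow engine at all. The toy sketch below is plain Python and is not the SciLuigi or Luigi API; the class and port names are invented for illustration only.

      # Minimal flow-based-programming flavoured sketch: tasks expose named ports,
      # a "connect" call wires an output port to a downstream input port,
      # and the driver runs the tasks in dependency order.

      class Task:
          def __init__(self, name):
              self.name, self.inputs, self.outputs = name, {}, {}

          def connect(self, out_port, downstream, in_port):
              downstream.inputs[in_port] = (self, out_port)

      class SplitData(Task):
          def run(self):
              data = list(range(100))
              self.outputs["train"], self.outputs["test"] = data[:80], data[80:]

      class TrainModel(Task):
          def run(self):
              upstream, port = self.inputs["train"]
              train = upstream.outputs[port]
              self.outputs["model"] = sum(train) / len(train)   # stand-in "model"

      split, train = SplitData("split"), TrainModel("train")
      split.connect("train", train, "train")

      for task in (split, train):       # topological order, hard-coded for brevity
          task.run()
      print(train.outputs["model"])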

  13. Application of physiologically based pharmacokinetic modeling in predicting drug–drug interactions for sarpogrelate hydrochloride in humans

    Directory of Open Access Journals (Sweden)

    Min JS

    2016-09-01

    Full Text Available Jee Sun Min,1 Doyun Kim,1 Jung Bae Park,1 Hyunjin Heo,1 Soo Hyeon Bae,2 Jae Hong Seo,1 Euichaul Oh,1 Soo Kyung Bae1 1Integrated Research Institute of Pharmaceutical Sciences, College of Pharmacy, The Catholic University of Korea, Bucheon, 2Department of Pharmacology, College of Medicine, The Catholic University of Korea, Seocho-gu, Seoul, South Korea Background: Evaluating the potential risk of metabolic drug–drug interactions (DDIs) is clinically important. Objective: To develop a physiologically based pharmacokinetic (PBPK) model for sarpogrelate hydrochloride and its active metabolite, (R,S)-1-{2-[2-(3-methoxyphenyl)ethyl]phenoxy}-3-(dimethylamino)-2-propanol (M-1), in order to predict DDIs between sarpogrelate and the clinically relevant cytochrome P450 (CYP) 2D6 substrates, metoprolol, desipramine, dextromethorphan, imipramine, and tolterodine. Methods: The PBPK model was developed, incorporating the physicochemical and pharmacokinetic properties of sarpogrelate hydrochloride and M-1, based on the findings from in vitro and in vivo studies. Subsequently, the model was verified by comparing the predicted concentration-time profiles and pharmacokinetic parameters of sarpogrelate and M-1 to the observed clinical data. Finally, the verified model was used to simulate clinical DDIs between sarpogrelate hydrochloride and sensitive CYP2D6 substrates. The predictive performance of the model was assessed by comparing predicted results to observed data after coadministering sarpogrelate hydrochloride and metoprolol. Results: The developed PBPK model accurately predicted sarpogrelate and M-1 plasma concentration profiles after single or multiple doses of sarpogrelate hydrochloride. The simulated ratios of area under the curve and maximum plasma concentration of metoprolol in the presence of sarpogrelate hydrochloride to baseline were in good agreement with the observed ratios. The predicted fold-increases in the area under the curve ratios of metoprolol

  14. Model-Based Load Estimation for Predictive Condition Monitoring of Wind Turbines

    DEFF Research Database (Denmark)

    Perisic, Nevena; Pederen, Bo Juul; Grunnet, Jacob Deleuran

    signal is performed online, and a Load Indicator Signal (LIS) is formulated as a ratio between current estimated accumulated fatigue loads and its expected value based only on a priori knowledge (WTG dynamics and wind climate). LOT initialisation is based on a priori knowledge and can be obtained using...... programme for pre-maintenance actions. The performance of LOT is demonstrated by applying it to one of the most critical WTG components, the gearbox. Model-based load CMS for gearbox requires only standard WTG SCADA data. Direct measuring of gearbox fatigue loads requires high cost and low reliability...... measurement equipment. Thus, LOT can significantly reduce the price of load monitoring....

  15. Prediction of human CNS pharmacokinetics using a physiologically-based pharmacokinetic modeling approach

    NARCIS (Netherlands)

    Yamamoto, Yumi; Valitalo, Pyry A.; Wong, Yin Cheong; Huntjens, Dymphy R.; Proost, Johannes H.; Vermeulen, An; Krauwinkel, Walter; Beukers, Margot W.; Kokki, Hannu; Kokki, Merja; Danhof, Meindert; van Hasselt, Johan G. C.; de Lange, Elizabeth C. M.

    2018-01-01

    Knowledge of drug concentration-time profiles at the central nervous system (CNS) target-site is critically important for rational development of CNS targeted drugs. Our aim was to translate a recently published comprehensive CNS physiologically-based pharmacokinetic (PBPK) model from rat to human,

  16. Predicting forest dieback in Maine, USA: a simple model based on soil frost and drought

    Science.gov (United States)

    Allan N.D. Auclair; Warren E. Heilman; Blondel. Brinkman

    2010-01-01

    Tree roots of northern hardwoods are shallow rooted, winter active, and minimally frost hardened; dieback is a winter freezing injury to roots incited by frost penetration in the absence of adequate snow cover and exacerbated by drought in summer. High soil water content greatly increases conductivity of frost. We develop a model based on the sum of z-scores of soil...

  17. A Model Predictive Control-Based Power Converter System for Oscillating Water Column Wave Energy Converters

    Directory of Open Access Journals (Sweden)

    Gimara Rajapakse

    2017-10-01

    Full Text Available Despite the predictability and availability at large scale, wave energy conversion (WEC) has still not become a mainstream renewable energy technology. One of the main reasons is the large variations in the extracted power, which could lead to instabilities in the power grid. In addition, maintaining the speed of the turbine within the optimal range under changing wave conditions is another control challenge, especially in oscillating water column (OWC) type WEC systems. As a solution to the first issue, this paper proposes the direct connection of a battery bank into the dc-link of the back-to-back power converter system, thereby smoothing the power delivered to the grid. For the second issue, model predictive controllers (MPCs) are developed for the rectifier and the inverter of the back-to-back converter system, aiming to maintain the turbine speed within its optimum range. In addition, MPC controllers are designed to control the battery current as well, in both charging and discharging conditions. Operation of the proposed battery direct integration scheme and control solutions is verified through computer simulations. Simulation results show that the proposed integrated energy storage and control solutions are capable of delivering smooth power to the grid while maintaining the turbine speed within its optimum range under varying wave conditions.

  18. Predictive mapping of soil organic carbon in wet cultivated lands using classification-tree based models

    DEFF Research Database (Denmark)

    Kheir, Rania Bou; Greve, Mogens Humlekrog; Bøcher, Peder Klith

    2010-01-01

    the geographic distribution of SOC across Denmark using remote sensing (RS), geographic information systems (GISs) and decision-tree modeling (un-pruned and pruned classification trees). Seventeen parameters, i.e. parent material, soil type, landscape type, elevation, slope gradient, slope aspect, mean curvature...... field measurements in the area of interest (Denmark). A large number of tree-based classification models (588) were developed using (i) all of the parameters, (ii) all Digital Elevation Model (DEM) parameters only, (iii) the primary DEM parameters only, (iv), the remote sensing (RS) indices only, (v......) selected pairs of parameters, (vi) soil type, parent material and landscape type only, and (vii) the parameters having a high impact on SOC distribution in built pruned trees. The best constructed classification tree models (in the number of three) with the lowest misclassification error (ME...

  19. Dynamic Prediction of Power Storage and Delivery by Data-Based Fractional Differential Models of a Lithium Iron Phosphate Battery

    Directory of Open Access Journals (Sweden)

    Yunfeng Jiang

    2016-07-01

    Full Text Available A fractional derivative system identification approach for modeling battery dynamics is presented in this paper, where fractional derivatives are applied to approximate the non-linear dynamic behavior of a battery system. The least squares-based state-variable filter (LSSVF) method commonly used in the identification of continuous-time models is extended to allow the estimation of fractional derivative coefficients and parameters of the battery models by monitoring a charge/discharge demand signal and a power storage/delivery signal. In particular, the model is composed of individual fractional differential models (FDMs), whose parameters can be estimated by a least-squares algorithm. Based on experimental data, it is illustrated how the fractional derivative model can be utilized to predict the dynamics of the energy storage and delivery of a lithium iron phosphate battery (LiFePO4) in real time. The results indicate that an FDM can accurately capture the dynamics of the energy storage and delivery of the battery over a large operating range. It is also shown that the fractional derivative model exhibits improved prediction performance compared to a standard integer derivative model, which is beneficial for a battery management system.
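    A common way to make such models concrete is the Grünwald–Letnikov discretisation of the fractional derivative, after which a single-term model can be fitted by ordinary least squares. The sketch below assumes that discretisation and uses synthetic signals; the 0.5 derivative order, the one-parameter model and all values are illustrative only, not the identified battery model of the paper.

      import numpy as np
      from scipy.special import binom

      def gl_derivative(x, alpha, h):
          """Grunwald-Letnikov approximation of the alpha-order derivative of a
          uniformly sampled signal x with sampling interval h."""
          n = len(x)
          w = np.array([(-1) ** j * binom(alpha, j) for j in range(n)])
          d = np.zeros(n)
          for k in range(n):
              d[k] = np.dot(w[: k + 1], x[k::-1]) / h ** alpha
          return d

      # Hypothetical single-term model: response = c * D^0.5(input); estimate c by least squares
      h = 1.0
      t = np.arange(0, 50, h)
      u = np.sin(0.1 * t)                                   # synthetic input signal
      y = 0.7 * gl_derivative(u, 0.5, h)                    # synthetic "measured" response
      y += 0.01 * np.random.default_rng(1).normal(size=len(t))

      D = gl_derivative(u, 0.5, h)
      c_hat = np.dot(D, y) / np.dot(D, D)                   # least-squares estimate of c
      print(c_hat)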

  20. Decentralized model predictive based load frequency control in an interconnected power system

    Energy Technology Data Exchange (ETDEWEB)

    Mohamed, T.H., E-mail: tarekhie@yahoo.co [High Institute of Energy, South Valley University (Egypt); Bevrani, H., E-mail: bevrani@ieee.or [Dept. of Electrical Engineering and Computer Science, University of Kurdistan (Iran, Islamic Republic of); Hassan, A.A., E-mail: aahsn@yahoo.co [Faculty of Engineering, Dept. of Electrical Engineering, Minia University, Minia (Egypt); Hiyama, T., E-mail: hiyama@cs.kumamoto-u.ac.j [Dept. of Electrical Engineering and Computer Science, Kumamoto University, Kumamoto (Japan)

    2011-02-15

    This paper presents a new load frequency control (LFC) design using the model predictive control (MPC) technique in a multi-area power system. The MPC technique has been designed such that the effect of the uncertainty due to governor and turbine parameter variations and load disturbances is reduced. Each local area controller is designed independently such that stability of the overall closed-loop system is guaranteed. A frequency response model of the multi-area power system is introduced, and physical constraints of the governors and turbines are considered. The model was employed in the MPC structures. Digital simulations for both two- and three-area power systems are provided to validate the effectiveness of the proposed scheme. The results show that, with the proposed MPC technique, the overall closed-loop system performance demonstrated robustness in the face of uncertainties due to governor and turbine parameter variations and load disturbances. A performance comparison between the proposed controller and a classical integral control scheme is carried out, confirming the superiority of the proposed MPC technique.

  1. Decentralized model predictive based load frequency control in an interconnected power system

    International Nuclear Information System (INIS)

    Mohamed, T.H.; Bevrani, H.; Hassan, A.A.; Hiyama, T.

    2011-01-01

    This paper presents a new load frequency control (LFC) design using the model predictive control (MPC) technique in a multi-area power system. The MPC technique has been designed such that the effect of the uncertainty due to governor and turbine parameter variations and load disturbances is reduced. Each local area controller is designed independently such that stability of the overall closed-loop system is guaranteed. A frequency response model of the multi-area power system is introduced, and physical constraints of the governors and turbines are considered. The model was employed in the MPC structures. Digital simulations for both two- and three-area power systems are provided to validate the effectiveness of the proposed scheme. The results show that, with the proposed MPC technique, the overall closed-loop system performance demonstrated robustness in the face of uncertainties due to governor and turbine parameter variations and load disturbances. A performance comparison between the proposed controller and a classical integral control scheme is carried out, confirming the superiority of the proposed MPC technique.
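    The core of such an MPC design, minimising a quadratic cost over a prediction horizon for a linear state-space model while honouring actuator limits, can be sketched with a generic QP formulation. The two-state model, horizon, weights and limits below are hypothetical stand-ins, not the multi-area frequency-response model of the paper; the cvxpy package is assumed to be available.

      import numpy as np
      import cvxpy as cp

      # Hypothetical discrete-time model x[k+1] = A x[k] + B u[k]
      A = np.array([[1.0, 0.1], [0.0, 0.9]])
      B = np.array([[0.0], [0.1]])
      N = 10                                   # prediction horizon
      x0 = np.array([0.2, 0.0])                # e.g. initial frequency deviation state

      x = cp.Variable((2, N + 1))
      u = cp.Variable((1, N))

      cost, constraints = 0, [x[:, 0] == x0]
      for k in range(N):
          cost += cp.quad_form(x[:, k + 1], np.eye(2)) + 0.01 * cp.square(u[0, k])
          constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                          cp.abs(u[0, k]) <= 0.5]          # actuator (e.g. valve) limit
      cp.Problem(cp.Minimize(cost), constraints).solve()

      print("first control move:", u.value[0, 0])   # only this move is applied (receding horizon)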

  2. Profile control simulations and experiments on TCV: a controller test environment and results using a model-based predictive controller

    Science.gov (United States)

    Maljaars, E.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J. M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A. A.; Vu, N. M. T.; The EUROfusion MST1-team; The TCV-team

    2017-12-01

    The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety factor profile (q-profile) and kinetic plasma parameters such as the plasma beta. This demands that reliable profile control routines be established in presently operational tokamaks. We present a model predictive profile controller that controls the q-profile and plasma beta using power requests to two clusters of gyrotrons and the plasma current request. The performance of the controller is analyzed in both simulations and TCV L-mode discharges, where successful tracking of the estimated inverse q-profile as well as plasma beta is demonstrated under uncertain plasma conditions and in the presence of disturbances. The controller exploits the knowledge of the time-varying actuator limits in the actuator input calculation itself, such that fast transitions between targets are achieved without overshoot. A software environment is employed to prepare and test this and three other profile controllers in parallel in simulations and experiments on TCV. This set of tools includes the rapid plasma transport simulator RAPTOR and various algorithms to reconstruct the plasma equilibrium and plasma profiles by merging the available measurements with model-based predictions. In this work the estimated q-profile is merely based on RAPTOR model predictions due to the absence of internal current density measurements in TCV. These results encourage further exploitation of model predictive profile control in experiments on TCV and other (future) tokamaks.

  3. Nonlinear Model-Based Predictive Control applied to Large Scale Cryogenic Facilities

    CERN Document Server

    Blanco Vinuela, Enrique; de Prada Moraga, Cesar

    2001-01-01

    The thesis addresses the study, analysis, development, and finally the real implementation of an advanced control system for the 1.8 K Cooling Loop of the LHC (Large Hadron Collider) accelerator. The LHC is the next accelerator being built at CERN (European Center for Nuclear Research); it will use superconducting magnets operating below a temperature of 1.9 K along a circumference of 27 kilometers. The temperature of these magnets is a control parameter with strict operating constraints. The first control implementations applied a procedure that included linear identification, modelling and regulation using a linear predictive controller. This largely improved the overall performance of the plant with respect to a classical PID regulator, but the nature of the cryogenic processes pointed out the need for a more adequate technique, such as a nonlinear methodology. This thesis is a first step towards developing a global regulation strategy for the overall control of the LHC cells when they operate simultaneously....

  4. Model predictive control of PMSG-based wind turbines for frequency regulation in an isolated grid

    DEFF Research Database (Denmark)

    Wang, Haixin; Yang, Junyou; Ma, Yiming

    2017-01-01

    This paper proposes a frequency regulation strategy applied to wind turbine generators (WTGs) in an isolated grid. In order to compensate for the active power shortage caused by sudden load or wind speed changes, an improved deloading method is proposed to solve inconsistent regulation capabilities...... in different speed regions and provide WTGs with a certain capacity of power reserves. Considering that the torque compensation may bring about power oscillation, the speed reference of the conventional pitch control system should be reset. Moreover, to suppress disturbances of load and wind speed as well as overcome dependence...... on system parameters, a model predictive controller (MPC) of the wind farm is designed to generate torque compensation for each deloaded WTG. The key feature of this strategy is that each WTG reacts to grid disturbances in different ways, depending on its generator speed. Hardware-in-the-loop simulation

  5. Incorporating a prediction of postgrazing herbage mass into a whole-farm model for pasture-based dairy systems.

    Science.gov (United States)

    Gregorini, P; Galli, J; Romera, A J; Levy, G; Macdonald, K A; Fernandez, H H; Beukes, P C

    2014-07-01

    The DairyNZ whole-farm model (WFM; DairyNZ, Hamilton, New Zealand) consists of a framework that links component models for animals, pastures, crops, and soils. The model was developed to assist with the analysis and design of pasture-based farm systems. New (this work) and revised (e.g., cow, pasture, crop) component models can be added to the WFM, keeping the model flexible and up to date. Nevertheless, the WFM does not account for plant-animal relationships determining herbage-depletion dynamics. The user has to preset the maximum allowable level of herbage depletion [i.e., postgrazing herbage mass (residuals)] throughout the year. Because residuals have a direct effect on herbage regrowth, the WFM in its current form does not dynamically simulate the effect of grazing pressure on herbage depletion and the consequent effect on herbage regrowth. The management of grazing pressure is a key component of pasture-based dairy systems. Thus, the main objective of the present work was to develop a new version of the WFM able to predict residuals, and thereby simulate the related effects of grazing pressure dynamically at the farm scale. This objective was accomplished by incorporating a new component model into the WFM. This model represents plant-animal relationships, for example sward structure and herbage intake rate, and the resulting level of herbage depletion. The sensitivity of the new version of the WFM was evaluated, and the new WFM was then tested against an experimental data set previously used to evaluate the WFM, to illustrate the adequacy and improvement of the model development. Key output variables of the new version pertinent to this work (milk production, herbage dry matter intake, intake rate, harvesting efficiency, and residuals) responded acceptably to a range of input variables. The relative prediction errors for monthly and mean annual residual predictions were 20 and 5%, respectively. Monthly predictions of residuals had a line bias (1.5%), with a proportion

  6. Novel CNS drug discovery and development approach: model-based integration to predict neuro-pharmacokinetics and pharmacodynamics.

    Science.gov (United States)

    de Lange, Elizabeth C M; van den Brink, Willem; Yamamoto, Yumi; de Witte, Wilhelmus E A; Wong, Yin Cheong

    2017-12-01

    CNS drug development has been hampered by inadequate consideration of CNS pharmacokinetics (PK), pharmacodynamics (PD) and disease complexity (reductionist approach). Improvement is required via integrative model-based approaches. Areas covered: The authors summarize factors that have played a role in the high attrition rate of CNS compounds. Recent advances in CNS research and drug discovery are presented, especially with regard to the assessment of relevant neuro-PK parameters. Suggestions for further improvements are also discussed. Expert opinion: Understanding time- and condition-dependent interrelationships between neuro-PK and neuro-PD processes is key to predictions in different conditions. As a first screen, it is suggested to use in silico/in vitro derived molecular properties of candidate compounds and predict concentration-time profiles of compounds in multiple compartments of the human CNS, using time-course based physiology-based (PB) PK models. Then, for selected compounds, one can include in vitro drug-target binding kinetics to predict target occupancy (TO)-time profiles in humans. This will improve neuro-PD prediction. Furthermore, a pharmaco-omics approach is suggested, providing multilevel and parallel data on systems processes from individuals in a systems-wide manner. Thus, clinical trials will be better informed, using fewer animals, while also needing fewer individuals and samples per individual for proof of concept in humans.

  7. A Model for Traffic Accidents Prediction Based on Driver Personality Traits Assessment

    Directory of Open Access Journals (Sweden)

    Marjana Čubranić-Dobrodolac

    2017-12-01

    Full Text Available The model proposed in this paper uses four psychological instruments for assessing driver behaviour and personality traits, aiming to find a relationship between the considered constructs and the occurrence of traffic accidents. The Barratt Impulsiveness Scale (BIS-11) was used for the assessment of impulsivity, the Aggressive Driving Behaviour Questionnaire (ADBQ) for assessing aggressiveness while driving, the Manchester Driver Attitude Questionnaire (DAQ), and the Questionnaire for self-assessment of driving ability. Besides these instruments, the participants filled out an extensive demographic survey. Within the statistical analysis, in addition to the descriptive indicators, correlation coefficients were calculated and four hierarchical regression analyses were performed to determine the predictive power of personality traits on the occurrence of traffic accidents. Further, to confirm the results and to obtain additional information about the relationship between the considered variables, structural equation modelling and binary logistic regression were implemented. The sample of this research covered 305 drivers: 100 bus drivers, 102 truck drivers, and 103 drivers of privately owned vehicles. The results indicate that the BIS-11 and ADBQ questionnaires show the best predictive power, which means that impulsivity and aggressiveness as personality traits have the greatest influence on the occurrence of traffic accidents. This research could be useful in many fields, such as the design of selection procedures for professional drivers, the development of programs for the prevention of traffic accidents and violations of law, the rehabilitation of drivers who have been deprived of their driving license, etc.

  8. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  9. Discovering urban mobility patterns with PageRank based traffic modeling and prediction

    Science.gov (United States)

    Wang, Minjie; Yang, Su; Sun, Yi; Gao, Jun

    2017-11-01

    Urban transportation systems can be viewed as complex networks with time-varying traffic flows as links connecting adjacent regions as networked nodes. By computing urban traffic evolution on such a temporal complex network with PageRank, it is found that for most regions there exists a linear relation between the traffic congestion measure at the present time step and the PageRank value of the previous time step. Since the PageRank measure of a region results from the mutual interactions of the whole network, this implies that the traffic state of a local region does not evolve independently but is affected by the evolution of the whole network. As a result, the PageRank values can act as signatures for predicting upcoming traffic congestion. We observe the aforementioned laws experimentally based on the trajectory data of 12,000 taxis in Beijing for one month.
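    The two steps of the method, PageRank on the weighted region-to-region flow network followed by a per-region linear map from PageRank to the next congestion measure, can be sketched as below. The toy flow network and the fitted coefficients are hypothetical; the networkx package is assumed to be available.

      import networkx as nx

      # Hypothetical region-to-region traffic flows at the current time step
      flows = {("A", "B"): 120, ("B", "C"): 80, ("C", "A"): 60, ("B", "A"): 40}
      G = nx.DiGraph()
      for (src, dst), volume in flows.items():
          G.add_edge(src, dst, weight=volume)

      # PageRank on the weighted network: a region's score reflects the whole network's state
      pr = nx.pagerank(G, alpha=0.85, weight="weight")

      # Assumed linear relation: congestion(t+1) ~ a * PageRank(t) + b, fitted per region
      a, b = 2.5, 0.1                                        # hypothetical coefficients
      predicted_congestion = {node: a * score + b for node, score in pr.items()}
      print(predicted_congestion)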

  10. Modeling and control design of a stand alone wind energy conversion system based on functional model predictive control

    Energy Technology Data Exchange (ETDEWEB)

    Kassem, Ahmed M. [Beni-Suef University, Electrical Dept., Beni Suef (Egypt)

    2012-09-15

    This paper investigates the application of the model predictive control (MPC) approach to control the voltage and frequency of a stand-alone wind generation system. This scheme consists of a wind turbine which drives an induction generator feeding an isolated load. A static VAR compensator is connected at the induction generator terminals to regulate the load voltage. The rotor speed, and thereby the load frequency, are controlled via adjusting the mechanical power input using the blade pitch angle. The MPC is used to calculate the optimal control actions while respecting system constraints. To alleviate the computational effort and to reduce numerical problems, particularly for large prediction horizons, an exponentially weighted functional model predictive control (FMPC) is employed. Digital simulations have been carried out in order to validate the effectiveness of the proposed scheme. The proposed controller has been tested through step changes in the wind speed and the load impedance. Simulation results show that adequate performance of the proposed wind energy scheme has been achieved. Moreover, this scheme is robust against parameter variations and eliminates the influence of modeling and measurement errors. (orig.)

  11. Support-Vector-Machine-Based Reduced-Order Model for Limit Cycle Oscillation Prediction of Nonlinear Aeroelastic System

    Directory of Open Access Journals (Sweden)

    Gang Chen

    2012-01-01

    Full Text Available It is not easy for system identification-based reduced-order models (ROMs), or even eigenmode-based reduced-order models, to predict the limit cycle oscillation generated by nonlinear unsteady aerodynamics. Most of these traditional ROMs are sensitive to flow parameter variations. In order to deal with this problem, a support vector machine (SVM)-based ROM was investigated and a general construction framework was proposed. The new SVM-based ROM was then demonstrated on the two-DOF aeroelastic system for the NACA 64A010 airfoil in transonic flow. The simulation results show that the new ROM can capture the LCO behavior of the nonlinear aeroelastic system with good accuracy and high efficiency. The robustness and computational efficiency of the SVM-based ROM make it a promising tool for real-time flight simulation including nonlinear aeroelastic effects.
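    The construction framework can be illustrated with a kernel SVM regressor that learns the map from recent structural states to the aerodynamic response and then stands in for the CFD solver inside the aeroelastic time-marching loop. The sketch below uses scikit-learn's SVR on synthetic stand-in data; the input layout, kernel settings and data are hypothetical, not the ROM of the paper.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      # Synthetic training samples: [pitch, plunge, pitch_rate, plunge_rate] -> next-step aerodynamic force
      X = rng.normal(size=(200, 4))
      y = np.tanh(X[:, 0]) + 0.3 * X[:, 2] ** 3      # mildly nonlinear stand-in response

      rom = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.001))
      rom.fit(X, y)

      # Inside the coupled simulation, rom.predict(...) would be called instead of the CFD solver
      print(rom.predict(X[:3]))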

  12. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... signal based on a process model, coping with constraints on inputs and ... paper, we will present an introduction to the theory and application of MPC with Matlab codes ... section 5 presents the simulation results and section 6.

  13. Development of a QTL-environment-based predictive model for node addition rate in common bean.

    Science.gov (United States)

    Zhang, Li; Gezan, Salvador A; Eduardo Vallejos, C; Jones, James W; Boote, Kenneth J; Clavijo-Michelangeli, Jose A; Bhakta, Mehul; Osorno, Juan M; Rao, Idupulapati; Beebe, Stephen; Roman-Paoli, Elvin; Gonzalez, Abiezer; Beaver, James; Ricaurte, Jaumer; Colbert, Raphael; Correll, Melanie J

    2017-05-01

    This work reports the effects of the genetic makeup, the environment and the genotype by environment interactions for node addition rate in an RIL population of common bean. This information was used to build a predictive model for node addition rate. To select a plant genotype that will thrive in targeted environments it is critical to understand the genotype by environment interaction (GEI). In this study, multi-environment QTL analysis was used to characterize node addition rate (NAR, nodes day-1) on the main stem of the common bean (Phaseolus vulgaris L.). This analysis was carried out with field data of 171 recombinant inbred lines that were grown at five sites (Florida, Puerto Rico, two sites in Colombia, and North Dakota). Four QTLs (Nar1, Nar2, Nar3 and Nar4) were identified, one of which had significant QTL by environment interactions (QEI), that is, Nar2 with temperature. Temperature was identified as the main environmental factor affecting NAR, while day length and solar radiation played a minor role. Integration of sites as covariates into a QTL mixed site-effect model, and further replacing the site component with explanatory environmental covariates (i.e., temperature, day length and solar radiation), yielded a model that explained 73% of the phenotypic variation for NAR with a root mean square error of 16.25% of the mean. The QTL consistency and stability were examined through a tenfold cross validation with different sets of genotypes, and these four QTLs were always detected with 50-90% probability. The final model was evaluated using the leave-one-site-out method to assess the influence of site on node addition rate. These analyses provided a quantitative measure of the effects on NAR of common beans exerted by the genetic makeup, the environment and their interactions.

  14. Prediction of PWSCC in nickel base alloys using crack growth rate models

    International Nuclear Information System (INIS)

    Thompson, C.D.

    1995-01-01

    The Ford/Andresen slip dissolution SCC model, originally developed for stainless steel components in BWR environments, has been applied to Alloy 600 and Alloy X-750 tested in deaerated pure water chemistry. A method is described whereby the crack growth rates measured in compact tension specimens can be used to estimate crack growth in a component. Good agreement was found between model predictions and measured SCC in X-750 threaded fasteners over a wide range of temperatures, stresses, and material conditions. Most data support the basic assumption of this model that cracks initiate early in life. The evidence supporting a particular SCC mechanism is mixed. Electrochemical repassivation data and estimates of oxide fracture strain indicate that the slip dissolution model can account for the observed crack growth rates, provided primary rather than secondary creep rates are used. However, approximately 100 cross-sectional TEM foils of SCC cracks, including crack tips, reveal no evidence of enhanced plasticity or unique dislocation patterns at the crack tip or along the crack to support a classic slip dissolution mechanism. No voids, hydrides, or microcracks are found in the vicinity of the crack tips, creating doubt about classic hydrogen-related mechanisms. The bulk oxide films exhibit a surface oxide which is often different from the oxide found within a crack. Although bulk chromium concentration affects the rate of SCC, analytical data indicate the mechanism does not result from chromium depletion at the grain boundaries. The overall findings support a corrosion/dissolution mechanism, but not one necessarily related to slip at the crack tip. (author). 12 refs, 27 figs

  15. A CN-Based Ensembled Hydrological Model for Enhanced Watershed Runoff Prediction

    Directory of Open Access Journals (Sweden)

    Muhammad Ajmal

    2016-01-01

    Full Text Available A major structural inconsistency of the traditional curve number (CN) model is its dependence on an unstable fixed initial abstraction, which normally results in sudden jumps in runoff estimation. Likewise, the lack of a pre-storm soil moisture accounting (PSMA) procedure is another inherent limitation of the model. To circumvent those problems, we used a variable initial abstraction after ensembling the traditional CN model and the French four-parameter (GR4J) model to better quantify direct runoff from ungauged watersheds. To mimic the natural rainfall-runoff transformation at the watershed scale, our new parameterization designates intrinsic parameters and uses a simple structure. It exhibited more accurate and consistent results than earlier methods in evaluating data from 39 forest-dominated watersheds, both for small and large watersheds. In addition, based on different performance evaluation indicators, the runoff reproduction results show that the proposed model produced more consistent results for dry, normal, and wet watershed conditions than the other models used in this study.
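    For reference, the traditional curve number relation that the paper modifies computes runoff from rainfall via a potential maximum retention S and an initial abstraction Ia = lambda * S, with lambda conventionally fixed at 0.2. The sketch below implements that textbook relation and simply exposes lambda as an argument, which is where a variable, moisture-dependent initial abstraction would enter; the CN and rainfall values are illustrative only.

      import numpy as np

      def cn_runoff(P, CN, lam=0.2):
          """Event runoff Q (mm) from rainfall P (mm) with the curve number method."""
          S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
          Ia = lam * S                      # initial abstraction
          return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

      P = np.array([10.0, 40.0, 90.0])            # storm rainfall depths (mm)
      print(cn_runoff(P, CN=75, lam=0.2))          # traditional fixed initial abstraction
      print(cn_runoff(P, CN=75, lam=0.05))         # smaller abstraction for wetter conditions (illustrative)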

  16. A novel model to combine clinical and pathway-based transcriptomic information for the prognosis prediction of breast cancer.

    Directory of Open Access Journals (Sweden)

    Sijia Huang

    2014-09-01

    Full Text Available Breast cancer is the most common malignancy in women worldwide. With the increasing awareness of heterogeneity in breast cancers, better prediction of breast cancer prognosis is much needed for more personalized treatment and disease management. Towards this goal, we have developed a novel computational model for breast cancer prognosis by combining the Pathway Deregulation Score (PDS)-based Pathifier algorithm, Cox regression and the L1-LASSO penalization method. We trained the model on a set of 236 patients with gene expression data and clinical information, and validated the performance on three diversified testing data sets of 606 patients. To evaluate the performance of the model, we conducted survival analysis of the dichotomized groups, and compared the areas under the curve based on the binary classification. The resulting prognosis genomic model is composed of fifteen pathways (e.g., the P53 pathway) with previously reported cancer relevance, and it successfully differentiated relapse in the training set (log rank p-value = 6.25e-12) and three testing data sets (log rank p-value < 0.0005). Moreover, the pathway-based genomic models consistently performed better than gene-based models on all four data sets. We also found strong evidence that combining genomic information with clinical information improved the p-values of prognosis prediction by at least three orders of magnitude in comparison to using either genomic or clinical information alone. In summary, we propose a novel prognosis model that harnesses pathway-based dysregulation as well as valuable clinical information. The selected pathways in our prognosis model are promising targets for therapeutic intervention.
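    The Cox regression with an L1 (LASSO) penalty applied to pathway-level scores can be sketched with the lifelines package. The pathway-score columns, survival times and event indicators below are synthetic placeholders rather than Pathifier output or the study's patient data.

```python
# Minimal sketch of an L1-penalised Cox model on pathway-level scores,
# assuming pathway deregulation scores have already been computed
# (e.g. by Pathifier); all data here are synthetic placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n, n_pathways = 236, 15
df = pd.DataFrame(
    rng.normal(size=(n, n_pathways)),
    columns=[f"pathway_{i}" for i in range(n_pathways)],
)
df["time"] = rng.exponential(60, n)      # months to relapse (synthetic)
df["relapse"] = rng.integers(0, 2, n)    # event indicator (synthetic)

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # pure L1 (LASSO) penalty
cph.fit(df, duration_col="time", event_col="relapse")
print(cph.summary[["coef", "p"]])
```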

  17. Prediction of axillary lymph node metastasis in primary breast cancer patients using a decision tree-based model

    Directory of Open Access Journals (Sweden)

    Takada Masahiro

    2012-06-01

    Full Text Available Abstract Background The aim of this study was to develop a new data-mining model to predict axillary lymph node (AxLN) metastasis in primary breast cancer. To achieve this, we used a decision tree-based prediction method, the alternating decision tree (ADTree). Methods Clinical datasets for primary breast cancer patients who underwent sentinel lymph node biopsy or AxLN dissection without prior treatment were collected from three institutes (institute A, n = 148; institute B, n = 143; institute C, n = 174) and were used for variable selection, model training and external validation, respectively. The models were evaluated using area under the receiver operating characteristic (ROC) curve analysis to discriminate node-positive patients from node-negative patients. Results The ADTree model selected 15 of 24 clinicopathological variables in the variable selection dataset. The resulting area under the ROC curve values were 0.770 [95% confidence interval (CI), 0.689–0.850] for the model training dataset and 0.772 (95% CI: 0.689–0.856) for the validation dataset, demonstrating high accuracy and generalization ability of the model. The bootstrap value of the validation dataset was 0.768 (95% CI: 0.763–0.774). Conclusions Our prediction model showed high accuracy for predicting nodal metastasis in patients with breast cancer using commonly recorded clinical variables. Therefore, our model might help oncologists in the decision-making process for primary breast cancer patients before starting treatment.
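    The ADTree learner itself is specific to data-mining toolkits such as Weka, but the ROC-based evaluation reported above can be sketched generically with scikit-learn; the labels and predicted scores below are placeholders standing in for the model's outputs on a validation set.

```python
# Sketch of the ROC-based evaluation used to separate node-positive from
# node-negative patients; the predicted probabilities are placeholders
# standing in for ADTree scores on an external validation set.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 174)   # validation-set size used in the study
y_score = np.clip(y_true * 0.4 + rng.normal(0.3, 0.2, 174), 0, 1)

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {auc:.3f}; {len(thresholds)} candidate operating points")
```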

  18. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  19. A biological network-based regularized artificial neural network model for robust phenotype prediction from gene expression data.

    Science.gov (United States)

    Kang, Tianyu; Ding, Wei; Zhang, Luoyan; Ziemek, Daniel; Zarringhalam, Kourosh

    2017-12-19

    Stratification of patient subpopulations that respond favorably to treatment or experience an adverse reaction is an essential step toward the development of new personalized therapies and diagnostics. It is currently feasible to generate omic-scale biological measurements for all patients in a study, providing an opportunity for machine learning models to identify molecular markers for disease diagnosis and progression. However, the high variability of genetic background in human populations hampers the reproducibility of omic-scale markers. In this paper, we develop a biological network-based regularized artificial neural network model for prediction of phenotype from transcriptomic measurements in clinical trials. To improve model sparsity and the overall reproducibility of the model, we incorporate regularization for simultaneous shrinkage of gene sets based on active upstream regulatory mechanisms into the model. We benchmark our method against various regression, support vector machine and artificial neural network models and demonstrate the ability of our method to predict clinical outcomes using clinical trial data on acute rejection in kidney transplantation and response to Infliximab in ulcerative colitis. We show that integrating prior biological knowledge into the classification, as developed in this paper, significantly improves the robustness and generalizability of predictions to independent datasets. We provide Java code for our algorithm along with a parsed version of the STRING DB database. In summary, we present a method for prediction of clinical phenotypes using baseline genome-wide expression data that makes use of prior biological knowledge on gene-regulatory interactions in order to increase robustness and reproducibility of omic-scale markers. The integrated group-wise regularization method increases the interpretability of biological signatures and gives stable performance estimates across independent test sets.
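    The paper's regulator-derived gene groupings and network architecture are not reproduced here, but the core idea of shrinking whole gene sets together can be illustrated as a group-wise penalty added to an ordinary classifier's loss. The PyTorch sketch below, with synthetic data and random groupings, is an assumption-laden illustration of that idea, not the authors' implementation.

```python
# Sketch of group-wise regularization on the first layer of a small classifier:
# weights of genes assigned to the same (synthetic) regulatory group are
# penalised together, so whole groups can be shrunk towards zero.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_samples, n_genes, n_groups = 200, 100, 10
X = torch.randn(n_samples, n_genes)
y = torch.randint(0, 2, (n_samples,)).float()
group_of_gene = torch.randint(0, n_groups, (n_genes,))   # placeholder grouping

model = nn.Sequential(nn.Linear(n_genes, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1e-2                                               # group-penalty weight

for epoch in range(100):
    opt.zero_grad()
    logits = model(X).squeeze(1)
    loss = bce(logits, y)
    w = model[0].weight                                  # first-layer weights
    for g in range(n_groups):
        cols = (group_of_gene == g)
        loss = loss + lam * torch.linalg.norm(w[:, cols])  # group-lasso style term
    loss.backward()
    opt.step()
```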

  20. A Non-linear Predictive Model of Borderline Personality Disorder Based on Multilayer Perceptron.

    Science.gov (United States)

    Maldonato, Nelson M; Sperandeo, Raffaele; Moretto, Enrico; Dell'Orco, Silvia

    2018-01-01

    Borderline Personality Disorder is a serious mental disease, classified in Cluster B of the DSM IV-TR personality disorders. People with this syndrome present an anamnesis of traumatic experiences and show dissociative symptoms. Since not all subjects who have been victims of trauma develop a Borderline Personality Disorder, the emergence of this serious disease seems to have fragility of character as a predisposing condition. In fact, numerous studies show that subjects positive for a diagnosis of Borderline Personality Disorder had extremely high or extremely low scores on some temperamental dimensions (Harm Avoidance and Reward Dependence) and character dimensions (Cooperativeness and Self-Directedness). In a sample of 602 subjects who had consecutive access to an Outpatient Mental Health Service, the presence of Borderline Personality Disorder was evaluated using the semi-structured interview for the DSM IV-TR personality disorders. In this population we assessed the presence of dissociative symptoms with the Dissociative Experiences Scale and the personality traits with the Temperament and Character Inventory developed by Cloninger. To assess the weight and the predictive value of these psychopathological dimensions in relation to the Borderline Personality Disorder diagnosis, a neural network statistical model called "multilayer perceptron" was implemented. This model was developed with a dichotomous dependent variable, consisting of the presence or absence of the diagnosis of borderline personality disorder, and with five covariates. The first is the taxonomic subscale of the Dissociative Experiences Scale; the others are temperamental and character traits: Novelty-Seeking, Harm-Avoidance, Self-Directedness and Cooperativeness. The statistical model, which gave satisfactory results, showed a significant capacity (89%) to predict the presence of borderline personality disorder. Furthermore, the dissociative symptoms seem to have a greater influence than

  1. A Non-linear Predictive Model of Borderline Personality Disorder Based on Multilayer Perceptron

    Directory of Open Access Journals (Sweden)

    Nelson M. Maldonato

    2018-04-01

    Full Text Available Borderline Personality Disorder is a serious mental disease, classified in Cluster B of the DSM IV-TR personality disorders. People with this syndrome present an anamnesis of traumatic experiences and show dissociative symptoms. Since not all subjects who have been victims of trauma develop a Borderline Personality Disorder, the emergence of this serious disease seems to have fragility of character as a predisposing condition. In fact, numerous studies show that subjects positive for a diagnosis of Borderline Personality Disorder had extremely high or extremely low scores on some temperamental dimensions (Harm Avoidance and Reward Dependence) and character dimensions (Cooperativeness and Self-Directedness). In a sample of 602 subjects who had consecutive access to an Outpatient Mental Health Service, the presence of Borderline Personality Disorder was evaluated using the semi-structured interview for the DSM IV-TR personality disorders. In this population we assessed the presence of dissociative symptoms with the Dissociative Experiences Scale and the personality traits with the Temperament and Character Inventory developed by Cloninger. To assess the weight and the predictive value of these psychopathological dimensions in relation to the Borderline Personality Disorder diagnosis, a neural network statistical model called “multilayer perceptron” was implemented. This model was developed with a dichotomous dependent variable, consisting of the presence or absence of the diagnosis of borderline personality disorder, and with five covariates. The first is the taxonomic subscale of the Dissociative Experiences Scale; the others are temperamental and character traits: Novelty-Seeking, Harm-Avoidance, Self-Directedness and Cooperativeness. The statistical model, which gave satisfactory results, showed a significant capacity (89%) to predict the presence of borderline personality disorder. Furthermore, the dissociative symptoms seem to have a

  2. MicroRNA prediction using a fixed-order Markov model based on the secondary structure pattern.

    Directory of Open Access Journals (Sweden)

    Wei Shen

    Full Text Available Predicting miRNAs is an arduous task, due to the diversity of the precursors and the complexity of enzyme processes. Although several prediction approaches have reached impressive performances, few of them can achieve full-function recognition of the mature miRNA directly from candidate hairpins across species. Therefore, researchers continue to seek more powerful models closer to the biological recognition of miRNA structure. In this report, we describe a novel miRNA prediction algorithm, known as FOMmiR, using a fixed-order Markov model based on the secondary structural pattern. For a training dataset containing 809 human pre-miRNAs and 6441 human pseudo-miRNA hairpins, the model's parameters were defined and evaluated. The results showed that FOMmiR reached 91% accuracy on the human dataset through 5-fold cross-validation. Moreover, for the independent test datasets, FOMmiR presented outstanding predictions in human and other species including vertebrates, Drosophila, worms and viruses, and even plants, in contrast to the well-known algorithms and models. In particular, FOMmiR was not only able to distinguish the miRNA precursors from the hairpins, but also to locate the position and strand of the mature miRNA. Therefore, this study provides a new generation of miRNA prediction algorithm, which successfully realizes full-function recognition of mature miRNAs directly from hairpin sequences. It also presents a new understanding of the biological recognition based on the strongest signal's location detected by FOMmiR, which might be closely associated with the enzyme cleavage mechanism during miRNA maturation.
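    FOMmiR's exact parameterization is not given in this record, but the backbone of a fixed-order Markov scorer over structure strings, estimating k-order transition probabilities from positive and negative training hairpins and scoring a candidate by log-likelihood ratio, can be sketched as follows. The dot-bracket strings and the order k are toy assumptions.

```python
# Sketch of a fixed-order Markov scorer over secondary-structure strings:
# k-order transition probabilities are estimated separately from real and
# pseudo hairpins, and a candidate is scored by the log-likelihood ratio.
import math
from collections import defaultdict

def train(seqs, k=2, alphabet="().", alpha=1.0):
    counts = defaultdict(lambda: {c: alpha for c in alphabet})  # Laplace smoothing
    for s in seqs:
        for i in range(k, len(s)):
            counts[s[i - k:i]][s[i]] += 1
    return {ctx: {c: v / sum(nxt.values()) for c, v in nxt.items()}
            for ctx, nxt in counts.items()}

def log_likelihood(s, model, k=2, floor=1e-6):
    return sum(math.log(model.get(s[i - k:i], {}).get(s[i], floor))
               for i in range(k, len(s)))

real = ["((((....))))", "(((((...)))))"]        # toy "true pre-miRNA" structures
pseudo = ["((..))..((..))", "(((..)))...(..)"]  # toy pseudo-hairpin structures
m_pos, m_neg = train(real), train(pseudo)

candidate = "((((...))))"
score = log_likelihood(candidate, m_pos) - log_likelihood(candidate, m_neg)
print(f"log-likelihood ratio = {score:.2f} (positive favours real pre-miRNA)")
```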

  3. Autoregressive-moving-average hidden Markov model for vision-based fall prediction-An application for walker robot.

    Science.gov (United States)

    Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro

    2017-01-01

    Population aging requires providing the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data of a user of a walking assistive system, acquired from a depth sensor. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted on the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, while using a walker robot in five different falling scenarios; namely, fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts the fall with a rate of 84.72%.
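    The HMM classification layer is not reproduced here, but the first stage, fitting an ARMA model to a walking-related time series and forecasting the upcoming frames that the classifier would consume, can be sketched with statsmodels; the signal below is synthetic.

```python
# Sketch of the ARMA forecasting stage: fit an ARMA(p, q) model to a
# synthetic walking-related time series and forecast the next frames,
# which a downstream HMM classifier would then label as fall / no-fall.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
t = np.arange(300)
signal = np.sin(2 * np.pi * t / 30) + 0.1 * rng.standard_normal(300)  # gait-like

model = ARIMA(signal, order=(2, 0, 1))   # ARMA(2,1): d = 0, no differencing
fitted = model.fit()
forecast = fitted.forecast(steps=10)     # upcoming 10 frames
print(forecast)
```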

  4. Neuro-Fuzzy Prediction of Cooperation Interaction Profile of Flexible Road Train Based on Hybrid Automaton Modeling

    Directory of Open Access Journals (Sweden)

    Banjanovic-Mehmedovic Lejla

    2016-01-01

    Full Text Available Accurate prediction of traffic information is important in many applications related to Intelligent Transport Systems (ITS), since it reduces the uncertainty of future traffic states and improves traffic mobility. Much research has been done on the prediction of traffic information such as speed, flow and travel time, with the most important work in the domain of cooperative intelligent transport systems (C-ITS). The goal of this paper is to introduce novel cooperation behaviour profile prediction through the example of the flexible Road Train, a useful road cooperation parameter that contributes to improving traffic mobility in Intelligent Transportation Systems. This paper presents an approach towards the control and cooperation behaviour modelling of vehicles in the flexible Road Train based on a hybrid automaton and neuro-fuzzy (ANFIS) prediction of the cooperation profile of the flexible Road Train. The hybrid automaton takes into account the complex dynamics of each vehicle as well as a discrete cooperation approach. ANFIS is a particular class of the ANN family with attractive estimation and learning potential. To provide statistical analysis, the root mean square error (RMSE), coefficient of determination (R2) and Pearson coefficient (r) were utilized. The study results suggest that ANFIS would be an efficient soft computing methodology that could offer precise predictions of cooperative interactions between vehicles in the Road Train, which is useful for predicting mobility in Intelligent Transport Systems.

  5. Shelf-Life Prediction of Extra Virgin Olive Oils Using an Empirical Model Based on Standard Quality Tests

    Directory of Open Access Journals (Sweden)

    Claudia Guillaume

    2016-01-01

    Full Text Available Extra virgin olive oil shelf-life could be defined as the length of time under normal storage conditions within which no off-flavours or defects are developed and quality parameters such as peroxide value and specific absorbance are retained within the accepted limits for this commercial category. Prediction of shelf-life is a desirable goal in the food industry. Even though shelf-life should be one of the most important quality markers for extra virgin olive oil, it is not recognised as a legal parameter in most regulations and standards around the world. The proposed empirical formula evaluated in the present study is based on common quality tests with known and predictable result changes over time, influenced by different aspects of extra virgin olive oil that have a meaningful effect on its shelf-life. The basic quality tests considered in the formula are Rancimat® or induction time (IND); 1,2-diacylglycerols (DAGs); pyropheophytin a (PPP); and free fatty acids (FFA). This paper reports research into the actual shelf-life of commercially packaged extra virgin olive oils versus the predicted shelf-life of those oils determined by analysing the expected deterioration curves for the three basic quality tests detailed above. Based on the proposed model, shelf-life is predicted by choosing the lowest predicted shelf-life of any of those three tests.
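    The paper's empirical coefficients are not given in this record, so the sketch below only illustrates the decision rule described: predict a shelf-life from each quality test's deterioration curve and report the lowest value. The linear deterioration functions and commercial limits below are hypothetical placeholders, not the published formula.

```python
# Sketch of the "take the shortest predicted shelf-life" rule. Each function
# maps a measured value to a hypothetical months-remaining estimate; the
# coefficients and limits are placeholders, not the published empirical model.
def months_from_dags(dags_pct: float) -> float:
    return max(0.0, (dags_pct - 35.0) / 2.0)    # hypothetical limit: 35%

def months_from_ppp(ppp_pct: float) -> float:
    return max(0.0, (17.0 - ppp_pct) / 0.8)     # hypothetical limit: 17%

def months_from_ffa(ffa_pct: float) -> float:
    return max(0.0, (0.8 - ffa_pct) / 0.02)     # hypothetical limit: 0.8%

def predicted_shelf_life(dags_pct, ppp_pct, ffa_pct):
    estimates = {
        "DAGs": months_from_dags(dags_pct),
        "PPP": months_from_ppp(ppp_pct),
        "FFA": months_from_ffa(ffa_pct),
    }
    limiting = min(estimates, key=estimates.get)
    return estimates[limiting], limiting

months, limiting_test = predicted_shelf_life(dags_pct=80.0, ppp_pct=6.0, ffa_pct=0.3)
print(f"predicted shelf-life ≈ {months:.1f} months, limited by {limiting_test}")
```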

  6. A prediction model based on an artificial intelligence system for moderate to severe obstructive sleep apnea.

    Science.gov (United States)

    Sun, Lei Ming; Chiu, Hung-Wen; Chuang, Chih Yuan; Liu, Li

    2011-09-01

    Obstructive sleep apnea (OSA) is a major concern in modern medicine; however, it is difficult to diagnose. Screening questionnaires such as the Berlin questionnaire, Rome questionnaire, and BASH'IM score are used to identify patients with OSA. However, the sensitivity and specificity of these tools are not satisfactory. We aim to introduce an artificial intelligence method to screen moderate to severe OSA patients (apnea-hypopnea index ≧15). One hundred twenty patients were asked to complete a newly developed questionnaire before undergoing an overnight polysomnography (PSG) study. One hundred ten validated questionnaires were included in this study. A genetic algorithm (GA) was used to build the five best models based on these questionnaires. The same data were analyzed with logistic regression (LR) for comparison. The sensitivity of the GA models varied from 81.8% to 88.0%, with a specificity of 95% to 97%. On the other hand, the sensitivity and specificity of the LR model were 55.6% and 57.9%, respectively. GA provides a good solution for building models to screen moderate to severe OSA patients, who require PSG evaluation and medical intervention. The questionnaire did not require any special biochemistry data and was easily self-administered. The sensitivity and specificity of the GA models are satisfactory and may improve when more patients are recruited.

  7. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    Science.gov (United States)

    Xiao, WenBo; Nazario, Gina; Wu, HuaMing; Zhang, HuaMing; Cheng, Feng

    2017-01-01

    In this article, we introduce an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data and are also influenced by the number of hidden neurons. The influence of external conditions on the generated power output, from smallest to largest, follows the order: multi-, mono-, and amorphous crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For the multi- and amorphous crystalline cells, three or four hidden-layer units resulted in high correlation coefficients and low MSEs. For the mono-crystalline cell, the best results were achieved with eight hidden-layer units.

  8. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    Directory of Open Access Journals (Sweden)

    WenBo Xiao

    Full Text Available In this article, we introduce an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data and are also influenced by the number of hidden neurons. The influence of external conditions on the generated power output, from smallest to largest, follows the order: multi-, mono-, and amorphous crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For the multi- and amorphous crystalline cells, three or four hidden-layer units resulted in high correlation coefficients and low MSEs. For the mono-crystalline cell, the best results were achieved with eight hidden-layer units.
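    A small ANN of the kind described, a few hidden neurons mapping environmental inputs to output power, can be sketched with scikit-learn's MLPRegressor; the irradiance and temperature inputs and the power data below are synthetic placeholders, not the paper's measurements.

```python
# Sketch of a small ANN mapping irradiance and cell temperature to module
# output power; the data are synthetic, and the 4 hidden neurons mirror the
# "three or four hidden units" range discussed for multi-/amorphous cells.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
irradiance = rng.uniform(100, 1000, 500)   # W/m^2
cell_temp = rng.uniform(10, 60, 500)       # degC
power = 0.2 * irradiance * (1 - 0.004 * (cell_temp - 25)) + rng.normal(0, 5, 500)

X = np.column_stack([irradiance, cell_temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {ann.score(X_te, y_te):.3f}")
```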

  9. Stochastic Model Predictive Fault Tolerant Control Based on Conditional Value at Risk for Wind Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Yun-Tao Shi

    2018-01-01

    Full Text Available Wind energy has been drawing considerable attention in recent years. However, due to the random nature of wind and the high failure rate of wind energy conversion systems (WECSs), how to implement fault-tolerant WECS control is becoming a significant issue. This paper addresses the fault-tolerant control problem of a WECS with a probable actuator fault. A new stochastic model predictive control (SMPC) fault-tolerant controller with a Conditional Value at Risk (CVaR) objective function is proposed in this paper. First, the Markov jump linear model is used to describe the WECS dynamics, which are affected by many stochastic factors, like the wind. The Markov jump linear model can precisely model the random WECS properties. Second, scenario-based SMPC is used as the controller to address the control problem of the WECS. With this controller, all the possible realizations of the disturbance in the prediction horizon are enumerated by scenario trees, so that an uncertain SMPC problem can be transformed into a deterministic model predictive control (MPC) problem. Finally, the CVaR objective function is adopted to improve the fault-tolerant control performance of the SMPC controller. CVaR provides a balance between the performance and the random failure risks of the system. The Min-Max performance index is introduced to compare the fault-tolerant control performance with the proposed controller. The comparison results show that the proposed method has better fault-tolerant control performance.
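    The full scenario-based SMPC formulation is beyond a short example, but the CVaR objective it adopts can be sketched on its own: given per-scenario costs, CVaR at level α is the expected cost over the worst (1 − α) fraction of scenarios. The scenario costs below are synthetic stand-ins.

```python
# Sketch of empirical Conditional Value at Risk (CVaR): the expected cost in
# the worst (1 - alpha) tail of the scenario distribution. Scenario costs are
# synthetic stand-ins for per-scenario MPC stage costs.
import numpy as np

def cvar(costs: np.ndarray, alpha: float = 0.95) -> float:
    var = np.quantile(costs, alpha)   # Value at Risk at level alpha
    tail = costs[costs >= var]        # worst-case scenarios
    return float(tail.mean())

rng = np.random.default_rng(5)
scenario_costs = rng.lognormal(mean=1.0, sigma=0.5, size=1000)
print(f"mean cost = {scenario_costs.mean():.2f}")
print(f"CVaR(95%) = {cvar(scenario_costs, 0.95):.2f}")
```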

  10. Prediction of passive blood-brain partitioning: straightforward and effective classification models based on in silico derived physicochemical descriptors.

    Science.gov (United States)

    Vilar, Santiago; Chakrabarti, Mayukh; Costanzi, Stefano

    2010-06-01

    The distribution of compounds between blood and brain is a very important consideration for new candidate drug molecules. In this paper, we describe the derivation of two linear discriminant analysis (LDA) models for the prediction of passive blood-brain partitioning, expressed in terms of logBB values. The models are based on computationally derived physicochemical descriptors, namely the octanol/water partition coefficient (logP), the topological polar surface area (TPSA) and the total number of acidic and basic atoms, and were obtained using a homogeneous training set of 307 compounds, for all of which the published experimental logBB data had been determined in vivo. In particular, since molecules with logBB>0.3 cross the blood-brain barrier (BBB) readily while molecules with logBB<-1 are poorly distributed to the brain, on the basis of these thresholds we derived two distinct models, both of which show a percentage of good classification of about 80%. Notably, the predictive power of our models was confirmed by the analysis of a large external dataset of compounds with reported activity on the central nervous system (CNS) or lack thereof. The calculation of straightforward physicochemical descriptors is the only requirement for the prediction of the logBB of novel compounds through our models, which can be conveniently applied in conjunction with drug design and virtual screenings. Published by Elsevier Inc.
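    A descriptor-based LDA classifier of the kind described can be sketched with scikit-learn; the logP, TPSA and acid/base counts below are synthetic placeholders rather than the paper's 307-compound training set, and the class labels are generated rather than taken from experimental logBB values.

```python
# Sketch of a linear discriminant analysis classifier on simple physicochemical
# descriptors (logP, TPSA, acidic/basic atom counts); all training data here
# are synthetic placeholders, not the curated in vivo logBB dataset.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
n = 300
logp = rng.normal(2.0, 1.5, n)
tpsa = rng.normal(70.0, 30.0, n)
acids = rng.integers(0, 3, n)
bases = rng.integers(0, 3, n)
X = np.column_stack([logp, tpsa, acids, bases])

# Synthetic labels: 1 = readily crosses the BBB, 0 = poorly distributed
y = (0.5 * logp - 0.02 * tpsa - 0.4 * acids + rng.normal(0, 0.5, n) > 0).astype(int)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(f"training accuracy: {lda.score(X, y):.2f}")
print("new compound class:", lda.predict([[1.5, 90.0, 1, 0]]))
```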

  11. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    Science.gov (United States)

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
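    The two-parameter Weibull form described can be written down and fitted directly; the hydrolysis time course below is synthetic, and λ is the characteristic-time parameter the authors propose as an overall performance measure.

```python
# Sketch of fitting the Weibull saccharification model
#   conversion(t) = 1 - exp(-(t / lam) ** n)
# to a synthetic hydrolysis time course; lam (the characteristic time) is
# the parameter proposed as an overall performance measure.
import numpy as np
from scipy.optimize import curve_fit

def weibull_conversion(t, lam, n):
    return 1.0 - np.exp(-(t / lam) ** n)

t_h = np.array([2, 4, 8, 12, 24, 48, 72], dtype=float)        # hours
conv = np.array([0.10, 0.19, 0.33, 0.44, 0.65, 0.82, 0.90])   # synthetic data

(lam, n), _ = curve_fit(weibull_conversion, t_h, conv, p0=[20.0, 1.0])
print(f"lambda = {lam:.1f} h (characteristic time), n = {n:.2f}")
```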

  12. Development of a lifetime prediction model for lithium-ion batteries based on extended accelerated aging test data

    Science.gov (United States)

    Ecker, Madeleine; Gerschler, Jochen B.; Vogel, Jan; Käbitz, Stefan; Hust, Friedrich; Dechent, Philipp; Sauer, Dirk Uwe

    2012-10-01

    Battery lifetime prognosis is a key requirement for the successful market introduction of electric and hybrid vehicles. This work aims at the development of a lifetime prediction approach based on an aging model for lithium-ion batteries. A multivariable analysis of a detailed series of accelerated lifetime experiments representing typical operating conditions in hybrid electric vehicles is presented. The impact of temperature and state of charge on impedance rise and capacity loss is quantified. The investigations are based on a high-power NMC/graphite lithium-ion battery with good cycle lifetime. The resulting mathematical functions are physically motivated by the occurring aging effects and are used for the parameterization of a semi-empirical aging model. An impedance-based electric-thermal model is coupled to the aging model to simulate the dynamic interaction between aging of the battery and its thermal and electric behavior. Based on these models, different drive cycles and management strategies can be analyzed with regard to their impact on lifetime. It is an important tool for vehicle designers and for the implementation of business models. A key contribution of the paper is the parameterization of the aging model by experimental data, whereas aging simulations in the literature usually lack a robust empirical foundation.

  13. A model-based approach to predict muscle synergies using optimization: application to feedback control

    Directory of Open Access Journals (Sweden)

    Reza eSharif Razavian

    2015-10-01

    Full Text Available This paper presents a new model-based method to define muscle synergies. Unlike the conventional factorization approach, which extracts synergies from electromyographic data, the proposed method employs a biomechanical model and formally defines the synergies as the solution of an optimal control problem. As a result, the number of required synergies is directly related to the dimensions of the operational space. The estimated synergies are posture-dependent, which correlate well with the results of standard factorization methods. Two examples are used to showcase this method: a two-dimensional forearm model, and a three-dimensional driver arm model. It has been shown here that the synergies need to be task-specific (i.e. they are defined for the specific operational spaces: the elbow angle and the steering wheel angle in the two systems). This functional definition of synergies results in a low-dimensional control space, in which every force in the operational space is accurately created by a unique combination of synergies. As such, there is no need for extra criteria (e.g., minimizing effort) in the process of motion control. This approach is motivated by the need for fast and bio-plausible feedback control of musculoskeletal systems, and can have important implications in engineering, motor control, and biomechanics.

  14. A model-based approach to predict muscle synergies using optimization: application to feedback control.

    Science.gov (United States)

    Sharif Razavian, Reza; Mehrabi, Naser; McPhee, John

    2015-01-01

    This paper presents a new model-based method to define muscle synergies. Unlike the conventional factorization approach, which extracts synergies from electromyographic data, the proposed method employs a biomechanical model and formally defines the synergies as the solution of an optimal control problem. As a result, the number of required synergies is directly related to the dimensions of the operational space. The estimated synergies are posture-dependent, which correlate well with the results of standard factorization methods. Two examples are used to showcase this method: a two-dimensional forearm model, and a three-dimensional driver arm model. It has been shown here that the synergies need to be task-specific (i.e., they are defined for the specific operational spaces: the elbow angle and the steering wheel angle in the two systems). This functional definition of synergies results in a low-dimensional control space, in which every force in the operational space is accurately created by a unique combination of synergies. As such, there is no need for extra criteria (e.g., minimizing effort) in the process of motion control. This approach is motivated by the need for fast and bio-plausible feedback control of musculoskeletal systems, and can have important implications in engineering, motor control, and biomechanics.

  15. Data-Based Predictive Control with Multirate Prediction Step

    Science.gov (United States)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes the current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with a multirate prediction step. One result is a reduced influence of the prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the