WorldWideScience

Sample records for model predictions based

  1. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, and the resonant field being dominant, and the consequences of these in terms of limitations in the theory and in the practical use of the models.
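
    The power-balance idea behind SEA-type models can be illustrated with a toy two-subsystem example; all loss factors and the injected power below are invented for demonstration.

```python
import numpy as np

# SEA power balance for two coupled subsystems:
#   P_i = omega * (eta_i * E_i + eta_ij * E_i - eta_ji * E_j)
# Solve for the subsystem energies E given the injected powers P.
omega = 2 * np.pi * 1000.0        # angular frequency of the 1 kHz band (rad/s)
eta1, eta2 = 0.01, 0.015          # internal loss factors (assumed)
eta12, eta21 = 0.002, 0.003       # coupling loss factors (assumed)
P = np.array([1.0, 0.0])          # 1 W injected into subsystem 1 only

A = omega * np.array([
    [eta1 + eta12, -eta21],
    [-eta12,       eta2 + eta21],
])
E = np.linalg.solve(A, P)
print(E)  # subsystem energies in joules; E[0] > E[1]
```

    The excited subsystem holds the most energy; the coupling loss factors control how much flows to the receiving subsystem, which is the core of the diffuse-field assumption the paper scrutinizes.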

  2. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...
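
    A minimal unconstrained sketch of a regularized l2 FIR predictive controller in this spirit (the paper's input and input-rate constraints are omitted, and the FIR coefficients, horizon, and weight below are illustrative):

```python
import numpy as np

n, N, lam = 4, 10, 0.1
h = np.array([0.5, 0.3, 0.15, 0.05])   # assumed FIR impulse response h_1..h_n
r = np.ones(N)                         # unit setpoint over the prediction horizon

# Prediction matrix: y_{k+j+1} = sum_i h_{i+1} * u_{k+j-i}, past inputs taken as 0
T = np.zeros((N, N))
for j in range(N):
    for i in range(n):
        if j - i >= 0:
            T[j, j - i] = h[i]

# First-difference matrix penalising the input rate (l2 regularization)
D = np.eye(N) - np.eye(N, k=-1)

# Closed-form regularized least-squares input sequence
u = np.linalg.solve(T.T @ T + lam * D.T @ D, T.T @ r)
y = T @ u
print(y)  # predicted outputs approach the setpoint
```

    With input and input-rate constraints added, the same quadratic cost would be handed to a QP solver instead of solved in closed form.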

  3. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII), are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...

  4. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    A reference test dataset was used to test the model. Sensitivity in predicting gender is improved over the current approach based on genotype composition in ChrX. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher-quality sequenced data.

  5. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that a further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
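
    The Markov-chain (MC) approach compared above can be sketched by propagating a condition-state distribution through a transition matrix; the four faulting states and all probabilities below are invented for illustration.

```python
import numpy as np

# States 0 (good) .. 3 (poor); the worst state is absorbing.
P = np.array([
    [0.85, 0.15, 0.00, 0.00],
    [0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 1.00],
])
state = np.array([1.0, 0.0, 0.0, 0.0])  # new pavement starts in the best state

for year in range(10):
    state = state @ P                   # one deterioration step per year
print(state)  # probability of each condition state after 10 years
```

    In practice the transition probabilities are estimated from repeated visual inspections, which is exactly the limitation of the MC model that the paper points out.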

  6. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
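
    The bootstrap procedure described above can be sketched on synthetic data, with a toy linear dose-outcome relation standing in for the salivary-function model (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "patients": outcome falls linearly with mean dose, plus noise.
dose = rng.uniform(10, 60, 50)
outcome = 100 - 1.2 * dose + rng.normal(0, 5, 50)

# Residual ("noise") spread estimated from the original fit
coef = np.polyfit(dose, outcome, 1)
resid_sd = float(np.std(outcome - np.polyval(coef, dose)))

# Bootstrap the fit, then predict for one new treatment plan (mean dose = 35)
preds = []
for _ in range(2000):
    idx = rng.integers(0, len(dose), len(dose))      # resample patients
    c = np.polyfit(dose[idx], outcome[idx], 1)
    preds.append(np.polyval(c, 35.0) + rng.normal(0, resid_sd))  # add residual noise
preds = np.array(preds)
print(np.percentile(preds, [2.5, 50, 97.5]))  # uncertainty band for this plan
```

    The histogram of `preds` plays the role of the paper's plan-specific outcome histogram: wide for some plans, narrow for others.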

  7. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions, identify key reasons why model output may differ, and discuss the implications that model uncertainty has for policy-guiding applications. Location The Western Cape of South Africa. Methods We applied nine of the most widely used modelling techniques to model potential distributions under current ... Main conclusions We highlight an important source of uncertainty in assessments of the impacts of climate ... algorithm when extrapolating beyond the range of data used to build the model. The effects of these factors should be carefully considered when using this modelling approach to predict species ranges.

  8. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights:
    • New temperature-based models for estimating solar radiation are investigated.
    • The models are validated against 20 years of measured global solar radiation data.
    • The new temperature-based model shows the best performance for coastal sites.
    • The new temperature-based model is more accurate than the sunshine-based models.
    • The new model is highly applicable with weather temperature forecast techniques.

    Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models, owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with three other models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and Long. 29°34′E), and then the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. The most common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulae of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimation of global solar radiation using this approach can be employed in the design and evaluation of performance for ...
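
    For context, a widely used member of the temperature-based family (not one of the paper's seventeen new models) is the Hargreaves-type formula; the coefficient and inputs below are typical illustrative values.

```python
import math

def hargreaves_rs(t_max_c, t_min_c, ra, k_rs=0.16):
    """Estimate global solar radiation (MJ m-2 day-1) from the daily
    temperature range; ra is extraterrestrial radiation (MJ m-2 day-1).
    k_rs is ~0.16 for interior sites and ~0.19 for coastal sites."""
    return k_rs * math.sqrt(t_max_c - t_min_c) * ra

print(hargreaves_rs(30.0, 18.0, 35.0))  # ~19.4 MJ m-2 day-1
```

    Models of this shape only need daily maximum and minimum temperature, which is why they pair naturally with weather-forecast data as the highlights note.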

  9. New Approaches for Channel Prediction Based on Sinusoidal Modeling

    Directory of Open Access Journals (Sweden)

    Ekman Torbjörn

    2007-01-01

    Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies to future wireless communication systems. The prediction of Rayleigh fading channels is studied in the frame of sinusoidal modeling in this paper. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed. Three different predictors based on the statistical sinusoidal model are proposed. These methods outperform the standard linear predictor (LP in Monte Carlo simulations, but underperform with real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving average and sinusoidal (JMAS prediction model and the associated joint least-squares (LS predictor are proposed. It combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.
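
    The standard linear predictor (LP) used as the baseline above can be sketched on a toy narrowband fading signal (the signal, noise level, and predictor order are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "channel": a slow oscillation plus noise, 400 samples.
t = np.arange(400)
channel = np.cos(0.1 * t) + 0.05 * rng.standard_normal(400)

p = 8                                   # predictor order
N = len(channel)
X = np.column_stack([channel[i:N - p + i] for i in range(p)])  # p past samples per row
y = channel[p:]                                                # the next sample

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # LP coefficients by least squares
mse = float(np.mean((X @ w - y) ** 2))
print(mse, float(np.var(y)))                # prediction error vs. signal power
```

    The sinusoidal predictors in the paper instead fit explicit sinusoid parameters; the joint LS predictor combines both views to absorb unmodeled dynamics.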

  10. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; the second part is the "neural fuzzy inference system", which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted precipitation outcomes more accurate, and the prediction methods simpler, than the complex numerical forecasting models that occupy large computation resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than traditional artificial neural networks with low predictive accuracy.

  11. Cloud Based Metalearning System for Predictive Modeling of Biomedical Data

    Directory of Open Access Journals (Sweden)

    Milan Vukićević

    2014-01-01

    Full Text Available Rapid growth and storage of biomedical data has enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework for ranking and selection of the best predictive algorithms for the data at hand with open-source big data technologies for the analysis of biomedical data.

  12. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    Science.gov (United States)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.

  13. Bayesian Predictive Modeling Based on Multidimensional Connectivity Profiling

    Science.gov (United States)

    Herskovits, Edward

    2015-01-01

    Dysfunction of brain structural and functional connectivity is increasingly being recognized as playing an important role in many brain disorders. Diffusion tensor imaging (DTI) and functional magnetic resonance (fMR) imaging are widely used to infer structural and functional connectivity, respectively. How to combine structural and functional connectivity patterns for predictive modeling is an important, yet open, problem. We propose a new method, called Bayesian prediction based on multidimensional connectivity profiling (BMCP), to distinguish subjects at the individual level based on structural and functional connectivity patterns. BMCP combines finite mixture modeling and Bayesian network classification. We demonstrate its use in distinguishing young and elderly adults based on DTI and resting-state fMR data. PMID:25924166

  14. [Hyperspectrum based prediction model for nitrogen content of apple flowers].

    Science.gov (United States)

    Zhu, Xi-Cun; Zhao, Geng-Xing; Wang, Ling; Dong, Fang; Lei, Tong; Zhan, Bing

    2010-02-01

    The present paper aims to quantitatively retrieve nitrogen content in apple flowers, so as to provide an important basis for apple informationization management. Using an ASD FieldSpec 3 field spectrometer, the hyperspectral reflectivity of 120 apple flower samples in the full-bloom stage was measured and their nitrogen contents were analyzed. Based on the original spectra and first-derivative spectral characteristics of the apple flowers, correlation analysis was carried out between the original and first-derivative spectral reflectivity and nitrogen contents, so as to determine the sensitive bands. Based on characteristic spectral parameters, prediction models were built, optimized and tested. The results indicated that the nitrogen content of apple flowers was very significantly negatively correlated with the original spectral reflectance in the 374-696, 1340-1890 and 2052-2433 nm bands, while in 736-913 nm they were very significantly positively correlated; the first-derivative spectrum in 637-675 nm was very significantly negatively correlated, and in 676-746 nm very significantly positively correlated. All six established spectral parameters were significantly correlated with the nitrogen content of apple flowers. Through further comparison and selection, the prediction models built with the original spectral reflectance at 640 and 676 nm were determined to be the best for nitrogen content prediction of apple flowers. The test results showed that the coefficients of determination (R²) of the two models were 0.8258 and 0.8936, the total root mean square errors (RMSE) were 0.732 and 0.6386, and the slopes were 0.8361 and 1.0192, respectively. Therefore the models produced the desired results for nitrogen content prediction of apple flowers, with average prediction accuracies of 92.9% and 94.0%. This study provides a theoretical basis and technical support for rapid nitrogen content prediction and nutrition diagnosis of apple flowers.

  15. Human Posture and Movement Prediction based on Musculoskeletal Modeling

    DEFF Research Database (Denmark)

    Farahani, Saeed Davoudabadi

    2014-01-01

    Abstract This thesis explores an optimization-based formulation, so-called inverse-inverse dynamics, for the prediction of human posture and motion dynamics while performing various tasks. It is explained how this technique enables us to predict natural kinematic and kinetic patterns for human posture and motion using the AnyBody Modeling System (AMS). AMS uses inverse dynamics to analyze musculoskeletal systems and is, therefore, limited by its dependency on input kinematics. We propose to alleviate this dependency by assuming that voluntary postures and movement strategies in humans are guided by a desire ... specifications. The model is then scaled to the desired anthropometric data by means of one of the existing scaling laws in AMS. If the simulation results are to be compared with experimental measurements, the model should be scaled to match the involved subjects. Depending on the scientific question ...

  16. Construction Worker Fatigue Prediction Model Based on System Dynamic

    Directory of Open Access Journals (Sweden)

    Wahyu Adi Tri Joko

    2017-01-01

    Full Text Available Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. The tight schedules of construction projects force construction workers to work overtime for long periods. This situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlations among internal and external factors and to simulate the level of worker fatigue. To validate the model, 93 construction workers who worked in a high-rise building construction project were used as a case study. The results show that excessive workload, working elevation, and age are the main factors leading to construction worker fatigue. Simulation results also show that these factors can increase the worker fatigue level by 21.2% compared to the normal condition. Besides predicting the worker fatigue level, this model can also be used as an early warning system to prevent construction worker accidents.

  17. Vehicle Driving Risk Prediction Based on Markov Chain Model

    Directory of Open Access Journals (Sweden)

    Xiaoxia Xiong

    2018-01-01

    Full Text Available A driving risk status prediction algorithm based on a Markov chain is presented. Driving risk states are classified using clustering techniques based on feature variables describing the instantaneous risk levels within time windows, where instantaneous risk levels are determined in the two-dimensional plane of time-to-collision and time-headway. Multinomial logistic models with a recursive feature variable estimation method are developed to improve the traditional state transition probability estimation, which also takes into account the comprehensive effects of driving behavior, traffic, and road environment factors on the evolution of driving risk status. The "100-Car" naturalistic driving data from Virginia Tech are employed for the training and validation of the prediction model. The results show that, at a 5% false positive rate, the prediction algorithm achieves a high prediction accuracy rate for future medium-to-high driving risks and can meet the timeliness requirement of collision avoidance warning. The algorithm could contribute to timely warnings or auxiliary corrections for drivers in the approaching-danger state.
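
    Estimating a Markov transition matrix from a sequence of clustered risk states reduces to counting transitions and row-normalising; the three-state sequence below is invented (0 = low, 1 = medium, 2 = high risk).

```python
import numpy as np

states = [0, 0, 1, 0, 1, 2, 1, 1, 0, 0, 1, 2, 2, 1, 0]
n = 3

counts = np.zeros((n, n))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1                               # tally each observed transition
P = counts / counts.sum(axis=1, keepdims=True)      # maximum-likelihood estimate
print(P)
```

    The paper replaces this simple frequency estimate with multinomial logistic models, so the transition probabilities can depend on driver, traffic, and road-environment covariates.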

  18. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences exhibit a non-homogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can precisely predict a pure non-homogeneous index sequence. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately non-homogeneous index sequences and has excellent application value in settlement prediction.
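
    For background, the classic GM(1,1) grey model that NGM(1,1,k,c) generalises can be sketched as follows; the settlement readings are invented.

```python
import numpy as np

x0 = np.array([2.87, 3.28, 3.34, 3.39, 3.41])   # raw settlement readings (mm, invented)

x1 = np.cumsum(x0)                  # accumulated generating sequence
z1 = 0.5 * (x1[1:] + x1[:-1])       # background values
B = np.column_stack([-z1, np.ones(len(z1))])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey development coefficients

def fitted_x0(k):
    """Fitted raw value at index k (k = 0 returns the first observation)."""
    if k == 0:
        return float(x0[0])
    x1k = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1p = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return float(x1k - x1p)

fitted = [fitted_x0(k) for k in range(len(x0))]
print(fitted)
```

    NGM(1,1,k,c) replaces the constant grey action b in the whitenization equation with a time-varying term, which is what lets it reproduce non-homogeneous exponential sequences exactly.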

  19. Decline curve based models for predicting natural gas well performance

    Directory of Open Access Journals (Sweden)

    Arash Kamari

    2017-06-01

    Full Text Available The productivity of a gas well declines over its production life, eventually to the point where it cannot cover economic requirements. To overcome such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, the least squares support vector machine (LSSVM) approach, the adaptive neuro-fuzzy inference system (ANFIS), and the decision tree (DT) method, for the prediction of cumulative gas production as well as initial decline rate multiplied by time, as functions of the Arps decline-curve exponent and the ratio of initial gas flow rate to total gas flow rate. It was concluded that the results obtained from the models developed in the current study are in satisfactory agreement with the actual gas well production data. Furthermore, the results of the comparative study performed demonstrate that the LSSVM strategy is superior to the other investigated models for the prediction of both cumulative gas production and initial decline rate multiplied by time.
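
    The Arps decline-curve relations that underlie the study can be written directly; the rates and decline parameters below are illustrative.

```python
import math

def arps_rate(qi, di, b, t):
    """Arps flow rate at time t: exponential decline for b = 0,
    hyperbolic for 0 < b <= 1 (harmonic at b = 1)."""
    if b == 0:
        return qi * math.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)

# Initial rate 1000 (arbitrary volume/day), 10%/yr initial decline, t = 10 yr:
print(arps_rate(1000.0, 0.10, 0.0, 10.0))  # exponential, ~367.9
print(arps_rate(1000.0, 0.10, 0.5, 10.0))  # hyperbolic, ~444.4
```

    The AI models in the paper learn cumulative production and the decline behaviour as functions of the exponent b and the rate ratio, rather than fitting these curves directly.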

  20. Coal demand prediction based on a support vector machine model

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Cun-liang; Wu, Hai-shan; Gong, Dun-wei [China University of Mining & Technology, Xuzhou (China). School of Information and Electronic Engineering

    2007-01-15

    A forecasting model for the coal demand of China using support vector regression was constructed. With the selected embedding dimension, the output and input vectors were constructed based on the coal demand of China from 1980 to 2002. After comparison with linear and sigmoid kernels, a radial basis function (RBF) was adopted as the kernel function. By analyzing the relationship between the prediction error margin and the model parameters, proper parameters were chosen. A support vector machine (SVM) model with multiple inputs and a single output was proposed. Comparing the predictor with one based on RBF neural networks on test datasets, the results show that the SVM predictor has higher precision and greater generalization ability. In the end, the coal demand from 2003 to 2006 is accurately forecasted. 10 refs., 2 figs., 4 tabs.
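
    In the same spirit as the SVM predictor above, a dependency-free sketch using RBF-kernel ridge regression (a close cousin of support vector regression, not the paper's exact method; the demand series is invented):

```python
import numpy as np

years = np.arange(1980, 2003, dtype=float)
demand = 6.0 + 0.5 * (years - 1980) + np.sin(0.5 * (years - 1980))  # toy series

def rbf(a, b, gamma=0.5):
    """Radial basis function (Gaussian) kernel matrix."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

x = years - 1980
K = rbf(x, x)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(x)), demand)  # ridge-regularised fit

x_new = np.array([10.5])                 # a held-out point between grid years
pred = float(rbf(x_new, x) @ alpha)
print(pred)
```

    An epsilon-SVR would solve a constrained QP and produce a sparse set of support vectors instead of the dense `alpha` here, but the RBF-kernel prediction formula is the same.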

  1. Fuzzy subtractive clustering based prediction model for brand association analysis

    Directory of Open Access Journals (Sweden)

    Widodo Imam Djati

    2018-01-01

    Full Text Available The brand is one of the crucial elements determining the success of a product. Consumers choosing a product will always consider product attributes (such as features, shape, and color), but they also consider the brand. A brand leads a person to associate a product with specific attributes and qualities. This study was designed to identify product attributes and predict brand performance from those attributes. A survey was run to obtain the attributes affecting the brand. Fuzzy subtractive clustering was used to classify and predict product brand association based on aspects of the product under investigation. The results indicate that five attributes, namely shape, ease, image, quality, and price, can be used to classify and predict the brand. The training step gives the best FSC model with radius ra = 0.1. It develops 70 clusters/rules with a training MSE of 9.7093e-16. Using 14 test data, the model predicts the brand very well (close to the target), with an MSE of 0.6005 and an accuracy rate of 71%.

  2. A neural network based model for urban noise prediction.

    Science.gov (United States)

    Genaro, N; Torija, A; Ramos-Ridao, A; Requena, I; Ruiz, D P; Zamorano, M

    2010-10-01

    Noise is a global problem. In 1972 the World Health Organization (WHO) classified noise as a pollutant. Since then, most industrialized countries have enacted laws and local regulations to prevent and reduce acoustic environmental pollution. A further aim is to alert people to the dangers of this type of pollution. In this context, urban planners need to have tools that allow them to evaluate the degree of acoustic pollution. Scientists in many countries have modeled urban noise, using a wide range of approaches, but their results have not been as good as expected. This paper describes a model developed for the prediction of environmental urban noise using Soft Computing techniques, namely Artificial Neural Networks (ANN). The model is based on the analysis of variables regarded as influential by experts in the field and was applied to data collected on different types of streets. The results were compared to those obtained with other models. The study found that the ANN system was able to predict urban noise with greater accuracy, and thus was an improvement over those models. Principal component analysis (PCA) was also used to try to simplify the model. Although there was a slight decline in the accuracy of the results, the values obtained were also quite acceptable.

  3. Optimization of arterial age prediction models based in pulse wave

    International Nuclear Information System (INIS)

    Scandurra, A G; Meschino, G J; Passoni, L I; Dai Pra, A L; Introzzi, A R; Clara, F M

    2007-01-01

    We propose the detection of early arterial ageing through a prediction model of arterial age based on the assumed coherence between pulse wave morphology and the patient's chronological age. Several methods were evaluated, and a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with artificial neural networks and genetic algorithms. Feature selection was performed according to the features' projection onto the main factors of a principal component analysis. The model performance was tested using the bootstrap .632E error estimate. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.
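
    The .632 bootstrap error estimate mentioned above blends the optimistic resubstitution error with the pessimistic out-of-bag error; a sketch on synthetic data, with a toy linear model standing in for the fuzzy inference system:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: a noisy linear relation (stand-in for pulse-wave features vs. age).
x = rng.uniform(0, 1, 60)
y = 2.0 * x + rng.normal(0, 0.1, 60)

def mse(c, xs, ys):
    return float(np.mean((np.polyval(c, xs) - ys) ** 2))

err_train = mse(np.polyfit(x, y, 1), x, y)   # resubstitution (training) error

oob_errs = []
for _ in range(200):
    idx = rng.integers(0, len(x), len(x))        # bootstrap resample
    oob = np.setdiff1d(np.arange(len(x)), idx)   # points left out of the resample
    if oob.size == 0:
        continue
    c = np.polyfit(x[idx], y[idx], 1)
    oob_errs.append(mse(c, x[oob], y[oob]))

err_632 = 0.368 * err_train + 0.632 * float(np.mean(oob_errs))
print(err_632)
```

    The 0.368/0.632 weights come from the expected fraction of points appearing in a bootstrap resample, which is why about a third of the data lands out-of-bag each round.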

  4. Optimization of arterial age prediction models based in pulse wave

    Energy Technology Data Exchange (ETDEWEB)

    Scandurra, A G [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Meschino, G J [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Passoni, L I [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Dai Pra, A L [Engineering Aplied Artificial Intelligence Group, Mathematics Department, Mar del Plata University (Argentina); Introzzi, A R [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Clara, F M [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina)

    2007-11-15

    We propose the detection of early arterial ageing through a prediction model of arterial age based on the assumed coherence between pulse wave morphology and the patient's chronological age. Several methods were evaluated, and a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with artificial neural networks and genetic algorithms. Feature selection was performed according to the features' projection onto the main factors of a principal component analysis. The model performance was tested using the bootstrap .632E error estimate. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.

  5. Model Predictive Control-Based Fast Charging for Vehicular Batteries

    Directory of Open Access Journals (Sweden)

    Zhibin Song

    2011-08-01

    Full Text Available Battery fast charging is one of the most significant and difficult techniques affecting the commercialization of electric vehicles (EVs. In this paper, we propose a fast charge framework based on model predictive control, with the aim of simultaneously reducing the charge duration, which represents the out-of-service time of vehicles, and the increase in temperature, which represents safety and energy efficiency during the charge process. The RC model is employed to predict the future State of Charge (SOC. A single mode lumped-parameter thermal model and a neural network trained by real experimental data are also applied to predict the future temperature in simulations and experiments respectively. A genetic algorithm is then applied to find the best charge sequence under a specified fitness function, which consists of two objectives: minimizing the charging duration and minimizing the increase in temperature. Both simulation and experiment demonstrate that the Pareto front of the proposed method dominates that of the most popular constant current constant voltage (CCCV charge method.

  6. Predictive SIRT dosimetry based on a territorial model

    Directory of Open Access Journals (Sweden)

    Nadine Spahr

    2017-10-01

    Full Text Available Background: In the planning of selective internal radiation therapy (SIRT) for liver cancer treatment, one major aspect is to determine the prescribed activity and to estimate the resulting absorbed dose inside normal liver and tumor tissue. An optimized partition model for SIRT dosimetry based on arterial liver territories is proposed. This model is dedicated to characterizing the variability of dose within the whole liver. For an arbitrary partition, the generalized absorbed dose is derived from the classical partition model. This makes it possible to consider one normal liver partition for each arterial perfusion supply area and one partition for each tumor in activity and dose calculation. The proposed method excludes a margin of 11 mm (the emitting range) around tumor volumes from normal liver to investigate the impact on activity calculation. Activity and dose calculation was performed for five patients using the body-surface-area (BSA) method, the classical partition model, and the territorial partition model. Results: The territorial model reaches smaller normal liver doses and significantly higher tumor doses compared to the classical partition model. The exclusion of a small region around tumors has a significant impact on mean liver dose. Determined tumor activities for the proposed method are higher in all patients when limited by normal liver dose. Activity calculation based on BSA yields the lowest amount in all cases. Conclusions: The territorial model provides a more local and patient-individual dose distribution in normal liver, taking into account arterial supply areas. This proposed arterial-liver-territory-based partition model may be used for SPECT-independent activity calculation and dose prediction, under the condition of an artery-based simulation for particle distribution.
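
    The classical partition model that the territorial model generalises reduces to one dose formula per compartment; the sketch below uses the commonly quoted factor of about 49.67 Gy·kg/GBq for Y-90 fully absorbed in tissue, and all patient numbers are invented.

```python
# Classical partition-model absorbed dose for Y-90 SIRT:
#   D [Gy] = 49.67 * A [GBq] * uptake_fraction / mass [kg]
def absorbed_dose_gy(activity_gbq, uptake_fraction, mass_kg):
    return 49.67 * activity_gbq * uptake_fraction / mass_kg

# Example: 1.6 GBq prescribed; 60% goes to a 0.25 kg tumor compartment,
# 40% to 1.5 kg of normal liver (illustrative values only).
d_tumor = absorbed_dose_gy(1.6, 0.60, 0.25)
d_liver = absorbed_dose_gy(1.6, 0.40, 1.50)
print(round(d_tumor, 1), round(d_liver, 1))  # tumor dose far exceeds liver dose
```

    The territorial model applies the same balance per arterial supply territory instead of to the whole normal liver, which is what localises the dose estimate.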

  7. Learning-based Nonlinear Model Predictive Control to Improve Vision-based Mobile Robot Path Tracking

    Science.gov (United States)

    2015-07-01

    This paper presents a Learning-based Nonlinear Model Predictive Control (LB-NMPC) algorithm to achieve high-performance path tracking in challenging off-road terrain (Chris J. Ostafew, Institute for Aerospace Studies). Traditional path-tracking controllers represent the robot using a bicycle model with a steering angle, δcmd,k, and a linear velocity as inputs.

  8. Predicting chick body mass by artificial intelligence-based models

    Directory of Open Access Journals (Sweden)

    Patricia Ferreira Ponciano Ferraz

    2014-07-01

    Full Text Available The objective of this work was to develop, validate, and compare 190 artificial intelligence-based models for predicting the body mass of chicks from 2 to 21 days of age subjected to different durations and intensities of thermal challenge. The experiment was conducted inside four climate-controlled wind tunnels using 210 chicks. A database containing 840 datasets (from 2- to 21-day-old chicks), with the variables dry-bulb air temperature, duration of thermal stress (days), chick age (days), and daily body mass, was used for network training, validation, and testing of models based on artificial neural networks (ANNs) and neuro-fuzzy networks (NFNs). The ANNs were the most accurate in predicting the body mass of chicks from 2 to 21 days of age from the input variables, showing an R² of 0.9993 and a standard error of 4.62 g. The ANNs enable the simulation of different scenarios, which can assist managerial decision-making, and they can be embedded in heating control systems.

  9. Predicting fatigue crack initiation through image-based micromechanical modeling

    International Nuclear Information System (INIS)

    Cheong, K.-S.; Smillie, Matthew J.; Knowles, David M.

    2007-01-01

    The influence of individual grain orientation on early fatigue crack initiation in a four-point bend fatigue test was investigated numerically and experimentally. The 99.99% aluminium test sample was subjected to high-cycle fatigue (HCF), and the top-surface microstructure within the inner span of the sample was characterized using electron backscatter diffraction (EBSD). Applying a finite-element submodelling approach, the microstructure was digitally reconstructed and refined studies were carried out in regions where fatigue damage was observed. The constitutive behaviour of aluminium was described by a crystal plasticity model which considers the evolution of dislocations and the accumulation of edge dislocation dipoles. Using an energy-based approach to quantify fatigue damage, the model correctly predicts the regions in grains where early fatigue crack initiation was observed. The tendency for fatigue cracks to initiate in these grains appears to be strongly linked to the orientations of the grains relative to the direction of loading: grains less favourably aligned with respect to the loading direction appear more susceptible to fatigue crack initiation. The limitations of this modelling approach are also highlighted and discussed, as some grains predicted to initiate cracks did not show any visible signs of fatigue cracking in the same locations during testing.

  10. Predictive Multiscale Modeling of Nanocellulose Based Materials and Systems

    International Nuclear Information System (INIS)

    Kovalenko, Andriy

    2014-01-01

    Cellulose Nanocrystals (CNC) is a renewable, biodegradable biopolymer with outstanding mechanical properties made from a highly abundant natural source, and is therefore very attractive as a reinforcing additive to replace petroleum-based plastics in biocomposite materials, foams, and gels. Large-scale applications of CNC are currently limited by its low solubility in the non-polar organic solvents used in existing polymerization technologies. The solvation properties of CNC can be improved by chemical modification of its surface. Development of effective surface modifications has been rather slow because extensive chemical modifications destabilize the hydrogen bonding network of cellulose and deteriorate the mechanical properties of CNC. We employ predictive multiscale theory, modeling, and simulation to gain fundamental insight into the effect of CNC surface modifications on hydrogen bonding, CNC crystallinity, solvation thermodynamics, and CNC compatibilization with existing polymerization technologies, so as to rationally design green nanomaterials with improved solubility in non-polar solvents, controlled liquid crystal ordering, and optimized extrusion properties. An essential part of this multiscale modeling approach is the statistical-mechanical 3D-RISM-KH molecular theory of solvation, coupled with quantum mechanics, molecular mechanics, and multistep molecular dynamics simulation. The 3D-RISM-KH theory provides predictive modeling of both polar and non-polar solvents, solvent mixtures, and electrolyte solutions in a wide range of concentrations and thermodynamic states. It properly accounts for effective interactions in solution such as steric effects, hydrophobicity and hydrophilicity, hydrogen bonding, salt bridges, buffer, and co-solvent, and successfully predicts solvation effects and processes in bulk liquids, solvation layers at solid surfaces, and in pockets and other inner spaces of macromolecules and supramolecular assemblies. This methodology


  12. Stand diameter distribution modelling and prediction based on Richards function.

    Directory of Open Access Journals (Sweden)

    Ai-guo Duan

    Full Text Available The objective of this study was to introduce the application of the Richards equation to the modelling and prediction of stand diameter distributions. Long-term repeated-measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used: 150 stands served as fitting data and the other 159 stands were used for testing. The nonlinear regression method (NRM) or the maximum likelihood estimation method (MLEM) was applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the R distribution presented a more accurate simulation than the three-parameter Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, and have a deep relationship with stand characteristics, which means the parameters of the R distribution have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution is significantly related to its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed that diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM, or on the combination of PPM and PRM, under the condition that only the quadratic mean DBH, or the quadratic mean DBH plus stand age, is known; the non-rejection rates were near 80%, higher than the 72.33% non-rejection rate of the three-parameter Weibull function based on the combination of PPM and PRM.
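
As a hedged illustration of the baseline the study compares against, a two-parameter Weibull CDF can be fitted to a diameter sample by linear regression on the transform ln(−ln(1−F)) = k·ln d − k·ln λ; this is a simplified stand-in, not the paper's NRM/MLEM procedure:

```python
import math
import random

# Minimal sketch (assumed, not the paper's code): fit a two-parameter Weibull
# CDF F(d) = 1 - exp(-(d/lam)**k) to an empirical diameter distribution by
# linear regression on the transform ln(-ln(1-F)) = k*ln(d) - k*ln(lam).

def fit_weibull(diameters):
    data = sorted(diameters)
    n = len(data)
    xs, ys = [], []
    for i, d in enumerate(data):
        f = (i + 0.5) / n                 # median-rank style plotting position
        xs.append(math.log(d))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    lam = math.exp(mx - my / k)           # intercept -k*ln(lam) recovered
    return k, lam

# Synthetic check: sample from a known Weibull via inverse-transform sampling.
rng = random.Random(0)
true_k, true_lam = 2.5, 20.0
sample = [true_lam * (-math.log(1 - rng.random())) ** (1 / true_k)
          for _ in range(2000)]
k_hat, lam_hat = fit_weibull(sample)
```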

  13. A Deep Learning Prediction Model Based on Extreme-Point Symmetric Mode Decomposition and Cluster Analysis

    Directory of Open Access Journals (Sweden)

    Guohui Li

    2017-01-01

    Full Text Available Aiming at the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and cluster analysis is proposed. Firstly, the original data are decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and a residual. Secondly, fuzzy c-means is used to cluster the decomposed components, and a deep belief network (DBN) is then used to predict each cluster. Finally, the predicted IMFs and residual are reconstructed to give the final prediction result. Six prediction models are compared: the DBN model, the EMD-DBN model, the EEMD-DBN model, the CEEMD-DBN model, the ESMD-DBN model, and the model proposed in this paper. The same sunspot time series is predicted with each of the six models. The experimental results show that the proposed model has better prediction accuracy and smaller error.

  14. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimating the mean radon exposure of the Swiss population: model-based predictions at the individual level, and measurement-based predictions based on measurements aggregated at the municipality level. A nationwide model was used to predict the radon level in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment of residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model-based and the measurement-based predictions provide similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing the exposure distribution in a population. The model-based approach allows prediction of radon levels at specific sites, which is needed in an epidemiological study, and its results do not depend on how the measurement sites were selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
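
The measurement-based aggregation step can be sketched as a simple population-weighted mean; the municipal values below are invented for illustration:

```python
# Toy sketch of the measurement-based approach: mean measured radon level per
# municipality, weighted by municipal population size. Values are invented.

municipalities = [
    # (mean measured radon level in Bq/m3, population)
    (60.0, 12000),
    (95.0, 4000),
    (140.0, 1500),
]

total_pop = sum(pop for _, pop in municipalities)
weighted_mean = sum(level * pop for level, pop in municipalities) / total_pop
```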

  15. Scanpath Based N-Gram Models for Predicting Reading Behavior

    DEFF Research Database (Denmark)

    Mishra, Abhijit; Bhattacharyya, Pushpak; Carl, Michael

    2013-01-01

    Predicting reading behavior is a difficult task. Reading behavior depends on various linguistic factors (e.g., sentence length, structural complexity) and other factors (e.g., an individual's reading style, age). Ideally, a reading model should be similar to a language model where the model i...

  16. Domain-Based Predictive Models for Protein-Protein Interaction Prediction

    Directory of Open Access Journals (Sweden)

    Chen Xue-Wen

    2006-01-01

    Full Text Available Protein interactions are of biological interest because they orchestrate a number of cellular processes such as metabolic pathways and immunological recognition. Recently, methods for predicting protein interactions using domain information have been proposed, and preliminary results have demonstrated their feasibility. In this paper, we develop two domain-based statistical models (neural networks and decision trees) for protein interaction prediction. Unlike most of the existing methods, which consider only domain pairs (one domain from each protein) and assume that domain-domain interactions are independent of each other, the proposed methods are capable of exploring all possible interactions between domains and make predictions based on all the domains. Compared to maximum-likelihood estimation methods, our experimental results show that the proposed schemes can predict protein-protein interactions with higher specificity and sensitivity, while requiring less computation time. Furthermore, the decision tree-based model can be used to infer interactions not only between two domains, but among multiple domains as well.

  17. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical-system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.
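
The receding-horizon idea behind MPC can be shown with a minimal sketch; the scalar system, target, and cost below are toys, not the paper's demand/pricing dynamics:

```python
import itertools

# Illustrative receding-horizon (MPC-style) sketch: a scalar state x is driven
# toward a target, choosing at each step the input sequence that minimizes a
# finite-horizon cost and applying only its first input.

A, B = 0.9, 0.5          # toy linear dynamics: x' = A*x + B*u
TARGET, HORIZON = 4.0, 3
U_CHOICES = [-1.0, 0.0, 1.0]

def cost(x, seq):
    """Finite-horizon tracking cost with a small input penalty."""
    total = 0.0
    for u in seq:
        x = A * x + B * u
        total += (x - TARGET) ** 2 + 0.1 * u * u
    return total

def mpc_step(x):
    """Enumerate all input sequences over the horizon, apply the first input
    of the best one (the receding-horizon principle)."""
    best = min(itertools.product(U_CHOICES, repeat=HORIZON),
               key=lambda seq: cost(x, seq))
    return best[0]

x = 0.0
for _ in range(40):
    u = mpc_step(x)
    x = A * x + B * u
final_x = x
```

Exhaustive enumeration only works for tiny discrete input sets; practical MPC solves a constrained optimization at each step instead.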

  18. Embryo quality predictive models based on cumulus cells gene expression

    Directory of Open Access Journals (Sweden)

    Devjak R

    2016-06-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) into the clinical practice of infertility treatment, indicators of high-quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they surround, and therefore specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe improvement in the predictive value for embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. The study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] of five genes, analyzed according to embryo quality level, was performed. Two models were tested for embryo quality prediction: a binary logistic model and a decision tree model. As the main outcome, the expression levels of the five genes were taken and the area under the curve (AUC) for the two prediction models was calculated. Among the tested genes, AMHR2 and LIF showed a significant expression difference between high-quality and low-quality embryos. These two genes were used for the construction of the two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model an AUC of 0.73 ± 0.03. The two prediction models thus yielded similar predictive power for differentiating high- and low-quality embryos. In terms of eventual clinical decision making, the decision tree model results in easy-to-interpret rules that are highly applicable in clinical practice.

  19. CLINICAL DATABASE ANALYSIS USING DMDT BASED PREDICTIVE MODELLING

    Directory of Open Access Journals (Sweden)

    Srilakshmi Indrasenan

    2013-04-01

    Full Text Available In recent years, predictive data mining techniques have played a vital role in the field of medical informatics. These techniques help medical practitioners predict various classes, which is useful in treatment selection. One such major difficulty is the prediction of the survival rate of breast cancer patients. Breast cancer is a common disease these days, and fighting it is a tough battle for both surgeons and patients. To predict the survivability rate of breast cancer patients, which helps the medical practitioner select the type of treatment, a predictive data mining technique called Diversified Multiple Decision Tree (DMDT) classification is used. Additionally, to avoid difficulties arising from outliers and skewed data, it is also proposed to improve the training space by outlier filtering and oversampling. As a result, this novel approach gives the survivability rate of cancer patients, based on which medical practitioners can choose the type of treatment.

  20. Model predictive control based on reduced order models applied to belt conveyor system.

    Science.gov (United States)

    Chen, Wei; Li, Xin

    2016-11-01

    In this paper, a model predictive controller based on a reduced-order model is proposed to control a belt conveyor system, an electro-mechanical complex system with a long visco-elastic body. Firstly, in order to design a low-degree controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced-order model of the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced-order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Intra prediction based on Markov process modeling of images.

    Science.gov (United States)

    Kamisli, Fatih

    2013-10-01

    In recent video coding standards, intraprediction of a block of pixels is performed by copying neighbor pixels of the block along an angular direction inside the block. Each block pixel is predicted from only one or a few directionally aligned neighbor pixels of the block. Although this is a computationally efficient approach, it ignores potentially useful correlation with other neighbor pixels of the block. To use this correlation, a general linear prediction approach is proposed, where each block pixel is predicted using a weighted sum of all neighbor pixels of the block. The disadvantage of this approach is increased complexity because of the large number of weights. In this paper, we propose an alternative approach to intraprediction, where we model image pixels with a Markov process. The Markov process model accounts for the correlation ignored in standard intraprediction methods, but uses only a few neighbor pixels and enables a computationally efficient recursive prediction algorithm. Compared with the general linear prediction approach, which has a large number of independent weights, the Markov process modeling approach uses a much smaller number of independent parameters and thus offers significantly reduced memory or computation requirements, while achieving similar coding gains with offline-computed parameters.
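
The recursive prediction can be illustrated for a separable first-order Markov image model, where each pixel is predicted from just three causal neighbors; this is a sketch of the general idea, not the paper's exact codec:

```python
# Sketch: for a separable first-order Markov image model with correlation
# coefficients rho_v (vertical) and rho_h (horizontal), each interior pixel is
# predicted recursively from three causal neighbors: top, left, and top-left.

RHO_V, RHO_H = 0.95, 0.90   # illustrative correlation coefficients

def predict_pixel(top, left, top_left):
    return RHO_V * top + RHO_H * left - RHO_V * RHO_H * top_left

def predict_block(img):
    """Predict every interior pixel of a 2-D list from its causal neighbors;
    border pixels are kept as-is (as if already reconstructed)."""
    h, w = len(img), len(img[0])
    pred = [row[:] for row in img]
    for i in range(1, h):
        for j in range(1, w):
            pred[i][j] = predict_pixel(img[i-1][j], img[i][j-1], img[i-1][j-1])
    return pred

# On an exactly separable image x[i][j] = RHO_V**i * RHO_H**j the predictor
# reproduces the pixel values exactly.
img = [[RHO_V**i * RHO_H**j for j in range(6)] for i in range(6)]
pred = predict_block(img)
max_err = max(abs(pred[i][j] - img[i][j]) for i in range(6) for j in range(6))
```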

  2. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized for predictive model parameter optimization. An autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused results is smaller than that of any single model, so the prediction accuracy is improved. The simulation results are compared on two typical chaotic time series, the Lorenz and Mackey–Glass series, and show that the proposed method achieves better prediction.
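
The decomposition step can be illustrated with a one-level Haar transform, a minimal stand-in for the wavelet transform actually used; each component could then be predicted by a separate model (e.g. LSSVM and ARIMA) and the predictions recombined:

```python
import math

# Minimal one-level Haar wavelet sketch: split a series into an approximation
# (trend) component and a detail component, then reconstruct it exactly.

def haar_decompose(x):
    """One-level Haar DWT of an even-length sequence."""
    s = math.sqrt(2.0)
    approx = [(x[2*i] + x[2*i+1]) / s for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse of haar_decompose (perfect reconstruction)."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out

series = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
approx, detail = haar_decompose(series)
restored = haar_reconstruct(approx, detail)
```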

  3. Prediction of Geological Subsurfaces Based on Gaussian Random Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamsen, Petter

    1997-12-31

    During the sixties, random functions became practical tools for predicting ore reserves, with associated precision measures, in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in the modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean-square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data, and even higher-order derivatives, is also considered, and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.
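
A minimal simple-kriging sketch conveys the core computation: the weights solve a linear system built from the covariance function. The covariance model and depth values below are invented, not from the thesis:

```python
import math

# Hedged sketch of simple kriging with a known mean: the weights w solve
# C w = c0, where C holds covariances among the observation sites and c0 the
# covariances between those sites and the prediction site.

def cov(p, q, sill=1.0, rng=3.0):
    """Exponential covariance model (illustrative choice)."""
    return sill * math.exp(-math.dist(p, q) / rng)

def solve(a, b):
    """Solve a small linear system by Gauss-Jordan elimination with pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def simple_krige(sites, values, mean, target):
    c_mat = [[cov(p, q) for q in sites] for p in sites]
    c_vec = [cov(p, target) for p in sites]
    w = solve(c_mat, c_vec)
    return mean + sum(wi * (v - mean) for wi, v in zip(w, values))

sites = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
depths = [100.0, 104.0, 98.0]
z_hat = simple_krige(sites, depths, mean=100.0, target=(0.5, 0.5))
z_exact = simple_krige(sites, depths, mean=100.0, target=(1.0, 0.0))
```

Kriging is an exact interpolator: predicting at an observation site returns the observed value, as `z_exact` illustrates.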

  4. Structure-Based Predictive model for Coal Char Combustion.

    Energy Technology Data Exchange (ETDEWEB)

    Hurt, R.; Colo, J. [Brown Univ., Providence, RI (United States). Div. of Engineering]; Essenhigh, R.; Hadad, C. [Ohio State Univ., Columbus, OH (United States). Dept. of Chemistry]; Stanley, E. [Boston Univ., MA (United States). Dept. of Physics]

    1997-09-24

    During the third quarter of this project, progress was made on both major technical tasks. Progress was made in the chemistry department at OSU on the calculation of thermodynamic properties for a number of model organic compounds. Modelling work was carried out at Brown to adapt a thermodynamic model of carbonaceous mesophase formation, originally applied to pitch carbonization, to the prediction of coke texture in coal combustion. This latter work makes use of the FG-DVC model of coal pyrolysis developed by Advanced Fuel Research to specify the pool of aromatic clusters that participate in the order/disorder transition. This modelling approach shows promise for the mechanistic prediction of the rank dependence of char structure and will therefore be pursued further. Crystalline ordering phenomena were also observed, using high-resolution TEM fringe imaging, in a model char prepared from phenol-formaldehyde carbonized at 900 °C and 1300 °C. Dramatic changes occur in the structure between 900 and 1300 °C, making this char a suitable candidate for upcoming in situ work on the hot-stage TEM. Work also proceeded on molecular dynamics simulations at Boston University and on equipment modification and testing for the combustion experiments with widely varying flame types at Ohio State.

  5. Competency-Based Model for Predicting Construction Project Managers Performance

    OpenAIRE

    Dainty, A. R. J.; Cheng, M.; Moore, D. R.

    2005-01-01

    Using behavioral competencies to influence human resource management decisions is gaining popularity in business organizations. This study identifies the core competencies associated with the construction management role and, further, develops a predictive model to inform human resource selection and development decisions within large construction organizations. A range of construction managers took part in behavioral event interviews in which staff were asked to recount critical management inci...

  6. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in the building sector is one of the key environmental issues, as it is in the industrial, transportation, and residential sectors. Half of the total energy consumption in a building is consumed by HVAC (Heating, Ventilating and Air Conditioning) systems. In order to realize energy conservation in HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach combining a physical model and a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for load-prediction model learning are scarce; (2) it has a self-checking function that always supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something is wrong in the load prediction procedure; (3) it can adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement of load prediction performance is illustrated.
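
The JIT component can be sketched as lazy, local learning: keep a database of past (conditions → thermal load) records and, at query time, predict from the k nearest past situations. The features and data below are invented illustrations, not the authors' model:

```python
# Sketch of the Just-in-Time (lazy learning) idea: a distance-weighted average
# of the k nearest stored operating records. Data are illustrative only.

def jit_predict(database, query, k=3):
    """database: list of (feature_vector, load) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(database, key=lambda rec: dist(rec[0], query))[:k]
    weights = [1.0 / (dist(f, query) + 1e-9) for f, _ in nearest]
    return sum(w * load for w, (_, load) in zip(weights, nearest)) / sum(weights)

# Features: (outdoor temperature in degC, hour of day); target: load (kW).
history = [((30.0, 14), 120.0), ((28.0, 13), 110.0),
           ((22.0, 9), 70.0), ((31.0, 15), 125.0), ((20.0, 8), 60.0)]
load = jit_predict(history, query=(29.0, 14))
```

A real system would normalize the features so that temperature and hour contribute on comparable scales.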

  7. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    Gastrointestinal (GI) drug absorption is a complex process determined by formulation, physicochemical and biopharmaceutical factors, and GI physiology. Physiologically based in silico absorption models have emerged as a widely used and promising supplement to traditional in vitro assays and preclinical in vivo studies. However, there remains a lack of comparative studies between different models. The aim of this study was to explore the strengths and limitations of the in silico absorption models Simcyp 13.1, GastroPlus 8.0, and GI-Sim 4.1 with respect to their performance in predicting human intestinal drug absorption. This was achieved by adopting an a priori modeling approach and using well-defined input data for 12 drugs associated with incomplete GI absorption and related challenges in predicting the extent of absorption. This approach better mimics the real situation during formulation development, where predictive in silico models would be beneficial. Plasma concentration-time profiles for 44 oral drug administrations were calculated by convolution of model-predicted absorption-time profiles and reported pharmacokinetic parameters. Model performance was evaluated by comparing the predicted plasma concentration-time profiles, Cmax, tmax, and exposure (AUC) with observations from clinical studies. The overall prediction accuracies for AUC, given as the absolute average fold error (AAFE) values, were 2.2, 1.6, and 1.3 for Simcyp, GastroPlus, and GI-Sim, respectively. The corresponding AAFE values for Cmax were 2.2, 1.6, and 1.3, respectively, and those for tmax were 1.7, 1.5, and 1.4, respectively. Simcyp was associated with underprediction of AUC and Cmax; the accuracy decreased with decreasing predicted fabs. A tendency for underprediction was also observed for GastroPlus, but there was no correlation with predicted fabs. There were no obvious trends for over- or underprediction for GI-Sim. The models performed similarly in capturing dependencies on dose and
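
The AAFE metric reported above is conventionally computed as 10 raised to the mean absolute log10 fold error; the prediction/observation values below are illustrative, not the study's data:

```python
import math

# Absolute average fold error: AAFE = 10 ** mean(|log10(predicted/observed)|).
# An AAFE of 1.0 means perfect agreement; 2.0 means predictions are off by a
# factor of two on average. Values below are invented for illustration.

def aafe(predicted, observed):
    logs = [abs(math.log10(p / o)) for p, o in zip(predicted, observed)]
    return 10 ** (sum(logs) / len(logs))

pred = [10.0, 50.0, 400.0]
obs = [20.0, 50.0, 200.0]
value = aafe(pred, obs)
```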

  8. Evaluation of Artificial Intelligence Based Models for Chemical Biodegradability Prediction

    Directory of Open Access Journals (Sweden)

    Aleksandar Sabljic

    2004-12-01

    Full Text Available This study presents a review of biodegradability modeling efforts, including a detailed assessment of two models developed using an artificial intelligence-based methodology. Validation results for these models using an independent, quality-reviewed database demonstrate that the models perform well when compared, against the same data, to another commonly used biodegradability model. The ability of models induced by an artificial intelligence methodology to accommodate complex interactions in detailed systems, and the reliability of the approach demonstrated by this study, indicate that the methodology may have application in broadening the scope of biodegradability models. Given adequate data on the biodegradability of chemicals under environmental conditions, this may allow the development of future models that include, for example, surface-interface impacts on biodegradability.

  9. Predicting Plywood Properties with Wood-based Composite Models

    Science.gov (United States)

    Christopher Adam Senalik; Robert J. Ross

    2015-01-01

    Previous research revealed that stress wave nondestructive testing techniques could be used to evaluate the tensile and flexural properties of wood-based composite materials. Regression models were developed that related stress wave transmission characteristics (velocity and attenuation) to modulus of elasticity and strength. The developed regression models accounted...

  10. Feature-Based and String-Based Models for Predicting RNA-Protein Interaction

    Directory of Open Access Journals (Sweden)

    Donald Adjeroh

    2018-03-01

    Full Text Available In this work, we study two approaches to the problem of RNA-Protein Interaction (RPI) prediction. In the first approach, we use a feature-based technique, combining features extracted from both sequences and secondary structures. The feature-based approach enhanced the prediction accuracy, as it included much more of the available information about the RNA-protein pairs. In the second approach, we apply search algorithms and data structures to extract effective string patterns for the prediction of RPI, using both sequence information (protein and RNA sequences) and structure information (protein and RNA secondary structures). This led to different string-based models for predicting interacting RNA-protein pairs. We show results that demonstrate the effectiveness of the proposed approaches, including comparative results against leading state-of-the-art methods.
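
One common feature-based encoding for such sequence-driven predictors is a k-mer composition vector; this sketches the general idea, and the paper's exact features may differ:

```python
from collections import Counter

# Sketch of a k-mer composition feature: the normalized frequency of every
# length-k substring in a protein or RNA sequence string.

def kmer_features(sequence, k=3):
    """Return a dict mapping each k-mer to its relative frequency."""
    counts = Counter(sequence[i:i+k] for i in range(len(sequence) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

rna = "AUGGCUAUGGCU"
feats = kmer_features(rna, k=3)
```

Such dictionaries are typically mapped to fixed-length vectors over the full k-mer alphabet before being fed to a classifier.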

  11. A rule-based backchannel prediction model using pitch and pause information

    NARCIS (Netherlands)

    Truong, Khiet Phuong; Poppe, Ronald Walter; Heylen, Dirk K.J.

    We manually designed rules for a backchannel (BC) prediction model based on pitch and pause information. In short, the model predicts a BC when there is a pause of a certain length that is preceded by a falling or rising pitch. This model was validated against the Dutch IFADV Corpus in a
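
    The pitch-and-pause rule summarized above lends itself to a compact sketch: predict a backchannel when a sufficiently long pause follows a clearly rising or falling pitch. The following is an illustrative reconstruction, not the authors' actual rule set; the threshold values (`min_pause_ms`, `slope_threshold`) and the slope heuristic are assumptions.

    ```python
    import statistics

    def predict_backchannel(pitch_contour, pause_ms,
                            min_pause_ms=200, slope_threshold=2.0):
        """Return True if a backchannel (BC) is predicted after the pause.

        pitch_contour: pitch samples (Hz) from the end of the preceding speech.
        A BC is predicted when the pause is long enough AND the preceding
        pitch contour is clearly rising or falling.
        """
        if pause_ms < min_pause_ms or len(pitch_contour) < 2:
            return False
        # Crude pitch slope: mean change per sample over the final region.
        deltas = [b - a for a, b in zip(pitch_contour, pitch_contour[1:])]
        slope = statistics.mean(deltas)
        return abs(slope) >= slope_threshold

    # Rising pitch before a long pause -> predict a backchannel.
    print(predict_backchannel([180, 185, 192, 200], pause_ms=350))  # True
    # Flat pitch -> no backchannel.
    print(predict_backchannel([180, 180, 181, 180], pause_ms=350))  # False
    ```

    A real system would also need a refractory period so that one long pause does not trigger repeated backchannels.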

  12. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  13. A physiologically based in silico kinetic model predicting plasma cholesterol concentrations in humans

    NARCIS (Netherlands)

    Pas, N.C.A. van de; Woutersen, R.A.; Ommen, B. van; Rietjens, I.M.C.M.; Graaf, A.A. de

    2012-01-01

    Increased plasma cholesterol concentration is associated with increased risk of cardiovascular disease. This study describes the development, validation, and analysis of a physiologically based kinetic (PBK) model for the prediction of plasma cholesterol concentrations in humans. This model was

  14. STRUCTURE-BASED PREDICTIVE MODEL FOR COAL CHAR COMBUSTION

    Energy Technology Data Exchange (ETDEWEB)

    CHRISTOPHER M. HADAD; JOSEPH M. CALO; ROBERT H. ESSENHIGH; ROBERT H. HURT

    1998-06-04

    During the past quarter of this project, significant progress continued to be made on both major technical tasks. Progress was made at OSU on advancing the application of computational chemistry to oxidative attack on model polyaromatic hydrocarbons (PAHs) and graphitic structures. This work is directed at the application of quantitative ab initio molecular orbital theory to address the decomposition products and mechanisms of coal char reactivity. Previously, it was shown that the "hybrid" B3LYP method can be used to provide quantitative information concerning the stability of the corresponding radicals that arise by hydrogen atom abstraction from monocyclic aromatic rings. In the most recent quarter, these approaches have been extended to larger carbocyclic ring systems, such as coronene, in order to compare the properties of a large carbonaceous PAH to those of the smaller, monocyclic aromatic systems. It was concluded that, at least for bond dissociation energy considerations, the properties of the large PAHs can be modeled reasonably well by smaller systems. In addition to the preceding work, investigations were initiated on the interaction of selected radicals in the "radical pool" with the different types of aromatic structures. In particular, the different pathways for addition vs. abstraction to benzene and furan by H and OH radicals were examined. Thus far, the addition channel appears to be significantly favored over abstraction on both kinetic and thermochemical grounds. Experimental work at Brown University in support of the development of predictive structural models of coal char combustion was focused on elucidating the role of coal mineral matter impurities on reactivity. An "inverse" approach was used where a carbon material was doped with coal mineral matter. The carbon material was derived from a high carbon content fly ash (Fly Ash 23 from the Salem Basin Power Plant). The ash was obtained from Pittsburgh #8 coal (PSOC 1451). Doped

  15. Structure Based Predictive Model for Coal Char Combustion

    Energy Technology Data Exchange (ETDEWEB)

    Robert Hurt; Joseph Calo; Robert Essenhigh; Christopher Hadad

    2000-12-30

    This unique collaborative project has taken a very fundamental look at the origin of structure and the combustion reactivity of coal chars. It was a combined experimental and theoretical effort involving three universities and collaborators from universities outside the U.S. and from U.S. National Laboratories and contract research companies. The project goal was to improve our understanding of char structure and behavior by examining the fundamental chemistry of its polyaromatic building blocks. The project team investigated elementary oxidative attack on polyaromatic systems, coupled with a study of the assembly processes that convert these polyaromatic clusters to mature carbon materials (or chars). We believe that the work done in this project has defined a powerful new science-based approach to the understanding of char behavior. The work on aromatic oxidation pathways made extensive use of computational chemistry, and was led by Professor Christopher Hadad in the Department of Chemistry at Ohio State University. Laboratory experiments on char structure, properties, and combustion reactivity were carried out at both OSU and Brown, led by Principal Investigators Joseph Calo, Robert Essenhigh, and Robert Hurt. Modeling activities were divided into two parts: first, unique models of crystal structure development were formulated by the team at Brown (PIs Hurt and Calo) with input from Boston University and significant collaboration with Dr. Alan Kerstein at Sandia and with Dr. Zhong-Ying Chen at SAIC. Second, new combustion models were developed and tested, led by Professor Essenhigh at OSU, Dieter Foertsch (a collaborator at the University of Stuttgart), and Professor Hurt at Brown. One product of this work is the CBK8 model of carbon burnout, which has already found practical use in CFD codes and in other numerical models of pulverized fuel combustion processes, such as EPRI's NOxLOI Predictor. The remainder of the report consists of detailed

  16. Comparison of short term rainfall forecasts for model based flow prediction in urban drainage systems

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Poulsen, Troels Sander; Bøvith, Thomas

    2012-01-01

    Forecast based flow prediction in drainage systems can be used to implement real time control of drainage systems. This study compares two different types of rainfall forecasts – a radar rainfall extrapolation based nowcast model and a numerical weather prediction model. The models are applied...... performance of the system is found using the radar nowcast for the short lead times and the weather model for longer lead times.

  18. A Simulation-Based Model for Final Price Prediction in Online Auctions

    OpenAIRE

    Shihyu Chou; Chin-Shien Lin; Chi-hong Chen; Tai-Ru Ho; Yu-Chen Hsieh

    2007-01-01

    Online auctions, a profitable, exciting, and dynamic part of e-commerce, have enjoyed increasing public interest. However, there is still a paucity of literature on final price prediction for online auctions. Although Markov process models provide a mathematical approach to predicting online auction prices, estimating parameters of a Markov process model in practice is a challenging task. In this paper we propose a simulation-based model as an alternative approach to predicting the final pric...

  19. Predictive Models of Alcohol Use Based on Attitudes and Individual Values

    Science.gov (United States)

    Del Castillo Rodríguez, José A. García; López-Sánchez, Carmen; Soler, M. Carmen Quiles; Del Castillo-López, Álvaro García; Pertusa, Mónica Gázquez; Campos, Juan Carlos Marzo; Inglés, Cándido J.

    2013-01-01

    Two predictive models are developed in this article: the first is designed to predict people's attitudes to alcoholic drinks, while the second sets out to predict the use of alcohol in relation to selected individual values. University students (N = 1,500) were recruited through stratified sampling based on sex and academic discipline. The…

  20. An Efficient Deterministic Approach to Model-based Prediction Uncertainty

    Data.gov (United States)

    National Aeronautics and Space Administration — Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the...

  1. Characteristic Model-Based Robust Model Predictive Control for Hypersonic Vehicles with Constraints

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2017-06-01

    Full Text Available Designing robust control for hypersonic vehicles in reentry is difficult, due to features of the vehicles including strong coupling, non-linearity, and multiple constraints. This paper proposes a characteristic model-based robust model predictive control (MPC) for hypersonic vehicles with reentry constraints. First, the hypersonic vehicle is modeled by a characteristic model composed of a linear time-varying system and a lumped disturbance. Then, the identification data are regenerated by the accumulative sum idea from gray theory, which weakens the effects of random noise and strengthens the regularity of the identification data. Based on the regenerated data, the time-varying parameters and the disturbance are estimated online according to gray identification. Finally, the mixed H2/H∞ robust predictive control law is proposed based on linear matrix inequalities (LMIs) and receding horizon optimization techniques. Because MPC handles system constraints actively, the input and state constraints are satisfied in the closed-loop control system. The validity of the proposed control is verified theoretically according to Lyapunov theory and illustrated by simulation results.

  2. BAYESIAN FORECASTS COMBINATION TO IMPROVE THE ROMANIAN INFLATION PREDICTIONS BASED ON ECONOMETRIC MODELS

    Directory of Open Access Journals (Sweden)

    Mihaela Simionescu

    2014-12-01

    Full Text Available There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts’ forecasts are utilized as prior information; for Romania, these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis, and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models like VAR, Bayesian VAR, simultaneous equations model, dynamic model, log-linear model. The Bayesian combinations that used experts’ predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, also outperforming zero- and equal-weight predictions and naïve forecasts.
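
    The shrinkage behaviour described above can be illustrated with a minimal sketch: the model forecast is pulled toward the expert prior, and as the shrinkage parameter tends to infinity the combination coincides with the prior. The weighting scheme and the numbers are illustrative assumptions, not the paper's estimator.

    ```python
    def shrinkage_combination(model_forecast, expert_prior, lam):
        """Combine an econometric model forecast with an expert prior.

        lam (>= 0) is the shrinkage parameter: lam = 0 returns the model
        forecast unchanged; as lam grows the combination is pulled toward
        the expert prior, and in the limit lam -> infinity it equals it.
        """
        w = lam / (1.0 + lam)          # weight placed on the expert prior
        return w * expert_prior + (1.0 - w) * model_forecast

    model_fx = 4.2  # hypothetical fixed-effects model inflation forecast (%)
    expert = 3.6    # hypothetical expert prediction used as prior (%)

    print(shrinkage_combination(model_fx, expert, lam=0))    # 4.2 (no shrinkage)
    print(shrinkage_combination(model_fx, expert, lam=1))    # ≈ 3.9 (halfway)
    print(shrinkage_combination(model_fx, expert, lam=1e9))  # ≈ 3.6 (prior dominates)
    ```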

  3. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception.
Taken together, our current work

  4. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception.
Taken together

  5. A CBR-Based and MAHP-Based Customer Value Prediction Model for New Product Development

    Science.gov (United States)

    Zhao, Yu-Jie; Luo, Xin-xing; Deng, Li

    2014-01-01

    In the fierce market environment, an enterprise that wants to meet customer needs and boost its market profit and share must focus on new product development. To overcome the limitations of previous research, Chan et al. proposed a dynamic decision support system to predict the customer lifetime value (CLV) for new product development. However, to better meet customer needs, there are still some deficiencies in their model, so this study proposes a CBR-based and MAHP-based customer value prediction model for a new product (C&M-CVPM). CBR (case based reasoning) can reduce experts' workload and evaluation time, while MAHP (multiplicative analytic hierarchy process) can use actual but averaged influencing-factor effectiveness in the simulation, and at the same time C&M-CVPM uses dynamic customer transition probabilities, which are closer to reality. This study not only introduces the realization of CBR and MAHP, but also elaborates C&M-CVPM's three main modules. The application of the proposed model is illustrated and confirmed to be sensible and convincing through a simulation experiment. PMID:25162050

  7. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  8. Population PK modelling and simulation based on fluoxetine and norfluoxetine concentrations in milk: a milk concentration-based prediction model.

    Science.gov (United States)

    Tanoshima, Reo; Bournissen, Facundo Garcia; Tanigawara, Yusuke; Kristensen, Judith H; Taddio, Anna; Ilett, Kenneth F; Begg, Evan J; Wallach, Izhar; Ito, Shinya

    2014-10-01

    Population pharmacokinetic (pop PK) modelling can be used for PK assessment of drugs in breast milk. However, complex mechanistic modelling of a parent and an active metabolite using both blood and milk samples is challenging. We aimed to develop a simple predictive pop PK model for milk concentration-time profiles of a parent and a metabolite, using data on fluoxetine (FX) and its active metabolite, norfluoxetine (NFX), in milk. Using a previously published data set of drug concentrations in milk from 25 women treated with FX, a pop PK model predictive of milk concentration-time profiles of FX and NFX was developed. Simulation was performed with the model to generate FX and NFX concentration-time profiles in milk of 1000 mothers. This milk concentration-based pop PK model was compared with the previously validated plasma/milk concentration-based pop PK model of FX. Milk FX and NFX concentration-time profiles were described reasonably well by a one compartment model with a FX-to-NFX conversion coefficient. Median values of the simulated relative infant dose on a weight basis (sRID: weight-adjusted daily doses of FX and NFX through breastmilk to the infant, expressed as a fraction of therapeutic FX daily dose per body weight) were 0.028 for FX and 0.029 for NFX. The FX sRID estimates were consistent with those of the plasma/milk-based pop PK model. A predictive pop PK model based on only milk concentrations can be developed for simultaneous estimation of milk concentration-time profiles of a parent (FX) and an active metabolite (NFX). © 2014 The British Pharmacological Society.
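
    A minimal sketch of the modelling idea, a one compartment milk model in which the parent (FX) decays and feeds the metabolite (NFX) through a conversion coefficient, can be written as a simple Euler simulation. All parameter values (dose, volume, rate constants, conversion fraction) are hypothetical placeholders, not the published pop PK estimates.

    ```python
    def simulate_milk_profile(dose_mg=20.0, v_milk_l=10.0,
                              k_fx=0.05, k_nfx=0.03, conv=0.6,
                              dt_h=0.1, t_end_h=48.0):
        """Euler simulation of parent (FX) and metabolite (NFX) in milk.

        dFX/dt  = -k_fx * FX
        dNFX/dt =  conv * k_fx * FX - k_nfx * NFX
        Returns a list of (time_h, fx_conc, nfx_conc) tuples.
        """
        fx = dose_mg / v_milk_l   # initial FX concentration (mg/L)
        nfx = 0.0
        t = 0.0
        profile = [(t, fx, nfx)]
        while t < t_end_h:
            d_fx = -k_fx * fx
            d_nfx = conv * k_fx * fx - k_nfx * nfx
            fx += d_fx * dt_h
            nfx += d_nfx * dt_h
            t += dt_h
            profile.append((t, fx, nfx))
        return profile

    profile = simulate_milk_profile()
    t_peak, _, nfx_peak = max(profile, key=lambda p: p[2])
    print(f"NFX peaks at ~{t_peak:.1f} h at {nfx_peak:.3f} mg/L")
    ```

    The qualitative shape (parent decaying, metabolite rising to a delayed peak) is what the published model captures with fitted parameters.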

  9. PREDICTIVE CAPACITY OF INSOLVENCY MODELS BASED ON ACCOUNTING NUMBERS AND DESCRIPTIVE DATA

    Directory of Open Access Journals (Sweden)

    Rony Petson Santana de Souza

    2012-09-01

    Full Text Available In Brazil, research into models to predict insolvency started in the 1970s, with most authors using discriminant analysis as a statistical tool in their models. In more recent years, authors have increasingly tried to verify whether it is possible to forecast insolvency using descriptive data contained in firms’ reports. This study examines the capacity of some insolvency models to predict the failure of Brazilian companies that have gone bankrupt. The study is descriptive in nature with a quantitative approach, based on documentary research. The sample is composed of 13 companies that were declared bankrupt between 1997 and 2003. The results indicate that the majority of the insolvency prediction models tested showed high rates of correct forecasts. The models relying on descriptive reports were on average more likely to succeed than those based on accounting figures. These findings demonstrate that although some studies indicate a lack of validity of predictive models created in different business settings, some of these models have good capacity to forecast insolvency in Brazil. We can conclude that both models based on accounting numbers and those relying on descriptive reports can predict the failure of firms. Therefore, it can be inferred that the majority of bankruptcy prediction models that make use of accounting numbers can succeed in predicting the failure of firms.

  10. A Method for Driving Route Predictions Based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Ning Ye

    2015-01-01

    Full Text Available We present a driving route prediction method that is based on a Hidden Markov Model (HMM). This method can accurately predict a vehicle’s entire route as early in a trip’s lifetime as possible without inputting origins and destinations beforehand. Firstly, we propose the route recommendation system architecture, in which route predictions play an important role. Secondly, we define a road network model, normalize each driving route in the rectangular coordinate system, and build the HMM to prepare for route predictions, using a method of training set extension based on K-means++ and the add-one (Laplace) smoothing technique. Thirdly, we present the route prediction algorithm. Finally, experimental results demonstrating the effectiveness of the HMM-based route predictions are shown.
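
    The add-one (Laplace) smoothing step can be illustrated on a toy road network: first-order transition probabilities between road segments are estimated from observed routes, with a pseudo-count of one for every possible transition. The segment IDs and routes are made up, and this sketch covers only the Markov transition part, not the paper's full HMM.

    ```python
    from collections import defaultdict

    def transition_probs(routes, n_segments):
        """Add-one (Laplace) smoothed transition probabilities between
        road segments, estimated from observed driving routes."""
        counts = defaultdict(lambda: defaultdict(int))
        for route in routes:
            for a, b in zip(route, route[1:]):
                counts[a][b] += 1
        probs = {}
        for a in range(n_segments):
            total = sum(counts[a].values()) + n_segments  # add-one smoothing
            probs[a] = {b: (counts[a][b] + 1) / total for b in range(n_segments)}
        return probs

    def predict_next(probs, current):
        """Most likely next segment given the current one."""
        return max(probs[current], key=probs[current].get)

    # Toy training routes over segments 0..4.
    routes = [[0, 1, 2, 3], [0, 1, 2, 4], [0, 1, 2, 3]]
    probs = transition_probs(routes, n_segments=5)
    print(predict_next(probs, 2))  # 3 (most often follows segment 2)
    ```

    Smoothing guarantees every transition has non-zero probability, so routes never observed in training still receive a (small) likelihood.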

  11. Physics-based Modeling Tools for Life Prediction and Durability Assessment of Advanced Materials, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The technical objectives of this program are: (1) to develop a set of physics-based modeling tools to predict the initiation of hot corrosion and to address pit and...

  12. A model of urban rational growth based on grey prediction

    Science.gov (United States)

    Xiao, Wenjing

    2017-04-01

    Smart growth focuses on building sustainable cities, using compact development to prevent urban sprawl. This paper establishes a series of models to implement smart growth theories in city design, and two specific city design cases are presented. First, we establish a Smart Growth Measure Model to gauge the success of a city's smart growth, using the Full Permutation Polygon Synthetic Indicator Method to calculate a Comprehensive Indicator (CI) that measures the success of smart growth. Second, the paper uses the principles of smart growth to develop a new growth plan for two cities: we establish an optimization model to maximize the CI value and solve it with the Particle Swarm Optimization (PSO) algorithm. Combining the calculation results with each city's specific circumstances, we then draw up a smart growth plan for each city.
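
    The optimization step, maximizing a comprehensive indicator with PSO, can be sketched with a minimal particle swarm. The toy CI surface, swarm size, and coefficient values below are illustrative assumptions, not the paper's actual indicator or tuning.

    ```python
    import random

    def pso_maximize(f, dim, bounds, n_particles=20, iters=100, seed=0):
        """Minimal Particle Swarm Optimization maximizing f over a box."""
        rng = random.Random(seed)
        lo, hi = bounds
        pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                 # per-particle best positions
        pbest_val = [f(p) for p in pos]
        g = max(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                val = f(pos[i])
                if val > pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val > gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Toy comprehensive-indicator surface peaking at (0.6, 0.4).
    ci = lambda x: 1.0 - (x[0] - 0.6) ** 2 - (x[1] - 0.4) ** 2
    best, val = pso_maximize(ci, dim=2, bounds=(0.0, 1.0))
    print(best, round(val, 3))  # close to (0.6, 0.4) with value near 1.0
    ```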

  13. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM model based on cloud computing is proposed in this paper, which is based on the analysis of the characteristics and defects of genetic algorithm and support vector machine. In cloud computing environment, firstly, SVM parameters are optimized by the parallel genetic algorithm, and then this optimized parallel SVM model is used to predict traffic flow. On the basis of the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and parallel GA-SVM model based on MPI (message passing interface. The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.

  14. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Science.gov (United States)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
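
    The rule-creation step can be illustrated with a stripped-down Apriori pass (limited to 1- and 2-itemsets for brevity) that emits IF-THEN rules between failure causes and failure types. The item names, thresholds, and Python implementation are hypothetical stand-ins for the paper's Lattice-model/Apriori workflow in R.

    ```python
    from itertools import combinations

    def apriori_rules(transactions, min_support=0.5, min_confidence=0.7):
        """Mine frequent 1- and 2-itemsets and emit IF-THEN association rules."""
        n = len(transactions)
        items = {i for t in transactions for i in t}
        support = {}
        for k in (1, 2):
            for combo in combinations(sorted(items), k):
                s = sum(1 for t in transactions if set(combo) <= t) / n
                if s >= min_support:
                    support[combo] = s
        rules = []
        for combo, s in support.items():
            if len(combo) != 2:
                continue
            a, b = combo
            for ante, cons in ((a, b), (b, a)):
                # Apriori property: the 1-itemset (ante,) is always frequent here.
                conf = s / support[(ante,)]
                if conf >= min_confidence:
                    rules.append(f"IF {ante} THEN {cons} (conf={conf:.2f})")
        return rules

    # Hypothetical binarized failure records: each set = one machine event.
    data = [
        {"overheat", "bearing_failure"},
        {"overheat", "bearing_failure"},
        {"vibration", "bearing_failure"},
        {"overheat", "seal_leak"},
    ]
    for rule in apriori_rules(data, min_support=0.5, min_confidence=0.6):
        print(rule)
    ```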

  15. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective non-intrusive, prediction of H.264 encoded video for all content types combining parameters both in the physical and application layer over Universal Mobile Telecommunication Systems (UMTS networks. In order to characterize the Quality of Service (QoS level, a learning model based on Adaptive Neural Fuzzy Inference System (ANFIS and a second model based on non-linear regression analysis is proposed to predict the video quality in terms of the Mean Opinion Score (MOS. The objective of the paper is two-fold. First, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video. Second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both the models are trained with a combination of physical and application layer parameters and validated with unseen dataset. Preliminary results show that good prediction accuracy was obtained from both the models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
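
    The 2-state Markov loss models mentioned above (often called Gilbert models) are easy to sketch: a Good/Bad channel in which packets sent during the Bad state are lost, with assumed transition probabilities. The probability values and trace length here are illustrative, not the paper's radio-link settings.

    ```python
    import random

    def markov_loss_trace(n_packets, p_gb=0.05, p_bg=0.4, seed=1):
        """Simulate a 2-state Markov (Gilbert) packet loss model.

        p_gb: probability of moving Good -> Bad (a loss burst starts).
        p_bg: probability of moving Bad -> Good (the burst ends).
        Returns a list of booleans: True = packet lost.
        """
        rng = random.Random(seed)
        bad = False
        trace = []
        for _ in range(n_packets):
            if bad:
                bad = rng.random() >= p_bg   # stay Bad with prob 1 - p_bg
            else:
                bad = rng.random() < p_gb    # enter Bad with prob p_gb
            trace.append(bad)                # packets in the Bad state are lost
        return trace

    trace = markov_loss_trace(10000)
    loss_rate = sum(trace) / len(trace)
    print(f"average loss rate ≈ {loss_rate:.3f}")
    # Steady-state loss rate is p_gb / (p_gb + p_bg) ≈ 0.11 for these values.
    ```

    Because the Bad state persists with probability 1 − p_bg, losses arrive in bursts rather than independently, which is what makes this model a better fit for radio links than a uniform loss rate.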

  16. Occupant feedback based model predictive control for thermal comfort and energy optimization: A chamber experimental evaluation

    International Nuclear Information System (INIS)

    Chen, Xiao; Wang, Qian; Srebric, Jelena

    2016-01-01

    Highlights: • This study evaluates an occupant-feedback driven Model Predictive Controller (MPC). • The MPC adjusts indoor temperature based on a dynamic thermal sensation (DTS) model. • A chamber model for predicting chamber air temperature is developed and validated. • Experiments show that MPC using DTS performs better than using Predicted Mean Vote. - Abstract: In current centralized building climate control, occupants do not have much opportunity to intervene in the automated control system. This study explores the benefit of using thermal comfort feedback from occupants in the model predictive control (MPC) design based on a novel dynamic thermal sensation (DTS) model. This DTS model based MPC was evaluated in chamber experiments. A hierarchical structure for thermal control was adopted in the chamber experiments. At the high level, an MPC controller calculates the optimal supply air temperature of the chamber heating, ventilation, and air conditioning (HVAC) system, using the feedback of occupants’ votes on thermal sensation. At the low level, the actual supply air temperature is controlled by the chiller/heater using PI control to achieve the optimal set point. This DTS-based MPC was also compared to an MPC designed based on the Predicted Mean Vote (PMV) model for thermal sensation. The experiment results demonstrated that the DTS-based MPC using occupant feedback allows significant energy saving while maintaining occupant thermal comfort compared to the PMV-based MPC.
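
    The high-level controller can be caricatured as a one-step receding-horizon search: pick the supply-air temperature that minimizes a comfort-plus-energy cost over a short horizon against a simple room model. Here a first-order temperature model and a set-point tracking term stand in for the study's chamber model and DTS feedback; all coefficients and the candidate grid are hypothetical.

    ```python
    def mpc_supply_air(t_room, t_setpoint, horizon=5, alpha=0.2,
                       comfort_weight=1.0, energy_weight=0.05, candidates=None):
        """One receding-horizon step: choose the supply-air temperature that
        minimizes a comfort + energy cost over a short prediction horizon,
        using a first-order room model  T[k+1] = T[k] + alpha*(T_supply - T[k]).
        """
        if candidates is None:
            candidates = [16 + 0.5 * i for i in range(17)]  # 16..24 °C grid
        best_u, best_cost = None, float("inf")
        for u in candidates:
            t, cost = t_room, 0.0
            for _ in range(horizon):
                t = t + alpha * (u - t)                        # predicted room temp
                cost += comfort_weight * (t - t_setpoint) ** 2  # comfort penalty
                cost += energy_weight * (u - t_setpoint) ** 2   # effort penalty
            if cost < best_cost:
                best_u, best_cost = u, cost
        return best_u

    # Room too warm: the controller selects a supply temperature below the setpoint.
    print(mpc_supply_air(t_room=26.0, t_setpoint=22.0))
    ```

    In a real MPC this grid search would be replaced by a constrained optimizer, and the comfort term would come from the occupants' sensation model rather than a fixed set point.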

  17. Ontological model for predicting cyberattacks based on virtualized Honeynets

    Directory of Open Access Journals (Sweden)

    Gaona-García, Pablo

    2016-12-01

    Full Text Available Honeynet security tools are widely used today to gather information from potential attackers about vulnerabilities in our networks. To use them correctly, it is necessary to understand the existing types, the structures proposed, the tools used, and current developments. However, a poorly planned honeypot or honeynet could give unwanted users an access point to the network we want to protect. The purpose of this article is to propose an ontological model for identifying the most common attack types arising from the use of honeynets, and to implement it in working scenarios. This model will facilitate decision-making about the placement of computing elements and components in an organization.

  18. Structure-Based Predictive Model for Coal Char Combustion

    Energy Technology Data Exchange (ETDEWEB)

    Christopher Hadad; Joseph Calo; Robert Essenhigh; Robert Hurt

    1998-04-08

    Progress was made this period on a number of separate experimental and modelling activities. At Brown, the models of carbon nanostructure evolution were expanded to consider high-rank materials with initial anisotropy. The report presents detailed results of Monte Carlo simulations with non-zero initial layer length and with statistically oriented initial states. The expanded simulations are now capable of describing the development of nanostructure during carbonization of most coals. Work next quarter will address the remaining challenge of isotropic coke-forming coals. Experiments at Brown yielded important data on the "memory loss" phenomenon in carbon annealing, and on the effect of mineral matter on high-temperature reactivity. The experimental aspects of the Brown work will be discussed in detail in the next report.

  19. Comparison of prediction-based fusion and feature-level fusion across different learning models

    NARCIS (Netherlands)

    Petridis, Stavros; Bilakhia, Sanjay; Pantic, Maja

    2012-01-01

    There is evidence in neuroscience indicating that prediction of spatial and temporal patterns in the brain plays a key role in perception. This has given rise to prediction-based fusion as a method of combining information from audio and visual modalities. Models are trained on a per-class basis, to

  20. Bearing Degradation Process Prediction Based on the Support Vector Machine and Markov Model

    Directory of Open Access Journals (Sweden)

    Shaojiang Dong

    2014-01-01

    Full Text Available Predicting the degradation process of bearings before they reach the failure threshold is extremely important in industry. This paper proposes a novel method based on the support vector machine (SVM) and the Markov model to achieve this goal. Firstly, features are extracted by time and time-frequency domain methods. However, the extracted original features are high dimensional and include superfluous information, so the nonlinear multi-feature fusion technique LTSA is used to merge the features and reduce the dimension. Then, based on the extracted features, the SVM model is used to predict the bearing degradation process, and Cao's method is used to determine the embedding dimension of the SVM model. After the bearing degradation process is predicted by the SVM model, the Markov model is used to improve the prediction accuracy. The proposed method was validated by two bearing run-to-failure experiments, and the results proved the effectiveness of the methodology.
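
The Markov correction step described above can be sketched as follows: discretize the base predictor's residuals into states, estimate a transition matrix, and shift the next raw prediction by the expected residual. The residuals, state edges, and raw prediction below are illustrative, not the paper's bearing data.

```python
def transition_matrix(states, n_states):
    """Estimate a row-stochastic transition matrix from a state sequence."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1.0
    for row in counts:
        total = sum(row)
        if total > 0:
            for j in range(n_states):
                row[j] /= total
    return counts

def discretize(residuals, edges):
    """Map each residual to a state index using threshold edges."""
    out = []
    for r in residuals:
        s = 0
        while s < len(edges) and r > edges[s]:
            s += 1
        out.append(s)
    return out

# Illustrative residuals of a base predictor (e.g. an SVM regressor)
residuals = [-0.3, -0.1, 0.2, 0.4, 0.1, -0.2, -0.4, 0.0, 0.3, 0.1]
states = discretize(residuals, edges=[-0.15, 0.15])   # 3 states: low/mid/high
P = transition_matrix(states, n_states=3)

# State-conditional mean residuals, used to shift the next raw prediction
means, cnt = [0.0, 0.0, 0.0], [0, 0, 0]
for r, s in zip(residuals, states):
    means[s] += r
    cnt[s] += 1
means = [m / c if c else 0.0 for m, c in zip(means, cnt)]

last_state = states[-1]
expected_next_residual = sum(P[last_state][j] * means[j] for j in range(3))
corrected = 1.00 + expected_next_residual   # raw next prediction assumed to be 1.00
```
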

  1. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
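
The time-step sensitivity the abstract reports can be illustrated with a toy overdamped relaxation integrated by explicit forward Euler, the update scheme vertex-model implementations typically use. The equation and parameters are illustrative, not the authors' code.

```python
import math

def euler_relax(x0, x_target, k, dt, t_end):
    """Forward-Euler integration of dx/dt = -k * (x - x_target)."""
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        x += dt * (-k * (x - x_target))
        t += dt
    return x

k, x0, xt, t_end = 1.0, 1.0, 0.0, 2.0
exact = xt + (x0 - xt) * math.exp(-k * t_end)   # closed-form solution
err_small = abs(euler_relax(x0, xt, k, dt=0.01, t_end=t_end) - exact)
err_large = abs(euler_relax(x0, xt, k, dt=0.5, t_end=t_end) - exact)
```

With the large step the trajectory is visibly distorted, mirroring the paper's finding that vertex positions depend on numerical parameters.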

  2. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

    Full Text Available Accurate long-term prediction of time series data (TSD) is a useful research challenge in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Linear traditional models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models like ANN maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing step and a partitioning and interpolation (PI) technique are incorporated into the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preserving the data trend.
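
The MA-filter pre-processing and the interpolation part of the PI step can be sketched as below: smooth the series with a centered moving average, then generate points between two smoothed values by linear interpolation. The price series is illustrative, not the NSE data.

```python
def moving_average(series, window):
    """Centered moving average; edge points keep their original values."""
    half = window // 2
    out = list(series)
    for i in range(half, len(series) - half):
        out[i] = sum(series[i - half:i + half + 1]) / window
    return out

def linear_interpolate(x0, x1, steps):
    """Points strictly between x0 and x1, as in the interpolation step."""
    return [x0 + (x1 - x0) * j / (steps + 1) for j in range(1, steps + 1)]

prices = [100, 102, 101, 105, 107, 106, 110, 112]
smoothed = moving_average(prices, window=3)
gap = linear_interpolate(smoothed[3], smoothed[4], steps=2)
```
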

  3. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from...... the two estimation methods without noise correction are studied. Second, a noise robust GMM estimator is constructed by approximating integrated volatility by a realized kernel instead of realized variance. The PBEFs are also recalculated in the noise setting, and the two estimation methods ability...

  4. A Deep Learning Prediction Model Based on Extreme-Point Symmetric Mode Decomposition and Cluster Analysis

    OpenAIRE

    Li, Guohui; Zhang, Songling; Yang, Hong

    2017-01-01

    Aiming at the irregularity of nonlinear signal and its predicting difficulty, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and clustering analysis is proposed. Firstly, the original data is decomposed by ESMD to obtain the finite number of intrinsic mode functions (IMFs) and residuals. Secondly, the fuzzy c-means is used to cluster the decomposed components, and then the deep belief network (DBN) is used to predict it. Finally, the reconstructed ...

  5. Predictive model for early math skills based on structural equations.

    Science.gov (United States)

    Aragón, Estíbaliz; Navarro, José I; Aguilar, Manuel; Cerda, Gamal; García-Sedeño, Manuel

    2016-12-01

    Early math skills are determined by higher cognitive processes that are particularly important for acquiring and developing skills during a child's early education. Such processes could be a critical target for identifying students at risk for math learning difficulties. Few studies have considered the use of a structural equation method to rationalize these relations. Participating in this study were 207 preschool students ages 59 to 72 months, 108 boys and 99 girls. Performance with respect to early math skills, early literacy, general intelligence, working memory, and short-term memory was assessed. A structural equation model explaining 64.3% of the variance in early math skills was applied. Early literacy exhibited the highest statistical significance (β = 0.443, p < 0.05), followed by intelligence (β = 0.286, p < 0.05), working memory (β = 0.220, p < 0.05), and short-term memory (β = 0.213, p < 0.05). Correlations between the independent variables were also significant (p < 0.05). According to the results, cognitive variables should be included in remedial intervention programs. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  6. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud; Sørensen, John Dalsgaard

    2018-01-01

    monitoring, fault prediction and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution......The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficient as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation...

  7. Ligand and structure-based classification models for Prediction of P-glycoprotein inhibitors

    DEFF Research Database (Denmark)

    Klepsch, Freya; Poongavanam, Vasanthanathan; Ecker, Gerhard Franz

    2014-01-01

    The ABC transporter P-glycoprotein (P-gp) actively transports a wide range of drugs and toxins out of cells, and is therefore related to multidrug resistance and the ADME profile of therapeutics. Thus, development of predictive in silico models for the identification of P-gp inhibitors is of great......Score resulted in the correct prediction of 61 % of the external test set. This demonstrates that ligand-based models currently remain the methods of choice for accurately predicting P-gp inhibitors. However, structure-based classification offers information about possible drug/protein interactions, which helps...

  8. Data-Mining-Based Coronary Heart Disease Risk Prediction Model Using Fuzzy Logic and Decision Tree.

    Science.gov (United States)

    Kim, Jaekwon; Lee, Jongsik; Lee, Youngho

    2015-07-01

    The importance of the prediction of coronary heart disease (CHD) has been recognized in Korea; however, few studies have been conducted in this area. Therefore, it is necessary to develop a method for the prediction and classification of CHD in Koreans. A model for CHD prediction must be designed according to rule-based guidelines. In this study, a fuzzy logic and decision tree (classification and regression tree [CART])-driven CHD prediction model was developed for Koreans. Datasets derived from the Korean National Health and Nutrition Examination Survey VI (KNHANES-VI) were utilized to generate the proposed model. The rules were generated using a decision tree technique, and fuzzy logic was applied to overcome problems associated with uncertainty in CHD prediction. The accuracy and receiver operating characteristic (ROC) curve values of the proposed system were 69.51% and 0.594, respectively, indicating that the proposed method is more efficient than other models.
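
A minimal sketch of coupling crisp, tree-style rules with fuzzy membership, in the spirit of the CART-plus-fuzzy-logic design. The variables, membership shapes, and thresholds below are hypothetical, not the rules mined from KNHANES-VI.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def chd_risk(age, systolic_bp):
    """Fuzzy AND (min) of two memberships, softening a crisp rule like
    'IF age is high AND blood pressure is high THEN risk is high'."""
    mu_age = tri(age, 40, 65, 90)
    mu_bp = tri(systolic_bp, 120, 160, 200)
    return min(mu_age, mu_bp)

risk_mid = chd_risk(55, 140)    # partial membership in both antecedents
risk_high = chd_risk(65, 160)   # both memberships at their peaks
risk_low = chd_risk(30, 110)    # outside both supports
```

Unlike a crisp decision-tree split, the fuzzy rule returns a graded risk near the boundaries, which is how fuzzy logic addresses the uncertainty the abstract mentions.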

  9. Cyclone-track based seasonal prediction for South Pacific tropical cyclone activity using APCC multi-model ensemble prediction

    Science.gov (United States)

    Kim, Ok-Yeon; Chan, Johnny C. L.

    2018-01-01

    This study aims to predict the seasonal TC track density over the South Pacific by combining the Asia-Pacific Economic Cooperation (APEC) Climate Center (APCC) multi-model ensemble (MME) dynamical prediction system with a statistical model. The hybrid dynamical-statistical model is developed for each of the three clusters that represent the major groups of TC best tracks in the South Pacific. The cross-validation result from the MME hybrid model demonstrates moderate but statistically significant skill in predicting TC numbers across all TC clusters, with correlation coefficients of 0.4 to 0.6 between the hindcasts and observations for 1982/1983 to 2008/2009. The prediction skill in the area east of about 170°E is significantly influenced by strong El Niño, whereas the skill in the southwest Pacific region mainly comes from the linear trend of TC number. The prediction skill for TC track density is particularly high in the region of climatologically high TC track density around 160°E-180° and 20°S. Since this area has a mixed response with respect to ENSO, the prediction skill for TC track density is higher in non-ENSO years than in ENSO years. Even though the cross-validation prediction skill is higher in the area east of about 170°E than in other areas, this region shows less skill for track density under categorical verification, owing to the large influence of strong El Niño years. While the prediction skill of the developed methodology varies across the region, it is important that the model demonstrates skill in the area where TC activity is high. Such a result has an important practical implication: improving the accuracy of seasonal forecasts and providing communities at risk with advance information that could assist with preparedness and disaster risk reduction.

  10. Short-term wind power prediction based on LSSVM–GSA model

    International Nuclear Information System (INIS)

    Yuan, Xiaohui; Chen, Chen; Yuan, Yanbin; Huang, Yuehua; Tan, Qingxiong

    2015-01-01

    Highlights: • A hybrid model is developed for short-term wind power prediction. • The model is based on LSSVM and gravitational search algorithm. • Gravitational search algorithm is used to optimize parameters of LSSVM. • Effect of different kernel function of LSSVM on wind power prediction is discussed. • Comparative studies show that prediction accuracy of wind power is improved. - Abstract: Wind power forecasting can improve the economical and technical integration of wind energy into the existing electricity grid. Due to its intermittency and randomness, it is hard to forecast wind power accurately. For the purpose of utilizing wind power to the utmost extent, it is very important to make an accurate prediction of the output power of a wind farm under the premise of guaranteeing the security and the stability of the operation of the power system. In this paper, a hybrid model (LSSVM–GSA) based on the least squares support vector machine (LSSVM) and gravitational search algorithm (GSA) is proposed to forecast the short-term wind power. As the kernel function and the related parameters of the LSSVM have a great influence on the performance of the prediction model, the paper establishes LSSVM model based on different kernel functions for short-term wind power prediction. And then an optimal kernel function is determined and the parameters of the LSSVM model are optimized by using GSA. Compared with the Back Propagation (BP) neural network and support vector machine (SVM) model, the simulation results show that the hybrid LSSVM–GSA model based on exponential radial basis kernel function and GSA has higher accuracy for short-term wind power prediction. Therefore, the proposed LSSVM–GSA is a better model for short-term wind power prediction
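
The LSSVM core of the hybrid model can be sketched as the standard LSSVM dual linear system with an RBF kernel. Here, fixed illustrative values of the regularization parameter gamma and kernel width sigma stand in for the GSA search the paper performs, and the training data are synthetic.

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Solve the LSSVM dual system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF Gram matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

X = np.linspace(0, 2 * np.pi, 30).reshape(-1, 1)
y = np.sin(X).ravel()                           # stand-in for wind power data
b, alpha = lssvm_fit(X, y)
y_hat = lssvm_predict(X, b, alpha, X)
train_rmse = float(np.sqrt(np.mean((y_hat - y) ** 2)))
```

In the paper, GSA would search over (gamma, sigma) and over candidate kernel functions instead of fixing them as done here.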

  11. Copula based prediction models: an application to an aortic regurgitation study

    Directory of Open Access Journals (Sweden)

    Shoukri Mohamed M

    2007-06-01

    Full Text Available Abstract Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls, and hence the application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore predictions made from the correlation-based model corresponding to pre-operative ejection fraction measurements in the lower range may not be accurate. Further, it is found that the best approximated marginal distributions of the pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula based prediction model is estimated as: Post-operative ejection fraction = -0.0933 + 0
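
As a hedged illustration of the Archimedean-copula machinery the paper uses, the sketch below simulates dependent uniform pairs from a Clayton copula by the conditional-inverse method. The theta value and sample size are arbitrary, and the gamma margins the paper fits to the ejection-fraction data are omitted.

```python
import random

def clayton_pair(theta, rng):
    """Draw (u1, u2) from a Clayton copula (theta > 0): sample u1 uniformly,
    then invert the conditional distribution of u2 given u1."""
    u1 = rng.random()
    v = rng.random()
    u2 = (u1 ** (-theta) * (v ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u1, u2

rng = random.Random(42)
samples = [clayton_pair(theta=2.0, rng=rng) for _ in range(1000)]
in_unit_square = all(0.0 < u < 1.0 and 0.0 < w < 1.0 for u, w in samples)

# Crude positive-dependence check: sample covariance of the two coordinates
mean_u = sum(u for u, _ in samples) / len(samples)
mean_w = sum(w for _, w in samples) / len(samples)
cov = sum((u - mean_u) * (w - mean_w) for u, w in samples) / len(samples)
```

In a full copula prediction model, the uniform coordinates would then be transformed through the fitted gamma marginal quantile functions.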

  12. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    Directory of Open Access Journals (Sweden)

    Jingzhou Xin

    2018-01-01

    Full Text Available Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, the autoregressive integrated moving average (ARIMA) model, and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, with the mean absolute error increasing only from 3.402 mm to 5.847 mm as the prediction step increases; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model yields superior prediction accuracy as it captures partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring systems based on sensor data.
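
The Kalman de-noising step can be sketched with a scalar filter on synthetic "deformation" data (a slow drift plus noise). The random-walk state model and the process/measurement variances below are illustrative, not the paper's tuned values.

```python
import random

def kalman_1d(zs, q=1e-3, r=0.25):
    """Scalar Kalman filter for x_k = x_{k-1} + w (var q), z_k = x_k + v (var r)."""
    x, p = zs[0], 1.0
    out = []
    for z in zs:
        p += q                      # predict: state variance grows
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the new measurement
        p *= (1.0 - k)
        out.append(x)
    return out

rng = random.Random(7)
truth = [0.01 * t for t in range(200)]               # slow linear drift (mm)
noisy = [x + rng.gauss(0.0, 0.5) for x in truth]     # GNSS-like measurement noise
smooth = kalman_1d(noisy)

def rmse(a, b):
    return (sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)) ** 0.5

raw_err, filt_err = rmse(noisy, truth), rmse(smooth, truth)
```

The filtered series would then feed the ARIMA and GARCH stages in the paper's pipeline.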

  13. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    Science.gov (United States)

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

    Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring systems based on sensor data.

  14. A Markov Chain Based Demand Prediction Model for Stations in Bike Sharing Systems

    Directory of Open Access Journals (Sweden)

    Yajun Zhou

    2018-01-01

    Full Text Available Accurate transfer demand prediction at bike stations is the key to developing balancing solutions that address the overutilization or underutilization problems often occurring in bike sharing systems. At the same time, station transfer demand prediction is helpful for bike station layout and for optimizing the number of public bikes within a station. Traditional traffic demand prediction methods, such as the gravity model, cannot be easily adapted to the problem of forecasting bike station transfer demand due to the difficulty in defining impedance and the distinct characteristics of bike stations (Xu et al., 2013). Therefore, this paper proposes a prediction method based on a Markov chain model. The proposed model is evaluated based on field data collected from the Zhongshan City bike sharing system. The daily production and attraction of stations are forecasted. The experimental results show that the model achieves high forecasting accuracy and good generalization ability.
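
A small sketch of the Markov-chain idea: discretize a station's demand into levels, estimate the transition matrix from history, and read off the forecast distribution for the next period. The demand sequence and level boundaries are illustrative, not the Zhongshan data.

```python
def fit_transitions(levels, n):
    """Row-stochastic transition matrix estimated from observed level pairs."""
    P = [[0.0] * n for _ in range(n)]
    for a, b in zip(levels, levels[1:]):
        P[a][b] += 1.0
    for row in P:
        s = sum(row)
        if s:
            for j in range(n):
                row[j] /= s
    return P

def to_level(demand):
    """Hypothetical low / medium / high demand levels."""
    return 0 if demand < 5 else (1 if demand < 15 else 2)

history = [3, 8, 12, 20, 18, 9, 4, 6, 14, 22, 17, 7]   # departures per period
levels = [to_level(d) for d in history]
P = fit_transitions(levels, n=3)
next_dist = P[levels[-1]]        # forecast level distribution for the next period
```
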

  15. Research on power grid loss prediction model based on Granger causality property of time series

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J. [North China Electric Power Univ., Beijing (China); State Grid Corp., Beijing (China); Yan, W.P.; Yuan, J. [North China Electric Power Univ., Beijing (China); Xu, H.M.; Wang, X.L. [State Grid Information and Telecommunications Corp., Beijing (China)

    2009-03-11

    This paper described a method of predicting power transmission line losses using the Granger causality property of time series. The stable property of the time series was investigated using unit root tests. The Granger causality relationship between line losses and other variables was then determined. Granger-caused time series were then used to create the following 3 prediction models: (1) a model based on line loss binomials that used electricity sales to predict variables, (2) a model that considered both power sales and grid capacity, and (3) a model based on autoregressive distributed lag (ARDL) approaches that incorporated both power sales and the square of power sales as variables. A case study of data from China's electric power grid between 1980 and 2008 was used to evaluate model performance. Results of the study showed that the model error rates ranged between 2.7 and 3.9 percent. 6 refs., 3 tabs., 1 fig.
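
The Granger-causality screening step can be sketched as follows: regress the target series on its own lags with and without the candidate predictor's lags, and compare residual sums of squares with an F statistic. The data here are synthetic (x is built to Granger-cause y), not the 1980-2008 grid data, and only lag 1 is used.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                       # e.g. power sales
y = np.zeros(n)                              # e.g. line losses
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

Y = y[1:]
restricted = np.column_stack([np.ones(n - 1), y[:-1]])            # own lags only
unrestricted = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])  # + lagged x

def rss(A, b):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = b - A @ beta
    return float(r @ r)

rss_r, rss_u = rss(restricted, Y), rss(unrestricted, Y)
q = 1                                        # number of excluded x lags
k = unrestricted.shape[1]
F = ((rss_r - rss_u) / q) / (rss_u / (len(Y) - k))
granger_causes = F > 3.87                    # ~5% critical value of F(1, large)
```

A unit-root (stationarity) pre-test, as used in the paper, would precede this step in practice.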

  16. A polynomial based model for cell fate prediction in human diseases.

    Science.gov (United States)

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation based and apoptosis pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, within both the two considered gene selection methods, the prediction accuracies of polynomials of different degrees show little differences. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than others. When comparing the linear polynomials based on the two gene selection methods, it shows that although the accuracy of the linear polynomial that uses correlation analysis outcomes is a little higher (achieves 86.62%), the one within genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.
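
The degree comparison the paper performs can be sketched with k-fold cross-validation over polynomial fits. Synthetic one-dimensional data stand in for the selected gene-expression features, and the fold count mirrors the paper's 10-fold setup.

```python
import numpy as np

def kfold_mse(x, y, degree, k=10):
    """Mean held-out MSE of a degree-d polynomial under k-fold cross-validation."""
    idx = np.arange(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        coeffs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coeffs, x[fold])
        errs.append(np.mean((pred - y[fold]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
y = 2.0 * x + 0.3 + 0.05 * rng.normal(size=100)    # linear truth plus noise
cv_mse = {d: kfold_mse(x, y, d) for d in (1, 2, 3)}
```

As in the paper's finding, when the underlying relation is close to linear, higher-degree polynomials buy little accuracy while adding instability.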

  17. Multirule Based Diagnostic Approach for the Fog Predictions Using WRF Modelling Tool

    Directory of Open Access Journals (Sweden)

    Swagata Payra

    2014-01-01

    Full Text Available The prediction of fog onset remains difficult despite the progress in numerical weather prediction. It is a complex process and requires adequate representation of local perturbations in weather prediction models. It mainly depends upon microphysical and mesoscale processes that act within the boundary layer. This study utilizes a multirule based diagnostic (MRD) approach using postprocessing of the model simulations for fog predictions. The empiricism involved in this approach mainly serves to bridge the gap between mesoscale and microscale variables, which are related to the mechanism of fog formation. Fog occurrence is a common phenomenon during the winter season over Delhi, India, with the passage of western disturbances across the northwestern part of the country accompanied by a significant amount of moisture. This study implements the above cited approach for the prediction of fog occurrence and its onset time over Delhi. For this purpose, a high resolution Weather Research and Forecasting (WRF) model is used for fog simulations. The study involves model validation and postprocessing of the model simulations for the MRD approach and its subsequent application to fog predictions. Through this approach, the model identified foggy and nonfoggy days successfully 94% of the time. Further, the onset of fog events is well captured within an accuracy of 30–90 minutes. This study demonstrates that the multirule based postprocessing approach is a useful and highly promising tool for improving fog predictions.
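
An illustrative multi-rule post-processing check in the spirit of the MRD approach: combine several thresholded diagnostics from model output into a fog / no-fog decision. All variables and thresholds here are hypothetical stand-ins, not the calibrated rules of the study.

```python
def fog_likely(rh2m, wind10m, dewpoint_depression, cloud_base_m):
    """Return True when all rule conditions for radiation fog are met."""
    rules = [
        rh2m >= 95.0,                 # near-saturated surface layer
        wind10m <= 3.0,               # light winds, weak mixing
        dewpoint_depression <= 1.0,   # T - Td small
        cloud_base_m <= 50.0,         # saturation close to the ground
    ]
    return all(rules)

foggy = fog_likely(rh2m=97.0, wind10m=1.5, dewpoint_depression=0.4,
                   cloud_base_m=20.0)
clear = fog_likely(rh2m=70.0, wind10m=6.0, dewpoint_depression=5.0,
                   cloud_base_m=800.0)
```

Scanning such rules over successive model output times would give the predicted onset time as well.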

  18. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information.

    Science.gov (United States)

    Wang, Jingbin; Wang, Xiaohong; Wang, Lizhi

    2017-09-15

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system.

  19. Accuracy of depolarization and delay spread predictions using advanced ray-based modeling in indoor scenarios

    Directory of Open Access Journals (Sweden)

    Mani Francesco

    2011-01-01

    Full Text Available Abstract This article investigates the prediction accuracy of an advanced deterministic propagation model in terms of channel depolarization and frequency selectivity for indoor wireless propagation. In addition to specular reflection and diffraction, the developed ray tracing tool considers penetration through dielectric blocks and/or diffuse scattering mechanisms. The sensitivity and prediction accuracy analysis is based on two measurement campaigns carried out in a warehouse and an office building. It is shown that the implementation of diffuse scattering into RT significantly increases the accuracy of the cross-polar discrimination prediction, whereas the delay-spread prediction is only marginally improved.

  20. Spatial downscaling of soil prediction models based on weighted generalized additive models in smallholder farm settings.

    Science.gov (United States)

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D

    2017-09-11

    Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communicating DSM information between diverse audiences becomes problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate coarse spatial resolution base maps of soil exchangeable potassium (Kex) and soil total nitrogen (TN) into fine spatial resolution downscaled soil maps using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine spatial resolution spectral indices in the downscaling process, the downscaled soil maps not only conserve the spatial information of the coarse spatial resolution soil maps but also depict the spatial details of soil properties at fine spatial resolution. The results of this study demonstrated that the difference between the fine spatial resolution downscaled maps and the fine spatial resolution base maps is smaller than the difference between the coarse spatial resolution base maps and the fine spatial resolution base maps. An appropriate and economical strategy to promote the DSM technique on smallholder farms is to develop relatively coarse spatial resolution soil prediction maps, or utilize available coarse spatial resolution soil maps at the regional scale, and to disaggregate these maps to fine spatial resolution downscaled soil maps at the farm scale.

  1. Research on a Novel Kernel Based Grey Prediction Model and Its Applications

    Directory of Open Access Journals (Sweden)

    Xin Ma

    2016-01-01

    Full Text Available Discrete grey prediction models have attracted considerable research interest due to their effectiveness in improving the modelling accuracy of the traditional grey prediction models. The autoregressive GM(1,1) model, abbreviated as ARGM(1,1), is a novel discrete grey model which is easy to use and accurate in predicting approximately nonhomogeneous exponential time series. However, the ARGM(1,1) is essentially a linear model; thus, its applicability is still limited. In this paper a novel kernel-based ARGM(1,1) model is proposed, abbreviated as KARGM(1,1). The KARGM(1,1) has a nonlinear function which can be expressed by a kernel function using the kernel method, and its modelling procedures are presented in detail. Two case studies of predicting monthly gas well production are carried out with real-world production data. The results of the KARGM(1,1) model are compared to the existing discrete univariate grey prediction models, including ARGM(1,1), NDGM(1,1,k), DGM(1,1), and NGBMOP, and it is shown that the KARGM(1,1) outperforms the other four models.
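
    The models compared here all build on the classical GM(1,1) recursion. As a point of reference, a minimal GM(1,1) forecaster can be sketched in a few lines of NumPy; the data series is hypothetical, and this is the plain linear base model, not the kernelized KARGM(1,1) of the paper:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classical GM(1,1) grey forecast: fit x0[k] = -a*z1[k] + b on the
    accumulated series, extrapolate the exponential response, and
    difference back to the original scale."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]                   # out-of-sample forecasts

series = [100 * 1.1 ** i for i in range(5)]   # hypothetical production data
print(round(gm11_forecast(series, steps=1)[0], 2))
```

For an approximately exponential series like this one, the one-step forecast lands close to the true continuation, which is exactly the regime GM(1,1)-family models target.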

  2. Permeability prediction of non-crimp fabrics based on a geometric model

    NARCIS (Netherlands)

    Loendersloot, Richard; ten Thije, R.H.W.; Akkerman, Remko; Lomov, S.V.; Galiotis, C.

    2004-01-01

    A model to predict the permeability of Non-Crimp Fabrics is proposed. The model is based on the geometrical features of the fabric. The stitches penetrating the uni-directional plies of the NCF induce distortions in the plane of the fabric. The dimensions of these Stitch Yarn induced fibre

  3. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    Science.gov (United States)

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

    Tracking the degradation of mechanical components is critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods have been researched thoroughly for cases where sufficient run-to-failure condition monitoring data are available, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., from normal operation to failure; only a certain number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To solve these dilemmas, this paper proposes an RUL prediction model based on a neural network with dynamic windows. The model mainly consists of three steps: window size determination by the rate of increase, change point detection, and rolling prediction. The proposed method has two dominant strengths. One is that the proposed approach does not need to assume that the degradation trajectory follows a certain distribution. The other is that it can adapt to variation in the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data. PMID:25806873
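
    The change-point-then-extrapolate structure can be sketched with simple stand-ins: a baseline-deviation detector and a linear fit in place of the paper's neural network. The signal shape, window size, and failure threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical condition indicator: stable until a change point at t = 150,
# then a linear degradation trend, with measurement noise.
t = np.arange(300)
signal = np.where(t < 150, 1.0, 1.0 + 0.004 * (t - 150))
signal = signal + 0.02 * rng.normal(size=t.size)

def detect_change(x, window=30, k=4.0):
    """Flag the first window whose mean departs k sigma from the baseline
    statistics estimated on the initial window."""
    mu, sigma = x[:window].mean(), x[:window].std()
    means = np.convolve(x, np.ones(window) / window, mode="valid")
    hits = np.flatnonzero(np.abs(means - mu) > k * sigma)
    return int(hits[0]) if hits.size else None

cp = detect_change(signal)
# Rolling prediction, simplified: fit a line on the post-change data and
# extrapolate to the failure threshold (the paper uses a neural network).
threshold = 2.0
recent = signal[cp:]
slope, intercept = np.polyfit(np.arange(recent.size), recent, 1)
rul = (threshold - (slope * (recent.size - 1) + intercept)) / slope
print(cp, round(rul, 1))
```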

  4. Swarm Intelligence-Based Hybrid Models for Short-Term Power Load Prediction

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2014-01-01

    Full Text Available Swarm intelligence (SI) is widely and successfully applied in the engineering field to solve practical optimization problems, and various hybrid models based on SI algorithms and statistical models have been developed to further improve predictive ability. In this paper, hybrid intelligent forecasting models based on cuckoo search (CS) together with singular spectrum analysis (SSA), time series, and machine learning methods are proposed for short-term power load prediction. The forecasting performance of the proposed models is augmented by a rolling multistep strategy over the prediction horizon. The test results demonstrate the effectiveness of the SSA and CS in tuning the seasonal autoregressive integrated moving average (SARIMA) and support vector regression (SVR) models for load forecasting, which indicates that both the SSA-based data denoising and the SI-based intelligent optimization strategy can effectively improve a model's predictive performance. Additionally, the proposed CS-SSA-SARIMA and CS-SSA-SVR models provide very impressive forecasting results, demonstrating their strong robustness and universal forecasting capacity for short-term power load prediction 24 hours in advance.

  5. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    Science.gov (United States)

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    Background Physicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions. Objective The aim is to summarize and review published studies describing computer-based approaches for predicting patients’ future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research. Methods The method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts and then analyzing full texts, and (2) documenting by extracting publication details and information on context, predictors, missing data, modeling algorithm, outcome, and evaluation methods into a matrix table, synthesizing data, and reporting results. Results After duplicate removal, 1339 articles were screened by abstract and title and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within the included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods. Conclusions Interest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing. In addition to raw/coded health
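
    The neighborhood-based approach that dominates the reviewed studies amounts to k-nearest-neighbor prediction over patient records. A minimal sketch with synthetic records (the features and outcome rule are invented, not drawn from any reviewed study):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Hypothetical patient records: two standardized features (say, age and a
# lab value) and a binary outcome that depends on both.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Neighborhood-based individualized prediction: the index patient's outcome
# is estimated from the records of the k most similar patients.
model = KNeighborsClassifier(n_neighbors=7).fit(X, y)
index_patient = np.array([[1.2, 0.8]])
pred = model.predict(index_patient)[0]
prob = model.predict_proba(index_patient)[0, 1]
print(pred, round(prob, 2))
```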

  6. Improving stability of prediction models based on correlated omics data by using network approaches.

    Directory of Open Access Journals (Sweden)

    Renaud Tissier

    Full Text Available Building prediction models based on complex omics datasets such as transcriptomics, proteomics, and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are (1) network construction, (2) clustering to empirically derive modules or pathways, and (3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally we illustrate the advantages of our approach by applying the methodology to two problems: prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome (DILGOM) study, and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell line pharmacogenomics dataset.
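
    The three-step recipe (network, modules, module-informed model) can be sketched with NumPy and SciPy. The data below are synthetic, and simple module-mean summaries with ordinary least squares stand in for the paper's weighted networks and group-penalized regression:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
n, p = 120, 12
# Hypothetical omics data: two blocks of correlated features driven by two
# latent biological processes; the outcome depends on the first process.
latent = rng.normal(size=(n, 2))
X = np.repeat(latent, p // 2, axis=1) + 0.3 * rng.normal(size=(n, p))
y = 2.0 * latent[:, 0] + 0.5 * rng.normal(size=n)

# Steps 1-2: correlation network, then hierarchical clustering into modules.
dist = 1.0 - np.abs(np.corrcoef(X.T))
modules = fcluster(linkage(squareform(dist, checks=False), method="average"),
                   t=2, criterion="maxclust")

# Step 3: summarize each module by its mean profile and fit a linear model.
M = np.column_stack([X[:, modules == k].mean(axis=1)
                     for k in np.unique(modules)])
A = np.column_stack([np.ones(n), M])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.sum((y - A @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))
```

Because each module summary averages out feature-level noise, the two-predictor model is far more stable than a regression on all twelve correlated columns.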

  7. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

    The values of Q = (fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., a transport model based on empirical confinement scaling, a dimensionless scaling technique, and theory-based transport models, are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with a plasma current of 15 MA and plasma density 15% below the Greenwald value is 3.6 s with one technical standard deviation of ±14%. These data translate into a Q interval of [7-13] at an auxiliary heating power of P_aux = 40 MW and [7-28] at the minimum heating power that sustains a good-confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)

  8. Data Analytics Based Dual-Optimized Adaptive Model Predictive Control for the Power Plant Boiler

    Directory of Open Access Journals (Sweden)

    Zhenhao Tang

    2017-01-01

    Full Text Available To control the furnace temperature of a power plant boiler precisely, a dual-optimized adaptive model predictive control (DoAMPC) method is designed based on data analytics. In the proposed DoAMPC, an accurate predictive model is constructed adaptively by a hybrid algorithm combining the least squares support vector machine and the differential evolution method. Then, an optimization problem is constructed based on the predictive model and many constraint conditions. To control the boiler furnace temperature, the differential evolution method is utilized to determine the control variables by solving the optimization problem. The proposed method can adapt to time-varying conditions by updating the sample data. Experimental results based on practical data illustrate that the DoAMPC can control the boiler furnace temperature with errors of less than 1.5%, which meets the requirements of the real production process.
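
    The optimize-then-apply step can be illustrated with SciPy's differential evolution acting on a stand-in predictive model. The temperature surrogate and actuator bounds below are invented for illustration; in the paper this surrogate is a least squares support vector machine fitted to plant data:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Stand-in predictive model: furnace temperature as a smooth function of two
# control inputs (fuel flow, air flow). These dynamics are hypothetical.
def predicted_temperature(u):
    fuel, air = u
    return 850.0 + 120.0 * np.tanh(fuel - 1.0) - 40.0 * (air - 2.0) ** 2

setpoint = 900.0

# The control move minimizes the predicted tracking error subject to actuator
# box constraints, which is the role differential evolution plays in DoAMPC.
result = differential_evolution(
    lambda u: (predicted_temperature(u) - setpoint) ** 2,
    bounds=[(0.0, 3.0), (0.0, 4.0)], seed=0, tol=1e-10)

print(np.round(result.x, 3), round(predicted_temperature(result.x), 1))
```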

  9. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results under different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
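
    The variable-capacitance idea reduces to interpolating C(v) between knots and integrating the terminal voltage. A minimal sketch, with invented knot values and a forward-Euler step standing in for the paper's identified model and observer:

```python
import numpy as np

# Hypothetical piecewise-linear capacitance curve C(v); the knot values are
# invented for illustration, not taken from the paper's identification.
v_knots = np.array([0.0, 1.0, 2.0, 2.7])        # terminal voltage, V
c_knots = np.array([80.0, 95.0, 115.0, 130.0])  # capacitance, F

def capacitance(v):
    return np.interp(v, v_knots, c_knots)

def simulate_charge(v0, current, t_end, dt=0.01):
    """Forward-Euler integration of dv/dt = i / C(v) under constant current,
    accumulating the transferred charge for SOC-style bookkeeping."""
    v, charge = v0, 0.0
    for _ in range(int(t_end / dt)):
        v += current / capacitance(v) * dt
        charge += current * dt
    return v, charge

v_end, q = simulate_charge(v0=1.0, current=10.0, t_end=5.0)
print(round(v_end, 3), round(q, 1))
```

Because C(v) grows with voltage, the same injected charge produces a smaller voltage rise at high voltage, which is precisely the effect a constant-capacitance model misses.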

  10. Model-free prediction and regression a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  11. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.

  12. On the comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid

    Science.gov (United States)

    Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.

    2017-03-01

    In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely multi-scenario, tree-based, and chance-constrained model predictive control, is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as the main equipment. The real experimental results show significant differences among the implemented techniques, mainly in terms of the energy use of the plant components. Effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria for selecting an appropriate stochastic predictive controller.

  13. Cytokine-based Predictive Models to Estimate the Probability of Chronic Periodontitis: Development of Diagnostic Nomograms.

    Science.gov (United States)

    Tomás, I; Arias-Bujanda, N; Alonso-Sampedro, M; Casares-de-Cal, M A; Sánchez-Sellero, C; Suárez-Quintanilla, D; Balsa-Castro, C

    2017-09-14

    Although a distinct cytokine profile has been described in the gingival crevicular fluid (GCF) of patients with chronic periodontitis, there is no evidence of GCF cytokine-based predictive models being used to diagnose the disease. Our objectives were to obtain GCF cytokine-based predictive models and to develop nomograms derived from them. A sample of 150 participants was recruited: 75 periodontally healthy controls and 75 subjects affected by chronic periodontitis. Sixteen mediators were measured in GCF using the Luminex 100™ instrument: GMCSF, IFNgamma, IL1alpha, IL1beta, IL2, IL3, IL4, IL5, IL6, IL10, IL12p40, IL12p70, IL13, IL17A, IL17F and TNFalpha. Cytokine-based models were obtained using multivariate binary logistic regression. Models were selected for their ability to predict chronic periodontitis, considering the different roles of the cytokines involved in the inflammatory process. The outstanding predictive accuracy of the resulting smoking-adjusted models showed that IL1alpha, IL1beta and IL17A in GCF are very good biomarkers for distinguishing patients with chronic periodontitis from periodontally healthy individuals. The predictive ability of these pro-inflammatory cytokines was increased by incorporating IFNgamma and IL10. The nomograms revealed the magnitude of periodontitis-associated imbalances between cytokines with pro-inflammatory and anti-inflammatory effects in terms of a particular probability of having chronic periodontitis.
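
    The underlying model is a binary logistic regression that maps cytokine levels to a disease probability, which a nomogram then presents graphically. A minimal sketch with entirely synthetic cytokine data (the effect sizes and the example profile are invented, not the study's estimates):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 150
y = np.repeat([0, 1], n // 2)        # 75 healthy controls, 75 periodontitis
# Hypothetical log-scaled GCF cytokine levels: pro-inflammatory IL1beta and
# IL17A elevated in cases, anti-inflammatory IL10 reduced (synthetic data).
il1b = rng.normal(loc=np.where(y == 1, 1.0, 0.0), scale=0.7)
il17a = rng.normal(loc=np.where(y == 1, 0.8, 0.0), scale=0.7)
il10 = rng.normal(loc=np.where(y == 1, -0.5, 0.0), scale=0.7)
X = np.column_stack([il1b, il17a, il10])

model = LogisticRegression().fit(X, y)
# A nomogram tabulates each predictor's contribution to the linear score;
# here we simply read off the disease probability for one invented profile.
profile = np.array([[1.2, 1.0, -0.6]])
prob = model.predict_proba(profile)[0, 1]
print(round(prob, 3))
```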

  14. APPLICATION OF THE PROBLEM BASED INSTRUCTION LEARNING MODEL WITH A PREDICT-OBSERVE-EXPLAIN APPROACH

    Directory of Open Access Journals (Sweden)

    Ayu Dwi Listiowati

    2015-11-01

    Full Text Available This research aimed to determine the effect of the Problem Based Instruction learning model with a Predict-Observe-Explain approach on chemistry learning outcomes. The population was grade XI science students of a senior high school in Brebes in the 2011/2012 academic year. Initial data analysis showed that the population was normally distributed and homogeneous, so cluster random sampling was used. From this sampling, XI Science-5 was used as the control class (Problem Based Instruction learning model without the Predict-Observe-Explain approach) and XI Science-1 as the experimental class (Problem Based Instruction with the Predict-Observe-Explain approach). Final data analysis showed that the learning outcomes for both classes were normally distributed and had equal variances. In the correlation test, an r_b value of 0.433 was obtained, indicating a moderate correlation, so Problem Based Instruction with the Predict-Observe-Explain approach has a moderate effect on chemistry learning outcomes in solubility and solubility product. This learning approach contributes 19% to student learning outcomes. The average affective and psychomotor scores in the experimental class were better than in the control class. Based on this research, we can conclude that Problem Based Instruction with the Predict-Observe-Explain approach has a positive effect on the chemistry learning outcomes of senior high school students. Key Words: Problem Based Instruction Learning

  15. Comparison of intensive care outcome prediction models based on admission scores with those based on 24-hour data.

    Science.gov (United States)

    Duke, G J; Piercy, M; DiGiantomasso, D; Green, J V

    2008-11-01

    We compared the performance of six outcome prediction models--three based on 24-hour data and three based on admission-only data--in a metropolitan university-affiliated teaching hospital with a 10-bed intensive care unit. The Acute Physiology and Chronic Health Evaluation models, version II (APACHE II) and version III-J, and the Simplified Acute Physiology Score version II (SAPS II) are based on 24-hour data and were compared with the Mortality Prediction Model version II and the SAPS version III using international and Australian coefficients (SAPS IIIA). Data were collected prospectively according to the standard methodologies for each model. Calibration and discrimination for each model were assessed by the standardised mortality ratio (SMR), the area under the receiver operating characteristic plot, and Hosmer-Lemeshow contingency tables and chi-squared statistics (C10 and H10). The predetermined criteria were: area under the receiver operating characteristic plot > 0.8; SMR 95% confidence interval includes 1.0; and C10 and H10 P values > 0.05. Between October 1, 2005 and December 31, 2007, 1843 consecutive admissions were screened and, after the standard exclusions, 1741 were included in the analysis. The SAPS II and SAPS IIIA models fulfilled all criteria, whereas the APACHE II model failed them. The other models satisfied the discrimination criterion but significantly over-predicted mortality risk and require recalibration. Outcome prediction models based on admission-only data compared favourably to those based on 24-hour data.
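
    The three reported criteria, SMR, area under the ROC plot, and the Hosmer-Lemeshow decile statistic, can each be computed in a few lines. The risks and outcomes below are simulated from a hypothetical, perfectly calibrated model, not the study's data:

```python
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 1741
# Simulated admissions: hypothetical predicted mortality risks, with
# outcomes drawn from those risks (a perfectly calibrated reference model).
risk = rng.beta(2, 8, size=n)
died = rng.random(n) < risk

smr = died.sum() / risk.sum()        # standardised mortality ratio
auroc = roc_auc_score(died, risk)    # discrimination

# Hosmer-Lemeshow C statistic over 10 risk deciles (the paper's C10).
edges = np.quantile(risk, np.linspace(0, 1, 11))
idx = np.clip(np.searchsorted(edges, risk, side="right") - 1, 0, 9)
obs = np.bincount(idx, weights=died.astype(float), minlength=10)
exp = np.bincount(idx, weights=risk, minlength=10)
cnt = np.bincount(idx, minlength=10)
hl = np.sum((obs - exp) ** 2 / (exp * (1.0 - exp / cnt)))
p_value = chi2.sf(hl, df=8)
print(round(smr, 3), round(auroc, 3), round(p_value, 3))
```

For a well-calibrated model the SMR confidence interval straddles 1.0 and the Hosmer-Lemeshow test should not reject, which is exactly the pattern the paper's predetermined criteria encode.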

  16. Validation of water sorption-based clay prediction models for calcareous soils

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Razzaghi, Fatemeh; Moosavi, Ali

    2017-01-01

    Soil particle size distribution (PSD), particularly the active clay fraction, mediates soil engineering, agronomic and environmental functions. The tedious and costly nature of traditional methods of determining PSD prompted the development of water sorption-based models for determining the clay fraction. The applicability of such models to semi-arid soils with significant amounts of calcium carbonate and/or gypsum is unknown. The objective of this study was to validate three water sorption-based clay prediction models for 30 calcareous soils from Iran and identify the effect of CaCO3. The low organic carbon content of the soils and the low fraction of low-activity clay minerals like kaolinite suggested that the clay content under-predictions were due to large CaCO3 contents. Thus, for such water sorption-based models to work accurately for calcareous soils, a correction factor is needed.

  17. Assessing the model transferability for prediction of transcription factor binding sites based on chromatin accessibility.

    Science.gov (United States)

    Liu, Sheng; Zibetti, Cristina; Wan, Jun; Wang, Guohua; Blackshaw, Seth; Qian, Jiang

    2017-07-27

    Computational prediction of transcription factor (TF) binding sites in different cell types is challenging. Recent technology developments allow us to determine genome-wide chromatin accessibility in various cellular and developmental contexts. The chromatin accessibility profiles provide useful information for predicting TF binding events in various physiological conditions. Furthermore, ChIP-Seq analysis was used to determine genome-wide binding sites for a range of different TFs in multiple cell types. Integration of these two types of genomic information can improve the prediction of TF binding events. We assessed to what extent a model built upon other TFs and/or other cell types could be used to predict the binding sites of TFs of interest. A random forest model was built using a set of cell type-independent features, such as specific sequences recognized by the TFs and evolutionary conservation, as well as cell type-specific features derived from chromatin accessibility data. Our analysis suggested that the models learned from other TFs and/or cell lines performed almost as well as the model learned from the target TF in the cell type of interest. Interestingly, models based on multiple TFs performed better than single-TF models. Finally, we proposed a universal model, BPAC, which was generated using ChIP-Seq data from multiple TFs in various cell types. Integrating chromatin accessibility information with sequence information improves prediction of TF binding. The prediction of TF binding is transferable across TFs and/or cell lines, suggesting there is a set of universal "rules". A computational tool was developed to predict TF binding sites based on these universal "rules".
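
    The transferability experiment, training a random forest on one TF/cell type and evaluating on another, can be sketched with synthetic data. The three features and the shared binding "rule" below are invented to illustrate the idea; this is not the BPAC feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

def make_tf_dataset(n=600):
    """Synthetic TF-binding data: motif score, conservation, and chromatin
    accessibility, with binding driven by a rule shared across datasets."""
    X = rng.random((n, 3))
    logit = 6.0 * X[:, 0] + 4.0 * X[:, 2] - 5.0   # accessible, strong motif
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
    return X, y.astype(int)

X_a, y_a = make_tf_dataset()   # "training" TF / cell type
X_b, y_b = make_tf_dataset()   # a different TF / cell type, same rule

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_a, y_a)
auc = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
print(round(auc, 3))
```

Because both datasets obey the same underlying rule, the model transfers: its AUC on the unseen dataset stays close to the in-dataset performance, mirroring the paper's finding.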

  18. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    Science.gov (United States)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

    Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as the predictor and Partial Least Squares (PLS) as a data selection tool, based on measured values of CO concentrations. The CO concentrations at the Rey monitoring station in the south of Tehran, from January 2007 to February 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM is more accurate. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performance of the models. It was concluded that the errors decrease after size reduction, and the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, hence supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.

  19. Enhanced Voltage Control of VSC-HVDC Connected Offshore Wind Farms Based on Model Predictive Control

    OpenAIRE

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei; Zhao, Haoran; Østergaard, Jacob; Shahidehpour, Mohammad

    2018-01-01

    This paper proposes an enhanced voltage control strategy (EVCS) based on model predictive control (MPC) for voltage source converter based high voltage direct current (VSC-HVDC) connected offshore wind farms (OWFs). In the proposed MPC-based EVCS, all wind turbine generators (WTGs) as well as the wind farm side VSC are optimally coordinated to keep voltages within the feasible range and reduce system power losses. Considering the high ratio of the OWF collector system, the effects of active po...

  20. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud

    2017-01-01

    The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation monitoring, fault detection and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution.

  1. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    Science.gov (United States)

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problems of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish a comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction performance, and can improve design efficiency.

  2. A multivariate-based conflict prediction model for a Brazilian freeway.

    Science.gov (United States)

    Caleffi, Felipe; Anzanello, Michel José; Cybis, Helena Beatriz Bettella

    2017-01-01

    Real-time collision risk prediction models relying on traffic data can be useful in dynamic management systems aimed at improving traffic safety. Models have been proposed to predict crash occurrence and collision risk in order to proactively improve safety. This paper presents a multivariate-based framework for selecting variables for a conflict prediction model on the Brazilian BR-290/RS freeway. The Bhattacharyya Distance (BD) and Principal Component Analysis (PCA) are applied to a dataset comprising variables that potentially help to explain the occurrence of traffic conflicts; the parameters yielded by these multivariate techniques give rise to a variable importance index that guides variable removal for later selection. Next, the selected variables are inserted into a Linear Discriminant Analysis (LDA) model to estimate conflict occurrence. A matched case-control technique is applied using traffic data processed from surveillance cameras at a segment of a Brazilian freeway. Results indicate that the variables that significantly impacted the model are associated with total flow, the difference between the standard deviations of lane occupancy, and the coefficient of variation of speed. The model made it possible to assess a characteristic behavior of major Brazilian freeways by identifying the heterogeneity of traffic patterns among lanes that is typical of Brazil and leads to aggressive maneuvers. Results also indicate that the developed LDA-PCA model outperforms the LDA-BD model. The LDA-PCA model yields 76% average classification accuracy and 87% average sensitivity (which measures the rate of conflicts correctly predicted). Copyright © 2016 Elsevier Ltd. All rights reserved.
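
    The LDA-PCA pipeline is a standard two-stage classifier. A minimal sketch with scikit-learn, using synthetic stand-ins for the traffic variables (total flow, lane-occupancy spread, speed coefficient of variation) rather than the BR-290/RS data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
n = 400
# Three underlying traffic signals plus redundant derived columns, so that
# PCA has correlated structure to compress (all values synthetic).
signal = rng.normal(size=(n, 3))
X = np.hstack([signal,
               signal @ rng.normal(size=(3, 4)) * 0.5
               + 0.1 * rng.normal(size=(n, 4))])
y = (signal @ np.array([1.0, 0.8, -0.6]) + 0.5 * rng.normal(size=n) > 0)
y = y.astype(int)   # 1 = conflict, 0 = matched non-conflict control

# PCA compresses the correlated variables; LDA then separates the classes,
# mirroring the paper's LDA-PCA pipeline.
model = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
model.fit(X[:300], y[:300])
acc = model.score(X[300:], y[300:])
print(round(acc, 3))
```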

  3. Composition-Based Prediction of Temperature-Dependent Thermophysical Food Properties: Reevaluating Component Groups and Prediction Models.

    Science.gov (United States)

    Phinney, David Martin; Frelka, John C; Heldman, Dennis Ray

    2017-01-01

    Prediction of temperature-dependent thermophysical properties (thermal conductivity, density, specific heat, and thermal diffusivity) is an important component of process design for food manufacturing. Current models for prediction of thermophysical properties of foods are based on the composition, specifically, fat, carbohydrate, protein, fiber, water, and ash contents, all of which change with temperature. The objectives of this investigation were to reevaluate and improve the prediction expressions for thermophysical properties. Previously published data were analyzed over the temperature range from 10 to 150 °C. These data were analyzed to create a series of relationships between the thermophysical properties and temperature for each food component, as well as to identify the dependence of the thermophysical properties on more specific structural properties of the fats, carbohydrates, and proteins. Results from this investigation revealed that the relationships between the thermophysical properties of the major constituents of foods and temperature can be statistically described by linear expressions, in contrast to the current polynomial models. Links between variability in thermophysical properties and structural properties were observed. Relationships for several thermophysical properties based on more specific constituents have been identified. Distinctions between simple sugars (fructose, glucose, and lactose) and complex carbohydrates (starch, pectin, and cellulose) have been proposed. The relationships between the thermophysical properties and proteins revealed a potential correlation with the molecular weight of the protein. The significance of relating variability in constituent thermophysical properties with structural properties--such as molecular mass--could significantly improve composition-based prediction models and, consequently, the effectiveness of process design. © 2016 Institute of Food Technologists®.

  4. ProMT: effective human promoter prediction using Markov chain model based on DNA structural properties.

    Science.gov (United States)

    Xiong, Dapeng; Liu, Rongjie; Xiao, Fen; Gao, Xieping

    2014-12-01

    Core promoters play significant and extensive roles in the initiation and regulation of DNA transcription, and their identification remains one of the most challenging problems. Due to the diverse nature of core promoters, the results obtained through existing computational approaches are not satisfactory, and none of them considered the potential influence on predictive performance of interference between neighboring TSSs in TSS clusters. In this paper, we explicitly considered this factor and proposed an approach to locate potential TSS clusters according to the correlation of regional profiles of DNA and TSS clusters. On this basis, we further presented a novel computational approach (ProMT) for promoter prediction using a Markov chain model and predicted TSS clusters based on structural properties of DNA. Extensive experiments demonstrated that ProMT can significantly improve predictive performance. Therefore, considering interference between neighboring TSSs is essential for a wider range of promoter prediction.

  5. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    Science.gov (United States)

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.
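The abstract's observation that some models showed improved fit but decreased predictive performance can be illustrated with a 10-fold cross-validation comparison on synthetic data (not the inventory datasets): an over-parameterized model fits the training data better but validates worse than its in-sample score suggests.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=(60, 1))           # stand-in LiDAR covariate
y = 2.0 * x[:, 0] + rng.normal(0, 0.3, 60)    # noisy linear biomass response

fit_r2, cv_r2 = {}, {}
for degree in (1, 12):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    fit_r2[degree] = model.fit(x, y).score(x, y)        # in-sample fit
    cv_r2[degree] = cross_val_score(model, x, y, cv=10, scoring="r2").mean()
    print(f"degree {degree:2d}: fit R2 = {fit_r2[degree]:.2f}, "
          f"10-fold CV R2 = {cv_r2[degree]:.2f}")
```

The degree-12 model never fits worse in-sample, but its cross-validated score falls below its training score, which is the over-fitting signature the study used cross-validation to detect.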

  6. Establishment and Application of Coalmine Gas Prediction Model Based on Multi-Sensor Data Fusion Technology

    Directory of Open Access Journals (Sweden)

    Wenyu Lv

    2014-04-01

    Full Text Available An accident involving gas is undoubtedly one of the greater disasters that can occur in a coalmine, so being able to predict when such an accident might occur is essential for loss prevention and the reduction of safety hazards. However, traditional methods of gas safety prediction are hindered by multi-objective and non-continuous problems. A coalmine gas prediction model based on multi-sensor data fusion technology (CGPM-MSDFT) was established through analysis of gas-related accidents, using an artificial neural network to fuse multi-sensor data, an improved algorithm to train the network, and an early stopping method to resolve the over-fitting problem. Network tests and field application results show that this model can provide a new direction for research into predicting the likelihood of a gas-related incident within a coalmine, and it has broad application prospects in coal mining.

  7. A predictive model for high-quality blastocyst based on blastomere number, fragmentation, and symmetry.

    Science.gov (United States)

    Yu, Cheng-He; Zhang, Ruo-Peng; Li, Juan; A, Zhou-Cun

    2018-03-03

    The aim of this study was to create a predictive model for high-quality blastocyst progression based on traditional morphology parameters of embryos. A total of 1564 embryos from 234 women underwent conventional in vitro fertilization and were involved in the present study. High-quality blastocysts were defined as having a grade of at least 3BB, and all embryos were divided based on whether they developed into high-quality blastocysts (group HQ) or failed to do so (group NHQ). A retrospective analysis of day-3 embryo parameters was conducted, focused on blastomere number, fragmentation, the presence of a vacuole, symmetry, and the presence of multinucleated blastomeres. All parameters were related to the development of high-quality blastocysts. Parameters are indicated by s_bn (blastomere number), s_f (fragmentation), s_pv (presence of a vacuole), s_s (symmetry), and s_MNB (multinucleated blastomeres). Subsequently, univariate and multivariate logistic regression analyses were conducted to explore their relationships. In the multivariate logistic regression analysis, a predictive model was constructed, and a parameter Hc was created based on the s_bn, s_f, and s_s parameters and their corresponding odds ratios. The value of Hc in group HQ was significantly higher than that in group NHQ. A receiver operating characteristic curve was used to test the effectiveness of the model; an area under the curve of 0.790, with a 95% confidence interval of 0.766-0.813, was calculated. A dataset was used to validate the predictive utility of the model, and another dataset was used to confirm that the model can be applied to predict the implantation of day-3 embryos. In conclusion, a predictive model for high-quality blastocysts was created based on blastomere number, fragmentation, and symmetry. This model provides novel information for the selection of potential embryos.
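A hedged sketch of this kind of multivariate logistic model and its ROC evaluation, using synthetic scores in place of the study's s_bn, s_f, and s_s parameters (the coefficients and sample sizes below are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 500
# Hypothetical day-3 embryo scores standing in for s_bn, s_f, s_s.
X = rng.normal(size=(n, 3))
logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2] - 0.4
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
# Per-parameter odds ratios, analogous to those combined into Hc.
odds_ratios = np.exp(clf.coef_[0])
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print("odds ratios:", np.round(odds_ratios, 2))
print(f"test AUC: {auc:.2f}")
```

The study's Hc is a weighted combination of the scores via their odds ratios; here the fitted linear predictor plays the same role, and the AUC is computed on held-out data as in the validation step.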

  8. Economic Model Predictive Control for Hot Water Based Heating Systems in Smart Buildings

    DEFF Research Database (Denmark)

    Awadelrahman, M. A. Ahmed; Zong, Yi; Li, Hongwei

    2017-01-01

    This paper presents a study to optimize the heating energy costs in a residential building with varying electricity price signals based on an Economic Model Predictive Controller (EMPC). The investigated heating system consists of an air source heat pump (ASHP) incorporated with a hot water tank...

  9. A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John

    2017-01-01

    This paper presents an algorithm for Model Predictive Control of SISO systems. Based on a quadratic objective in addition to (hard) input constraints it features soft upper as well as lower constraints on the output and an input rate-of-change penalty term. It keeps the deterministic and stochast...

  10. Breast cancer risk prediction model: a nomogram based on common mammographic screening findings

    NARCIS (Netherlands)

    Timmers, J.M.H.; Verbeek, A.L.M.; Hout, J. in't; Pijnappel, R.M.; Broeders, M.J.M.; Heeten, G.J. den

    2013-01-01

    OBJECTIVES: To develop a prediction model for breast cancer based on common mammographic findings on screening mammograms aiming to reduce reader variability in assigning BI-RADS. METHODS: We retrospectively reviewed 352 positive screening mammograms of women participating in the Dutch screening

  11. Breast cancer risk prediction model: a nomogram based on common mammographic screening findings

    NARCIS (Netherlands)

    Timmers, J. M. H.; Verbeek, A. L. M.; Inthout, J.; Pijnappel, R. M.; Broeders, M. J. M.; den Heeten, G. J.

    2013-01-01

    To develop a prediction model for breast cancer based on common mammographic findings on screening mammograms aiming to reduce reader variability in assigning BI-RADS. We retrospectively reviewed 352 positive screening mammograms of women participating in the Dutch screening programme (Nijmegen

  12. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Directory of Open Access Journals (Sweden)

    Lei Jia

    Full Text Available The thermostability of protein point mutations is a common concern in protein engineering. An application that predicts the thermostability of mutants can help guide the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database of experimentally measured thermostability for thousands of protein mutants. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy changes calculated with Rosetta, structural information about the point mutations, and amino acid physical properties were used for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used for building the prediction models. Binary and ternary classification as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutation technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models, and other descriptors also made significant contributions to the accuracy of the prediction models.
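The ranking of the calculated folding free energy change as the most influential feature can be reproduced in spirit with a random-forest importance analysis on synthetic data; the feature names, label rule, and sample size below are illustrative stand-ins, not ProTherm data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 600
ddg_calc = rng.normal(size=n)         # stand-in Rosetta ddG estimate
hydrophobicity = rng.normal(size=n)   # stand-in amino-acid property
burial = rng.normal(size=n)           # stand-in structural descriptor
X = np.column_stack([ddg_calc, hydrophobicity, burial])
# Binary label: destabilizing (1) vs neutral/stabilizing (0), driven
# mostly by the ddG feature, mirroring the study's finding.
y = (ddg_calc + 0.3 * burial + rng.normal(0, 0.5, n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["ddG_calc", "hydrophobicity", "burial"],
                     clf.feature_importances_):
    print(f"{name:15s} importance = {imp:.2f}")
```

Gini importances sum to one, so the dominant ddG-like feature surfaces directly from the fitted forest.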

  13. Model Based Predictive Control of Thermal Comfort for Integrated Building System

    Science.gov (United States)

    Georgiev, Tz.; Jonkov, T.; Yonchev, E.; Tsankov, D.

    2011-12-01

    This article deals with the indoor thermal control problem in HVAC (heating, ventilation and air conditioning) systems. Important outdoor and indoor variables in these systems include air temperature, global and diffuse radiation, wind speed and direction, relative humidity, mean radiant temperature, and so on. The aim of this article is to achieve thermal comfort optimisation through model based predictive control (MBPC) algorithms for an integrated building system. The control law is given by a quadratic programming problem, and the obtained control action is applied to the process. The derived models and model based predictive control algorithms are investigated using real-life data. All investigations were carried out in the MATLAB environment. Further research will focus on the synthesis of robust energy-saving control algorithms.

  14. Linear Model-Based Predictive Control of the LHC 1.8 K Cryogenic Loop

    CERN Document Server

    Blanco-Viñuela, E; De Prada-Moraga, C

    1999-01-01

    The LHC accelerator will employ 1800 superconducting magnets (for guidance and focusing of the particle beams) in a pressurized superfluid helium bath at 1.9 K. This temperature is a severely constrained control parameter in order to avoid the transition from the superconducting to the normal state. Cryogenic processes are difficult to regulate due to their highly non-linear physical parameters (heat capacity, thermal conductance, etc.) and undesirable peculiarities like non self-regulating process, inverse response and variable dead time. To reduce the requirements on either temperature sensor or cryogenic system performance, various control strategies have been investigated on a reduced-scale LHC prototype built at CERN (String Test). Model Based Predictive Control (MBPC) is a regulation algorithm based on the explicit use of a process model to forecast the plant output over a certain prediction horizon. This predicted controlled variable is used in an on-line optimization procedure that minimizes an approp...

  15. Prediction of brittleness based on anisotropic rock physics model for kerogen-rich shale

    Science.gov (United States)

    Qian, Ke-Ran; He, Zhi-Liang; Chen, Ye-Quan; Liu, Xi-Wu; Li, Xiang-Yang

    2017-12-01

    The construction of a shale rock physics model and the selection of an appropriate brittleness index (BI) are two significant steps that influence the accuracy of brittleness prediction. On one hand, existing models of kerogen-rich shale are controversial, so a reasonable rock physics model needs to be built. On the other hand, several types of equations already exist for predicting the BI, whose feasibility needs to be carefully considered. This study constructed a kerogen-rich rock physics model by applying the self-consistent approximation and differential effective medium theory to model intercoupled clay and kerogen mixtures. The feasibility of our model was confirmed by comparison with classical models, showing better accuracy. Templates were constructed based on our model to link physical properties and the BI. Different equations for the BI have different sensitivities, making them suitable for different types of formations: equations based on Young's modulus are sensitive to variations in lithology, while those using Lamé's coefficients are sensitive to porosity and pore fluids. Physical information must be considered to improve brittleness prediction.
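The Young's-modulus-based BI family mentioned above is commonly computed as the average of a normalized Young's modulus and a reverse-normalized Poisson's ratio (Rickman-style averaging); the normalization bounds and sample values below are hypothetical calibration numbers, not the paper's:

```python
def brittleness_index(E, nu, E_min, E_max, nu_min, nu_max):
    """Rickman-style brittleness index: average of Young's modulus
    (normalized so stiffer is more brittle) and Poisson's ratio
    (normalized so lower is more brittle). Bounds E_min/E_max (GPa)
    and nu_min/nu_max are formation-specific calibration values."""
    e_norm = (E - E_min) / (E_max - E_min)
    nu_norm = (nu_max - nu) / (nu_max - nu_min)
    return 0.5 * (e_norm + nu_norm)

# A stiff, low-Poisson rock scores as more brittle than a soft, ductile one.
bi_brittle = brittleness_index(60.0, 0.15, 10.0, 80.0, 0.10, 0.40)
bi_ductile = brittleness_index(20.0, 0.35, 10.0, 80.0, 0.10, 0.40)
print(f"brittle rock BI = {bi_brittle:.2f}, ductile rock BI = {bi_ductile:.2f}")
```

Because both terms are scaled to the same [0, 1] range before averaging, the index responds to lithology through E, which is the sensitivity the abstract attributes to Young's-modulus-based equations.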

  16. A Network-Based Approach to Modeling and Predicting Product Coconsideration Relations

    Directory of Open Access Journals (Sweden)

    Zhenghui Sha

    2018-01-01

    Full Text Available Understanding customer preferences in consideration decisions is critical to choice modeling in engineering design. While existing literature has shown that exogenous effects (e.g., product and customer attributes) are deciding factors in customers' consideration decisions, it is not clear how endogenous effects (e.g., the intercompetition among products) influence such decisions. This paper presents a network-based approach based on Exponential Random Graph Models to study customers' consideration behaviors in engineering design. The proposed approach is capable of modeling the endogenous effects among products through various network structures (e.g., stars and triangles) in addition to the exogenous effects, and of predicting whether two products will be considered together. To assess the proposed model, we compare it against a dyadic network model that considers only exogenous effects. Using buyer survey data from the China auto market in 2013 and 2014, we evaluate the goodness of fit and the predictive power of the two models. The results show that our model has a better fit and higher predictive accuracy than the dyadic network model, underscoring the importance of endogenous effects on customers' consideration decisions. The insights gained from this research help explain how endogenous effects interact with exogenous effects in affecting customers' decision-making.

  17. Minimal important change (MIC) based on a predictive modeling approach was more precise than MIC based on ROC analysis

    NARCIS (Netherlands)

    Terluin, B.; Eekhout, I.; Terwee, C.B.; de Vet, H.C.W.

    2015-01-01

    Objectives To present a new method to estimate a "minimal important change" (MIC) of health-related quality of life (HRQOL) scales, based on predictive modeling, and to compare its performance with the MIC based on receiver operating characteristic (ROC) analysis. To illustrate how the new method

  18. A physiologically based pharmacokinetic modelling approach to predict buprenorphine pharmacokinetics following intravenous and sublingual administration.

    Science.gov (United States)

    Kalluri, Hari V; Zhang, Hongfei; Caritis, Steve N; Venkataramanan, Raman

    2017-11-01

    Opioid dependence is associated with high morbidity and mortality. Buprenorphine (BUP) is approved by the Food and Drug Administration to treat opioid dependence. There is a lack of clear consensus on the appropriate dosing of BUP due to interpatient physiological differences in absorption/disposition, subjective response assessment and other patient comorbidities. The objective of the present study was to build and validate robust physiologically based pharmacokinetic (PBPK) models for intravenous (IV) and sublingual (SL) BUP as a first step to optimizing BUP pharmacotherapy. BUP-PBPK modelling and simulations were performed using SimCyp® by incorporating the physiochemical properties of BUP, establishing intersystem extrapolation factors-based in vitro-in-vivo extrapolation (IVIVE) methods to extrapolate in vitro enzyme activity data, and using tissue-specific plasma partition coefficient estimations. Published data on IV and SL-BUP in opioid-dependent and non-opioid-dependent patients were used to build the models. Fourteen model-naïve BUP-PK datasets were used for inter- and intrastudy validations. The IV and SL-BUP-PBPK models developed were robust in predicting the multicompartment disposition of BUP over a dosing range of 0.3-32 mg. Predicted plasma concentration-time profiles in virtual patients were consistent with reported data across five single-dose IV, five single-dose SL and four multiple dose SL studies. All PK parameter predictions were within 75-137% of the corresponding observed data. The model developed predicted the brain exposure of BUP to be about four times higher than that of BUP in plasma. The validated PBPK models will be used in future studies to predict BUP plasma and brain concentrations based on the varying demographic, physiological and pathological characteristics of patients. © 2017 The British Pharmacological Society.

  19. [Study on predicting model for acute hypotensive episodes in ICU based on support vector machine].

    Science.gov (United States)

    Lai, Lijuan; Wang, Zhigang; Wu, Xiaoming; Xiong, Dongsheng

    2011-06-01

    The occurrence of acute hypotensive episodes (AHE) in intensive care units (ICU) seriously endangers the lives of patients, and treatment mainly depends on the clinical experience of doctors. In this paper, a model for predicting the occurrence of AHE in the ICU was developed using methods from medical informatics. We analyzed the trends and characteristics of the mean arterial blood pressure (MAP) of patients who suffered AHE and those who did not, extracted the median, mean and other statistical parameters, and used them to train a support vector machine (SVM), from which a prediction model was developed. On this basis, we also compared models built with different kernel functions. Experiments demonstrated that this approach performed well in classification and prediction, which would contribute to forecasting the occurrence of AHE.
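The feature-extraction-plus-SVM scheme described can be sketched as follows, assuming scikit-learn; the MAP windows are simulated (pre-AHE patients drift toward lower pressure), so the means, standard deviations, and window lengths are illustrative only:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def map_features(series):
    # Statistical features of a mean-arterial-pressure window, in the
    # spirit of the median/mean parameters used by the study.
    return [np.median(series), np.mean(series), np.std(series), np.min(series)]

# Synthetic MAP windows (mmHg): pre-AHE windows sit lower and vary more.
stable = [rng.normal(85, 5, 120) for _ in range(80)]
pre_ahe = [rng.normal(72, 8, 120) for _ in range(80)]
X = np.array([map_features(s) for s in stable + pre_ahe])
y = np.array([0] * 80 + [1] * 80)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Swapping `kernel="rbf"` for `"linear"` or `"poly"` reproduces the paper's comparison of models built with different kernel functions.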

  20. A deep learning-based multi-model ensemble method for cancer prediction.

    Science.gov (United States)

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
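A compact sketch of the two-stage idea (five base classifiers whose outputs are ensembled by a neural network), assuming scikit-learn and synthetic gene-expression-like data; for brevity the meta-learner is trained on in-sample base outputs, whereas a careful implementation would use out-of-fold predictions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for selected differentially expressed genes.
X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [RandomForestClassifier(random_state=0), GradientBoostingClassifier(),
        LogisticRegression(max_iter=1000), GaussianNB(),
        KNeighborsClassifier()]
# Stage 1: each classifier's class-1 probability becomes a meta-feature.
meta_tr = np.column_stack([c.fit(X_tr, y_tr).predict_proba(X_tr)[:, 1]
                           for c in base])
meta_te = np.column_stack([c.predict_proba(X_te)[:, 1] for c in base])
# Stage 2: a small neural network ensembles the five outputs.
meta = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                     random_state=0).fit(meta_tr, y_tr)
acc = meta.score(meta_te, y_te)
print(f"ensemble test accuracy: {acc:.2f}")
```

The stacked learner can weight disagreeing classifiers nonlinearly, which is what lets this scheme beat simple majority voting in the reported results.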

  1. Model-predictive control based on Takagi-Sugeno fuzzy model for electrical vehicles delayed model

    DEFF Research Database (Denmark)

    Khooban, Mohammad-Hassan; Vafamand, Navid; Niknam, Taher

    2017-01-01

    is made between the results of the suggested robust strategy and those obtained from some of the most recent studies on the same topic, to assess the efficiency of the suggested controller. Finally, the experimental results based on a TMS320F28335 DSP are performed on a direct current motor. Simulation...

  2. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVM) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters are set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM parameters: standard particle swarm optimization (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data, and the root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that among the three tested algorithms, the NAPSO-SVM method achieves better prediction precision with smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
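A plain-PSO sketch of tuning SVM hyperparameters by cross-validated error, assuming scikit-learn and a synthetic error series; the paper's NAPSO additionally applies natural selection and simulated annealing, which are omitted here for brevity:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(5)
# Synthetic sensor-error series: target depends nonlinearly on inputs.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 200)

def fitness(pos):
    C, gamma = 10 ** pos[0], 10 ** pos[1]   # search in log10 space
    return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

n_p, w, c1, c2 = 10, 0.6, 1.5, 1.5          # swarm size and PSO constants
pos = rng.uniform(-2, 2, size=(n_p, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
for _ in range(15):
    g = pbest[np.argmax(pbest_f)]           # global best position
    r1, r2 = rng.random((n_p, 1)), rng.random((n_p, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = np.clip(pos + vel, -2, 2)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]

best = pbest[np.argmax(pbest_f)]
best_mse = -pbest_f.max()
print(f"best C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}, CV MSE={best_mse:.4f}")
```

Each particle encodes (log10 C, log10 gamma) and is scored by 3-fold cross-validated MSE, so the swarm converges toward well-regularized SVM settings without a manual grid.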

  3. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    Directory of Open Access Journals (Sweden)

    Su Yang

    Full Text Available Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics has emerged to capture the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a 1st-order model; (2) the spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out across the whole city; (3) the spatial context varies with the target sensor undergoing prediction and enlarges with increasing time lag of the prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, the method has good scalability and is applicable to large-scale networks.
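The idea of letting sparsity discover the relevant spatial context can be illustrated with an L1-regularized regression (Lasso) over all sensors' lag-1 flows; the sensor count, true dependencies, and noise level below are synthetic assumptions, and the paper's own sparse-representation formulation may differ:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n_sensors, T = 20, 500
flows = rng.normal(size=(T, n_sensors))
# The target sensor's flow depends on only 3 "spatially relevant" sensors
# at the immediately preceding time step (a 1st-order dependence).
target = (0.8 * flows[:-1, 2] + 0.5 * flows[:-1, 7]
          - 0.6 * flows[:-1, 15] + rng.normal(0, 0.1, T - 1))

# Sparse regression over all sensors' lag-1 flows recovers the spatial
# context automatically: irrelevant coefficients shrink to exactly zero.
model = Lasso(alpha=0.05).fit(flows[:-1], target)
selected = np.flatnonzero(model.coef_)
print("selected sensors:", selected)
```

No network topology is supplied; the relevant sensors, wherever they sit in the city, emerge from the data alone, which mirrors the scalability claim in the abstract.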

  4. Predicting Oral Drug Absorption: Mini Review on Physiologically-Based Pharmacokinetic Models

    Directory of Open Access Journals (Sweden)

    Louis Lin

    2017-09-01

    Full Text Available Most marketed drugs are administered orally, despite the complex process of oral absorption that is difficult to predict. Oral bioavailability is dependent on the interplay between many processes that are dependent on both compound and physiological properties. Because of this complexity, computational oral physiologically-based pharmacokinetic (PBPK models have emerged as a tool to integrate these factors in an attempt to mechanistically capture the process of oral absorption. These models use inputs from in vitro assays to predict the pharmacokinetic behavior of drugs in the human body. The most common oral PBPK models are compartmental approaches, in which the gastrointestinal tract is characterized as a series of compartments through which the drug transits. The focus of this review is on the development of oral absorption PBPK models, followed by a brief discussion of the major applications of oral PBPK models in the pharmaceutical industry.
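A minimal compartmental transit sketch in the spirit of the models described: the drug moves through a chain of intestinal segments while each segment absorbs with a first-order rate. The segment count, rate constants, and step sizes are illustrative assumptions, not values from any validated PBPK model.

```python
def fraction_absorbed(ka, kt=2.0, n_seg=7, dt=0.001, t_end=24.0):
    """Euler integration of a minimal compartmental transit model:
    the dose moves through n_seg gut segments at transit rate kt (1/h)
    while each segment absorbs with first-order rate ka (1/h).
    Returns the fraction of the dose absorbed by t_end (h)."""
    gut = [0.0] * n_seg
    gut[0] = 1.0                  # normalized dose enters segment 1
    absorbed = 0.0
    for _ in range(int(t_end / dt)):
        new = gut[:]
        for i in range(n_seg):
            new[i] -= (kt + ka) * gut[i] * dt      # transit out + absorption
            absorbed += ka * gut[i] * dt
            if i + 1 < n_seg:
                new[i + 1] += kt * gut[i] * dt     # hand off to next segment
        gut = new                 # drug leaving the last segment is excreted
    return absorbed

fa_slow = fraction_absorbed(0.1)
fa_fast = fraction_absorbed(2.0)
print(f"Fa(ka=0.1/h) = {fa_slow:.2f}, Fa(ka=2.0/h) = {fa_fast:.2f}")
```

This chain has the closed-form limit Fa = 1 - (kt / (kt + ka))^n_seg, so the numerical result can be checked against the analytic fraction absorbed; full oral PBPK models layer dissolution, gut-wall metabolism, and permeability on top of this skeleton.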

  5. Tensor-Based Quality Prediction for Building Model Reconstruction from LIDAR Data and Topographic Map

    Science.gov (United States)

    Lin, B. C.; You, R. J.

    2012-08-01

    A quality prediction method is proposed to evaluate the quality of the automatic reconstruction of building models. In this study, LiDAR data and topographic maps are integrated for building model reconstruction. Hence, data registration is a critical step for data fusion. To improve the efficiency of the data fusion, a robust least squares method is applied to register boundary points extracted from LiDAR data and building outlines obtained from topographic maps. After registration, a quality indicator based on the tensor analysis of residuals is derived in order to evaluate the correctness of the automatic building model reconstruction. Finally, an actual dataset demonstrates the quality of the predictions for automatic model reconstruction. The results show that our method can achieve reliable results and save both time and expense on model reconstruction.

  6. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
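The Ogata-Banks data model adopted above is the closed-form solution of the 1-D advection-dispersion equation for steady flow; a direct implementation using only the standard library, with illustrative (not site-specific) parameters:

```python
from math import erfc, exp, sqrt

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks solution of the 1-D advection-dispersion equation for
    steady flow: concentration at distance x (m) and time t (s), given
    velocity v (m/s), dispersion coefficient D (m^2/s), inlet value c0."""
    a = 2.0 * sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / a)
                       + exp(v * x / D) * erfc((x + v * t) / a))

# Breakthrough at x = 1 m rises toward the inlet concentration over time.
for t in (1e4, 1e5, 1e6):
    print(f"t = {t:.0e} s  c = {ogata_banks(1.0, t, v=1e-5, D=1e-6):.3f}")
```

Fitting v and D of this curve to early monitoring data, then extrapolating, is the kind of physically-based reference estimation the developed method's ensemble predictions are compared against.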

  7. A Validation of Subchannel Based CHF Prediction Model for Rod Bundles

    International Nuclear Information System (INIS)

    Hwang, Dae-Hyun; Kim, Seong-Jin

    2015-01-01

    A large number of CHF data were procured from various sources, including square and non-square lattice test bundles. CHF prediction accuracy was evaluated for various models, including the CHF look-up table method, empirical correlations, and phenomenological DNB models. The parametric effects of mass velocity and unheated walls were investigated from the experimental results and incorporated into the development of a local-parameter CHF correlation applicable to APWR conditions. According to the CHF design criterion, CHF should not occur at the hottest rod in the reactor core during normal operation and anticipated operational occurrences with at least a 95% probability at a 95% confidence level. This is accomplished by assuring that the minimum DNBR (Departure from Nucleate Boiling Ratio) in the reactor core is greater than the limit DNBR, which accounts for the accuracy of the CHF prediction model. The limit DNBR can be determined from the inverse of the lower tolerance limit of M/P, evaluated from the measured-to-predicted CHF ratios for the relevant CHF database. It is important to evaluate the adequacy of the CHF prediction model for application to actual reactor core conditions. Validation of the CHF prediction model provides the degree of accuracy inferred from the comparison of solution and data. To achieve the required accuracy for the CHF prediction model, it may be necessary to calibrate the model parameters by employing the validation results. If the accuracy of the model is acceptable, it is applied to the real complex system with the inferred accuracy of the model. In the conventional approach, the accuracy of a CHF prediction model was evaluated from the M/P statistics for the relevant CHF database, obtained by comparing the nominal values of the predicted and measured CHFs. The experimental uncertainty of the CHF data was not considered in this approach to determine the limit DNBR. When a subchannel based CHF prediction model…
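    The limit DNBR described above is the inverse of a one-sided 95/95 lower tolerance limit on the M/P ratios. A sketch, assuming normally distributed M/P and using Natrella's classical approximation for the one-sided tolerance factor (the M/P statistics below are synthetic, not real CHF data):

```python
from math import sqrt
from statistics import NormalDist

def k_factor(n, p=0.95, conf=0.95):
    """One-sided normal tolerance factor (Natrella approximation):
    mean - k*std covers the lower p-quantile with confidence 'conf'."""
    zp = NormalDist().inv_cdf(p)
    zc = NormalDist().inv_cdf(conf)
    a = 1.0 - zc * zc / (2.0 * (n - 1))
    b = zp * zp - zc * zc / n
    return (zp + sqrt(zp * zp - a * b)) / a

# Synthetic M/P statistics for n = 100 CHF points (illustrative only)
n, mp_mean, mp_std = 100, 1.00, 0.08
lower_limit = mp_mean - k_factor(n) * mp_std  # 95/95 lower tolerance limit of M/P
limit_dnbr = 1.0 / lower_limit                # limit DNBR
```

    For n = 100 the factor is about 1.92, so a prediction model with 8% scatter in M/P yields a limit DNBR of roughly 1.18 under these assumptions.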

  8. Hot metal temperature prediction in blast furnace using advanced model based on fuzzy logic tools

    Energy Technology Data Exchange (ETDEWEB)

    Martin, R.D.; Obeso, F.; Mochon, J.; Barea, R.; Jimenez, J.

    2007-05-15

    The present work presents a model based on fuzzy logic tools to predict and simulate the hot metal temperature in a blast furnace (BF). As input variables the model uses the control variables of a working BF, such as moisture, pulverised coal injection, oxygen addition, mineral/coke ratio and blast volume, and it yields the hot metal temperature as its output. The variables employed to develop the model were obtained from data supplied by the sensors of an operating Spanish BF. In the model training stage, the adaptive neuro-fuzzy inference system (ANFIS) and subtractive clustering algorithms were used.

  9. Predicting Drug Concentration-Time Profiles in Multiple CNS Compartments Using a Comprehensive Physiologically-Based Pharmacokinetic Model

    NARCIS (Netherlands)

    Yamamoto, Yumi; Välitalo, Pyry A; Huntjens, Dymphy R; Proost, Johannes H; Vermeulen, An; Krauwinkel, Walter; Beukers, Margot W; van den Berg, Dirk-Jan; Hartman, Robin; Wong, Yin Cheong; Danhof, Meindert; van Hasselt, John G C; de Lange, Elizabeth C M

    2017-01-01

    Drug development targeting the central nervous system (CNS) is challenging due to poor predictability of drug concentrations in various CNS compartments. We developed a generic physiologically based pharmacokinetic (PBPK) model for prediction of drug concentrations in physiologically relevant CNS

  10. Gas Emission Prediction Model of Coal Mine Based on CSBP Algorithm

    Directory of Open Access Journals (Sweden)

    Xiong Yan

    2016-01-01

    In view of the nonlinear characteristics of gas emission in a coal working face, a prediction method is proposed based on a BP neural network optimized by the cuckoo search algorithm (CSBP). In the CSBP algorithm, cuckoo search is adopted to optimize the weight and threshold parameters of the BP network and to obtain globally optimal solutions. Furthermore, the twelve main factors affecting gas emission in the coal working face are taken as the input vector of the CSBP algorithm and the gas emission as the output vector, and the prediction model of the BP neural network with optimal parameters is then established. The results show that the CSBP algorithm has better generalization ability and higher prediction accuracy, and can be utilized effectively in the prediction of coal mine gas emission.
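    The cuckoo-search step above can be illustrated on a plain minimization problem, with Lévy flights generated by Mantegna's algorithm; the sphere function below stands in for the BP network's training error (a generic sketch, not the paper's implementation):

```python
import math
import random

random.seed(0)

def sphere(x):
    return sum(v * v for v in x)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-stable step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=25, iters=200, pa=0.25, lo=-5.0, hi=5.0):
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    gi = min(range(n_nests), key=fit.__getitem__)
    gbest_x, gbest_f = nests[gi][:], fit[gi]
    for _ in range(iters):
        for i in range(n_nests):
            # Levy flight around the current nest, biased toward the global best
            new = [nests[i][d] + 0.01 * levy_step() * (nests[i][d] - gbest_x[d])
                   for d in range(dim)]
            fn = f(new)
            if fn < fit[i]:
                nests[i], fit[i] = new, fn
                if fn < gbest_f:
                    gbest_x, gbest_f = new[:], fn
        # Abandon a fraction pa of nests and rebuild them at random
        for i in range(n_nests):
            if random.random() < pa:
                nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fit[i] = f(nests[i])
                if fit[i] < gbest_f:
                    gbest_x, gbest_f = nests[i][:], fit[i]
    return gbest_x, gbest_f

best_x, best_f = cuckoo_search(sphere)
```

    In the CSBP setting, each "nest" would encode the flattened weights and thresholds of the BP network instead of a point on the sphere function.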

  11. Research on Short-Term Wind Power Prediction Based on Combined Forecasting Models

    Directory of Open Access Journals (Sweden)

    Zhang Chi

    2016-01-01

    Short-term wind power forecasting is crucial for the power grid since the generated energy of a wind farm fluctuates frequently. In this paper, a physical forecasting model based on NWP and a statistical forecasting model using a BP neural network with optimized initial values are presented. In order to make full use of the advantages of each model and overcome their individual limitations, the equal-weight model and the minimum-variance model are established for wind power prediction. Simulation results show that the combination forecasting model is more precise than either single forecasting model, and that the minimum-variance combination model can dynamically adjust the weight of each single method, further restraining the forecasting error.
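    The minimum-variance combination mentioned above weights each forecaster inversely to its error variance. A sketch for two unbiased forecasters with uncorrelated errors (the error variances below are toy values, not the paper's data):

```python
def min_variance_weights(var1, var2):
    """Weights minimizing the combined error variance of two unbiased,
    uncorrelated forecasters: w_i is proportional to 1/var_i."""
    w1 = var2 / (var1 + var2)
    return w1, 1.0 - w1

# Toy error variances for the physical (NWP) and statistical (BP) models
w_phys, w_stat = min_variance_weights(4.0, 1.0)
combined_var = (w_phys ** 2) * 4.0 + (w_stat ** 2) * 1.0
```

    Note that the combined variance (0.8 here) is below that of the better single model, which is the basic argument for combination forecasting.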

  12. A statistical prediction model based on sparse representations for single image super-resolution.

    Science.gov (United States)

    Peleg, Tomer; Elad, Michael

    2014-06-01

    We address single image super-resolution using a statistical prediction model based on sparse representations of low- and high-resolution image patches. The suggested model allows us to avoid any invariance assumption, which is a common practice in sparsity-based approaches treating this task. Prediction of high resolution patches is obtained via MMSE estimation and the resulting scheme has the useful interpretation of a feedforward neural network. To further enhance performance, we suggest data clustering and cascading several levels of the basic algorithm. We suggest a training scheme for the resulting network and demonstrate the capabilities of our algorithm, showing its advantages over existing methods based on a low- and high-resolution dictionary pair, in terms of computational complexity, numerical criteria, and visual appearance. The suggested approach offers a desirable compromise between low computational complexity and reconstruction quality, when comparing it with state-of-the-art methods for single image super-resolution.

  13. Enhanced Voltage Control of VSC-HVDC Connected Offshore Wind Farms Based on Model Predictive Control

    DEFF Research Database (Denmark)

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei

    2018-01-01

    This paper proposes an enhanced voltage control strategy (EVCS) based on model predictive control (MPC) for voltage source converter based high voltage direct current (VSC-HVDC) connected offshore wind farms (OWFs). In the proposed MPC based EVCS, all wind turbine generators (WTGs) as well as the wind farm side VSC are optimally coordinated to keep voltages within the feasible range and reduce system power losses. Considering the high ratio of the OWF collector system, the effects of active power outputs of WTGs on voltage control are also taken into consideration. The predictive model of the VSC with a typical cascaded control structure is derived in detail. The sensitivity coefficients are calculated by an analytical method to improve the computational efficiency. A VSC-HVDC connected OWF with 64 WTGs was used to validate the proposed voltage control strategy.

  14. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

    The prediction and risk classification of collapse is an important issue in the process of highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the influence indexes of collapse activity and extracted the nine main indexes affecting collapse activity as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employed post-earthquake collapse data from construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for the analysis. The results were analyzed using the back-substitution estimation method, showing high accuracy with no misclassifications, and matched the prediction results of the uncertainty measure method. The results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
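    The entropy measure step above down-weights indexes that vary little across samples. A sketch of the standard entropy-weight calculation (the index matrix is made up, not the Yingxiu-Wolong data):

```python
from math import log

def entropy_weights(matrix):
    """Entropy weight method: rows = samples, columns = non-negative indexes.
    Indexes whose values are more spread out across samples get larger weights."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / log(m)
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        ps = [v / total for v in col]
        e = -k * sum(p * log(p) for p in ps if p > 0)  # entropy of index j
        raw.append(1.0 - e)                            # divergence degree
    s = sum(raw)
    return [w / s for w in raw]

# 4 samples x 3 indexes; the last index is constant, so it carries no information
w = entropy_weights([[0.9, 0.2, 0.5],
                     [0.1, 0.8, 0.5],
                     [0.7, 0.3, 0.5],
                     [0.2, 0.9, 0.5]])
```

    A constant index has maximum entropy and therefore zero weight, which is how the method screens out uninformative discriminant factors.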

  15. Energy saving and prediction modeling of petrochemical industries: A novel ELM based on FAHP

    International Nuclear Information System (INIS)

    Geng, ZhiQiang; Qin, Lin; Han, YongMing; Zhu, QunXiong

    2017-01-01

    Extreme learning machine (ELM), a simple single-hidden-layer feed-forward neural network with fast implementation, has been widely applied in many engineering fields. However, it is difficult to enhance the modeling ability of extreme learning when handling high-dimensional noisy data. A predictive modeling method based on the ELM integrated with fuzzy C-means clustering and the analytic hierarchy process (FAHP-ELM) is therefore proposed. The fuzzy C-means algorithm is used to cluster the input attributes of the high-dimensional data. The analytic hierarchy process (AHP) based on entropy weights is proposed to filter out redundant information and extract the characteristic components. The fused data is then used as the input of the ELM. Compared with the back-propagation (BP) neural network and the plain ELM, the proposed model has better performance in terms of convergence speed, generalization and modeling accuracy on University of California Irvine (UCI) benchmark datasets. Finally, the proposed method was applied to build the energy saving and prediction model of the purified terephthalic acid (PTA) solvent system and the ethylene production system. The experimental results demonstrated the validity of the proposed method. Meanwhile, it could enhance the efficiency of energy utilization and achieve energy conservation and emission reduction. - Highlights: • The ELM integrated FAHP approach is proposed. • The FAHP-ELM prediction model is effectively verified through UCI datasets. • The energy saving and prediction model of petrochemical industries is obtained. • The method is efficient in improvement of energy efficiency and emission reduction.
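    An ELM as described above trains only the output layer: the hidden-layer weights are random and fixed, and the output weights come from a single least-squares solve. A minimal sketch on a toy 1-D regression (not the PTA/ethylene data, and without the FAHP preprocessing):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=50):
    """Single-hidden-layer ELM: random input weights and biases, tanh
    activation, output weights by Moore-Penrose pseudo-inverse."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y    # one linear solve, no backpropagation
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy problem: learn y = sin(3x) on [0, 2]
X = np.linspace(0, 2, 100).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
W, b, beta = elm_fit(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

    The absence of iterative weight updates is what gives the ELM the fast convergence the abstract compares against BP networks.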

  16. Predicting the adoption of evidence-based practice using "Rogers diffusion of innovation model".

    Science.gov (United States)

    Pashaeypoor, Shahzad; Ashktorab, Tahereh; Rassouli, Maryam; Alavi-Majd, Hamid

    2016-02-01

    Predicting the significant determinants of adopting evidence-based practice (EBP) by nursing students has received little attention in the nursing education literature. The purpose of this study was to investigate the predictors of EBP adoption among Iranian nursing students and to evaluate the fitness of the research model derived from Rogers' model. The study was a cross-sectional survey of 170 nursing students. A path analysis was conducted to predict the determinants of EBP in nursing students, and the direct and indirect effects of the variables were calculated. The results indicated that knowledge of EBP as well as its perceived complexity, observability and trialability significantly predicted the adoption of EBP. Among the variables, knowledge of EBP had the greatest total impact on the adoption of EBP. The results of this study can be used by health professionals such as nursing managers and educators for teaching any new concept, such as EBP, in clinical and educational settings.

  17. A sparse QSRR model for predicting retention indices of essential oils based on robust screening approach.

    Science.gov (United States)

    Al-Fakih, A M; Algamal, Z Y; Lee, M H; Aziz, M

    2017-08-01

    A robust screening approach and a sparse quantitative structure-retention relationship (QSRR) model for predicting the retention indices (RIs) of 169 constituents of essential oils are proposed. The proposed approach consists of two steps. First, dimension reduction was performed using the proposed modified robust sure independence screening (MR-SIS) method. Second, RIs were predicted using the proposed robust sparse QSRR with the smoothly clipped absolute deviation (SCAD) penalty (RSQSRR). The RSQSRR model was internally and externally validated based on [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], the Y-randomization test, [Formula: see text], [Formula: see text], and the applicability domain. The validation results indicate that the model is robust and not due to chance correlation. The descriptor selection and prediction performance of the RSQSRR on the training dataset outperform the other two modelling methods used. The RSQSRR shows the highest [Formula: see text], [Formula: see text], and [Formula: see text], and the lowest [Formula: see text]. For the test dataset, the RSQSRR shows a high external validation value ([Formula: see text]) and a low value of [Formula: see text] compared with the other methods, indicating its higher predictive ability. In conclusion, the results reveal that the proposed RSQSRR is an efficient approach for modelling high-dimensional QSRRs, and the method is useful for the estimation of RIs of essential oils that have not been experimentally tested.

  18. Evaluation of remote-sensing-based rainfall products through predictive capability in hydrological runoff modelling

    DEFF Research Database (Denmark)

    Stisen, Simon; Sandholt, Inge

    2010-01-01

    The emergence of regional and global satellite-based rainfall products with high spatial and temporal resolution has opened up new large-scale hydrological applications in data-sparse or ungauged catchments. Particularly, distributed hydrological models can benefit from the good spatial coverage... The results showed that the Climate Prediction Center/Famine Early Warning System (CPC-FEWS) and cold cloud duration (CCD) products, which are partly based on rain gauge data and produced specifically for the African continent, performed better in the modelling context than the global...

  19. Prediction of different types of liver diseases using rule based classification model.

    Science.gov (United States)

    Kumar, Yugal; Sahoo, G

    2013-01-01

    Diagnosing different types of liver diseases clinically is a quite hectic process because patients have to undergo a large number of independent laboratory tests. On the basis of the results and analysis of the laboratory tests, different liver diseases are classified. Hence, to simplify this complex process, we have developed a Rule Based Classification Model (RBCM) to predict different types of liver diseases. The proposed model is a combination of rules and different data mining techniques. The objective of this paper is to propose a rule based classification model with machine learning techniques for the prediction of different types of liver diseases. A dataset was developed with twelve attributes that include the records of 583 patients, of which 441 were male and the rest female. Support Vector Machine (SVM), Rule Induction (RI), Decision Tree (DT), Naive Bayes (NB) and Artificial Neural Network (ANN) data mining techniques with the k-fold cross-validation technique are used with the proposed model for the prediction of liver diseases. The performance of these data mining techniques is evaluated with accuracy, sensitivity, specificity and kappa parameters, and statistical techniques (ANOVA and the Chi-square test) are used to analyze the liver disease dataset and the independence of attributes. Out of the 583 patients, 416 are affected by liver disease and the remaining 167 are healthy. The proposed model with the decision tree (DT) technique provides the best result among all techniques (RI, SVM, ANN and NB) on all parameters (Accuracy 98.46%, Sensitivity 95.7%, Specificity 95.28% and Kappa 0.983), while SVM exhibits the poorest performance (Accuracy 82.33%, Sensitivity 68.03%, Specificity 91.28% and Kappa 0.801). It is also found that the best performance of the model without rules (RI, Accuracy 82.68%, Sensitivity 86.34%, Specificity 90.51% and Kappa 0.619) is almost similar to the worst performance of the rule based classification model (SVM, Accuracy 82…

  20. A network security situation prediction model based on wavelet neural network with optimized parameters

    Directory of Open Access Journals (Sweden)

    Haibo Zhang

    2016-08-01

    Security incidents in networks are sudden and uncertain, and it is very hard to precisely predict the network security situation with traditional methods. In order to improve the prediction accuracy of the network security situation, we build a network security situation prediction model based on a Wavelet Neural Network (WNN) with parameters optimized by an Improved Niche Genetic Algorithm (INGA). The proposed model adopts a WNN, which has strong nonlinear ability and fault-tolerance performance. The parameters of the WNN are optimized through an adaptive genetic algorithm (GA) so that the WNN searches more effectively. Considering that the adaptive GA converges slowly and easily falls into premature convergence, we introduce a novel niche technology with a dynamic fuzzy clustering and elimination mechanism to solve the premature convergence of the GA. Our final simulation results show that the proposed INGA-WNN prediction model is more reliable and effective, achieving faster convergence and higher prediction accuracy than the Genetic Algorithm-Wavelet Neural Network (GA-WNN), Genetic Algorithm-Back Propagation Neural Network (GA-BPNN) and plain WNN models.

  1. Trait-based representation of biological nitrification: Model development, testing, and predicted community composition

    Directory of Open Access Journals (Sweden)

    Nick eBouskill

    2012-10-01

    Trait-based microbial models show clear promise as tools to represent the diversity and activity of microorganisms across ecosystem gradients. These models parameterize specific traits that determine the relative fitness of an 'organism' in a given environment, and represent the complexity of biological systems across temporal and spatial scales. In this study we introduce a microbial community trait-based modeling framework (MicroTrait) focused on nitrification (MicroTrait-N) that represents ammonia-oxidizing bacteria (AOB), ammonia-oxidizing archaea (AOA) and nitrite-oxidizing bacteria (NOB) using traits related to enzyme kinetics and physiological properties. We used this model to predict nitrifier diversity, ammonia (NH3) oxidation rates and nitrous oxide (N2O) production across pH, temperature and substrate gradients. Predicted nitrifier diversity was predominantly determined by temperature and substrate availability, the latter strongly influenced by pH. The model predicted that transient N2O production rates are maximized by a decoupling of the AOB and NOB communities, resulting in an accumulation and detoxification of nitrite to N2O by AOB. However, cumulative N2O production (over six-month simulations) is maximized in a system where the relationship between AOB and NOB is maintained. When the reactions uncouple, the AOB become unstable and biomass declines rapidly, resulting in decreased NH3 oxidation and N2O production. We evaluated this model against site-level chemical datasets from the interior of Alaska and accurately simulated NH3 oxidation rates and the relative ratio of AOA:AOB biomass. The predicted community structure and activity indicate that (a) parameterization of a small number of traits may be sufficient to broadly characterize nitrifying community structure and (b) changing decadal trends in climate and edaphic conditions could impact nitrification rates in ways that are not captured by extant biogeochemical models.

  2. Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model.

    Science.gov (United States)

    Xianfang, Wang; Junmei, Wang; Xiaolei, Wang; Yue, Zhang

    2017-01-01

    The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To address the information redundancy that arises with current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure the significance level of each feature for the result, and attributes with smaller F values are filtered out in a rough selection. Secondly, the degree of redundancy is calculated by the Pearson correlation coefficient, and a threshold is set to filter out attributes with weak independence, giving the refined feature set. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show that the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. The prediction model will be accessible free of charge at our web server.
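    The correlation-filtering step above can be sketched in a few lines: compute pairwise Pearson coefficients and drop one attribute from each highly correlated pair (the threshold and the tiny feature table below are made up for illustration):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drop_redundant(features, threshold=0.9):
    """features: dict name -> list of values. Keep the first attribute of each
    pair whose |r| exceeds the threshold and drop the other."""
    kept = []
    for name in features:
        if all(abs(pearson(features[name], features[k])) <= threshold for k in kept):
            kept.append(name)
    return kept

feats = {"f1": [1, 2, 3, 4, 5],
         "f2": [2, 4, 6, 8, 10],   # perfectly correlated with f1, redundant
         "f3": [5, 3, 4, 1, 2]}
kept = drop_redundant(feats)
```

    In the AVC pipeline this filter runs after the F-value rough selection, so only significant and mutually independent features reach the SVM.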

  3. A Short-Term Photovoltaic Power Prediction Model Based on an FOS-ELM Algorithm

    Directory of Open Access Journals (Sweden)

    Jidong Wang

    2017-04-01

    With the increasing proportion of photovoltaic (PV) power in power systems, the problem of its fluctuation and intermittency has become more prominent. To reduce the negative influence of the use of PV power, we propose a short-term PV power prediction model based on the online sequential extreme learning machine with forgetting mechanism (FOS-ELM), which can constantly replace outdated data with new data. We use historical weather data and historical PV power data to predict the PV power in the next period of time. The simulation results show that this model has the advantages of a short training time and high accuracy. This model can help the power dispatch department schedule generation plans as well as support spatial and temporal compensation and coordinated power control, which is important for the security, stability and optimal operation of power systems.

  4. Nonlinear Model Predictive Control of a Cable-Robot-Based Motion Simulator

    DEFF Research Database (Denmark)

    Katliar, Mikhail; Fischer, Joerg; Frison, Gianluca

    2017-01-01

    In this paper we present the implementation of a model-predictive controller (MPC) for real-time control of a cable-robot-based motion simulator. The controller computes control inputs such that a desired acceleration and angular velocity at a defined point in the simulator's cabin are tracked while…
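    As a toy illustration of the finite-horizon optimization at the heart of MPC, the sketch below computes a minimum-norm input sequence that steers a discrete double integrator to a target state over the horizon. This is a generic linear-MPC building block under simplifying assumptions (no constraints, one open-loop solve), not the cable-robot controller from the paper:

```python
import numpy as np

dt, N = 0.1, 20
A = np.array([[1.0, dt], [0.0, 1.0]])   # double-integrator dynamics
B = np.array([[0.0], [dt]])

# x_N = A^N x0 + G u, with u = (u_0, ..., u_{N-1}) stacked column by column
G = np.hstack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])

x0 = np.array([0.0, 0.0])
x_ref = np.array([1.0, 0.0])            # reach position 1 with zero velocity

# Minimum-norm input sequence achieving the terminal state (pinv solve)
u = np.linalg.pinv(G) @ (x_ref - np.linalg.matrix_power(A, N) @ x0)

# Simulate the open-loop plan; a real MPC re-solves this at every step
x = x0.copy()
for k in range(N):
    x = A @ x + B[:, 0] * u[k]
```

    A receding-horizon controller would apply only u[0], measure the new state, and repeat the solve, which is what makes the scheme feedback control rather than open-loop planning.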

  5. Combined Prediction Model of Death Toll for Road Traffic Accidents Based on Independent and Dependent Variables

    Directory of Open Access Journals (Sweden)

    Feng Zhong-xiang

    2014-01-01

    In order to build a combined model which can capture the variation of death toll data for road traffic accidents, reflect the influence of multiple factors on traffic accidents and improve prediction accuracy, a Verhulst model was built based on the road traffic accident death toll in China from 2002 to 2011, and car ownership, population, GDP, highway freight volume, highway passenger transport volume and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a combined prediction model with weight coefficients. The Shapley value method was applied to calculate the weight coefficients by assessing the contribution of each model. Finally, the combined model was used to recalculate the death toll from 2002 to 2011, and was compared with the Verhulst and multivariate linear regression models. The results show that the new model not only characterizes the death toll data but also quantifies the degree of influence of each factor on the death toll, and has high accuracy as well as strong practicability.

  6. Sparse Power-Law Network Model for Reliable Statistical Predictions Based on Sampled Data

    Directory of Open Access Journals (Sweden)

    Alexander P. Kartun-Giles

    2018-04-01

    A projective network model is a model that enables predictions to be made based on a subsample of the network data, with the predictions remaining unchanged if a larger sample is taken into consideration. An exchangeable model is a model that does not depend on the order in which nodes are sampled. Despite a large variety of non-equilibrium (growing) and equilibrium (static) sparse complex network models that are widely used in network science, how to reconcile sparseness (constant average degree) with the desired statistical properties of projectivity and exchangeability is currently an outstanding scientific problem. Here we propose a network process with hidden variables which is projective and can generate sparse power-law networks. Despite the model not being exchangeable, it can be closely related to exchangeable uncorrelated networks, as indicated by its information-theoretic characterization and its network entropy. The use of the proposed network process as a null model is here tested on real data, indicating that the model offers a promising avenue for statistical network modelling.

  7. The Impact of Incorporating Chemistry to Numerical Weather Prediction Models: An Ensemble-Based Sensitivity Analysis

    Science.gov (United States)

    Barnard, P. A.; Arellano, A. F.

    2011-12-01

    Data assimilation has emerged as an integral part of numerical weather prediction (NWP). More recently, atmospheric chemistry processes have been incorporated into NWP models to provide forecasts and guidance on air quality. There is, however, a unique opportunity within this coupled system to investigate the additional benefit of constraining model dynamics and physics due to chemistry. Several studies have reported the strong interaction between chemistry and meteorology through radiation, transport, emission, and cloud processes. To examine its importance to NWP, we conduct an ensemble-based sensitivity analysis of meteorological fields to the chemical and aerosol fields within the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) and the Data Assimilation Research Testbed (DART) framework. In particular, we examine the sensitivity of the forecasts of surface temperature and related dynamical fields to the initial conditions of dust and aerosol concentrations in the model over the continental United States within the summer 2008 time period. We use an ensemble of meteorological and chemical/aerosol predictions within WRF-Chem/DART to calculate the sensitivities. This approach is similar to recent ensemble-based sensitivity studies in NWP. The use of an ensemble prediction is appealing because the analysis does not require the adjoint of the model, which to a certain extent becomes a limitation due to the rapidly evolving models and the increasing number of different observations. Here, we introduce this approach as applied to atmospheric chemistry. We also show our initial results of the calculated sensitivities from joint assimilation experiments using a combination of conventional meteorological observations from the National Centers for Environmental Prediction, retrievals of aerosol optical depth from NASA's Moderate Resolution Imaging Spectroradiometer, and retrievals of carbon monoxide from NASA's Measurements of Pollution in the Troposphere (MOPITT).
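    Ensemble-based sensitivity replaces an adjoint with linear regression across ensemble members: the sensitivity of a forecast metric J to an initial-condition variable x is estimated as cov(J, x)/var(x). A self-contained toy version (synthetic ensemble, not WRF-Chem/DART output):

```python
import random

random.seed(42)

def ensemble_sensitivity(xs, js):
    """Regression slope cov(J, x) / var(x) across ensemble members."""
    n = len(xs)
    mx, mj = sum(xs) / n, sum(js) / n
    cov = sum((x - mx) * (j - mj) for x, j in zip(xs, js)) / (n - 1)
    var = sum((x - mx) ** 2 for x in xs) / (n - 1)
    return cov / var

# Synthetic 50-member ensemble: J responds to x with slope 2 plus small noise
xs = [random.gauss(0.0, 1.0) for _ in range(50)]
js = [2.0 * x + random.gauss(0.0, 0.01) for x in xs]
slope = ensemble_sensitivity(xs, js)
```

    In the study above, x would be an initial dust or aerosol concentration at a grid point and J a forecast quantity such as surface temperature, with the regression repeated field-wide.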

  8. An IL28B genotype-based clinical prediction model for treatment of chronic hepatitis C.

    Directory of Open Access Journals (Sweden)

    Thomas R O'Brien

    Genetic variation in IL28B and other factors are associated with sustained virological response (SVR) after pegylated-interferon/ribavirin treatment for chronic hepatitis C (CHC). Using data from the HALT-C Trial, we developed a model to predict a patient's probability of SVR based on IL28B genotype and clinical variables. HALT-C enrolled patients with advanced CHC who had failed previous interferon-based treatment. Subjects were re-treated with pegylated-interferon/ribavirin during the trial lead-in. We used step-wise logistic regression to calculate adjusted odds ratios (aOR) and create the predictive model. Leave-one-out cross-validation was used to predict a priori probabilities of SVR and determine the area under the receiver operating characteristic curve (AUC). Among 646 HCV genotype 1-infected European American patients, 14.2% achieved SVR. The IL28B rs12979860-CC genotype was the strongest predictor of SVR (aOR 7.56). Patients whose predicted probability of SVR exceeded 10% (43.3% of subjects) had an SVR rate of 27.9% and accounted for 84.8% of subjects actually achieving SVR. To verify that consideration of both IL28B genotype and clinical variables is required for treatment decisions, we calculated AUC values from published data for the IDEAL Study. A clinical prediction model based on IL28B genotype and clinical variables can yield useful individualized predictions of the probability of treatment success that could increase SVR rates and decrease the frequency of futile treatment among patients with CHC.

  9. Prediction of recombinant protein overexpression in Escherichia coli using a machine learning based model (RPOLP).

    Science.gov (United States)

    Habibi, Narjeskhatoon; Norouzi, Alireza; Mohd Hashim, Siti Z; Shamsir, Mohd Shahir; Samian, Razip

    2015-11-01

    Recombinant protein overexpression, an important biotechnological process, is governed by complex biological rules that are mostly unknown, and is in need of an intelligent algorithm to avoid resource-intensive lab-based trial-and-error experiments for determining the expression level of a recombinant protein. The purpose of this study is to propose a predictive model to estimate the level of recombinant protein overexpression, for the first time in the literature, using a machine learning approach based on the sequence, the expression vector, and the expression host. The expression host was confined to Escherichia coli, the most popular bacterial host for overexpressing recombinant proteins. To make the problem tractable, the overexpression level was categorized as low, medium or high. A set of features likely to affect the overexpression level was generated based on known facts (e.g. gene length) and knowledge gathered from the related literature. A representative subset of these features was then determined using feature selection techniques. Finally, a predictive model was developed using a random forest classifier, which was able to adequately classify the multi-class, imbalanced, small dataset constructed. The results showed that the predictive model achieved a promising accuracy of 80% on average in estimating the overexpression level of a recombinant protein.

  10. Prediction of TF target sites based on atomistic models of protein-DNA complexes

    Directory of Open Access Journals (Sweden)

    Collado-Vides Julio

    2008-10-01

    Full Text Available Abstract Background The specific recognition of genomic cis-regulatory elements by transcription factors (TFs) plays an essential role in the regulation of coordinated gene expression. Studying the mechanisms determining binding specificity in protein-DNA interactions is thus an important goal. Most current approaches for modeling TF-specific recognition rely on the knowledge of large sets of cognate target sites and consider only the information contained in their primary sequence. Results Here we describe a structure-based methodology for predicting sequence motifs starting from the coordinates of a TF-DNA complex. Our algorithm combines information regarding the direct and indirect readout of DNA into an atomistic statistical model, which is used to estimate the interaction potential. We first measure the ability of our method to correctly estimate the binding specificities of eight prokaryotic and eukaryotic TFs that belong to different structural superfamilies. Secondly, the method is applied to two homology models, finding that sampling of interface side-chain rotamers markedly improves the results. Thirdly, the algorithm is compared with a reference structural method based on contact counts, obtaining comparable predictions for the experimental complexes and more accurate sequence motifs for the homology models. Conclusion Our results demonstrate that atomic-detail structural information can feasibly be used to predict TF binding sites. The computational method presented here is universal and might be applied to other systems involving protein-DNA recognition.

  11. Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model

    Science.gov (United States)

    Morin, Mario A.; Ficarazzo, Francesco

    2006-04-01

    Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications, including increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters, and as such has no complete theoretical solution for its prediction. However, empirical models for estimating the size distribution of rock fragments have been developed. In this study, a Monte Carlo-based blast fragmentation simulator, built on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of the explosives, and the drilling pattern. Results produced by this simulator compared quite favorably with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations and their costs, and in the overall economics of open pit mines and rock quarries.
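
    A minimal sketch of the Monte Carlo idea described above, using the standard Kuznetsov mean-size equation and Rosin-Rammler distribution from the Kuz-Ram model. The rock-factor distribution and all blast parameters below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def kuz_ram_x50(A, K, Q, rws=115.0):
        """Kuznetsov mean fragment size (cm). A = rock factor, K = powder factor
        (kg/m^3), Q = explosive mass per blasthole (kg), rws = relative weight
        strength of the explosive (ANFO = 100)."""
        return A * K ** -0.8 * Q ** (1 / 6) * (115.0 / rws) ** (19 / 20)

    def rosin_rammler_passing(x, x50, n):
        """Cumulative fraction of fragments finer than size x, uniformity index n."""
        return 1.0 - np.exp(-0.693 * (x / x50) ** n)

    # Monte Carlo: treat the rock factor as uncertain (illustrative distribution)
    # and propagate it through the Kuz-Ram equations.
    A = rng.normal(7.0, 1.0, 10_000).clip(min=1.0)
    x50 = kuz_ram_x50(A, K=0.6, Q=120.0, rws=115.0)
    frac_oversize = 1.0 - rosin_rammler_passing(100.0, x50, n=1.5)  # > 100 cm

    print(f"mean X50 = {x50.mean():.1f} cm, "
          f"P(oversize) = {frac_oversize.mean():.4f}")
    ```

    Repeating the draw for joint spacing, charge mass, etc. would give the full predicted size distribution with uncertainty bands rather than a single Kuz-Ram curve.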

  12. Real-time GPS Satellite Clock Error Prediction Based on a Non-stationary Time Series Model

    Science.gov (United States)

    Wang, Q.; Xu, G.; Wang, F.

    2009-04-01

    Analysis Centers of the IGS provide precise satellite ephemerides for GPS data post-processing. The accuracy of the orbit products is better than 5 cm, and that of the satellite clock errors (SCE) approaches 0.1 ns (igscb.jpl.nasa.gov), which meets the requirements of precise point positioning (PPP). Due to the 13-day latency of the IGS final products, only the broadcast ephemeris and the IGS ultra-rapid (predicted) products are applicable to real-time PPP (RT-PPP). Therefore, developing an approach to estimate high-precision GPS SCE in real time is of particular importance for RT-PPP. Many studies have been carried out on forecasting the corrections using models such as the Linear Model (LM), Quadratic Polynomial Model (QPM), Quadratic Polynomial Model with Cyclic corrected Terms (QPM+CT), Grey Model (GM) and Kalman Filter Model (KFM). However, the precisions of these models are generally at the nanosecond level. The purpose of this study is to develop a method with which SCE forecasting for RT-PPP can reach sub-nanosecond precision. Analysis of the last 8 years of IGS SCE data showed that the prediction precision depends on the stability of the individual satellite clock. The clocks of the most recent GPS satellites (BLOCK IIR and BLOCK IIR-M) are more stable than those of the earlier GPS satellites (BLOCK IIA). For a stable satellite clock, the next 6 hours of SCE can easily be predicted with the LM. The residuals of unstable satellite clocks are periodic with noise components. Dominant periods of the residuals are found using the Fourier transform and spectrum analysis. For the remaining part of the residuals, an auto-regression model is used to determine their systematic trends. Summarizing this study, a non-stationary time series model is proposed to predict GPS SCE in real time. This prediction model includes a linear term, cyclic corrected terms and an auto-regression term, which represent the SCE trend, the cyclic parts and the rest of the errors, respectively.
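
    A toy sketch of the proposed decomposition (linear term + cyclic terms + auto-regression on the residuals). The synthetic series, its amplitude, and the single dominant period used below are assumptions for illustration, not IGS clock data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic clock-error series (ns): trend + one dominant period + AR(1) noise.
    t = np.arange(0, 48, 0.25)          # 48 h sampled every 15 min
    period = 11.967                     # ~ one GPS orbital revolution (h), assumed
    eps = np.zeros_like(t)
    for k in range(1, len(t)):
        eps[k] = 0.8 * eps[k - 1] + rng.normal(0, 0.05)
    sce = 2.0 + 0.1 * t + 0.5 * np.sin(2 * np.pi * t / period) + eps

    # 1) Deterministic part: linear term + cyclic terms at the dominant period
    #    (in practice the period comes from Fourier/spectrum analysis).
    w = 2 * np.pi / period
    Phi = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(Phi, sce, rcond=None)
    resid = sce - Phi @ coef

    # 2) Stochastic part: AR(1) fitted to the residuals
    phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])

    # One-step-ahead prediction = deterministic part + AR correction
    t_next = t[-1] + 0.25
    det = np.array([1.0, t_next, np.sin(w * t_next), np.cos(w * t_next)]) @ coef
    pred = det + phi * resid[-1]
    print(round(pred, 3))
    ```

    The AR term is what pushes the forecast error below the purely deterministic (LM/QPM-style) fit when the clock residuals are strongly correlated.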

  13. DFT-based Green's function pathways model for prediction of bridge-mediated electronic coupling.

    Science.gov (United States)

    Berstis, Laura; Baldridge, Kim K

    2015-12-14

    A density functional theory-based Green's function pathway model is developed enabling further advancements towards the long-standing challenge of accurate yet inexpensive prediction of electron transfer rate. Electronic coupling predictions are demonstrated to within 0.1 eV of experiment for organic and biological systems of moderately large size, with modest computational expense. Benchmarking and comparisons are made across density functional type, basis set extent, and orbital localization scheme. The resulting framework is shown to be flexible and to offer quantitative prediction of both electronic coupling and tunneling pathways in covalently bound non-adiabatic donor-bridge-acceptor (D-B-A) systems. A new localized molecular orbital Green's function pathway method (LMO-GFM) adaptation enables intuitive understanding of electron tunneling in terms of through-bond and through-space interactions.

  14. Comparison of individual-based modeling and population approaches for prediction of foodborne pathogens growth.

    Science.gov (United States)

    Augustin, Jean-Christophe; Ferrier, Rachel; Hezard, Bernard; Lintz, Adrienne; Stahl, Valérie

    2015-02-01

    Individual-based modeling (IBM) combined with microenvironment modeling of vacuum-packed cold-smoked salmon was more effective at describing the variability of the growth of a few Listeria monocytogenes cells contaminating irradiated salmon slices than traditional population models. The IBM approach was particularly relevant for predicting the absence of growth in 25% (5 among 20) of artificially contaminated cold-smoked salmon samples stored at 8 °C. These results confirmed similar observations obtained with smear soft cheese (Ferrier et al., 2013). These two different food models were used to compare the IBM/microscale and population/macroscale modeling approaches in more global exposure and risk assessment frameworks taking into account the variability and/or the uncertainty of the factors influencing the growth of L. monocytogenes. We observed that traditional population models significantly overestimate exposure and risk estimates in comparison to the IBM approach when foods are contaminated with a low number of cells; the corresponding population-model estimates were also characterized by great uncertainty. The overestimation was mainly linked to the ability of IBM to predict no-growth situations rather than to the consideration of the microscale environment. On the other hand, when the aim of a quantitative risk assessment study is only to assess the relative impact of changes in control measures affecting the growth of foodborne bacteria, the two modeling approaches gave similar results and the simpler population approach was suitable. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Advanced Emergency Braking Control Based on a Nonlinear Model Predictive Algorithm for Intelligent Vehicles

    Directory of Open Access Journals (Sweden)

    Ronghui Zhang

    2017-05-01

    Full Text Available Focusing on safety and comfort, and aiming at the comprehensive improvement of vision-based intelligent vehicles, a novel Advanced Emergency Braking System (AEBS) is proposed based on a nonlinear model predictive algorithm. Considering the nonlinearities of vehicle dynamics, a vision-based longitudinal vehicle dynamics model is established. On account of the nonlinear coupling characteristics of the driver, the surroundings, and the vehicle itself, a hierarchical control structure is proposed to decouple and coordinate the system. To avoid or reduce the collision risk between the intelligent vehicle and collision objects, a coordinated cost function of tracking safety, comfort, and fuel economy is formulated. Based on the terminal constraints of stable tracking, a multi-objective optimization controller is proposed using the theory of nonlinear model predictive control. To quickly and precisely track the control target in finite time, an electronic brake controller for the AEBS is designed based on the Nonsingular Fast Terminal Sliding Mode (NFTSM) control theory. To validate the performance and advantages of the proposed algorithm, simulations are implemented. According to the simulation results, the proposed algorithm has better integrated performance in reducing the collision risk and improving the driving comfort and fuel economy of the smart car compared with the existing single AEBS.

  16. Bridge Deterioration Prediction Model Based On Hybrid Markov-System Dynamic

    Directory of Open Access Journals (Sweden)

    Widodo Soetjipto Jojok

    2017-01-01

    Full Text Available Instantaneous bridge failure tends to increase in Indonesia. To mitigate this condition, Indonesia's Bridge Management System (I-BMS) has been applied to continuously monitor the condition of bridges. However, I-BMS only implements visual inspection for maintenance prioritization of bridge structure components instead of the bridge structure system. This paper proposes a new bridge failure prediction model based on a hybrid Markov-System Dynamic (MSD) approach. System dynamics is used to represent the correlation among bridge structure components, while a Markov chain is used to calculate the temporal probability of bridge failure. Around 235 bridge records in Indonesia were collected from the Directorate of Bridges of the Ministry of Public Works and Housing for calculating the transition probabilities of the model. To validate the model, a medium-span concrete bridge was used as a case study. The result shows that the proposed model can accurately predict the bridge condition. Besides predicting the probability of bridge failure, this model can also be used as an early warning system for bridge monitoring activity.
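
    The Markov-chain half of the hybrid model can be sketched as repeated application of a transition matrix to a condition-state distribution. The four states and all probabilities below are hypothetical placeholders, not the transition probabilities estimated from the Indonesian bridge records.

    ```python
    import numpy as np

    # Hypothetical annual transition matrix over four condition states
    # (good, fair, poor, failed); the paper estimates such probabilities
    # from ~235 bridge inspection records.
    P = np.array([
        [0.90, 0.08, 0.02, 0.00],
        [0.00, 0.85, 0.12, 0.03],
        [0.00, 0.00, 0.80, 0.20],
        [0.00, 0.00, 0.00, 1.00],   # 'failed' is absorbing
    ])

    state = np.array([1.0, 0.0, 0.0, 0.0])   # a bridge starting in 'good' condition
    for year in (5, 10, 20):
        dist = state @ np.linalg.matrix_power(P, year)
        print(f"year {year}: P(failed) = {dist[-1]:.3f}")
    ```

    An early-warning rule then reduces to flagging the bridge once the predicted failure probability crosses a chosen threshold; the system-dynamics layer in the paper additionally couples the component-level chains together.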

  17. Prospective Fall-Risk Prediction Models for Older Adults Based on Wearable Sensors.

    Science.gov (United States)

    Howcroft, Jennifer; Kofman, Jonathan; Lemaire, Edward D

    2017-10-01

    Wearable sensors can provide quantitative, gait-based assessments that can translate to point-of-care environments. This investigation generated elderly fall-risk predictive models based on wearable-sensor-derived gait data and prospective fall occurrence, and identified the optimal sensor type, location, and combination for single-task and dual-task walking. 75 individuals who reported six-month prospective fall occurrence (75.2 ± 6.6 years; 47 non-fallers and 28 fallers) walked 7.62 m under single-task and dual-task conditions while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, and left and right shanks. Fall-risk classification models were assessed for all sensor combinations and three model types: neural network, naïve Bayesian, and support vector machine. The best performing model used a neural network, dual-task gait data, and input parameters from head, pelvis, and left shank accelerometers (accuracy = 57%, sensitivity = 43%, and specificity = 65%). The best single-sensor model used a neural network, dual-task gait data, and pelvis accelerometer parameters (accuracy = 54%, sensitivity = 35%, and specificity = 67%). Single-task and dual-task gait assessments provided similar fall-risk model performance. Fall-risk predictive models developed for point-of-care environments should use multi-sensor dual-task gait assessment, with the pelvis location preferred if assessment is limited to a single sensor.

  18. Hyperspectral-based predictive modelling of grapevine water status in the Portuguese Douro wine region

    Science.gov (United States)

    Pôças, Isabel; Gonçalves, João; Costa, Patrícia Malva; Gonçalves, Igor; Pereira, Luís S.; Cunha, Mario

    2017-06-01

    In this study, hyperspectral reflectance (HySR) data derived from a handheld spectroradiometer were used to assess the water status of three grapevine cultivars in two sub-regions of the Douro wine region during two consecutive years. A large set of potential predictors derived from the HySR data were considered for modelling/predicting the predawn leaf water potential (Ψpd) through different statistical and machine learning techniques. Three HySR vegetation indices were selected as final predictors for the computation of the models, and the in-season time trend was removed from the data by using a time predictor. The vegetation indices selected were the Normalized Reflectance Index for the wavelengths 554 nm and 561 nm (NRI554;561), the water index (WI) for the wavelengths 900 nm and 970 nm, and the D1 index, which is associated with the rate of reflectance increase at the wavelengths of 706 nm and 730 nm. These vegetation indices covered the green, red edge and near-infrared domains of the electromagnetic spectrum. A large set of state-of-the-art statistical and machine-learning modelling techniques was tested. Predictive modelling techniques based on the generalized boosted model (GBM), bagged multivariate adaptive regression splines (B-MARS), the generalized additive model (GAM), and Bayesian regularized neural networks (BRNN) showed the best performance for predicting Ψpd, with an average determination coefficient (R2) ranging between 0.78 and 0.80 and RMSE varying between 0.11 and 0.12 MPa. When cultivar Touriga Nacional was used for training the models and the cultivars Touriga Franca and Tinta Barroca for testing (independent validation), the models' performance was good, particularly for GBM (R2 = 0.85; RMSE = 0.09 MPa). Additionally, the comparison of observed and predicted Ψpd showed an equitable dispersion of data from the various cultivars. The results achieved show a good potential of these predictive models based on vegetation indices to support
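
    The GBM regression set-up above can be sketched as follows. The three index columns and the day-of-season predictor mirror the paper's inputs, but the synthetic data and the functional form generating Ψpd are invented for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_squared_error, r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)

    # Synthetic stand-in: three vegetation indices (NRI, WI, D1) plus a
    # day-of-season predictor, with psi_pd (MPa) as the target.
    n = 300
    nri = rng.uniform(-0.1, 0.1, n)
    wi = rng.uniform(0.8, 1.2, n)
    d1 = rng.uniform(0.0, 1.0, n)
    doy = rng.uniform(0, 120, n)
    psi = (-0.2 - 2.5 * nri + 0.8 * (wi - 1.0) - 0.3 * d1 - 0.004 * doy
           + rng.normal(0, 0.05, n))

    X = np.column_stack([nri, wi, d1, doy])
    X_tr, X_te, y_tr, y_te = train_test_split(X, psi, random_state=0)
    gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    pred = gbm.predict(X_te)
    print(f"R2 = {r2_score(y_te, pred):.2f}, "
          f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} MPa")
    ```

    The cultivar-transfer test in the paper corresponds to holding out all samples of one cultivar rather than a random split, which is the stricter check of generalization.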

  19. Can citizen-based observations be assimilated in hydrological models to improve flood prediction?

    Science.gov (United States)

    Mazzoleni, Maurizio; Alfonso, Leonardo; Solomatine, Dimitri P.

    2015-04-01

    In recent years, continued technological improvement has stimulated the spread of low-cost sensors that citizens can use to measure hydrological variables in a more spatially distributed way than classic static physical sensors. However, such measurements are characterized by irregular arrival times and variable uncertainty. This study presents a Kalman filter based method to integrate citizen-based observations into hydrological models in order to improve flood prediction. The methodology is applied in the Brue catchment, South West England. In order to estimate the response of the catchment to a given flood event, a lumped conceptual hydrological model is implemented. The measured precipitation values are used as a perfect forecast input to the hydrological model. Synthetic streamflow values are used in this study because citizen-based observations arriving at irregular time steps are not available. The results of this study show that increasing the number of uncertain citizen-based observations within two model time steps can improve the model accuracy, leading to a better flood forecast. Thus, observation uncertainty influences the model accuracy more than the irregular moments at which the streamflow observations are assimilated into the hydrological model. This study is part of the FP7 European Project WeSenseIt Citizen Water Observatory (http://wesenseit.eu/).
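
    The core mechanism, irregularly arriving observations each with their own error variance, can be sketched with a scalar Kalman filter. The random-walk state model and all the numbers below are illustrative assumptions, not the lumped conceptual model or Brue catchment data from the study.

    ```python
    import numpy as np

    def run_filter(n_steps, x0, P0, q, citizen_obs):
        """Scalar Kalman filter for a random-walk model state (e.g. streamflow).
        citizen_obs maps a model time step to (value, error variance): reports
        arrive at irregular steps and each carries its own uncertainty."""
        x, P = x0, P0
        trajectory = []
        for k in range(n_steps):
            P = P + q                            # predict: uncertainty grows each step
            if k in citizen_obs:                 # update only when a report arrives
                z, r = citizen_obs[k]
                K = P / (P + r)                  # gain: low-variance reports pull harder
                x, P = x + K * (z - x), (1 - K) * P
            trajectory.append(x)
        return np.array(trajectory)

    # Illustrative reports: (value, variance); step 4 is a precise observation.
    obs = {3: (12.0, 4.0), 4: (11.0, 0.5), 9: (15.0, 2.0)}
    traj = run_filter(12, x0=10.0, P0=5.0, q=0.2, citizen_obs=obs)
    print(np.round(traj, 2))
    ```

    Note how the gain weights each report by its stated variance: this is why, as the abstract concludes, observation uncertainty matters more than the irregularity of the arrival times.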

  20. Prediction Model of Machining Failure Trend Based on Large Data Analysis

    Science.gov (United States)

    Li, Jirong

    2017-12-01

    Mechanical machining is highly complex and strongly coupled, with many control factors in the machining process, and is therefore prone to failure. To improve the accuracy of fault detection for large mechanical equipment, research on fault trend prediction in machining is required; here, a machining fault trend prediction model is built from fault data. Genetic-algorithm-based K-means clustering is used to process the machining data, and correlation-dimension features reflecting faults are extracted. The spectrum of the abnormal vibration arising in the machining of complex mechanical parts is analyzed, and features are extracted from the results of multi-component spectral decomposition and Hilbert empirical mode decomposition. These decomposition results are used to establish the knowledge base of an intelligent expert system, which is combined with big data analysis methods to realize machining fault trend prediction. The simulation results show that this method predicts fault trends in mechanical machining with good accuracy and judges faults in the machining process correctly, so it has good application value for analysis and fault diagnosis in the machining process.

  1. Multivariate Radiological-Based Models for the Prediction of Future Knee Pain: Data from the OAI

    Directory of Open Access Journals (Sweden)

    Jorge I. Galván-Tejada

    2015-01-01

    Full Text Available In this work, the potential of X-ray based multivariate prognostic models to predict the onset of chronic knee pain is presented. Using quantitative X-ray image assessments of joint space width (JSW) and paired semiquantitative central X-ray scores from the Osteoarthritis Initiative (OAI), a case-control study is presented. The pain assessments of the right knee at the baseline and the 60-month visits were used to screen for case/control subjects. Scores were analyzed at the time of pain incidence (T-0), the year prior to incidence (T-1), and two years before pain incidence (T-2). Multivariate models were created by a cross-validated elastic-net regularized generalized linear model feature selection tool. Univariate differences between cases and controls were reported by AUC, C-statistics, and odds ratios. Univariate analysis indicated that medial osteophytes were significantly more prevalent in cases than controls: C-stat 0.62, 0.62, and 0.61 at T-0, T-1, and T-2, respectively. The multivariate JSW models significantly predicted pain: AUC = 0.695, 0.623, and 0.620 at T-0, T-1, and T-2, respectively. Semiquantitative multivariate models predicted pain with C-stat = 0.671, 0.648, and 0.645 at T-0, T-1, and T-2, respectively. Multivariate models derived from plain X-ray radiography assessments may be used to predict which subjects are at risk of developing knee pain.
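
    The elastic-net feature-selection step can be sketched with scikit-learn's elastic-net logistic regression. The synthetic predictors below are invented stand-ins for the OAI radiographic scores, and the regularization settings are assumptions, not the paper's tuned values.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)

    # Synthetic stand-in: a few informative predictors (think osteophyte grade,
    # JSW) buried among many irrelevant radiographic scores.
    n, p = 200, 20
    X = rng.normal(size=(n, p))
    logits = 1.2 * X[:, 0] - 0.9 * X[:, 1] + 0.7 * X[:, 2]
    y = (logits + rng.logistic(size=n) > 0).astype(int)

    # Elastic-net mixes L1 (drives noise coefficients to zero, i.e. feature
    # selection) with L2 (stabilizes correlated predictors).
    model = LogisticRegression(penalty="elasticnet", solver="saga",
                               l1_ratio=0.7, C=0.5, max_iter=5000)
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    selected = np.flatnonzero(model.fit(X, y).coef_[0])
    print(f"AUC = {auc:.2f}, predictors kept: {selected.tolist()}")
    ```

    Reporting the cross-validated AUC of the sparse model is the analogue of the T-0/T-1/T-2 AUC figures quoted in the abstract.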

  2. Coordinated Voltage Control of a Wind Farm based on Model Predictive Control

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Guo, Qinglai

    2016-01-01

    This paper presents an autonomous wind farm voltage controller based on Model Predictive Control (MPC). The reactive power compensation and voltage regulation devices of the wind farm include Static Var Compensators (SVCs), Static Var Generators (SVGs), Wind Turbine Generators (WTGs) and On...... are calculated based on an analytical method to improve the computation efficiency and overcome the convergence problem. Two control modes are designed for both voltage violated and normal operation conditions. A wind farm with 20 wind turbines was used to conduct case studies to verify the proposed coordinated...

  3. Markov chain-based promoter structure modeling for tissue-specific expression pattern prediction.

    Science.gov (United States)

    Vandenbon, Alexis; Miyamoto, Yuki; Takimoto, Noriko; Kusakabe, Takehiro; Nakai, Kenta

    2008-02-29

    Transcriptional regulation is the first level of regulation of gene expression and is therefore a major topic in computational biology. Genes with similar expression patterns can be assumed to be co-regulated at the transcriptional level by promoter sequences with a similar structure. Current approaches for modeling shared regulatory features tend to focus mainly on clustering of cis-regulatory sites. Here we introduce a Markov chain-based promoter structure model that uses both shared motifs and shared features from an input set of promoter sequences to predict candidate genes with similar expression. The model uses positional preference, order, and orientation of motifs. The trained model is used to score a genomic set of promoter sequences: high-scoring promoters are assumed to have a structure similar to the input sequences and are thus expected to drive similar expression patterns. We applied our model on two datasets in Caenorhabditis elegans and in Ciona intestinalis. Both computational and experimental verifications indicate that this model is capable of predicting candidate promoters driving expression patterns similar to those of the input regulatory sequences. This model can be useful for finding promising candidate genes for wet-lab experiments and for increasing our understanding of transcriptional regulation.
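
    The train-then-score idea can be illustrated with the simplest possible variant: a first-order Markov chain over nucleotides, trained on an input promoter set and used to rank other sequences by average log-likelihood. The toy sequences are invented, and the full model in the paper additionally encodes motif position, order, and orientation.

    ```python
    import math

    def train_markov(seqs, alphabet="ACGT", pseudo=1.0):
        """First-order Markov transition probabilities with pseudocounts."""
        counts = {a: {b: pseudo for b in alphabet} for a in alphabet}
        for s in seqs:
            for a, b in zip(s, s[1:]):
                counts[a][b] += 1
        probs = {}
        for a, row in counts.items():
            total = sum(row.values())
            probs[a] = {b: c / total for b, c in row.items()}
        return probs

    def log_score(seq, probs):
        """Average log-likelihood per transition; higher = more input-like."""
        lp = sum(math.log(probs[a][b]) for a, b in zip(seq, seq[1:]))
        return lp / (len(seq) - 1)

    # Toy input set sharing a TA-rich-then-GC structure (illustrative only)
    training = ["TATATAGCGC", "TATAATGCGC", "TATATTGCGC"]
    model = train_markov(training)
    print(log_score("TATATAGCGC", model) > log_score("GGGGCCCCGG", model))
    ```

    Scoring a whole genome's promoters with such a model and taking the top of the ranking is the screening step that yields candidates for wet-lab follow-up.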

  4. LMI-Based Generation of Feedback Laws for a Robust Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, Behcet; Carson, John M., III

    2007-01-01

    This technical note provides a mathematical proof of Corollary 1 from the paper 'A Nonlinear Model Predictive Control Algorithm with Proven Robustness and Resolvability' that appeared in the 2006 Proceedings of the American Control Conference. The proof was omitted for brevity in that publication. The paper was based on algorithms developed for the FY2005 R&TD (Research and Technology Development) project for Small-body Guidance, Navigation, and Control [2]. The framework established by the Corollary is for a robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. Additional details of the framework are available in the publication.

  5. A Bayesian network model for predicting type 2 diabetes risk based on electronic health records

    Science.gov (United States)

    Xie, Jiang; Liu, Yan; Zeng, Xu; Zhang, Wu; Mei, Zhen

    2017-07-01

    An extensive, in-depth study of diabetes risk factors (DBRF) is of crucial importance to prevent (or reduce) the chance of suffering from type 2 diabetes (T2D). The accumulation of electronic health records (EHRs) makes it possible to build nonlinear relationships between risk factors and diabetes. However, current DBRF research mainly focuses on qualitative analyses, and the inconsistency of physical examination items means that risk factors are likely to be lost, which drives us to study novel machine learning approaches for risk model development. In this paper, we use Bayesian networks (BNs) to analyze the relationship between physical examination information and T2D, and to quantify the link between risk factors and T2D. Furthermore, with the quantitative analyses of DBRF, we adopt EHRs and propose a machine learning approach based on BNs to predict the risk of T2D. The experiments demonstrate that our approach can lead to better predictive performance than the classical risk model.
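
    A minimal sketch of how a Bayesian network turns conditional probability tables into a risk prediction, here by exact enumeration on a deliberately tiny two-parent network. The structure and all probabilities are toy assumptions, not the CPTs learned from EHRs in the paper.

    ```python
    from itertools import product

    # Tiny hand-specified network: high_bmi -> t2d <- high_glucose (binary nodes).
    p_bmi = {True: 0.3, False: 0.7}     # prior P(high BMI)
    p_glu = {True: 0.2, False: 0.8}     # prior P(high fasting glucose)
    p_t2d = {                           # P(t2d=True | bmi, glucose)
        (True, True): 0.50, (True, False): 0.15,
        (False, True): 0.25, (False, False): 0.04,
    }

    def posterior_t2d(evidence):
        """P(t2d=True | evidence) by enumerating over the unobserved parents."""
        num = den = 0.0
        for bmi, glu in product([True, False], repeat=2):
            # Skip parent assignments inconsistent with the observed evidence
            if evidence.get("bmi", bmi) != bmi or evidence.get("glucose", glu) != glu:
                continue
            w = p_bmi[bmi] * p_glu[glu]
            num += w * p_t2d[(bmi, glu)]
            den += w
        return num / den

    print(round(posterior_t2d({}), 3))                           # population risk
    print(round(posterior_t2d({"bmi": True}), 3))                # high BMI only
    print(round(posterior_t2d({"bmi": True, "glucose": True}), 3))
    ```

    The point of the network form is that missing examination items (the "inconsistency" problem in the abstract) are simply summed out, as the unobserved parents are here, instead of forcing the record to be dropped.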

  6. Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.

    Science.gov (United States)

    Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei

    2016-02-01

    A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.

  7. Chemical structure-based predictive model for the oxidation of trace organic contaminants by sulfate radical.

    Science.gov (United States)

    Ye, Tiantian; Wei, Zongsu; Spinney, Richard; Tang, Chong-Jian; Luo, Shuang; Xiao, Ruiyang; Dionysiou, Dionysios D

    2017-06-01

    Second-order rate constants k(SO4•−) for the reaction of the sulfate radical anion (SO4•−) with trace organic contaminants (TrOCs) are of scientific and practical importance for assessing their environmental fate and removal efficiency in water treatment systems. Here, we developed a chemical structure-based model for predicting k(SO4•−) using 32 molecular fragment descriptors, as this type of model provides a quick estimate at low computational cost. The model was constructed using multiple linear regression (MLR) and artificial neural network (ANN) methods. The MLR method yielded an adequate fit for the training set (R² = 0.88, n = 75) and reasonable predictability for the validation set (R² = 0.62, n = 38). In contrast, the ANN method produced greater statistical robustness but rather poor predictability (R² = 0.99 for training and R² = 0.42 for validation). The reaction mechanisms of SO4•− reactivity with TrOCs were elucidated. Our results show that the coefficients of the functional groups reflect their electron donating/withdrawing characters: electron-donating groups typically exhibit positive coefficients, indicating enhanced SO4•− reactivity, while electron-withdrawing groups exhibit negative values, indicating reduced reactivity. With its quick and accurate features, we applied this structure-based model to 55 discrete TrOCs culled from the Contaminant Candidate List 4, and quantitatively compared their removal efficiency with SO4•− and •OH in the presence of environmental matrices. This high-throughput model helps prioritize TrOCs that are persistent to SO4•−-based oxidation technologies at the screening level, and provides diagnostics of SO4•− reaction mechanisms. Copyright © 2017 Elsevier Ltd. All rights reserved.
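
    The MLR half of the model boils down to regressing log k on fragment counts. The fragment set, coefficients, and data below are synthetic illustrations of the idea (positive coefficients for electron-donating groups, negative for electron-withdrawing ones), not the paper's 32 fitted descriptors.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic stand-in: counts of molecular fragments per compound
    # (e.g. -OH, -NH2, -NO2, aromatic C, ...) and log k(SO4.-) values.
    n_compounds, n_fragments = 75, 6
    X = rng.integers(0, 4, size=(n_compounds, n_fragments)).astype(float)
    true_coef = np.array([0.30, 0.22, -0.45, 0.15, -0.10, 0.05])  # EDGs +, EWGs -
    y = 9.0 + X @ true_coef + rng.normal(0, 0.1, n_compounds)     # log10 k

    # Multiple linear regression: log k = intercept + sum(coef_i * count_i)
    A = np.column_stack([np.ones(n_compounds), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    ss_res = np.sum((y - A @ coef) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    print(f"R^2 = {1 - ss_res / ss_tot:.2f}")
    ```

    The fitted coefficient signs are the mechanistic readout emphasized in the abstract: a negative coefficient on a fragment flags it as rate-suppressing (electron-withdrawing) for SO4•− attack.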

  8. Short-Term Bus Passenger Demand Prediction Based on Time Series Model and Interactive Multiple Model Approach

    Directory of Open Access Journals (Sweden)

    Rui Xue

    2015-01-01

    Full Text Available Although bus passenger demand prediction has attracted increased attention during recent years, limited research has been conducted in the context of short-term passenger demand forecasting. This paper proposes an interactive multiple model (IMM) filter algorithm-based model to predict short-term passenger demand. After aggregation into 15-min intervals, passenger demand data collected from a busy bus route over four months were used to generate time series. Considering that passenger demand exhibits various characteristics at different time scales, three time series were developed, namely weekly, daily, and 15-min time series. After correlation, periodicity, and stationarity analyses, time series models were constructed. In particular, the heteroscedasticity of the time series was explored to achieve better prediction performance. Finally, the IMM filter algorithm was applied to combine the individual forecasting models and dynamically predict passenger demand for the next interval. Different error indices were adopted for the analyses of the individual and hybrid models. The performance comparison indicates that the hybrid model's forecasts are superior in accuracy to the individual ones. The findings of this study are of theoretical and practical significance for bus scheduling.

  9. Research of Coal Resources Reserves Prediction Based on GM (1, 1) Model

    Science.gov (United States)

    Xiao, Jiancheng

    2018-01-01

    Based on a forecast of China's coal reserves, this paper uses GM (1, 1) grey forecasting theory to establish a grey forecasting model of China's coal reserves from reserve data for 2002 to 2009, and obtains the trend of coal resource reserves under the current economic and social development situation. A residual test model is also established, making the prediction model more accurate. The results show that China's coal reserves can sustain production for at least 300 years. These results are similar to mainstream forecasts and are in line with objective reality.
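
    The standard GM (1, 1) procedure, accumulate the series, fit the grey differential equation by least squares, then invert, can be sketched as below. The input series is an invented example, not the 2002-2009 reserve data used in the paper.

    ```python
    import numpy as np

    def gm11_forecast(x0, horizon):
        """Fit a GM(1,1) grey model to series x0 and forecast `horizon` steps."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                  # accumulated generating operation (AGO)
        z1 = 0.5 * (x1[1:] + x1[:-1])       # background values (adjacent AGO means)
        # Grey equation x0[k] + a*z1[k] = b, solved by least squares for (a, b)
        B = np.column_stack([-z1, np.ones(len(z1))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        n = len(x0)
        k = np.arange(n + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
        x0_hat = np.diff(x1_hat, prepend=0.0)               # inverse AGO
        return x0_hat[n:]

    # Illustrative near-exponential series; GM(1,1) suits such smooth trends
    series = [100.0, 110.0, 121.0, 133.1, 146.41]
    print(np.round(gm11_forecast(series, 3), 1))
    ```

    The residual test mentioned in the abstract then fits a second GM (1, 1) model to the residuals of this fit and adds it back as a correction.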

  10. Enabling Persistent Autonomy for Underwater Gliders with Ocean Model Predictions and Terrain Based Navigation

    Directory of Open Access Journals (Sweden)

    Andrew eStuntz

    2016-04-01

    Full Text Available Effective study of ocean processes requires sampling over the duration of long (weeks to months) oscillation patterns. Such sampling requires persistent, autonomous underwater vehicles that have a similarly long deployment duration. The spatiotemporal dynamics of the ocean environment, coupled with limited communication capabilities, make navigation and localization difficult, especially in coastal regions where the majority of interesting phenomena occur. In this paper, we consider the combination of two methods for reducing navigation and localization error: a predictive approach based on ocean model predictions and a prior-information approach derived from terrain-based navigation. The motivation for this work is not only real-time state estimation, but also accurate reconstruction of the actual path that the vehicle traversed to contextualize the gathered data with respect to the science question at hand. We present an application for the practical use of priors and predictions for large-scale ocean sampling. This combined approach builds upon previous works by the authors, and accurately localizes the traversed path of an underwater glider over long-duration ocean deployments. The proposed method takes advantage of the reliable, short-term predictions of an ocean model, and of the utility of priors used in terrain-based navigation over areas of significant bathymetric relief, to bound uncertainty error in dead-reckoning navigation. This method improves upon our previously published works by (1) demonstrating the utility of our terrain-based navigation method with multiple field trials, and (2) presenting a hybrid algorithm that combines both approaches to bound navigational error and uncertainty for long-term deployments of underwater vehicles.
We demonstrate the approach by examining data from actual field trials with autonomous underwater gliders, and demonstrate an ability to estimate geographical location of an underwater glider to 2
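The terrain-aided correction of a dead-reckoned estimate can be illustrated with a one-dimensional toy sketch: snap the drifting dead-reckoned position to the bathymetry cell that best balances staying near the estimate against matching a measured depth. All values, names, and the grid-search formulation below are hypothetical, not the authors' algorithm.

```python
import numpy as np

def terrain_correct(dr_pos, measured_depth, bathymetry, cell_size,
                    search_radius, sigma_pos, sigma_depth):
    """Snap a dead-reckoned 1-D position to the bathymetry cell that best
    trades off (a) staying near the dead-reckoned estimate and (b) matching
    the measured depth -- a toy version of terrain-based localization."""
    n = len(bathymetry)
    i0 = int(round(dr_pos / cell_size))
    best_i, best_cost = i0, float("inf")
    for i in range(max(0, i0 - search_radius), min(n, i0 + search_radius + 1)):
        pos_cost = ((i - i0) * cell_size / sigma_pos) ** 2
        depth_cost = ((bathymetry[i] - measured_depth) / sigma_depth) ** 2
        cost = pos_cost + depth_cost
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i * cell_size

# Toy seafloor with distinctive relief and a glider whose dead reckoning drifts.
bathymetry = np.array([100, 102, 110, 130, 160, 155, 140, 120, 105, 100], float)
true_pos, dr_pos = 400.0, 460.0             # metres; dead reckoning has drifted
measured = bathymetry[int(true_pos / 100)]  # depth sounding at the true cell
corrected = terrain_correct(dr_pos, measured, bathymetry, 100.0, 3, 150.0, 5.0)
```

Over flat terrain the depth term is uninformative and the correction collapses to dead reckoning, which is why the abstract stresses areas of significant bathymetric relief.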

  11. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    Science.gov (United States)

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient’s pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (“SBM”) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology or “QCP”), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient’s physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient’s condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz), with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree while the
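The kernel-based similarity estimate at the heart of such a method can be sketched generically: blend normal-state exemplar vectors with kernel weights and flag large residuals. The Gaussian kernel, variable names, and vital-sign values below are illustrative assumptions, not the commercial SBM implementation.

```python
import numpy as np

def sbm_estimate(x, exemplars, bandwidth=1.0):
    """Similarity-based estimate: a Gaussian-kernel-weighted blend of
    normal-state exemplar vectors (a generic sketch of the idea)."""
    d = np.linalg.norm(exemplars - x, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)
    w /= w.sum()
    return w @ exemplars

rng = np.random.default_rng(0)
normal = rng.normal([80.0, 120.0], 1.0, size=(50, 2))  # HR, systolic BP exemplars
healthy = np.array([80.5, 119.5])
deteriorating = np.array([95.0, 90.0])                  # tamponade-like drift
r_ok = np.linalg.norm(healthy - sbm_estimate(healthy, normal, 3.0))
r_bad = np.linalg.norm(deteriorating - sbm_estimate(deteriorating, normal, 3.0))
```

Because the estimate is constrained to the manifold of normal-state exemplars, the residual `r_bad` for the deteriorating reading is far larger than `r_ok`, which is the signal a first-tier detector would threshold.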

  12. Discrete Model Predictive Control-Based Maximum Power Point Tracking for PV Systems: Overview and Evaluation

    DEFF Research Database (Denmark)

    Lashab, Abderezak; Sera, Dezso; Guerrero, Josep M.

    2018-01-01

    The main objective of this work is to provide an overview and evaluation of discrete model predictive control (MPC)-based maximum power point tracking (MPPT) for PV systems. A large number of MPC-based MPPT methods have been recently introduced in the literature with very promising performance, however......, an in-depth investigation and comparison of these methods have not been carried out yet. Therefore, this paper has set out to provide an in-depth analysis and evaluation of MPC-based MPPT methods applied to various common power converter topologies. The performance of MPC-based MPPT is directly linked...... with the converter topology, and it is also affected by the accurate determination of the converter parameters; the sensitivity to converter parameter variations is therefore also investigated. The static and dynamic performance of the trackers are assessed according to the EN 50530 standard, using detailed simulation models...
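The finite-control-set idea behind MPC-based MPPT, predicting one sample ahead for each admissible switch state and applying the state with the best predicted PV power, can be sketched with a toy PV curve and a single-capacitor converter model. The curve shape, parameters, and discrete dynamics below are illustrative assumptions, not any specific topology from the paper.

```python
def pv_current(v, i_sc=8.0, v_oc=40.0):
    """Toy PV I-V curve (illustrative; not a calibrated module model)."""
    return max(0.0, i_sc * (1.0 - (v / v_oc) ** 7))

def mpc_mppt_step(v, i_load, C=1e-3, Ts=1e-3):
    """Finite-control-set MPC step: predict the PV-side capacitor voltage one
    sample ahead for each switch state and keep the state whose predicted
    operating point yields the highest PV power."""
    best_v, best_p = v, -1.0
    for s in (0, 1):                 # s=1: switch on, load decoupled from PV
        i_out = 0.0 if s == 1 else i_load
        v_next = v + Ts / C * (pv_current(v) - i_out)
        p_next = v_next * pv_current(v_next)
        if p_next > best_p:
            best_v, best_p = v_next, p_next
    return best_v

v, i_load = 20.0, 6.0                # start well below the maximum power point
for _ in range(30):                  # plant assumed identical to the model here
    v = mpc_mppt_step(v, i_load)
power = v * pv_current(v)            # MPP of this toy curve is ~208 W near 29.7 V
```

The paper's point about parameter sensitivity shows up directly here: if the real capacitance differs from the `C` used in the prediction, the one-step-ahead voltages, and hence the chosen switch state, degrade.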

  13. Alignment and prediction of cis-regulatory modules based on a probabilistic model of evolution.

    Directory of Open Access Journals (Sweden)

    Xin He

    2009-03-01

    Full Text Available Cross-species comparison has emerged as a powerful paradigm for predicting cis-regulatory modules (CRMs) and understanding their evolution. The comparison requires reliable sequence alignment, which remains a challenging task for less conserved noncoding sequences. Furthermore, the existing models of DNA sequence evolution generally do not explicitly treat the special properties of CRM sequences. To address these limitations, we propose a model of CRM evolution that captures different modes of evolution of functional transcription factor binding sites (TFBSs) and the background sequences. A particularly novel aspect of our work is a probabilistic model of gains and losses of TFBSs, a process being recognized as an important part of regulatory sequence evolution. We present a computational framework that uses this model to solve the problems of CRM alignment and prediction. Our alignment method is similar to existing methods of statistical alignment but uses the conserved binding sites to improve alignment. Our CRM prediction method deals with the inherent uncertainties of binding site annotations and sequence alignment in a probabilistic framework. In simulated as well as real data, we demonstrate that our program is able to improve both alignment and prediction of CRM sequences over several state-of-the-art methods. Finally, we used alignments produced by our program to study binding site conservation in genome-wide binding data of key transcription factors in the Drosophila blastoderm, with two intriguing results: (i) the factor-bound sequences are under strong evolutionary constraints even if their neighboring genes are not expressed in the blastoderm and (ii) binding sites in distal bound sequences (relative to transcription start sites) tend to be more conserved than those in proximal regions. Our approach is implemented as software, EMMA (Evolutionary Model-based cis-regulatory Module Analysis), ready to be applied in a broad biological context.

  14. Alignment and prediction of cis-regulatory modules based on a probabilistic model of evolution.

    Science.gov (United States)

    He, Xin; Ling, Xu; Sinha, Saurabh

    2009-03-01

    Cross-species comparison has emerged as a powerful paradigm for predicting cis-regulatory modules (CRMs) and understanding their evolution. The comparison requires reliable sequence alignment, which remains a challenging task for less conserved noncoding sequences. Furthermore, the existing models of DNA sequence evolution generally do not explicitly treat the special properties of CRM sequences. To address these limitations, we propose a model of CRM evolution that captures different modes of evolution of functional transcription factor binding sites (TFBSs) and the background sequences. A particularly novel aspect of our work is a probabilistic model of gains and losses of TFBSs, a process being recognized as an important part of regulatory sequence evolution. We present a computational framework that uses this model to solve the problems of CRM alignment and prediction. Our alignment method is similar to existing methods of statistical alignment but uses the conserved binding sites to improve alignment. Our CRM prediction method deals with the inherent uncertainties of binding site annotations and sequence alignment in a probabilistic framework. In simulated as well as real data, we demonstrate that our program is able to improve both alignment and prediction of CRM sequences over several state-of-the-art methods. Finally, we used alignments produced by our program to study binding site conservation in genome-wide binding data of key transcription factors in the Drosophila blastoderm, with two intriguing results: (i) the factor-bound sequences are under strong evolutionary constraints even if their neighboring genes are not expressed in the blastoderm and (ii) binding sites in distal bound sequences (relative to transcription start sites) tend to be more conserved than those in proximal regions. Our approach is implemented as software, EMMA (Evolutionary Model-based cis-regulatory Module Analysis), ready to be applied in a broad biological context.

  15. An ANN-GA model based promoter prediction in Arabidopsis thaliana using tilling microarray data.

    Science.gov (United States)

    Mishra, Hrishikesh; Singh, Nitya; Misra, Krishna; Lahiri, Tapobrata

    2011-01-01

    Identification of the promoter region is an important part of gene annotation. Identification of promoters in eukaryotes is important as promoters modulate various metabolic functions and cellular stress responses. In this work, a novel approach utilizing intensity values of tilling microarray data for a model eukaryotic plant, Arabidopsis thaliana, was used to distinguish promoter regions from non-promoter regions. A feed-forward back-propagation neural network model supported by a genetic algorithm was employed to predict the class of data with a window size of 41. A dataset comprising 2992 data vectors representing both promoter and non-promoter regions, chosen randomly from probe intensity vectors for the whole genome of Arabidopsis thaliana generated through the tilling microarray technique, was used. The classifier model shows prediction accuracies of 69.73% and 65.36% on the training and validation sets, respectively. Further, a concept of distance-based class membership was used to validate the reliability of the classifier, which showed promising results. The study shows the usability of microarray probe intensities to predict promoter regions in eukaryotic genomes.

  16. Statistics-based model for prediction of chemical biosynthesis yield from Saccharomyces cerevisiae

    Directory of Open Access Journals (Sweden)

    Leonard Effendi

    2011-06-01

    Full Text Available Abstract Background The robustness of Saccharomyces cerevisiae in facilitating industrial-scale production of ethanol extends its utilization as a platform to synthesize other metabolites. Metabolic engineering strategies, typically via pathway overexpression and deletion, continue to play a key role in optimizing the conversion efficiency of substrates into the desired products. However, chemical production titer or yield remains difficult to predict based on reaction stoichiometry and mass balance. We sampled a large space of data of chemical production from S. cerevisiae, and developed a statistics-based model to calculate production yield using input variables that represent the number of enzymatic steps in the key biosynthetic pathway of interest, metabolic modifications, cultivation modes, nutrition and oxygen availability. Results Based on the production data of about 40 chemicals produced from S. cerevisiae, metabolic engineering methods, nutrient supplementation, and fermentation conditions described therein, we generated mathematical models with numerical and categorical variables to predict production yield. Statistically, the models showed that: 1. Chemical production from central metabolic precursors decreased exponentially with increasing number of enzymatic steps for biosynthesis (>30% loss of yield per enzymatic step, P-value = 0); 2. Categorical variables of gene overexpression and knockout improved product yield by 2-4 fold (P-value Saccharomyces cerevisiae has historically evolved for robust alcohol fermentation. Conclusions We generated simple mathematical models for first-order approximation of chemical production yield from S. cerevisiae. These linear models provide empirical insights into the effects of strain engineering and cultivation conditions on biosynthetic efficiency. These models may not only provide guidelines for metabolic engineers to synthesize desired products, but also be useful to compare the
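The paper's first-order result, yield falling roughly exponentially with the number of enzymatic steps, corresponds to a linear fit on log-yield. The sketch below fits such a model to hypothetical, noise-free data (the dataset and the 35% per-step loss are fabricated for illustration; only the model form follows the abstract).

```python
import numpy as np

# Hypothetical data in the spirit of the paper's model: product yield falls
# roughly exponentially with the number of enzymatic steps from a central
# metabolic precursor (the paper reports >30% loss of yield per step).
steps = np.array([1, 2, 3, 4, 5, 6, 8, 10], float)
yields = 0.5 * 0.65 ** steps           # synthetic, noise-free for clarity

# First-order model: log(yield) = a + b * steps  =>  per-step retention e^b.
b, a = np.polyfit(steps, np.log(yields), 1)
loss_per_step = 1.0 - np.exp(b)        # fraction of yield lost per added step
```

Categorical factors such as overexpression or knockout would enter the same regression as indicator variables shifting the intercept `a`.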

  17. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation......, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar and it is thus difficult to decide for one model. Here, we investigate the variability of the prediction models that results when the same...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  18. Analysis of direct contact membrane distillation based on a lumped-parameter dynamic predictive model

    KAUST Repository

    Karam, Ayman M.

    2016-10-03

    Membrane distillation (MD) is an emerging technology that has great potential for sustainable water desalination. In order to pave the way for successful commercialization of MD-based water desalination techniques, adequate and accurate dynamical models of the process are essential. This paper presents the predictive capabilities of a lumped-parameter dynamic model for direct contact membrane distillation (DCMD) and discusses the results under a wide range of steady-state and dynamic conditions. Unlike previous studies, the proposed model captures the time response of the spatial temperature distribution along the flow direction. It also directly solves for the local temperatures at the membrane interfaces, which allows accurate modeling and calculation of local flux values along with other intrinsic variables of great influence on the process, such as the temperature polarization coefficient (TPC). The proposed model is based on energy and mass conservation principles and the analogy between thermal and electrical systems. Experimental data were collected to validate the steady-state and dynamic responses of the model. The obtained results show close agreement with the experimental data. The paper discusses the results of several simulations under various conditions to optimize the DCMD process efficiency and analyze its response. This demonstrates some potential applications of the proposed model to carry out scale-up and design studies. © 2016

  19. Developing An Explanatory Prediction Model Based On Rainfall Patterns For Cholera Outbreaks In Africa

    Science.gov (United States)

    van der Merwe, M. R.; Du Preez, M.

    2012-12-01

    area) and Uganda (inland area) which is not only based on correlation study results but also on the identification of cause-effect mechanisms. This is done by following an integrative multidisciplinary approach which involves the integration of laboratory and field study results, in situ and satellite data, and modeled data. We conclude that a prediction model for early warning and intervention purposes needs to be based on the identification and understanding of cause-effect mechanisms associated with the correlation between cholera outbreaks and rainfall; be parametrized for local conditions; and be based on a driver(s) or proxy for a driver(s) which allows sufficient time for decision makers to act.

  20. Microclimate Data Improve Predictions of Insect Abundance Models Based on Calibrated Spatiotemporal Temperatures

    Science.gov (United States)

    Rebaudo, François; Faye, Emile; Dangles, Olivier

    2016-01-01

    A large body of literature has recently recognized the role of microclimates in controlling the physiology and ecology of species, yet the relevance of fine-scale climatic data for modeling species performance and distribution remains a matter of debate. Using a 6-year monitoring of three potato moth species, major crop pests in the tropical Andes, we asked whether the spatiotemporal resolution of temperature data affects the predictions of models of moth performance and distribution. For this, we used three different climatic data sets: (i) the WorldClim dataset (global dataset), (ii) air temperature recorded using data loggers (weather station dataset), and (iii) air crop canopy temperature (microclimate dataset). We developed a statistical procedure to calibrate all datasets to monthly and yearly variation in temperatures, while keeping both spatial and temporal variances (air monthly temperature at 1 km² for the WorldClim dataset, air hourly temperature for the weather station, and air minute temperature over 250 m radius disks for the microclimate dataset). Then, we computed pest performances based on these three datasets. Results for temperature ranging from 9 to 11°C revealed discrepancies in the simulation outputs in both survival and development rates depending on the spatiotemporal resolution of the temperature dataset. Temperature and simulated pest performances were then combined into multiple linear regression models to compare predicted vs. field data. We used an additional set of study sites to test the ability of the results of our model to be extrapolated over larger scales. Results showed that the model implemented with microclimatic data best predicted observed pest abundances for our study sites, but was less accurate than the global dataset model when performed at larger scales. Our simulations therefore stress the importance of considering different temperature datasets depending on the issue to be solved in order to accurately predict species

  1. Microclimate data improve predictions of insect abundance models based on calibrated spatiotemporal temperatures

    Directory of Open Access Journals (Sweden)

    François eRebaudo

    2016-04-01

    Full Text Available A large body of literature has recently recognized the role of microclimates in controlling the physiology and ecology of species, yet the relevance of fine-scale climatic data for modeling species performance and distribution remains a matter of debate. Using a 6-year monitoring of three potato moth species, major crop pests in the tropical Andes, we asked whether the spatiotemporal resolution of temperature data affects the predictions of models of moth performance and distribution. For this, we used three different climatic data sets: (i) the WorldClim dataset (global dataset), (ii) air temperature recorded using data loggers (weather station dataset), and (iii) air crop canopy temperature (microclimate dataset). We developed a statistical procedure to calibrate all datasets to monthly and yearly variation in temperatures, while keeping both spatial and temporal variances (air monthly temperature at 1 km² for the WorldClim dataset, air hourly temperature for the weather station, and air minute temperature over 250 m radius disks for the microclimate dataset). Then, we computed pest performances based on these three datasets. Results for temperature ranging from 9 to 11°C revealed discrepancies in the simulation outputs in both survival and development rates depending on the spatiotemporal resolution of the temperature dataset. Temperature and simulated pest performances were then combined into multiple linear regression models to compare predicted vs. field data. We used an additional set of study sites to test the ability of the results of our model to be extrapolated over larger scales. Results showed that the model implemented with microclimatic data best predicted observed pest abundances for our study sites, but was less accurate than the global dataset model when performed at larger scales. Our simulations therefore stress the importance of considering different temperature datasets depending on the issue to be solved in order to accurately predict

  3. Prediction of power ramp defects - development of a physically based model and evaluation of existing criteria

    International Nuclear Information System (INIS)

    Notley, M.J.F.; Kohn, E.

    2001-01-01

    Power-ramp induced fuel failure is not a problem in the present CANDU reactors. The current empirical correlations that define the probability of failure do not agree with one another and do not allow extrapolation outside the database. A new methodology, based on physical processes, is presented and compared to data. The methodology calculates the pre-ramp sheath stress and the incremental stress during the ramp, and the occurrence of a defect is predicted from a failure threshold stress. The proposed model confirms the deductions made by daSilva from an empirical 'fit' to data from the 1988 PNGS power ramp failure incident. It is recommended that daSilva's correlation be used as the reference for OPG (Ontario Power Generation) power reactor fuel, and that extrapolation be performed using the new model. (author)

  4. The k-nearest neighbour-based GMDH prediction model and its applications

    Science.gov (United States)

    Li, Qiumin; Tian, Yixiang; Zhang, Gaoxun

    2014-11-01

    This paper centres on a new GMDH (group method of data handling) algorithm based on the k-nearest neighbour (k-NN) method. Instead of the transfer function used in traditional GMDH, the k-NN kernel function is adopted in the proposed GMDH to characterise relationships between the input and output variables. The proposed method combines the advantages of the k-NN and GMDH algorithms, and thus improves the predictive capability of the GMDH algorithm. It has been proved that when the bandwidth of the kernel is less than a certain constant C, the predictive capability of the new model is superior to that of the traditional one. As an illustration, it is shown that the new method can accurately forecast the consumer price index (CPI).
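The building block swapped into GMDH here, a k-NN kernel regression, can be sketched in isolation: keep only the k nearest training points and average their responses with Gaussian kernel weights. This single-estimator sketch (one input, Gaussian kernel, toy data) is an illustrative assumption; the paper's method stacks such units through GMDH's layered selection.

```python
import numpy as np

def knn_kernel_predict(x0, X, y, k=3, bandwidth=1.0):
    """k-NN kernel regression: average the k nearest responses with Gaussian
    kernel weights -- the kind of transfer function the paper substitutes
    for GMDH's usual polynomial (simplified, single-unit sketch)."""
    d = np.abs(X - x0)
    idx = np.argsort(d)[:k]              # indices of the k nearest neighbours
    w = np.exp(-(d[idx] / bandwidth) ** 2)
    return float(np.dot(w, y[idx]) / w.sum())

X = np.linspace(0.0, 10.0, 21)           # toy 1-D training data
y = np.sin(X)
pred = knn_kernel_predict(2.0, X, y, k=3, bandwidth=0.5)
```

The bandwidth plays the role of the constant C discussed in the abstract: a narrower kernel weights the closest neighbour more heavily and tracks local structure more tightly.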

  5. Hierarchical model-based predictive control of a power plant portfolio

    DEFF Research Database (Denmark)

    Edlund, Kristian; Bendtsen, Jan Dimon; Jørgensen, John Bagterp

    2011-01-01

    One of the main difficulties in large-scale implementation of renewable energy in existing power systems is that the production from renewable sources is difficult to predict and control. For this reason, fast and efficient control of controllable power producing units – so-called “portfolio control” – becomes increasingly important as the ratio of renewable energy in a power system grows. As a consequence, tomorrow's “smart grids” require highly flexible and scalable control systems compared to conventional power systems. This paper proposes a hierarchical model-based predictive control...... optimisation problem, which is solved using Dantzig–Wolfe decomposition. This decomposition yields improved computational efficiency and better scalability compared to centralised methods. The proposed control scheme is compared to an existing, state-of-the-art portfolio control system (operated by DONG Energy...

  6. Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Man Hong

    2013-01-01

    Full Text Available To precisely track the reactor temperature over the entire working range, a constrained Hammerstein-Wiener model describing nonlinear chemical processes, such as the continuous stirred tank reactor (CSTR), is proposed. A predictive control algorithm based on the Kalman filter for constrained Hammerstein-Wiener systems is designed. An output feedback control law for the linear subsystem is derived by state observation. The magnitude of the reaction heat produced and its influence on the output are estimated by the Kalman filter. The observation and estimation results are propagated by the multistep predictive approach. Actual control variables are computed subject to the constraints of the finite-horizon optimal control problem over a receding horizon. The simulation example of the CSTR tester shows the effectiveness and feasibility of the proposed algorithm.
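The Kalman filter's role, estimating an unmeasured quantity such as reaction heat from noisy output measurements, can be sketched with a scalar predict/update cycle. The random-walk disturbance model, noise values, and name `true_heat` are illustrative assumptions, not the paper's constrained Hammerstein-Wiener formulation.

```python
import numpy as np

def kalman_step(x, P, z, A=1.0, C=1.0, Q=1e-4, R=0.5):
    """One predict/update cycle of a scalar Kalman filter, estimating a
    slowly varying unmeasured disturbance (e.g. reaction heat) from the
    measured output z."""
    x_pred = A * x                             # predict state
    P_pred = A * P * A + Q                     # predict covariance
    K = P_pred * C / (C * P_pred * C + R)      # Kalman gain
    x_new = x_pred + K * (z - C * x_pred)      # correct with the innovation
    P_new = (1.0 - K * C) * P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
true_heat = 2.5                                # hypothetical disturbance level
x, P = 0.0, 1.0                                # uninformed initial estimate
for _ in range(200):
    z = true_heat + rng.normal(0.0, 0.5)       # noisy output measurement
    x, P = kalman_step(x, P, z)
```

In the MPC loop of the abstract, the estimate `x` would feed the multistep predictions so the optimizer can compensate for the disturbance before it shows up in the tracked temperature.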

  7. Control of continuous fed-batch fermentation process using neural network based model predictive controller.

    Science.gov (United States)

    Kiran, A Uma Maheshwar; Jana, Asim Kumar

    2009-10-01

    Cell growth and metabolite production greatly depend on the feeding of nutrients in fed-batch fermentations. A strategy for controlling the glucose feed rate in fed-batch baker's yeast fermentation with a novel controller was studied. The difference between the specific carbon dioxide evolution rate and the oxygen uptake rate (Qc - Qo) was used as the controlled variable. The controller evaluated was a neural-network-based model predictive controller with an optimizer. The performance of the controller was evaluated by set-point tracking. Results showed good performance of the controller.

  8. Combined Active and Reactive Power Control of Wind Farms based on Model Predictive Control

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Wang, Jianhui

    2017-01-01

    This paper proposes a combined wind farm controller based on Model Predictive Control (MPC). Compared with conventional decoupled active and reactive power control, the proposed control scheme considers the significant impact of active power on voltage variations due to the low X/R ratio...... of wind farm collector systems. The voltage control is improved. Moreover, by coordinating active and reactive power, the Var capacity is optimized to prevent potential failures due to Var shortage, especially when the wind farm operates close to its full load. An analytical method is used to calculate...

  9. [Predicting copper toxicity to Hypophthalmichthys molitrix and Ctenopharyngodon idellus based on biotic ligand model].

    Science.gov (United States)

    Wang, Wan-Bin; Chen, Sha; Wu, Min; Zhao, Jing

    2014-10-01

    A series of 96-h copper acute toxicity experiments were conducted with Ctenopharyngodon idellus and Hypophthalmichthys molitrix under different concentrations of DOC [ρ(DOC) 0.05, 0.5, 1, 2, 4 mg · L(-1)]. Higher DOC resulted in a reduction of toxicity, in line with the concepts of the biotic ligand model (BLM). The mean absolute deviation (MAD) of LC50 for Ctenopharyngodon idellus and Hypophthalmichthys molitrix was 591.2, 157.14 μg · L(-1) and 728.18, 91.24 μg · L(-1), respectively, as predicted by copper BLMs developed for fathead minnow and rainbow trout. Based on the speciation analysis of the biotic ligand model, the LA50 values of Ctenopharyngodon idellus and Hypophthalmichthys molitrix were 10.960 and 3.978 nmol · g(-1), respectively. The MAD values then became 280.52 and 92.25 μg · L(-1) for Ctenopharyngodon idellus and Hypophthalmichthys molitrix using the normalized LA50. Finally, using toxicity data from the literature, the MAD values for Ctenopharyngodon idellus and Hypophthalmichthys molitrix were 252.37 and 50.26 μg · L(-1), successively. This result verified that toxicity prediction based on the biotic ligand model is practical.
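The agreement metric used above, the mean absolute deviation between model-predicted and observed LC50 values, is straightforward to compute. The LC50 values below are fabricated for illustration; only the metric follows the abstract.

```python
def mean_absolute_deviation(predicted, observed):
    """MAD between model-predicted and observed LC50 values, the agreement
    metric reported in the abstract."""
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

# Hypothetical LC50 values (ug/L) at increasing DOC concentrations.
observed  = [120.0, 180.0, 260.0, 400.0, 650.0]
predicted = [150.0, 200.0, 240.0, 430.0, 600.0]
mad = mean_absolute_deviation(predicted, observed)
```

Normalizing the LA50 before prediction, as the paper does, would shrink the per-point errors and hence the MAD, which is exactly the improvement the abstract reports.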

  10. Simulation of complex glazing products; from optical data measurements to model based predictive controls

    Energy Technology Data Exchange (ETDEWEB)

    Kohler, Christian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-04-01

    Complex glazing systems such as venetian blinds, fritted glass and woven shades require more detailed optical and thermal input data for their components than specular, non-light-redirecting glazing systems. Various methods for measuring these data sets are described in this paper. These data sets are used in multiple simulation tools to model the thermal and optical properties of complex glazing systems. The output from these tools can be used to generate simplified rating values or as input to other simulation tools, such as whole-building annual energy programs or lighting analysis tools. I also describe some of the challenges of creating a rating system for these products and the factors that affect such a rating. A potential future direction for simulation and building operations is model-based predictive control, where detailed computer models run in real time, receiving data from an actual building and providing control input to building elements such as shades.

  11. Predicting seizure by modeling synaptic plasticity based on EEG signals - a case study of inherited epilepsy

    Science.gov (United States)

    Zhang, Honghui; Su, Jianzhong; Wang, Qingyun; Liu, Yueming; Good, Levi; Pascual, Juan M.

    2018-03-01

    This paper explores the internal dynamical mechanisms of epileptic seizures through quantitative modeling based on full-brain electroencephalogram (EEG) signals. Our goal is to provide seizure prediction and facilitate treatment for epileptic patients. Motivated by an earlier mathematical model with incorporated synaptic plasticity, we studied the nonlinear dynamics of inherited seizures through a differential equation model. First, driven by a set of clinical inherited electroencephalogram data recorded from a patient with diagnosed Glucose Transporter Deficiency, we developed a dynamic seizure model on a system of ordinary differential equations. The model was reduced in complexity after considering and removing the redundancy of each EEG channel. Then we verified that the proposed model produces qualitatively relevant behavior that matches the basic experimental observations of inherited seizure, including synchronization index and frequency. Meanwhile, the rationality of the connectivity structure hypothesis in the modeling process was verified. Further, by varying the threshold condition and excitation strength of synaptic plasticity, we elucidated the effect of synaptic plasticity on our seizure model. Results suggest that synaptic plasticity has a strong effect on the duration of seizure activities, which supports the plausibility of therapeutic interventions for seizure control.
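The synchronization index mentioned above can be illustrated with a generic two-oscillator toy: as coupling strength grows (the quantity synaptic plasticity would modulate), the phases lock and the index approaches 1. The Kuramoto dynamics, parameters, and phase-locking-value definition below are illustrative assumptions, not the paper's EEG-driven model.

```python
import math

def sync_index(K, w1=1.0, w2=1.3, dt=0.01, steps=20000):
    """Phase-locking value of two Kuramoto oscillators under coupling K --
    a toy stand-in for an EEG synchronization index, where synaptic
    plasticity would effectively modulate K."""
    th1, th2 = 0.0, 1.0
    re, im, n = 0.0, 0.0, 0
    for i in range(steps):
        d = th2 - th1
        th1 += dt * (w1 + K * math.sin(d))     # forward-Euler integration
        th2 += dt * (w2 - K * math.sin(d))
        if i >= steps // 2:                    # discard the transient
            re += math.cos(th1 - th2)
            im += math.sin(th1 - th2)
            n += 1
    return math.hypot(re / n, im / n)

plv_weak = sync_index(0.05)    # below the locking threshold: phases drift
plv_strong = sync_index(0.5)   # above it: phases lock, index near 1
```

For this pair, locking requires the coupling to exceed half the frequency mismatch (K > 0.15), mirroring how a plasticity-driven increase in effective coupling can tip the network into a seizure-like synchronized state.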

  13. Passivity-based model predictive control for mobile vehicle motion planning

    CERN Document Server

    Tahirovic, Adnan

    2013-01-01

    Passivity-based Model Predictive Control for Mobile Vehicle Navigation represents a complete theoretical approach to the adoption of passivity-based model predictive control (MPC) for autonomous vehicle navigation in both indoor and outdoor environments. The brief also introduces analysis of the worst-case scenario that might occur during task execution. Some of the questions answered in the text include: • how to use an MPC optimization framework for the mobile vehicle navigation approach; • how to guarantee safe task completion even in complex environments, including obstacle avoidance and sideslip and rollover avoidance; and • what to expect in the worst-case scenario, in which the roughness of the terrain leads the algorithm to generate the longest possible path to the goal. The passivity-based MPC approach provides a framework in which a wide range of complex vehicles can be accommodated to obtain a safer and more realizable tool during the path-planning stage. During task execution, the optimi...

  14. Application of GIS based data driven evidential belief function model to predict groundwater potential zonation

    Science.gov (United States)

    Nampak, Haleh; Pradhan, Biswajeet; Manap, Mohammad Abd

    2014-05-01

    The objective of this paper is to exploit the potential application of an evidential belief function (EBF) model for spatial prediction of groundwater productivity in the Langat basin area, Malaysia, using geographic information system (GIS) techniques. About 125 groundwater yield data were collected from well locations. Subsequently, the groundwater yield was divided into high (⩾11 m3/h) and low (<11 m3/h) yields. The high-yield wells were divided into a training dataset of 70% (42 wells) for building the model, while the remaining 30% (18 wells) were used for validation purposes. To perform cross validation, the frequency ratio (FR) approach was applied to the remaining low-yield groundwater wells to show the spatial correlation with the low-potential zones of groundwater productivity. A total of twelve groundwater conditioning factors that affect the storage of groundwater occurrences were derived from various data sources such as satellite-based imagery, topographic maps and the associated database. These twelve groundwater conditioning factors are elevation, slope, curvature, stream power index (SPI), topographic wetness index (TWI), drainage density, lithology, lineament density, land use, normalized difference vegetation index (NDVI), soil and rainfall. Subsequently, the Dempster-Shafer theory of evidence model was applied to prepare the groundwater potential map. Finally, the groundwater potential map derived from the belief map was validated using the testing data. Furthermore, to compare the performance of the EBF result, a logistic regression (LR) model was applied. The success-rate and prediction-rate curves were computed to estimate the efficiency of the employed EBF model compared to the LR method. The validation results demonstrated that the success rates for the EBF and LR methods were 83% and 82%, respectively. The areas under the curve for the prediction rates of the EBF and LR methods were 78% and 72%, respectively. The outputs achieved from the current research proved the efficiency of EBF in groundwater
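
The core of an EBF model is Dempster's rule of combination. The sketch below combines two hypothetical mass functions over the frame {"H", "L"} (high/low groundwater yield); the evidence values, standing in for two conditioning factors such as slope and lithology, are invented for illustration and do not come from the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses over intersecting focal elements;
    mass landing on the empty set (conflict) is removed and the remainder
    renormalized."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

H, L = frozenset("H"), frozenset("L")
THETA = H | L  # the full frame: mass here represents ignorance

m_slope = {H: 0.6, THETA: 0.4}            # hypothetical slope evidence
m_litho = {H: 0.5, L: 0.2, THETA: 0.3}    # hypothetical lithology evidence
m = dempster_combine(m_slope, m_litho)
belief_high = m[H]             # belief that the yield is high
plaus_high = m[H] + m[THETA]   # plausibility = belief + remaining ignorance
```

Mapping belief values cell by cell over the twelve conditioning factors is what produces the groundwater potential map described above.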

  15. Enhancing the Lasso Approach for Developing a Survival Prediction Model Based on Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Shuhei Kaneko

    2015-01-01

    Full Text Available In the past decade, researchers in oncology have sought to develop survival prediction models using gene expression data. The least absolute shrinkage and selection operator (lasso) has been widely used to select genes that truly correlate with a patient's survival. The lasso selects genes for prediction by shrinking a large number of coefficients of the candidate genes towards zero, based on a tuning parameter that is often determined by cross-validation (CV). However, this method can fail to identify true positive genes (i.e., it yields false negatives) in certain instances, because the lasso tends to favor the development of a simple prediction model. Here, we attempt to monitor the identification of false negatives by developing a method for estimating the number of true positive (TP) genes for a series of values of the tuning parameter, assuming a mixture distribution for the lasso estimates. Using our method, we performed a simulation study to examine its precision in estimating the number of TP genes. Additionally, we applied our method to a real gene expression dataset and found that it was able to identify genes correlated with survival that the CV method was unable to detect.
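
The shrinkage behavior that produces these false negatives can be sketched with the lasso's soft-thresholding operator (valid for an orthonormal design; the coefficient and tuning-parameter values below are purely illustrative):

```python
def soft_threshold(z, lam):
    """Lasso update for an orthonormal design: shrink the least-squares
    coefficient z toward zero by lam, zeroing it when |z| <= lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# A weakly correlated true-positive gene is zeroed out (a false negative),
# while a strongly correlated one survives, merely shrunken.
weak, strong, lam = 0.3, 1.2, 0.5
```

A larger tuning parameter gives a simpler model but zeroes more weak true positives, which is exactly the trade-off the proposed TP-count estimator monitors.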

  16. A radar-based hydrological model for flash flood prediction in the dry regions of Israel

    Science.gov (United States)

    Ronen, Alon; Peleg, Nadav; Morin, Efrat

    2014-05-01

    Flash floods are floods which follow shortly after rainfall events, and are among the most destructive natural disasters that strike people and infrastructure in humid and arid regions alike. Using a hydrological model for the prediction of flash floods in gauged and ungauged basins can help mitigate the risk and damage they cause. The sparsity of rain gauges in arid regions requires the use of radar measurements in order to obtain reliable quantitative precipitation estimates (QPE). While many hydrological models use radar data, only a handful do so in dry climates. This research presents a robust radar-based hydro-meteorological model built specifically for dry climates. Using this model we examine the governing factors of flash floods in the arid and semi-arid regions of Israel in particular and in dry regions in general. The hydrological model is a semi-distributed, physically based model which represents the main hydrological processes in the area, namely infiltration, flow routing and transmission losses. Three infiltration functions were examined - initial and constant loss, SCS-CN, and Green & Ampt. The parameters for each function were found by calibration based on 53 flood events in three catchments, and validation was performed using 55 flood events in six catchments. QPE were obtained from a C-band weather radar and adjusted using a weighted multiple regression method based on a rain gauge network. Antecedent moisture conditions were calculated using a daily recharge assessment model (DREAM). We found that the SCS-CN infiltration function performed better than the other two, with reasonable agreement between calculated and measured peak discharge. Effects of storm characteristics were studied using synthetic storms from a high-resolution weather generator (HiReS-WG), and showed a strong correlation of storm speed, storm direction and rain depth over desert soils with flood volume and peak discharge.
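
The SCS-CN function that performed best in the study can be sketched in a few lines. The standard curve-number equations are used (depths in mm); the curve number and rainfall depths below are illustrative, not calibrated values from this research.

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """SCS curve-number method: direct runoff Q from event rainfall P.
    S is the potential maximum retention and Ia the initial abstraction."""
    s = 25400.0 / cn - 254.0   # retention [mm] from the curve number
    ia = ia_ratio * s          # initial abstraction [mm]
    if p_mm <= ia:
        return 0.0             # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

For example, a 100 mm storm on a catchment with CN = 80 yields roughly 50 mm of direct runoff, and a higher curve number (less permeable surface) yields more.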

  17. THE RECURRENT ALGORITHM FOR INTERFEROMETRIC SIGNALS PROCESSING BASED ON MULTI-CLOUD PREDICTION MODEL

    Directory of Open Access Journals (Sweden)

    I. P. Gurov

    2014-07-01

    Full Text Available The paper deals with a modification of the recurrent processing algorithm for a discrete sequence of interferometric signal samples. The algorithm is based on subsequent reference signal prediction, specifying a set ("cloud") of values for the signal parameter vector by the Monte Carlo method, comparison with the measured signal value, and usage of the residual to enhance the signal parameter values at each discretization step. The concept of a multi-cloud prediction model is used in the proposed modified algorithm: a set of normally distributed clouds is created, with expectation values selected on the basis of the criterion of minimum residual between prediction and observation. Experimental testing of the proposed method applied to estimation of the fringe initial phase in phase-shifting interferometry has been conducted. The variance of the estimate of the signal reconstructed from the estimated initial phase does not exceed 2% of the maximum signal value. It has been shown that applying the proposed algorithm makes it possible to avoid the 2π ambiguity and ensure sustainable recovery of interference fringe phase of a complicated type without involving a priori information about the fringe phase distribution. Applied to the estimation of interferometric signal parameters, the proposed algorithm improves the filter stability with respect to the influence of random noise and decreases the accuracy requirements on a priori filtration parameter settings, as compared with the conventional (single-cloud) implementation of the sequential Monte Carlo method.

  18. Combined prediction model for supply risk in nuclear power equipment manufacturing industry based on support vector machine and decision tree

    International Nuclear Information System (INIS)

    Shi Chunsheng; Meng Dapeng

    2011-01-01

    The prediction index for supply risk is developed based on factor identification in the nuclear power equipment manufacturing industry. The supply risk prediction model is established with the methods of support vector machine and decision tree, based on an investigation of 3 important nuclear power equipment manufacturing enterprises and 60 suppliers. A final case study demonstrates that the combined model is better than either single prediction model, and demonstrates the feasibility and reliability of this model, which provides a method to evaluate suppliers and measure supply risk. (authors)

  19. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    2008-06-01

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.

  20. Predicting biological parameters of estuarine benthic communities using models based on environmental data

    Directory of Open Access Journals (Sweden)

    José Souto Rosa-Filho

    2004-08-01

    Full Text Available This study aimed to predict the biological parameters (species composition, abundance, richness, diversity and evenness) of benthic assemblages in southern Brazil estuaries using models based on environmental data (sediment characteristics, salinity, air and water temperature, and depth). Samples were collected seasonally from five estuaries between the winter of 1996 and the summer of 1998. At each estuary, samples were taken in unpolluted areas with similar characteristics related to the presence or absence of vegetation, depth and distance from the mouth. In order to obtain predictive models, two methods were used, the first based on Multiple Discriminant Analysis (MDA) and the second on Multiple Linear Regression (MLR). Models using MDA had better results than those based on linear regression. The best results using MLR were obtained for diversity and richness. It can be concluded that the use of prediction models based on environmental data would be very useful in environmental monitoring studies in estuaries.

  1. A model-based prognostic approach to predict interconnect failure using impedance analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Dae Il; Yoon, Jeong Ah [Dept. of System Design and Control Engineering. Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2016-10-15

    The reliability of electronic assemblies is largely affected by the health of interconnects, such as solder joints, which provide mechanical, electrical and thermal connections between circuit components. Under field lifecycle conditions, interconnects are often subjected to a DC open circuit, one of the most common interconnect failure modes, due to cracking. An interconnect damaged by cracking is sometimes extremely hard to detect when it is part of a daisy-chain structure, neighboring other healthy interconnects that have not yet cracked. This cracked interconnect may seem to provide a good electrical contact due to the compressive load applied by the neighboring healthy interconnects, but it can cause the occasional loss of electrical continuity under operational and environmental loading conditions in field applications. Thus, cracked interconnects can lead to the intermittent failure of electronic assemblies and eventually to permanent failure of the product or the system. This paper introduces a model-based prognostic approach to quantitatively detect and predict interconnect failure using impedance analysis and particle filtering. Impedance analysis was previously reported as a sensitive means of detecting incipient changes at the surface of interconnects, such as cracking, based on continuous monitoring of RF impedance. To predict the time to failure, particle filtering was used as a prognostic approach, with the Paris model addressing fatigue crack growth. To validate this approach, mechanical fatigue tests were conducted with continuous monitoring of RF impedance while the solder joints under test were degraded by fatigue cracking. The test results showed that the RF impedance consistently increased as the solder joints degraded due to the growth of cracks, and particle filtering predicted the time to failure of the interconnects close to their actual times to failure, based on the early sensitivity of RF impedance.
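
The Paris-model core of the prognostic step can be sketched as a cycle-by-cycle integration of crack growth. The material constants, stress range and crack lengths below are hypothetical; the paper additionally wraps this model in a particle filter driven by RF-impedance measurements, which is omitted here.

```python
import math

def cycles_to_failure(a0, ac, C, m, dsigma, Y=1.0, max_cycles=10**7):
    """Integrate the Paris law da/dN = C * (dK)**m cycle by cycle, with
    stress-intensity range dK = Y * dsigma * sqrt(pi * a), until the crack
    length a reaches the critical length ac. Returns the predicted cycles."""
    a, n = a0, 0
    while a < ac and n < max_cycles:
        dK = Y * dsigma * math.sqrt(math.pi * a)
        a += C * dK ** m
        n += 1
    return n
```

A higher stress range shortens the predicted remaining life, as expected from the exponent m.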

  2. Self-Adaptive Prediction of Cloud Resource Demands Using Ensemble Model and Subtractive-Fuzzy Clustering Based Fuzzy Neural Network

    OpenAIRE

    Zhijia Chen; Yuanchang Zhu; Yanqiang Di; Shaochong Feng

    2015-01-01

    In an IaaS (infrastructure as a service) cloud environment, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurate prediction of resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. Then the architecture of the prediction model is const...

  3. Interpretation of ANN-based QSAR models for prediction of antioxidant activity of flavonoids.

    Science.gov (United States)

    Žuvela, Petar; David, Jonathan; Wong, Ming Wah

    2018-02-05

    Quantitative structure-activity relationships (QSARs) built using machine learning methods such as artificial neural networks (ANNs) are powerful for predicting (antioxidant) activity from quantum mechanical (QM) parameters describing the molecular structure, but are usually not interpretable. This obvious difficulty is one of the most common obstacles to applying ANN-based QSAR models for the design of potent antioxidants or for elucidating the underlying mechanism. Interpreting the resulting models is often omitted or performed erroneously altogether. In this work, a comprehensive comparative study of six methods (PaD, PaD2, weights, stepwise, perturbation and profile) for the exploration and interpretation of ANN models built for the prediction of Trolox-equivalent antioxidant capacity (TEAC) from QM descriptors is presented. Sum of ranking differences (SRD) was used for ranking the six methods with respect to the contributions of the calculated QM molecular descriptors toward TEAC. The results show that the PaD, PaD2 and profile methods are the most stable and give rise to realistic interpretations of the observed correlations. Therefore, they are safely applicable for future interpretations without the opinion of an experienced chemist or bio-analyst. © 2018 Wiley Periodicals, Inc.
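
Of the six interpretation methods, perturbation is the simplest to sketch: nudge one descriptor at a time and record the change in the model output. The toy "model" below is a plain linear function standing in for a trained TEAC-predicting ANN; nothing in it comes from the paper.

```python
def perturbation_importance(model, x, delta=0.01):
    """Perturbation method for model interpretation: perturb each input
    descriptor by delta and measure the absolute change in the prediction.
    A larger change marks the descriptor as more relevant."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += delta
        scores.append(abs(model(xp) - base))
    return scores

# Hypothetical stand-in for a trained ANN: descriptor 0 dominates the output.
toy_model = lambda x: 3.0 * x[0] + 0.1 * x[1]
scores = perturbation_importance(toy_model, [0.5, 0.5])
```

Ranking descriptors by such scores across methods is what the SRD comparison in the study evaluates.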

  4. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant...geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This...system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is

  5. Value-based benefit design: using a predictive modeling approach to improve compliance.

    Science.gov (United States)

    Mahoney, John J

    2008-07-01

    Increased medication compliance rates have been demonstrated to result in improved clinical outcomes and reduced overall medical expenditures. As such, managed care stakeholders should take the total value approach to benefit design and consider total medical costs beyond the cost of pharmacotherapy alone. To describe the value-based benefit design employed by Pitney Bowes (specifically, the predictive modeling approach), to improve medication compliance, and to report the results of this intervention. Despite significant skepticism surrounding value-based benefit design, there is growing evidence that these plans can be used in conjunction with careful pharmacy management. In fact, value-based design provides a different lever on pharmacy management and allows for the appropriate drug to be channeled to the appropriate person. Studies demonstrating the adverse impact of high coinsurance levels further augment the argument for value-based benefit design. Value-based benefit design was employed at Pitney Bowes, a $6.1-billion global provider of integrated mailstream solutions, with noticeable success. Patients were either placed in a disease management program or in a secondary program promoting preventive care. The company selectively cut copays to achieve that end, and this total value approach translated into significant savings. To develop a successful value-based benefit design, stakeholders cannot simply cut costs or cut copays. Action must be taken as part of a concerted program, coupled with disease management or similar interventions. "Value based" means that positive outcomes are the ultimate goal, and barriers to those positive outcomes must be addressed.

  6. A voxel-based finite element model for the prediction of bladder deformation

    International Nuclear Information System (INIS)

    Chai Xiangfei; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan

    2012-01-01

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to

  7. Predicting the Water Level Fluctuation in an Alpine Lake Using Physically Based, Artificial Neural Network, and Time Series Forecasting Models

    Directory of Open Access Journals (Sweden)

    Chih-Chieh Young

    2015-01-01

    Full Text Available Accurate prediction of water level fluctuation is important in lake management due to its significant impacts in various aspects. This study utilizes four model approaches to predict water levels in the Yuan-Yang Lake (YYL) in Taiwan: a three-dimensional hydrodynamic model, an artificial neural network (ANN) model (back-propagation neural network, BPNN), a time series forecasting (autoregressive moving average with exogenous inputs, ARMAX) model, and a combined hydrodynamic and ANN model. In particular, the black-box ANN model and the physically based hydrodynamic model are coupled to more accurately predict water level fluctuation. Hourly water level data (a total of 7296 observations) were collected for model calibration (training) and validation. Three statistical indicators (mean absolute error, root mean square error, and coefficient of correlation) were adopted to evaluate model performance. Overall, the results demonstrate that the hydrodynamic model can satisfactorily predict hourly water level changes during the calibration stage but not during the validation stage. The ANN and ARMAX models predict the water level better than the hydrodynamic model does. Meanwhile, the results from the ANN model are superior to those of the ARMAX model in both training and validation phases. The proposed concept of using a three-dimensional hydrodynamic model in conjunction with an ANN model clearly improves the prediction accuracy for water level fluctuation.

  8. A hybrid predictive model for acoustic noise in urban areas based on time series analysis and artificial neural network

    Science.gov (United States)

    Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine

    2017-06-01

    The dangerous effect of noise on human health is well known. Both the auditory and non-auditory effects are largely documented in the literature, and represent an important hazard in human activities. Particular care is devoted to road traffic noise, since it grows with the growth of residential, industrial and commercial areas. For these reasons, it is important to develop effective models able to predict the noise in a certain area. In this paper, a hybrid predictive model is presented. The model is based on the mixing of two different approaches: Time Series Analysis (TSA) and the Artificial Neural Network (ANN). The TSA model is based on the evaluation of trend and seasonality in the data, while the ANN model is based on the capacity of the network to "learn" the behavior of the data. The mixed approach consists in evaluating noise levels by means of TSA and, once the differences (residuals) between the TSA estimates and the observed data have been calculated, training an ANN on the residuals. This hybrid model exhibits interesting features and results, with a significant variation related to the number of steps forward in the prediction. It will be shown that the best results, in terms of prediction, are achieved when predicting one step ahead in the future. Nonetheless, a 7-day prediction can be performed with a slightly greater error, offering a larger range of prediction with respect to the single-day-ahead predictive model.
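
The hybrid scheme can be sketched in a few lines: estimate the seasonal (TSA) component, then train a second model on the residuals. For brevity, the sketch substitutes a least-squares AR(1) fit for the ANN residual learner; the series and period are made up for illustration.

```python
def seasonal_fit(series, period):
    """TSA part: estimate a seasonal profile by averaging each phase."""
    sums = [0.0] * period
    counts = [0] * period
    for t, v in enumerate(series):
        sums[t % period] += v
        counts[t % period] += 1
    return [s / c for s, c in zip(sums, counts)]

def hybrid_forecast(series, period):
    """Hybrid: seasonal estimate plus a residual model (an AR(1) fit here,
    standing in for the ANN residual learner of the paper)."""
    prof = seasonal_fit(series, period)
    resid = [v - prof[t % period] for t, v in enumerate(series)]
    num = sum(resid[t] * resid[t - 1] for t in range(1, len(resid)))
    den = sum(r * r for r in resid[:-1]) or 1.0
    phi = num / den                       # least-squares AR(1) coefficient
    t = len(series)                       # one step ahead
    return prof[t % period] + phi * resid[-1]
```

When the series is purely seasonal the residual term vanishes and the hybrid forecast reduces to the seasonal profile, as expected.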

  9. Predictive modelling for shelf life determination of nutricereal based fermented baby food.

    Science.gov (United States)

    Rasane, Prasad; Jha, Alok; Sharma, Nitya

    2015-08-01

    A shelf life model based on storage temperatures was developed for a nutricereal based fermented baby food formulation. The formulated baby food samples were packaged and stored at 10, 25, 37 and 45 °C for a test storage period of 180 days. A shelf life study was conducted using consumer and semi-trained panels, along with chemical analysis (moisture and acidity). The chemical parameters (moisture and titratable acidity) were found inadequate in determining the shelf life of the formulated product. Weibull hazard analysis was used to determine the shelf life of the product based on sensory evaluation. Considering 25 and 50 % rejection probability, the shelf life of the baby food formulation was predicted to be 98 and 322 days, 84 and 271 days, 71 and 221 days and 58 and 171 days for the samples stored at 10, 25, 37 and 45 °C, respectively. A shelf life equation was proposed using the rejection times obtained from the consumer study. Finally, the formulated baby food samples were subjected to microbial analysis for the predicted shelf life period and were found microbiologically safe for consumption during the storage period of 360 days.
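
The Weibull hazard step maps a chosen rejection probability to a shelf-life estimate. A minimal sketch follows; the scale and shape parameters below are invented for illustration, not fitted to the paper's panel data.

```python
import math

def weibull_rejection_time(eta, beta, p):
    """Time at which the cumulative rejection probability reaches p under a
    Weibull model F(t) = 1 - exp(-(t/eta)**beta), i.e. the inverse CDF:
    t = eta * (-ln(1 - p))**(1/beta)."""
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)

# hypothetical scale (days) and shape for one storage temperature
t25 = weibull_rejection_time(250.0, 1.8, 0.25)  # 25% rejection
t50 = weibull_rejection_time(250.0, 1.8, 0.50)  # 50% rejection
```

Repeating this at each storage temperature gives the pairs of rejection times (such as 98 and 322 days at 10 °C) reported in the abstract.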

  10. On Feature Relevance in Image-Based Prediction Models: An Empirical Study

    DEFF Research Database (Denmark)

    Konukoglu, E.; Ganz, Melanie; Van Leemput, Koen

    2013-01-01

    Determining disease-related variations of the anatomy and function is an important step in better understanding diseases and developing early diagnostic systems. In particular, image-based multivariate prediction models and the "relevant features" they produce are attracting attention from the community. In this article, we present an empirical study on the relevant features produced by two recently developed discriminative learning algorithms: neighborhood approximation forests (NAF) and the relevance voxel machine (RVoxM). Specifically, we examine whether the sets of features these methods produce are exhaustive; that is, whether the features that are not marked as relevant carry disease-related information. We perform experiments on three different problems: image-based regression on a synthetic dataset for which the set of relevant features is known, regression of subject age as well...

  11. Evaluation of wavelet performance via an ANN-based electrical conductivity prediction model.

    Science.gov (United States)

    Ravansalar, Masoud; Rajaee, Taher

    2015-06-01

    The prediction of water quality parameters plays an important role in water resources and environmental systems. The use of electrical conductivity (EC) as a water quality indicator is one of the important parameters for estimating the amount of mineralization. This study describes the application of artificial neural network (ANN) and hybrid wavelet-neural network (WANN) models to predict the monthly EC of the Asi River at the Demirköprü gauging station, Turkey. In the proposed hybrid WANN model, the discrete wavelet transform (DWT) was linked to the ANN model for EC prediction using a feed-forward back-propagation (FFBP) training algorithm. For this purpose, the original time series of monthly EC and discharge (Q) values were decomposed into several sub-time series by the DWT, and these sub-time series were then presented to the ANN model as an input dataset to predict the monthly EC. Comparing the values predicted by the models indicated that the performance of the proposed WANN model was better than that of the conventional ANN model. The coefficients of determination (R2) were 0.949 and 0.381 for the WANN and ANN models, respectively. The results indicate that the peak EC values predicted by the WANN model are closer to the observed values, and this model also simulates the hysteresis phenomena at an acceptable level.
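
The DWT decomposition step can be sketched with the simplest wavelet, the Haar wavelet (the paper does not specify its mother wavelet, so this is an illustrative stand-in): one level splits the series into an approximation and a detail sub-series, which a WANN then feeds to the ANN as inputs.

```python
import math

def haar_dwt(signal):
    """One level of the discrete Haar wavelet transform: pairwise scaled
    sums give the approximation (low-frequency) sub-series and pairwise
    scaled differences give the detail (high-frequency) sub-series."""
    n = len(signal) // 2
    approx = [(signal[2*i] + signal[2*i + 1]) / math.sqrt(2) for i in range(n)]
    detail = [(signal[2*i] - signal[2*i + 1]) / math.sqrt(2) for i in range(n)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform, recovering the original series exactly."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)])
    return out
```

Applying the transform recursively to the approximation sub-series yields the multi-level decomposition used in wavelet-hybrid models.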

  12. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve

    OpenAIRE

    Li, Yi; Chen, Yuren

    2016-01-01

    To make driving assistance system more humanized, this study focused on the prediction and assistance of drivers’ perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers’ vision. A multinomial log-linear model was established to predict perception-response time with traffic/road environment information, driver-vision lane model, and mechanica...

  13. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Directory of Open Access Journals (Sweden)

    S. Raia

    2014-03-01

    Full Text Available Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are based on deterministic laws. These models extend spatially the static stability models adopted in geotechnical engineering, and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the operation of the existing models lies in the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of rainfall-induced shallow landslides. For this purpose, we have modified the transient rainfall infiltration and grid-based regional slope-stability analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a probabilistic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters.
The outputs
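
    The parameter-sampling idea behind TRIGRS-P can be illustrated with a minimal infinite-slope Monte Carlo sketch. The distributions, parameter values, and function below are assumptions for illustration only; the actual code additionally models transient infiltration and works cell by cell over a grid.

```python
import numpy as np

rng = np.random.default_rng(42)

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, u):
    """Infinite-slope factor of safety (resisting / driving stresses)."""
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    resisting = c + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)
    return resisting / driving

# Sample uncertain soil properties (cohesion, friction angle) from
# assumed distributions, in the spirit of TRIGRS-P's per-cell sampling.
n = 10_000
c = rng.normal(5_000.0, 1_000.0, n)   # cohesion [Pa]
phi = rng.normal(30.0, 3.0, n)        # friction angle [deg]
fs = factor_of_safety(c, phi, gamma=19_000.0, z=1.5, beta_deg=35.0,
                      u=5_000.0)

p_failure = np.mean(fs < 1.0)         # probability of instability
print(f"P(FS < 1) = {p_failure:.2f}")
```

Repeating such runs for each grid cell yields a probability of failure map instead of a single deterministic factor-of-safety value.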

  14. Prediction of Sliding Friction Coefficient Based on a Novel Hybrid Molecular-Mechanical Model.

    Science.gov (United States)

    Zhang, Xiaogang; Zhang, Yali; Wang, Jianmei; Sheng, Chenxing; Li, Zhixiong

    2018-08-01

    Sliding friction is a complex phenomenon which arises from the mechanical and molecular interactions of asperities when examined at the microscale. To reveal and further understand the effects of the microscale mechanical and molecular components of the friction coefficient on overall frictional behavior, a hybrid molecular-mechanical model is developed to investigate the effects of main factors, including different loads and surface roughness values, on the sliding friction coefficient under boundary lubrication. Numerical modelling was conducted using a deterministic contact model and based on the molecular-mechanical theory of friction. In the contact model, with given external loads and surface topographies, the pressure distribution, real contact area, and elastic/plastic deformation of each single asperity contact were calculated. The asperity friction coefficient was then predicted as the sum of the mechanical and molecular components of the friction coefficient. The mechanical component was mainly determined by the contact width and elastic/plastic deformation, and the molecular component was estimated as a function of the contact area and interfacial shear stress. Numerical results were compared with experimental results and a good agreement was obtained. The model was then used to predict friction coefficients under different operating and surface conditions. Numerical results explain why the applied load has a minimal effect on the friction coefficients. They also provide insight into the effect of surface roughness on the mechanical and molecular components of the friction coefficient. It is revealed that the mechanical component dominates the friction coefficient when the surface roughness is large (Rq > 0.2 μm), while the friction coefficient is mainly determined by the molecular component when the surface is relatively smooth (Rq < 0.2 μm). Surface roughness values that reduce the friction coefficient are recommended accordingly.
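
    The additive split described above can be sketched as follows. All quantities and values are illustrative assumptions, not the paper's actual model: the molecular (adhesive) part is taken as interfacial shear force over normal load, and the mechanical (ploughing) part is supplied directly as a single term.

```python
def friction_coefficient(load, real_contact_area, shear_stress,
                         plough_term):
    """Total friction coefficient as the sum of a molecular (adhesive)
    and a mechanical (deformation/ploughing) component.

    molecular part = interfacial shear stress * real contact area / load;
    mechanical part = assumed ploughing contribution (hypothetical).
    """
    mu_molecular = shear_stress * real_contact_area / load
    mu_mechanical = plough_term
    return mu_molecular + mu_mechanical, mu_molecular, mu_mechanical

# Rough surface: small real contact area per asperity but large
# deformation, so the mechanical term dominates in this toy case.
mu, mu_mol, mu_mech = friction_coefficient(
    load=10.0, real_contact_area=2e-8, shear_stress=25e6,
    plough_term=0.08)
print(mu_mol, mu_mech, mu)
```

Varying `real_contact_area` against `plough_term` reproduces the qualitative trend in the abstract: smooth surfaces shift the balance toward the molecular component.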

  15. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  16. explICU: A web-based visualization and predictive modeling toolkit for mortality in intensive care patients.

    Science.gov (United States)

    Chen, Robert; Kumar, Vikas; Fitch, Natalie; Jagadish, Jitesh; Lifan Zhang; Dunn, William; Duen Horng Chau

    2015-01-01

    Preventing mortality in intensive care units (ICUs) has been a top priority in American hospitals. Predictive modeling has been shown to be effective in prediction of mortality based upon data from patients' past medical histories from electronic health records (EHRs). Furthermore, visualization of timeline events is imperative in the ICU setting in order to quickly identify trends in patient histories that may lead to mortality. With the increasing adoption of EHRs, a wealth of medical data is becoming increasingly available for secondary uses such as data exploration and predictive modeling. While data exploration and predictive modeling are useful for finding risk factors in ICU patients, the process is time consuming and requires a high level of computer programming ability. We propose explICU, a web service that hosts EHR data, displays timelines of patient events based upon user-specified preferences, performs predictive modeling in the back end, and displays results to the user via intuitive, interactive visualizations.

  17. Simultaneous construction of PCR-DGGE-based predictive models of Listeria monocytogenes and Vibrio parahaemolyticus on cooked shrimps.

    Science.gov (United States)

    Liao, C; Peng, Z Y; Li, J B; Cui, X W; Zhang, Z H; Malakar, P K; Zhang, W J; Pan, Y J; Zhao, Y

    2015-03-01

    The aim of this study was to simultaneously construct PCR-DGGE-based predictive models of Listeria monocytogenes and Vibrio parahaemolyticus on cooked shrimps at 4 and 10°C. Calibration curves were established to correlate the peak density of DGGE bands with microbial counts. Microbial counts derived from PCR-DGGE and plate methods were fitted by the Baranyi model to obtain molecular and traditional predictive models. For L. monocytogenes, growing at 4 and 10°C, molecular predictive models were constructed. They showed good evaluations of correlation coefficients (R² > 0.92) and of bias factors (Bf) and accuracy factors (Af) (1.0 ≤ Bf ≤ Af ≤ 1.1). Moreover, no significant difference was found between the molecular and traditional predictive models when analysed for lag phase (λ), maximum growth rate (μmax) and growth data (P > 0.05). But for V. parahaemolyticus, inactivated at 4 and 10°C, the molecular models showed significant differences when compared with the traditional models. Taken together, these results suggest that DNA-based PCR-DGGE can be used to construct growth models, but is not yet appropriate for inactivation models. This is the first report of developing PCR-DGGE to simultaneously construct multiple molecular models. It has long been known that microbial predictive models based on traditional plate methods are time-consuming and labour-intensive. Denaturing gradient gel electrophoresis (DGGE) has been widely used as a semiquantitative method to describe complex microbial communities. In our study, we developed DGGE to quantify bacterial counts and simultaneously established two molecular predictive models to describe the growth and survival of two bacteria (Listeria monocytogenes and Vibrio parahaemolyticus) at 4 and 10°C. We demonstrated that PCR-DGGE could be used to construct growth models. This work provides a new approach to constructing molecular predictive models and thereby facilitates predictive microbiology and QMRA (Quantitative Microbial Risk Assessment).
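
    The bias factor Bf and accuracy factor Af quoted in the abstract (Ross's measures of predictive-model performance) are straightforward to compute; the counts below are made-up illustrative values, not data from the study.

```python
import math

def bias_accuracy_factors(predicted, observed):
    """Bias factor Bf and accuracy factor Af (Ross, 1996) comparing
    model-predicted with observed microbial counts.

    Bf = 10 ** mean(log10(pred/obs)): systematic over/under-prediction.
    Af = 10 ** mean(|log10(pred/obs)|): average spread around the line
    of equivalence; Af >= 1 always, with 1 meaning perfect agreement.
    """
    logs = [math.log10(p / o) for p, o in zip(predicted, observed)]
    n = len(logs)
    bf = 10 ** (sum(logs) / n)
    af = 10 ** (sum(abs(l) for l in logs) / n)
    return bf, af

pred = [4.1, 5.0, 5.9, 6.8]   # hypothetical DGGE-derived counts
obs = [4.0, 5.1, 5.8, 6.9]    # hypothetical plate counts
bf, af = bias_accuracy_factors(pred, obs)
print(f"Bf = {bf:.3f}, Af = {af:.3f}")
```

Values satisfying 1.0 ≤ Bf ≤ Af ≤ 1.1, as reported for the growth models, indicate close agreement between the molecular and plate methods.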

  18. Model-based evaluation of subsurface monitoring networks for improved efficiency and predictive certainty of regional groundwater models

    Science.gov (United States)

    Gosses, M. J.; Wöhling, Th.; Moore, C. R.; Dann, R.; Scott, D. M.; Close, M.

    2012-04-01

    -specific prediction target under consideration. Therefore, the worth of individual observation locations may differ for different prediction targets. Our evaluation is based on predictions of lowland stream discharge resulting from changes in land use and irrigation in the upper Central Plains catchment. In our analysis, we adopt the model predictive uncertainty analysis method by Moore and Doherty (2005) which accounts for contributions from both measurement errors and uncertain structural heterogeneity. The method is robust and efficient due to a linearity assumption in the governing equations and readily implemented for application in the model-independent parameter estimation and uncertainty analysis toolkit PEST (Doherty, 2010). The proposed methods can be applied not only for the evaluation of monitoring networks, but also for the optimization of networks, to compare alternative monitoring strategies, as well as to identify best cost-benefit monitoring design even prior to any data acquisition.

  19. An Economic Model-Based Predictive Control to Manage the Users’ Thermal Comfort in a Building

    Directory of Open Access Journals (Sweden)

    Yaser Imad Alamin

    2017-03-01

    Full Text Available The goal of maintaining users’ thermal comfort conditions in indoor environments may require complex regulation procedures and proper energy management. This problem is being widely analyzed, since it has a direct effect on users’ productivity. This paper presents an economic model-based predictive control (MPC) whose main strength is the use of the day-ahead price (DAP) to predict the energy consumption associated with the heating, ventilation and air conditioning (HVAC) system. In this way, the control system is able to maintain a high thermal comfort level by optimizing the use of the HVAC system and, at the same time, to reduce its associated energy consumption as much as possible. The performance of the proposed control system is then tested through simulations with a non-linear model of a bioclimatic building room. Several simulation scenarios are considered as a test-bed. From the obtained results, it is possible to conclude that the control system behaves well in several situations, i.e., it can reach the users’ thermal comfort in the analyzed situations while HVAC use is adjusted through the DAP; therefore, the energy savings associated with the HVAC system are increased.
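
    The economic-MPC idea of trading DAP energy cost against comfort can be sketched with a toy receding-horizon controller. The first-order room model, all parameter values, and the price series are assumptions for illustration; the paper uses a non-linear bioclimatic building model and a proper optimizer rather than exhaustive search.

```python
import itertools

# Hypothetical first-order room model: T[k+1] = T[k] + a*(T_out - T[k]) + b*P[k]
a, b = 0.1, 0.4                  # loss coefficient, heating gain [degC/kW]
t_out, t_ref = 10.0, 21.0        # outdoor and comfort set-point [degC]
dap = [0.30, 0.10, 0.25, 0.15]   # day-ahead price per step [EUR/kWh]
levels = [0.0, 1.0, 2.0]         # admissible HVAC power levels [kW]
w_comfort = 0.5                  # comfort-vs-cost trade-off weight

def cost(powers, t0):
    """Energy cost under the DAP plus a quadratic comfort penalty."""
    t, j = t0, 0.0
    for p, price in zip(powers, dap):
        t = t + a * (t_out - t) + b * p   # room temperature update
        j += price * p + w_comfort * (t - t_ref) ** 2
    return j

# Exhaustive search over the horizon (fine for 3**4 candidates); only
# the first move of the best plan is applied, receding-horizon style.
best = min(itertools.product(levels, repeat=len(dap)),
           key=lambda u: cost(u, t0=20.0))
print("first control move [kW]:", best[0])
```

Cheap price steps in `dap` attract more heating, which is exactly the DAP-driven load shifting the abstract describes.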

  20. A novel real-time non-linear wavelet-based model predictive controller for a coupled tank system

    OpenAIRE

    Owa, K; Sharma, S; Sutton, R

    2014-01-01

    This article presents the design, simulation and real-time implementation of a constrained non-linear model predictive controller for a coupled tank system. A novel wavelet-based function neural network model and a genetic algorithm online non-linear real-time optimisation approach were used in the non-linear model predictive controller strategy. A coupled tank system, which resembles operations in many chemical processes, is complex and has inherent non-linearity, and hence, controlling such...

  1. Score-based prediction of genomic islands in prokaryotic genomes using hidden Markov models

    Directory of Open Access Journals (Sweden)

    Surovcik Katharina

    2006-03-01

    Full Text Available Abstract Background Horizontal gene transfer (HGT) is considered a strong evolutionary force shaping the content of microbial genomes in a substantial manner. It is the difference in speed enabling the rapid adaptation to changing environmental demands that distinguishes HGT from gene genesis, duplications or mutations. For a precise characterization, algorithms are needed that identify transfer events with high reliability. Frequently, the transferred pieces of DNA have a considerable length, comprise several genes and are called genomic islands (GIs) or, more specifically, pathogenicity or symbiotic islands. Results We have implemented the program SIGI-HMM that predicts GIs and the putative donor of each individual alien gene. It is based on the analysis of the codon usage (CU) of each individual gene of a genome under study. The CU of each gene is compared against a carefully selected set of CU tables representing microbial donors or highly expressed genes. Multiple tests are used to identify putatively alien genes, to predict putative donors and to mask putatively highly expressed genes. Thus, we determine the states and emission probabilities of an inhomogeneous hidden Markov model working on the gene level. For the transition probabilities, we draw upon classical test theory with the intention of integrating a sensitivity controller in a consistent manner. SIGI-HMM was written in JAVA and is publicly available. It accepts as input any file created according to the EMBL format. It generates output in the common GFF format readable by genome browsers. Benchmark tests showed that the output of SIGI-HMM is in agreement with known findings. Its predictions were consistent both with annotated GIs and with predictions generated by different methods. Conclusion SIGI-HMM is a sensitive tool for the identification of GIs in microbial genomes. It allows genomes to be analyzed interactively and in detail, and hypotheses about the origin of acquired genes to be generated or tested.

  2. Predictive Modeling of Fast-Curing Thermosets in Nozzle-Based Extrusion

    Science.gov (United States)

    Xie, Jingjin; Randolph, Robert; Simmons, Gary; Hull, Patrick V.; Mazzeo, Aaron D.

    2017-01-01

    This work presents an approach to modeling the dynamic spreading and curing behavior of thermosets in nozzle-based extrusions. Thermosets cover a wide range of materials, some of which permit low-temperature processing with subsequent high-temperature and high-strength working properties. Extruding thermosets may overcome the limited working temperatures and strengths of conventional thermoplastic materials used in additive manufacturing. This project aims to produce technology for the fabrication of thermoset-based structures leveraging advances made in nozzle-based extrusion, such as fused deposition modeling (FDM), material jetting, and direct writing. Understanding the synergistic interactions between spreading and fast curing of extruded thermosetting materials will provide essential insights for applications that require accurate dimensional controls, such as additive manufacturing [1], [2] and centrifugal coating/forming [3]. Two types of thermally curing thermosets -- one being a soft silicone (Ecoflex 0050) and the other being a toughened epoxy (G/Flex) -- served as the test materials in this work to obtain models for cure kinetics and viscosity. The developed models align with extensive measurements made with differential scanning calorimetry (DSC) and rheology. DSC monitors the change in the heat of reaction, which reflects the rate and degree of cure at different crosslinking stages. Rheology measures the change in complex viscosity, shear moduli, yield stress, and other properties dictated by chemical composition. By combining DSC and rheological measurements, it is possible to establish a set of models profiling the cure kinetics and chemorheology without prior knowledge of chemical composition, which is usually necessary for sophisticated mechanistic modeling. In this work, we conducted both isothermal and dynamic measurements with both DSC and rheology. With the developed models, numerical simulations yielded predictions of diameter and height of
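
    The cure-kinetics modeling described above can be illustrated with a simple nth-order rate law, dα/dt = A·exp(−E/(R·T))·(1−α)^n, integrated with forward Euler for an isothermal cure. All constants are assumptions for illustration, not fitted values from the DSC measurements in this work.

```python
import math

A, E, R, n = 1.0e5, 55_000.0, 8.314, 1.5  # 1/s, J/mol, J/(mol*K), order

def degree_of_cure(T_kelvin, t_end, dt=0.1):
    """Degree of cure alpha after t_end seconds at a fixed temperature,
    using forward-Euler integration of the nth-order rate law."""
    alpha = 0.0
    k = A * math.exp(-E / (R * T_kelvin))  # Arrhenius rate constant
    for _ in range(int(t_end / dt)):
        alpha += dt * k * (1.0 - alpha) ** n
        alpha = min(alpha, 1.0)            # cure cannot exceed 100%
    return alpha

print(degree_of_cure(T_kelvin=373.0, t_end=600.0))
```

Raising the temperature accelerates the cure exponentially through the Arrhenius term, which is why coupling such a model to the spreading dynamics matters for dimensional control.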

  3. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    Science.gov (United States)

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  4. A prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2, based on simple clinical parameters.

    Science.gov (United States)

    Koeneman, Margot M; van Lint, Freyja H M; van Kuijk, Sander M J; Smits, Luc J M; Kooreman, Loes F S; Kruitwagen, Roy F P M; Kruse, Arnold J

    2017-01-01

    This study aims to develop a prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2 (CIN 2) lesions based on simple clinicopathological parameters. The study was conducted at Maastricht University Medical Center, the Netherlands. The prediction model was developed in a retrospective cohort of 129 women with a histologic diagnosis of CIN 2 who were managed by watchful waiting for 6 to 24 months. Five potential predictors of spontaneous regression were selected based on the literature and expert opinion and were analyzed in a multivariable logistic regression model, followed by backward stepwise deletion based on the Wald test. The prediction model was internally validated by the bootstrapping method. Discriminative capacity and accuracy were tested by assessing the area under the receiver operating characteristic curve (AUC) and a calibration plot. Disease regression within 24 months was seen in 91 (71%) of 129 patients. A prediction model was developed including the following variables: smoking, Papanicolaou test outcome before the CIN 2 diagnosis, concomitant CIN 1 diagnosis in the same biopsy, and more than 1 biopsy containing CIN 2. Not smoking, a low-grade Papanicolaou test outcome before the CIN 2 diagnosis, a concomitant CIN 1 diagnosis, and more than 1 biopsy containing CIN 2 were predictive of disease regression. The AUC was 69.2% (95% confidence interval, 58.5%-79.9%), indicating a moderate discriminative ability of the model. The calibration plot indicated good calibration of the predicted probabilities. This prediction model for spontaneous regression of CIN 2 may aid physicians in the personalized management of these lesions. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Model-based chatter stability prediction and detection for the turning of a flexible workpiece

    Science.gov (United States)

    Lu, Kaibo; Lian, Zisheng; Gu, Fengshou; Liu, Hunju

    2018-02-01

    Machining long slender workpieces still presents a technical challenge on the shop floor due to their low stiffness and damping. Regenerative chatter is a major hindrance in machining processes, reducing the geometric accuracy and dynamic stability of the cutting system. This study was motivated by the fact that chatter occurrence is generally related to the cutting position in straight turning of slender workpieces, which has seldom been investigated comprehensively in the literature. In the present paper, a predictive chatter model for turning a tailstock-supported slender workpiece that accounts for the change of cutting position during machining is explored. Based on linear stability analysis and the stiffness distribution at different cutting positions along the workpiece, the effect of the cutting tool movement along the length of the workpiece on chatter stability is studied. As a result, an entire stability chart for a single cutting pass is constructed. Through this stability chart, the critical cutting condition and the chatter onset location along the workpiece in a turning operation can be estimated. The difference between the predicted tool locations and the experimental results was within 9% at high-speed cutting. On the basis of the predictive model, the dynamic behavior during chatter can also be inferred: when chatter arises at some cutting location, it continues for a period of time until another specific location is reached. The experimental observation is in good agreement with this theoretical inference. With respect to chatter detection, besides the delay strategy and overlap processing technique, a relative threshold algorithm is proposed to detect chatter by comparing the spectrum and variance of the acquired acceleration signals with references saved during stable cutting. The chatter monitoring method has shown reliability under various machining conditions.

  6. Metabolomics based predictive biomarker model of ARDS: A systemic measure of clinical hypoxemia.

    Directory of Open Access Journals (Sweden)

    Neeraj Sinha

    Full Text Available Despite advancements in ventilator technologies and lung supportive and rescue therapies, outcome and prognostication in acute respiratory distress syndrome (ARDS) remain incremental and ambiguous. Metabolomics is a potentially insightful complement to the diagnostic approaches practiced in critical disease settings. In our study, patients diagnosed with mild and moderate/severe ARDS, clinically defined by a hypoxemic P/F ratio between 100 and 300 but with an indistinct molecular phenotype, were discriminated using nuclear magnetic resonance (NMR)-based metabolomics of mini bronchoalveolar lavage fluid (mBALF). The resulting biomarker prototype comprising six metabolites was substantiated, highlighting ARDS susceptibility/recovery. Both groups (mild and moderate/severe ARDS) showed distinct biochemical profiles, based on an 83.3% classification rate by discriminant function analysis and a cross-validated accuracy of 91% using partial least squares discriminant analysis as the major classifier. The predictive performance of the six narrowed-down metabolites was found analogous with chemometrics. The proposed biomarker model consisting of six metabolites (proline, lysine/arginine, taurine, threonine and glutamate) was found characteristic of the ARDS sub-stages, with aberrant metabolism observed mainly in arginine and proline metabolism, lysine synthesis and so forth, correlating with the diseased metabotype. Thus NMR-based metabolomics has provided new insight into ARDS sub-stages, and conclusively a precise biomarker model is proposed, reflecting underlying metabolic dysfunction and aiding prior clinical decision making.

  7. Soil erosion model predictions using parent material/soil texture-based parameters compared to using site-specific parameters

    Science.gov (United States)

    R. B. Foltz; W. J. Elliot; N. S. Wagenbrenner

    2011-01-01

    Forested areas disturbed by access roads produce large amounts of sediment. One method to predict erosion and, hence, manage forest roads is the use of physically based soil erosion models. A perceived advantage of a physically based model is that it can be parameterized at one location and applied at another location with similar soil texture or geological parent...

  8. Construction of risk prediction model of type 2 diabetes mellitus based on logistic regression

    Directory of Open Access Journals (Sweden)

    Li Jian

    2017-01-01

    Full Text Available Objective: to construct a multifactor prediction model for the individual risk of T2DM, and to explore new ideas for early warning, prevention and personalized health services for T2DM. Methods: logistic regression techniques were used to screen the risk factors for T2DM and to construct the risk prediction model of T2DM. Results: male risk prediction model logistic regression equation: logit(P) = BMI × 0.735 + vegetables × (−0.671) + age × 0.838 + diastolic pressure × 0.296 + physical activity × (−2.287) + sleep × (−0.009) + smoking × 0.214; female risk prediction model logistic regression equation: logit(P) = BMI × 1.979 + vegetables × (−0.292) + age × 1.355 + diastolic pressure × 0.522 + physical activity × (−2.287) + sleep × (−0.010). The area under the ROC curve for males was 0.83 (sensitivity 0.72, specificity 0.86); for females it was 0.84 (sensitivity 0.75, specificity 0.90). Conclusion: the model data come from a nested case-control study; the risk prediction model was established using mature logistic regression techniques, and it shows high predictive sensitivity, specificity and stability.
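
    Given the reported coefficients, turning the logit into an individual risk is a single sigmoid evaluation. The sketch below uses the male coefficients from the abstract; the variable coding, the example inputs, and the zero intercept (the abstract reports none) are assumptions for illustration.

```python
import math

# Coefficients of the reported male model (variables must be coded as
# in the original study; the inputs below are purely illustrative).
MALE_COEF = {
    "BMI": 0.735, "vegetables": -0.671, "age": 0.838,
    "diastolic pressure": 0.296, "physical activity": -2.287,
    "sleep": -0.009, "smoking": 0.214,
}

def t2dm_risk(features, coef=MALE_COEF, intercept=0.0):
    """Predicted probability from logit(P) = intercept + sum(coef * x).

    The abstract does not report an intercept, so it defaults to 0 here.
    """
    logit = intercept + sum(coef[k] * features[k] for k in coef)
    return 1.0 / (1.0 + math.exp(-logit))  # inverse-logit (sigmoid)

x = {"BMI": 1, "vegetables": 1, "age": 1, "diastolic pressure": 0,
     "physical activity": 1, "sleep": 0, "smoking": 1}
p = t2dm_risk(x)
print(f"predicted risk: {p:.3f}")
```

Note how the large negative physical-activity coefficient pulls the logit, and hence the predicted risk, down sharply for active individuals.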

  9. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    Directory of Open Access Journals (Sweden)

    Ching-Hsue Cheng

    2018-01-01

    Full Text Available The issue of financial distress prediction is an important and challenging research topic in the financial field. Currently, there are many methods for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that the prediction results of artificial intelligence methods are better than those of traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies are nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) unlike previous models, which lack the concept of time series, the proposed model incorporates it; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the proposed model can generate rules and mathematical formulas of financial distress, providing references for investors and decision makers. The results show that the proposed method is better than the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  10. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Science.gov (United States)

    Raia, S.; Alvioli, M.; Rossi, M.; Baum, R.L.; Godt, J.W.; Guzzetti, F.

    2013-01-01

    Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are deterministic. These models extend spatially the static stability models adopted in geotechnical engineering and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the existing models is the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of shallow rainfall-induced landslides. For the purpose, we have modified the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a stochastic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. 
The outputs of several model runs obtained varying the input parameters

  11. Prediction of Hemodynamic Response to Epinephrine via Model-Based System Identification.

    Science.gov (United States)

    Bighamian, Ramin; Soleymani, Sadaf; Reisner, Andrew T; Seri, Istvan; Hahn, Jin-Oh

    2016-01-01

    In this study, we present a system identification approach to the mathematical modeling of hemodynamic responses to vasopressor-inotrope agents. We developed a hybrid model called the latency-dose-response-cardiovascular (LDC) model that incorporated 1) a low-order lumped latency model to reproduce the delay associated with the transport of vasopressor-inotrope agent and the onset of physiological effect, 2) phenomenological dose-response models to dictate the steady-state inotropic, chronotropic, and vasoactive responses as a function of vasopressor-inotrope dose, and 3) a physiological cardiovascular model to translate the agent's actions into the ultimate response of blood pressure. We assessed the validity of the LDC model to fit vasopressor-inotrope dose-response data using data collected from five piglet subjects during variable epinephrine infusion rates. The results suggested that the LDC model was viable in modeling the subjects' dynamic responses: After tuning the model to each subject, the r² values for measured versus model-predicted mean arterial pressure were consistently higher than 0.73. The results also suggested that intersubject variability in the dose-response models, rather than the latency models, had significantly more impact on the model's predictive capability: Fixing the latency model to population-averaged parameter values resulted in r² values higher than 0.57 between measured versus model-predicted mean arterial pressure, while fixing the dose-response model to population-averaged parameter values yielded nonphysiological predictions of mean arterial pressure. We conclude that the dose-response relationship must be individualized, whereas a population-averaged latency model may be acceptable with minimal loss of model fidelity.
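
    The latency-then-dose-response structure of the LDC model can be sketched as a pure transport delay, a first-order onset lag, and an Emax dose-response mapped onto mean arterial pressure (MAP). All parameter values and the Emax form below are hypothetical stand-ins, not the fitted models from the study.

```python
import math

# Hypothetical parameter values for illustration only.
DELAY_S, TAU_S = 30.0, 60.0   # transport delay and onset time constant [s]
EMAX, EC50 = 40.0, 0.5        # dose-response block [mmHg, ug/kg/min]
MAP_BASELINE = 65.0           # baseline mean arterial pressure [mmHg]

def map_response(dose, t):
    """MAP at time t (s) after starting a constant epinephrine dose."""
    steady = EMAX * dose / (EC50 + dose)      # Emax dose-response block
    if t < DELAY_S:                           # transport latency block
        return MAP_BASELINE
    onset = 1.0 - math.exp(-(t - DELAY_S) / TAU_S)  # first-order onset
    return MAP_BASELINE + steady * onset

# The response approaches baseline + Emax*dose/(EC50+dose) as t grows.
print(map_response(0.5, 600.0))
```

In the study's terms, individualizing `EMAX` and `EC50` (the dose-response block) matters much more than individualizing `DELAY_S` and `TAU_S` (the latency block).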

  12. Collaboration and abstract representations: towards predictive models based on raw speech and eye-tracking data

    OpenAIRE

    Nüssli, Marc-Antoine; Jermann, Patrick; Sangin, Mirweis; Dillenbourg, Pierre

    2009-01-01

    This study explores the possibility of using machine learning techniques to build predictive models of performance in collaborative induction tasks. More specifically, we examined how signal-level data, such as eye-gaze data and raw speech, may be used to build such models. The results show that these low-level features do have some potential to predict performance in such tasks. Implications for the design of future applications are briefly discussed.

  13. Predicting debris flow occurrence in Eastern Italian Alps based on hydrological and geomorphological modelling

    Science.gov (United States)

    Nikolopoulos, Efthymios I.; Borga, Marco; Destro, Elisa; Marchi, Lorenzo

    2015-04-01

    Most work so far on the prediction of debris flow occurrence has focused on the identification of critical rainfall conditions. However, findings in the literature have shown that critical rainfall thresholds cannot always accurately identify debris flow occurrence, leading to false detections (positive or negative). One of the main reasons for this limitation is that critical rainfall thresholds do not account for the characteristics of the underlying land surface (e.g. geomorphology, moisture conditions, sediment availability), which are strongly related to debris flow triggering. In addition, in areas where debris flows occur predominantly as a result of channel-bed failure (as in many Alpine basins), the triggering factor is runoff, which suggests that identifying critical runoff conditions is more pertinent for debris flow prediction than critical rainfall. The primary objective of this study is to investigate the potential of a triggering index (TI), which combines variables related to runoff generation and channel morphology, for predicting debris flow occurrence. TI is based on a threshold criterion developed in past works (Tognacca et al., 2000; Berti and Simoni, 2005; Gregoretti and Dalla Fontana, 2008) and combines information on unit-width peak flow, local channel slope, and mean grain size. Estimation of peak discharge is based on the application of a distributed hydrologic model, while local channel slope is derived from a high-resolution (5 m) DEM. Scaling functions relating peak flow and channel width to drainage area are adopted, since it is not possible to measure channel width or simulate peak flow at all channel nodes. TI values are mapped over the channel network, allowing spatially distributed prediction; rather than identifying debris flow occurrence at single points, we identify it with reference to the tributary catchment involved. Evaluation of TI is carried out for five different basins
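
    A triggering index of the kind described can be sketched as follows; the power-law scaling, exponents, and threshold value are placeholders for illustration, not the published criterion.

```python
import numpy as np

def unit_peak_flow(area_km2, c=2.0, k=0.6):
    """Unit-width peak flow scaled with drainage area (illustrative power law)."""
    return c * area_km2 ** k  # m^2/s per unit channel width

def triggering_index(q_unit, slope, d50_m, alpha=1.0, beta=1.0):
    """Illustrative triggering index combining runoff, channel slope and grain size.

    Higher values indicate conditions more favourable to channel-bed failure.
    The exponents and threshold used here are placeholders, not the published criterion.
    """
    return (q_unit * slope ** alpha) / (9.81 * d50_m) ** beta

# Map the index over a set of tributary catchments (synthetic values)
areas = np.array([0.5, 2.0, 8.0])        # drainage area, km^2
slopes = np.array([0.35, 0.22, 0.15])    # local channel slope (m/m)
d50 = np.array([0.08, 0.10, 0.12])       # mean grain size, m

ti = triggering_index(unit_peak_flow(areas), slopes, d50)
triggered = ti > 0.8                     # placeholder threshold criterion
```

    In the study itself the peak flows come from a distributed hydrologic model and the slopes from a 5 m DEM; the vectorized mapping above mirrors how TI is evaluated over many channel nodes at once.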

  14. Reconstruction of walleye exploitation based on angler diary records and a model of predicted catches.

    Science.gov (United States)

    Willms, Allan R; Green, David M

    2007-11-01

    The walleye population in Canadarago Lake, New York, was 81-95% exploited in the 1988 fishing season, the year in which a previous restriction on the length and number of legally harvestable fish was liberalized. Using diary records from a subset of fishermen, growth estimates, and an estimate of the walleye population in the following year, a method is developed to reconstruct the fish population back to the spring of 1988 and thus determine the exploitation rate. The method is based on a model of diary catches that partitions time and fish length into a set of cells and relates predicted catches and population sizes in these cells. The method's sensitivity to the partitioning scheme, the growth estimates, and the diary data is analyzed. The method could be employed in other fish exploitation analyses and demonstrates the use of inexpensive angler-collected data in fisheries management.

  15. Iterated non-linear model predictive control based on tubes and contractive constraints.

    Science.gov (United States)

    Murillo, M; Sánchez, G; Giovanini, L

    2016-05-01

    This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained, and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. Simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter-type unmanned aerial vehicle.
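
    The successive-linearization step at the core of such an algorithm can be sketched with numerical Jacobians; the dynamics below are an illustrative toy system (a driven pendulum), not the paper's quadcopter model.

```python
import numpy as np

def f(x, u):
    """Illustrative nonlinear dynamics: pendulum angle/rate with torque input."""
    return np.array([x[1], -np.sin(x[0]) + u[0]])

def linearize(f, x0, u0, eps=1e-6):
    """Forward-difference Jacobians A = df/dx, B = df/du around (x0, u0).

    Repeating this along a reference trajectory yields the linear
    time-varying model used by the convexified MPC subproblems.
    """
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    f0 = f(x0, u0)
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B

x0, u0 = np.array([0.1, 0.0]), np.array([0.0])
A, B = linearize(f, x0, u0)
```

    Each MPC iteration would re-linearize around the latest predicted trajectory, which is what makes the inner iteration loop shrink the linearization error.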

  16. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    Science.gov (United States)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load, and for ensuring the quality of power supply. Power grids require each generation unit to have satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a generation unit. However, the commonly used method of calculating these indices is based on particular data samples from AGC responses and can lead to incorrect results in practice. This paper proposes a new method for estimating the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between the performance indices and the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.

  17. Virtual-view PSNR prediction based on a depth distortion tolerance model and support vector machine.

    Science.gov (United States)

    Chen, Fen; Chen, Jiali; Peng, Zongju; Jiang, Gangyi; Yu, Mei; Chen, Hua; Jiao, Renzhi

    2017-10-20

    Quality prediction of virtual views is important for free-viewpoint video systems and can be used as feedback to improve the performance of depth video coding and virtual-view rendering. In this paper, an efficient virtual-view peak signal-to-noise ratio (PSNR) prediction method is proposed. First, the effect of depth distortion on virtual-view quality is analyzed in detail, and a depth distortion tolerance (DDT) model that determines the DDT range is presented. Next, the DDT model is used to predict the virtual-view quality. Finally, a support vector machine (SVM) is utilized to train and obtain the virtual-view quality prediction model. Experimental results show that the Spearman's rank correlation coefficient and root mean square error between the actual PSNR and the PSNR predicted by the DDT model are 0.8750 and 0.6137 on average, and by the SVM prediction model are 0.9109 and 0.5831, respectively. The computational complexity of the SVM method is lower than that of the DDT model and the state-of-the-art methods.

  18. A Physiologically Based Pharmacokinetic Model for Pregnant Women to Predict the Pharmacokinetics of Drugs Metabolized Via Several Enzymatic Pathways.

    Science.gov (United States)

    Dallmann, André; Ince, Ibrahim; Coboeken, Katrin; Eissing, Thomas; Hempel, Georg

    2017-09-18

    Physiologically based pharmacokinetic modeling is considered a valuable tool for predicting pharmacokinetic changes in pregnancy and subsequently guiding in-vivo pharmacokinetic trials in pregnant women. The objective of this study was to extend and verify a previously developed physiologically based pharmacokinetic model for pregnant women for predicting the pharmacokinetics of drugs metabolized via several cytochrome P450 enzymes. Quantitative information on gestation-specific changes in enzyme activity available in the literature was incorporated into a pregnancy physiologically based pharmacokinetic model, and the pharmacokinetics of eight drugs metabolized via one or multiple cytochrome P450 enzymes were predicted. The tested drugs were caffeine, midazolam, nifedipine, metoprolol, ondansetron, granisetron, diazepam, and metronidazole. The pharmacokinetic predictions were evaluated by comparison with in-vivo pharmacokinetic data obtained from the literature. The pregnancy physiologically based pharmacokinetic model successfully predicted the pharmacokinetics of all tested drugs: the observed pregnancy-induced pharmacokinetic changes were qualitatively and quantitatively reasonably well predicted for all drugs. Ninety-seven percent of the mean plasma concentrations predicted in pregnant women fell within a twofold error range, and 63% within a 1.25-fold error range. For all drugs, the predicted area under the concentration-time curve was within a 1.25-fold error range. The presented pregnancy physiologically based pharmacokinetic model can quantitatively predict the pharmacokinetics of drugs that are metabolized via one or multiple cytochrome P450 enzymes by integrating prior knowledge of the pregnancy-related effect on these enzymes. This pregnancy physiologically based pharmacokinetic model may thus be used to identify potential exposure changes in pregnant women a priori and to eventually support informed decision making when clinical trials are designed in this
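
    The twofold and 1.25-fold error ranges used for evaluation correspond to a simple ratio test on predicted versus observed values, which might be sketched as follows (the concentration values are synthetic, not the study's data).

```python
import numpy as np

def fraction_within_fold(predicted, observed, fold):
    """Fraction of predictions whose ratio to observations lies in [1/fold, fold]."""
    ratio = np.asarray(predicted) / np.asarray(observed)
    return np.mean((ratio >= 1.0 / fold) & (ratio <= fold))

# Illustrative predicted/observed plasma concentrations (not the study's data)
pred = np.array([1.0, 2.2, 0.9, 4.0, 0.5])
obs  = np.array([1.1, 2.0, 1.0, 2.5, 1.2])

within_2fold = fraction_within_fold(pred, obs, 2.0)   # here 4 of 5 points
within_125   = fraction_within_fold(pred, obs, 1.25)  # here 3 of 5 points
```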

  19. Climate-based models for pulsed resources improve predictability of consumer population dynamics: outbreaks of house mice in forest ecosystems.

    Science.gov (United States)

    Holland, E Penelope; James, Alex; Ruscoe, Wendy A; Pech, Roger P; Byrom, Andrea E

    2015-01-01

    Accurate predictions of the timing and magnitude of consumer responses to episodic seeding events (masts) are important for understanding ecosystem dynamics and for managing outbreaks of invasive species generated by masts. While models relating consumer populations to resource fluctuations have been developed successfully for a range of natural and modified ecosystems, a critical gap that needs addressing is better prediction of resource pulses. A recent model used change in summer temperature from one year to the next (ΔT) for predicting masts for forest and grassland plants in New Zealand. We extend this climate-based method in the framework of a model for consumer-resource dynamics to predict invasive house mouse (Mus musculus) outbreaks in forest ecosystems. Compared with previous mast models based on absolute temperature, the ΔT method for predicting masts resulted in an improved model for mouse population dynamics. There was also a threshold effect of ΔT on the likelihood of an outbreak occurring. The improved climate-based method for predicting resource pulses and consumer responses provides a straightforward rule of thumb for determining, with one year's advance warning, whether management intervention might be required in invaded ecosystems. The approach could be applied to consumer-resource systems worldwide where climatic variables are used to model the size and duration of resource pulses, and may have particular relevance for ecosystems where global change scenarios predict increased variability in climatic events.
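
    The ΔT predictor amounts to a year-on-year summer temperature difference combined with a threshold rule; a minimal sketch, with an illustrative threshold value and synthetic temperatures:

```python
def delta_t(summer_temps):
    """Year-on-year change in mean summer temperature (the delta-T predictor)."""
    return [t1 - t0 for t0, t1 in zip(summer_temps, summer_temps[1:])]

def outbreak_expected(dt_series, threshold=0.8):
    """Flag years whose delta-T exceeds a threshold; 0.8 degrees C is illustrative."""
    return [d > threshold for d in dt_series]

temps = [15.2, 14.1, 15.9, 15.7, 17.0]  # mean summer temperature, deg C (synthetic)
dt = delta_t(temps)                     # approx. [-1.1, 1.8, -0.2, 1.3]
flags = outbreak_expected(dt)           # [False, True, False, True]
```

    Because ΔT for a given year is known one year before the consumer response peaks, a rule like this is what gives managers the one-year advance warning mentioned above.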

  20. Case studies of extended model-based flood forecasting: prediction of dike strength and flood impacts

    Science.gov (United States)

    Stuparu, Dana; Bachmann, Daniel; Bogaard, Tom; Twigt, Daniel; Verkade, Jan; de Bruijn, Karin; de Leeuw, Annemargreet

    2017-04-01

    Flood forecasts, warnings, and emergency response are important components of flood risk management. Most flood forecasting systems use models to translate weather predictions into forecasted discharges or water levels. However, this information is often not sufficient for real-time decisions. A sound understanding of the reliability of embankments and of flood dynamics is needed to react in time and reduce the negative effects of a flood. Where are the weak points in the dike system? When, where, and how much water will flow? When and where is the greatest impact expected? Model-based flood impact forecasting tries to answer these questions by adding new dimensions to existing forecasting systems, providing forecasted information about: (a) the dike strength during the event (reliability), (b) the flood extent in case of an overflow or a dike failure (flood spread), and (c) the assets at risk (impacts). This work presents three case studies in which such a set-up is applied, and highlights their special features. Forecasting of dike strength. The first case study focuses on the forecast of dike strength in the Netherlands for the Rhine branches Waal, Nederrijn and IJssel. A so-called reliability transformation is used to translate the predicted water levels at selected dike sections into failure probabilities during a flood event. The reliability of a dike section is defined by fragility curves, a summary of the dike strength conditional on the water level. The reliability information enhances emergency management and the inspection of embankments. Ensemble forecasting. The second case study shows the set-up of a flood impact forecasting system in Dumfries, Scotland. The existing forecasting system is extended with a 2D flood-spreading model in combination with the Delft-FIAT impact model. Ensemble forecasts are used to account for the uncertainty in the precipitation forecasts, which is useful for quantifying the certainty of a forecasted flood event.
From global

  1. A Risk-based Model Predictive Control Approach to Adaptive Interventions in Behavioral Health.

    Science.gov (United States)

    Zafra-Cabeza, Ascensión; Rivera, Daniel E; Collins, Linda M; Ridao, Miguel A; Camacho, Eduardo F

    2011-07-01

    This paper examines how control engineering and risk management techniques can be applied in the field of behavioral health through their use in the design and implementation of adaptive behavioral interventions. Adaptive interventions are gaining increasing acceptance as a means to improve prevention and treatment of chronic, relapsing disorders, such as abuse of alcohol, tobacco, and other drugs, mental illness, and obesity. A risk-based Model Predictive Control (MPC) algorithm is developed for a hypothetical intervention inspired by Fast Track, a real-life program whose long-term goal is the prevention of conduct disorders in at-risk children. The MPC-based algorithm decides on the appropriate frequency of counselor home visits, mentoring sessions, and the availability of after-school recreation activities by relying on a model that includes identifiable risks, their costs, and the cost/benefit assessment of mitigating actions. MPC is particularly suited for the problem because of its constraint-handling capabilities, and its ability to scale to interventions involving multiple tailoring variables. By systematically accounting for risks and adapting treatment components over time, an MPC approach as described in this paper can increase intervention effectiveness and adherence while reducing waste, resulting in advantages over conventional fixed treatment. A series of simulations are conducted under varying conditions to demonstrate the effectiveness of the algorithm.

  2. A Novel Adaptive Conditional Probability-Based Predicting Model for User’s Personality Traits

    Directory of Open Access Journals (Sweden)

    Mengmeng Wang

    2015-01-01

    With the pervasive increase in social media use, the explosion of user-generated data provides a potentially rich source of information that can help online researchers understand users' behavior in depth. Since users' personality traits are the driving force behind their behavior, in this paper we first extract, along with social network features, linguistic features, emotional statistical features, and topic features from users' Facebook status updates, and quantify the importance of these features via the Kendall correlation coefficient. Then, on the basis of the weighted features and dynamically updated thresholds of personality traits, we deploy a novel adaptive conditional probability-based prediction model that uses prior knowledge of the correlations between a user's personality traits to predict the user's Big Five personality traits. In the experimental work, we confirm the existence of correlations between users' personality traits, which provides theoretical support for the proposed method. Moreover, on the same Facebook dataset, our method achieves an F1-measure of 80.6% when correlations between personality traits are taken into account, an improvement of 5.8% over other approaches.

  3. Gas detonation cell width prediction model based on support vector regression

    Directory of Open Access Journals (Sweden)

    Jiyang Yu

    2017-10-01

    Detonation cell width is an important parameter in hydrogen explosion assessments. Experimental data on gas detonation are statistically analyzed to establish a universal method for numerically predicting detonation cell widths. It is commonly understood that the detonation cell width, λ, is highly correlated with the characteristic reaction zone width, δ. Classical parametric regression methods were widely applied in earlier research to build an explicit semiempirical correlation for the ratio λ/δ. The obtained correlations express the dependency of the ratio λ/δ on a dimensionless effective chemical activation energy and a dimensionless temperature of the gas mixture. In this paper, support vector regression (SVR), a nonparametric machine learning method, is applied to obtain functions that fit the experimental data better and give more accurate predictions. Furthermore, a third parameter, a dimensionless pressure, is considered as an additional independent variable. It is found that the three-parameter SVR significantly improves the performance of the fitting function. SVR also provides better adaptability: the model functions can easily be renewed when the experimental database is updated or new regression parameters are considered.
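
    A three-parameter SVR of the kind described might be set up as follows with scikit-learn; the synthetic data, the target function, and the hyperparameters are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic training set: dimensionless activation energy, temperature, pressure
X = rng.uniform([4.0, 5.0, 0.5], [12.0, 15.0, 3.0], size=(200, 3))
# Synthetic target standing in for ln(lambda/delta) (illustrative linear law + noise)
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 200)

# Standardize inputs, then fit an RBF-kernel support vector regressor
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)

# Predict the cell-width ratio for a new mixture state
x_new = np.array([[8.0, 10.0, 1.5]])
pred = model.predict(x_new)[0]
```

    Adding the third input column is all that "three-parameter SVR" requires mechanically; the gain reported in the paper comes from the extra pressure information, not from any change to the learning machinery.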

  4. Predictive modelling-based design and experiments for synthesis and spinning of bioinspired silk fibres

    Science.gov (United States)

    Gronau, Greta; Jacobsen, Matthew M.; Huang, Wenwen; Rizzo, Daniel J.; Li, David; Staii, Cristian; Pugno, Nicola M.; Wong, Joyce Y.; Kaplan, David L.; Buehler, Markus J.

    2016-01-01

    Scalable computational modelling tools are required to guide the rational design of complex hierarchical materials with predictable functions. Here, we utilize mesoscopic modelling, integrated with genetic block copolymer synthesis and bioinspired spinning process, to demonstrate de novo materials design that incorporates chemistry, processing and material characterization. We find that intermediate hydrophobic/hydrophilic block ratios observed in natural spider silks and longer chain lengths lead to outstanding silk fibre formation. This design by nature is based on the optimal combination of protein solubility, self-assembled aggregate size and polymer network topology. The original homogeneous network structure becomes heterogeneous after spinning, enhancing the anisotropic network connectivity along the shear flow direction. Extending beyond the classical polymer theory, with insights from the percolation network model, we illustrate the direct proportionality between network conductance and fibre Young's modulus. This integrated approach provides a general path towards de novo functional network materials with enhanced mechanical properties and beyond (optical, electrical or thermal) as we have experimentally verified. PMID:26017575

  5. Predictive modelling-based design and experiments for synthesis and spinning of bioinspired silk fibres.

    Science.gov (United States)

    Lin, Shangchao; Ryu, Seunghwa; Tokareva, Olena; Gronau, Greta; Jacobsen, Matthew M; Huang, Wenwen; Rizzo, Daniel J; Li, David; Staii, Cristian; Pugno, Nicola M; Wong, Joyce Y; Kaplan, David L; Buehler, Markus J

    2015-05-28

    Scalable computational modelling tools are required to guide the rational design of complex hierarchical materials with predictable functions. Here, we utilize mesoscopic modelling, integrated with genetic block copolymer synthesis and bioinspired spinning process, to demonstrate de novo materials design that incorporates chemistry, processing and material characterization. We find that intermediate hydrophobic/hydrophilic block ratios observed in natural spider silks and longer chain lengths lead to outstanding silk fibre formation. This design by nature is based on the optimal combination of protein solubility, self-assembled aggregate size and polymer network topology. The original homogeneous network structure becomes heterogeneous after spinning, enhancing the anisotropic network connectivity along the shear flow direction. Extending beyond the classical polymer theory, with insights from the percolation network model, we illustrate the direct proportionality between network conductance and fibre Young's modulus. This integrated approach provides a general path towards de novo functional network materials with enhanced mechanical properties and beyond (optical, electrical or thermal) as we have experimentally verified.

  6. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  7. Prediction of MHC class II binding peptides based on an iterative learning model

    Science.gov (United States)

    Murugan, Naveen; Dai, Yang

    2005-01-01

    Background Prediction of the binding ability of antigen peptides to major histocompatibility complex (MHC) class II molecules is important in vaccine development. The variable length of each binding peptide complicates this prediction. Motivated by a text mining model designed for building a classifier from labeled and unlabeled examples, we have developed an iterative supervised learning model for the prediction of MHC class II binding peptides. Results A linear programming (LP) model was employed for the learning task at each iteration, since it is fast and can re-optimize the previous classifier when the training sets are altered. The performance of the new model has been evaluated with benchmark datasets. The outcome demonstrates that the model achieves a prediction accuracy that is competitive with the advanced predictors (the Gibbs sampler and TEPITOPE). The average areas under the ROC curve obtained from one variant of our model are 0.753 and 0.715 for the original and homology-reduced benchmark sets, respectively. The corresponding values are 0.744 and 0.673 for the Gibbs sampler and 0.702 and 0.667 for TEPITOPE. Conclusion The iterative learning procedure appears to be effective in the prediction of MHC class II binders. It offers an alternative approach to this important prediction problem. PMID:16351712

  8. Genomic prediction based on data from three layer lines using non-linear regression models

    NARCIS (Netherlands)

    Huang, H.; Windig, J.J.; Vereijken, A.; Calus, M.P.L.

    2014-01-01

    Background - Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. Methods - In an attempt to alleviate

  9. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  10. A model to predict vaginal delivery in nulliparous women based on maternal characteristics and intrapartum ultrasound.

    Science.gov (United States)

    Eggebø, Tørbjorn Moe; Wilhelm-Benartzi, Charlotte; Hassan, Wassim A; Usman, Sana; Salvesen, Kjell A; Lees, Christoph C

    2015-09-01

    Accurate prediction of whether a nulliparous woman will have a vaginal delivery would be a major advance in obstetrics. The objective of this study was to develop such a model based on maternal characteristics and the results of intrapartum ultrasound. One hundred twenty-two nulliparous women in the first stage of labor were included in a prospective observational two-centre study. Labor was classified as prolonged according to the respective countries' national guidelines. Fetal head position was assessed with transabdominal ultrasound, cervical dilatation by digital examination, and transperineal ultrasound was used to determine the head-perineum distance and the presence of caput succedaneum. The subjects were divided into a testing set (n = 61) and a validation set (n = 61), and a risk score was derived using multivariable logistic regression with vaginal birth as the outcome, dichotomized into no/cesarean delivery and yes/vaginal birth. Covariates included head-perineum distance, caput succedaneum, and occiput posterior position, dichotomized as ≤40 mm vs. >40 mm, <10 mm vs. ≥10 mm, and no vs. yes, respectively. Maternal age, gestational age, and maternal body mass index were included as continuous covariates. The dichotomized score was significantly associated with vaginal delivery (P = .03). Women with a score above the median had more than 10 times the odds of a vaginal delivery compared with those with a score below the median. The receiver-operating characteristic curve showed an area under the curve of 0.853 (95% confidence interval, 0.678-1.000). A risk score based on maternal characteristics and intrapartum findings can predict vaginal delivery in nulliparous women in the first stage of labor.
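
    A risk score of this kind, multivariable logistic regression over dichotomized ultrasound findings plus continuous maternal covariates, might be sketched as follows; the data, covariate names, and effect sizes are synthetic illustrations, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200

# Illustrative covariates mirroring the study's predictors (synthetic data)
head_perineum_le40 = rng.integers(0, 2, n)  # head-perineum distance <= 40 mm
caput_lt10 = rng.integers(0, 2, n)          # caput succedaneum < 10 mm
not_occiput_post = rng.integers(0, 2, n)    # fetal head not occiput posterior
age = rng.normal(28, 4, n)                  # maternal age (continuous covariate)

X = np.column_stack([head_perineum_le40, caput_lt10, not_occiput_post, age])

# Synthetic outcome: favourable findings raise the odds of vaginal delivery
logit = -2.0 + 1.2 * head_perineum_le40 + 0.8 * caput_lt10 + 1.0 * not_occiput_post
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
risk_score = model.predict_proba(X)[:, 1]  # per-woman probability of vaginal birth
```

    Dichotomizing the fitted probability at its median reproduces the above-median vs. below-median comparison reported in the abstract.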

  11. Application of model predictive control strategy based on fuzzy identification to an SP-100 space reactor

    Energy Technology Data Exchange (ETDEWEB)

    Na, Man Gyun [Department of Nuclear Engineering, Chosun University, 375 Seosuk-dong, Dong-gu, Gwangju 501-759 (Korea, Republic of)]. E-mail: magyna@chosun.ac.kr; Upadhyaya, Belle R. [Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996-2300 (United States)

    2006-11-15

    In this work, a model predictive control method combined with fuzzy identification is applied to the design of the thermoelectric (TE) power control in the SP-100 space reactor. The future TE power is predicted using a fuzzy model identified by a subtractive clustering method, a fast and robust algorithm. The objectives of the proposed fuzzy model predictive controller are to minimize both the difference between the predicted TE power and the desired power, and the variation of the control drum angle that adjusts the control reactivity, subject to the maximum and minimum control drum angle and the maximum drum angle variation speed. A genetic algorithm, which is effective in handling multiple objectives, is used to optimize the fuzzy model predictive controller. A lumped-parameter simulation model of the SP-100 nuclear space reactor is used to verify the proposed controller. Numerical simulations show that the TE generator power level controlled by the proposed controller tracks the target power level effectively while satisfying all control constraints.

  12. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve.

    Science.gov (United States)

    Li, Yi; Chen, Yuren

    2016-12-30

    To make driving assistance systems more humanized, this study focused on the prediction of, and assistance with, drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in the driver's field of vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and the mechanical status in the preceding second. A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. The model results revealed that the driver-vision lane model and visual elements have an important influence on drivers' perception-response time. Compared with passive roadside safety infrastructure, proper visual geometry design, timely visual guidance, and the completeness of a curve's visual information are significant factors for drivers' perception-response time.

  13. Prediction model for the diffusion length in silicon-based solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Cheknane, A [Laboratoire d' Etude et Developpement des Materiaux Semiconducteurs et Dielectrques, Universite Amar Telidji de Laghouat, BP 37G, Laghouat 03000 (Algeria); Benouaz, T, E-mail: cheknanali@yahoo.co [Laboratoire de Modelisation, Universite Abou BakarBelkaid de Tlemcen Algerie (Algeria)

    2009-07-15

    A novel approach to computing diffusion lengths in solar cells is presented. A simulation is performed to give computational support to the development of neural networks (NNs), a powerful predictive modelling technique, used here to predict the diffusion length in mono-crystalline silicon solar cells. Furthermore, the computation of the diffusion length and its comparison with measurement data obtained using the infrared injection method are presented and discussed.

  14. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    To address the imbalance of resources and workloads at data centers, together with the overhead and high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud-model time-series workload prediction algorithm. By setting upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads from a workload time series built with the cloud model, and stipulating a general VM migration criterion, workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host to carry out the migration. Experimental results and comparisons with other peer research show that the proposed method can effectively avoid VM migrations caused by momentary workload peaks, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, improving the utilization of resources in the entire data center.

  15. Prediction of paraquat exposure and toxicity in clinically ill poisoned patients: a model based approach.

    Science.gov (United States)

    Wunnapuk, Klintean; Mohammed, Fahim; Gawarammana, Indika; Liu, Xin; Verbeeck, Roger K; Buckley, Nicholas A; Roberts, Michael S; Musuamba, Flora T

    2014-10-01

    Paraquat poisoning is a medical problem in many parts of Asia and the Pacific. The mortality rate is extremely high, as there is no effective treatment. We analyzed data collected during an ongoing cohort study on self-poisoning and from a randomized controlled trial assessing the efficacy of immunosuppressive therapy in hospitalized paraquat-intoxicated patients. The aim of this analysis was to characterize the toxicokinetics and toxicodynamics of paraquat in this population. A non-linear mixed effects approach was used to perform a toxicokinetic/toxicodynamic population analysis in a cohort of 78 patients. The paraquat plasma concentrations were best fitted by a two compartment toxicokinetic structural model with first order absorption and first order elimination. Changes in renal function were used for the assessment of paraquat toxicodynamics. The estimates of the toxicokinetic parameters for the apparent clearance, the apparent volume of distribution and the elimination half-life were 1.17 l h(-1), 2.4 l kg(-1) and 87 h, respectively. Renal function, namely creatinine clearance, was the most significant covariate to explain between patient variability in paraquat clearance. This model suggested that a reduction in paraquat clearance occurred within 24 to 48 h after poison ingestion, and afterwards the clearance was constant over time. The model estimated that a paraquat concentration of 429 μg l(-1) caused 50% of maximum renal toxicity. The immunosuppressive therapy tested during this study was associated with only 8% improvement of renal function. The developed models may be useful as prognostic tools to predict patient outcome based on patient characteristics on admission and to assess drug effectiveness during antidote drug development. © 2014 The British Pharmacological Society.
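    The reported toxicodynamic relationship can be sketched as a simple Emax model combined with first-order elimination, using the parameter values quoted above (EC50 of 429 μg l(-1), half-life of 87 h); the Hill slope of 1 and the example concentrations are assumptions for illustration.

```python
import math

# Emax sketch: a paraquat concentration of 429 ug/L produces 50% of
# maximal renal toxicity (EC50 from the fitted model); Hill slope 1 assumed.
EC50 = 429.0  # ug/L

def renal_toxicity_fraction(conc):
    """Fraction of maximum renal toxicity at plasma concentration `conc`."""
    return conc / (EC50 + conc)

def plasma_conc(c0, t_hours, half_life=87.0):
    """First-order elimination with the reported 87 h half-life."""
    k = math.log(2) / half_life
    return c0 * math.exp(-k * t_hours)

print(round(renal_toxicity_fraction(429.0), 2))  # → 0.5
print(round(plasma_conc(1000.0, 87.0)))          # → 500
```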

  16. A diffusivity model for predicting VOC diffusion in porous building materials based on fractal theory

    International Nuclear Information System (INIS)

    Liu, Yanfeng; Zhou, Xiaojun; Wang, Dengjia; Song, Cong; Liu, Jiaping

    2015-01-01

    Highlights: • Fractal theory is introduced into the prediction of the VOC diffusion coefficient. • An MSFC model of the diffusion coefficient is developed for porous building materials. • The MSFC model contains detailed pore structure parameters. • The accuracy of the MSFC model is verified by independent experiments. - Abstract: Most building materials are porous media, and the internal diffusion coefficients of such materials have an important influence on the emission characteristics of volatile organic compounds (VOCs). The pore structure of porous building materials has a significant impact on the diffusion coefficient, but its complex structural characteristics pose great difficulties for model development; the existing prediction models of the diffusion coefficient are flawed and need to be improved. Using scanning electron microscope (SEM) observations and mercury intrusion porosimetry (MIP) tests of typical porous building materials, this study developed a new diffusivity model: the multistage series-connection fractal capillary-bundle (MSFC) model. The model considers the variable-diameter capillaries formed by macropores connected in series as the main mass transfer paths, with the diameter distribution of the capillary bundles obeying a fractal power law in the cross section. In addition, the tortuosity of the macrocapillary segments with different diameters is obtained by fractal theory. Mesopores serve as the connections between the macrocapillary segments rather than as the main mass transfer paths. The theoretical results obtained using the MSFC model yielded highly accurate predictions of the diffusion coefficients and were in good agreement with the VOC concentration measurements in the environmental test chamber.

  17. A diffusivity model for predicting VOC diffusion in porous building materials based on fractal theory

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yanfeng, E-mail: lyfxjd@163.com; Zhou, Xiaojun; Wang, Dengjia; Song, Cong; Liu, Jiaping

    2015-12-15

    Highlights: • Fractal theory is introduced into the prediction of the VOC diffusion coefficient. • An MSFC model of the diffusion coefficient is developed for porous building materials. • The MSFC model contains detailed pore structure parameters. • The accuracy of the MSFC model is verified by independent experiments. - Abstract: Most building materials are porous media, and the internal diffusion coefficients of such materials have an important influence on the emission characteristics of volatile organic compounds (VOCs). The pore structure of porous building materials has a significant impact on the diffusion coefficient, but its complex structural characteristics pose great difficulties for model development; the existing prediction models of the diffusion coefficient are flawed and need to be improved. Using scanning electron microscope (SEM) observations and mercury intrusion porosimetry (MIP) tests of typical porous building materials, this study developed a new diffusivity model: the multistage series-connection fractal capillary-bundle (MSFC) model. The model considers the variable-diameter capillaries formed by macropores connected in series as the main mass transfer paths, with the diameter distribution of the capillary bundles obeying a fractal power law in the cross section. In addition, the tortuosity of the macrocapillary segments with different diameters is obtained by fractal theory. Mesopores serve as the connections between the macrocapillary segments rather than as the main mass transfer paths. The theoretical results obtained using the MSFC model yielded highly accurate predictions of the diffusion coefficients and were in good agreement with the VOC concentration measurements in the environmental test chamber.
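    The fractal ingredients of the MSFC model can be illustrated with a toy calculation. The expressions below are a generic fractal-tortuosity sketch under assumed parameter values (tortuosity exponent dt, pore geometry), not the paper's exact equations.

```python
# Illustrative fractal-tortuosity sketch (assumed exponent and geometry).

def tortuosity(length, diameter, dt=1.1):
    """Fractal tortuous path: L_actual = length**dt * diameter**(1 - dt),
    so tau = L_actual / length = (length / diameter)**(dt - 1)."""
    return (length / diameter) ** (dt - 1)

def effective_diffusivity(d0, porosity, length, diameter, dt=1.1):
    """Bulk VOC diffusivity d0 reduced by porosity and squared tortuosity."""
    return d0 * porosity / tortuosity(length, diameter, dt) ** 2

tau = tortuosity(1e-2, 1e-6)                      # 1 cm path, 1 um capillary
deff = effective_diffusivity(8e-6, 0.3, 1e-2, 1e-6)
print(round(tau, 3))   # → 2.512
print(deff < 8e-6)     # → True
```

    The qualitative behaviour matches the abstract's description: narrower capillaries are more tortuous, so the effective diffusivity falls below the bulk value.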

  18. To Set Up a Logistic Regression Prediction Model for Hepatotoxicity of Chinese Herbal Medicines Based on Traditional Chinese Medicine Theory

    Science.gov (United States)

    Liu, Hongjie; Li, Tianhao; Zhan, Sha; Pan, Meilan; Ma, Zhiguo; Li, Chenghua

    2016-01-01

    Aims. To establish a logistic regression (LR) prediction model for hepatotoxicity of Chinese herbal medicines (HMs) based on traditional Chinese medicine (TCM) theory and to provide a statistical basis for predicting hepatotoxicity of HMs. Methods. The correlations of hepatotoxic and nonhepatotoxic Chinese HMs with the four properties, five flavors, and channel tropism were analyzed with the chi-square test for two-way unordered categorical data. An LR prediction model was established and the accuracy of its predictions was evaluated. Results. The hepatotoxic and nonhepatotoxic Chinese HMs were related to the four properties (p < 0.05). In total, 12 variables from the four properties and five flavors were available for the LR. Four variables, warm and neutral of the four properties and pungent and salty of the five flavors, were selected to establish the LR prediction model, with a cutoff value of 0.204. Conclusions. Warm and neutral of the four properties and pungent and salty of the five flavors were the variables affecting hepatotoxicity. Based on these results, the established LR prediction model had some predictive power for hepatotoxicity of Chinese HMs. PMID:27656240
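    The form of such a model can be sketched as follows. The four binary indicators match the selected variables and the cutoff is the paper's 0.204, but the coefficient and intercept values are invented for illustration, since the fitted weights are not reproduced in the abstract.

```python
import math

# Hypothetical LR sketch: real predictors (warm, neutral, pungent, salty)
# and the reported cutoff, but invented coefficients.
COEF = {"warm": 1.1, "neutral": -0.8, "pungent": 0.9, "salty": 0.7}
INTERCEPT = -1.5
CUTOFF = 0.204  # from the paper

def predict_hepatotoxic(herb):
    """Return (probability, hepatotoxic?) for a dict of 0/1 indicators."""
    z = INTERCEPT + sum(COEF[k] * herb.get(k, 0) for k in COEF)
    p = 1 / (1 + math.exp(-z))
    return p, p >= CUTOFF

p, toxic = predict_hepatotoxic({"warm": 1, "pungent": 1})
print(round(p, 3), toxic)  # → 0.622 True
```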

  19. Development and validation of a prediction model for long-term sickness absence based on occupational health survey variables

    DEFF Research Database (Denmark)

    Roelen, Corné; Thorsen, Sannie; Heymans, Martijn

    2018-01-01

    Purpose: The purpose of this study is to develop and validate a prediction model for identifying employees at increased risk of long-term sickness absence (LTSA), by using variables commonly measured in occupational health surveys. Materials and methods: Based on the literature, 15 predictor...... = 0.68; 95% CI 0.61–0.76), but not practically useful. Conclusions: A prediction model based on occupational health survey variables identified employees with an increased LTSA risk, but should be further developed into a practically useful tool to predict the risk of LTSA in the general working...... LTSA during follow-up. Results: The 15-predictor model was reduced to a 9-predictor model including age, gender, education, self-rated health, mental health, prior LTSA, work ability, emotional job demands, and recognition by the management. Discrimination by the 9-predictor model was significant (AUC...

  20. Prediction of Combine Economic Life Based on Repair and Maintenance Costs Model

    Directory of Open Access Journals (Sweden)

    A Rohani

    2014-09-01

    Full Text Available Farm machinery managers often need to make complex economic decisions on machinery replacement. Repair and maintenance costs can have significant impacts on this economic decision, so the farm manager must be able to predict farm machinery repair and maintenance costs. This study aimed to identify a regression model that can adequately represent the repair and maintenance costs in terms of machine age in cumulative hours of use. Such a regression model can predict the repair and maintenance costs over longer time periods and can therefore be used to estimate the economic life. The study was conducted using field data collected from 11 John Deere 955 combine harvesters used in several western provinces of Iran. It was found that the power model performs best in predicting combine repair and maintenance costs. The results showed that the optimum replacement age of the John Deere 955 combine was 54300 cumulative hours.
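    The power model referred to above, cumulative cost C = a·h^b with h the cumulative hours of use, can be fitted by ordinary least squares after a log transform. The data below are synthetic, generated from known parameters, not the John Deere 955 records.

```python
import math

# Fit C = a * h**b by linear regression on log C = log a + b * log h.
def fit_power(hours, costs):
    xs = [math.log(h) for h in hours]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

hours = [1000, 2000, 4000, 8000]
costs = [2.0 * h ** 1.4 for h in hours]  # exact power-law data -> recoverable
a, b = fit_power(hours, costs)
print(round(a, 3), round(b, 3))  # → 2.0 1.4
```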

  1. Self-adaptive prediction of cloud resource demands using ensemble model and subtractive-fuzzy clustering based fuzzy neural network.

    Science.gov (United States)

    Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong

    2015-01-01

    In IaaS (infrastructure as a service) cloud environments, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurately predicting resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. Then the architecture of the prediction model is constructed. We adopt several base predictors to compose the ensemble model. Then the structure and learning algorithm of the fuzzy neural network are studied. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, we adopt different criteria to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands.

  2. Self-Adaptive Prediction of Cloud Resource Demands Using Ensemble Model and Subtractive-Fuzzy Clustering Based Fuzzy Neural Network

    Directory of Open Access Journals (Sweden)

    Zhijia Chen

    2015-01-01

    Full Text Available In IaaS (infrastructure as a service) cloud environments, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurately predicting resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. Then the architecture of the prediction model is constructed. We adopt several base predictors to compose the ensemble model. Then the structure and learning algorithm of the fuzzy neural network are studied. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, we adopt different criteria to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands.

  3. Self-Adaptive Prediction of Cloud Resource Demands Using Ensemble Model and Subtractive-Fuzzy Clustering Based Fuzzy Neural Network

    Science.gov (United States)

    Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong

    2015-01-01

    In IaaS (infrastructure as a service) cloud environments, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurately predicting resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. Then the architecture of the prediction model is constructed. We adopt several base predictors to compose the ensemble model. Then the structure and learning algorithm of the fuzzy neural network are studied. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, we adopt different criteria to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands. PMID:25691896
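    The subtractive clustering step used to seed the fuzzy c-means with a cluster count and initial centers can be sketched in a one-dimensional toy setting. The radius and stopping parameters below are common defaults, assumed here rather than taken from the paper.

```python
import math

# Minimal 1-D subtractive clustering: each point's "potential" is the sum
# of Gaussian influences of all points; the highest-potential point becomes
# a center, its influence is subtracted, and the process repeats.
def subtractive_clustering(points, ra=0.5, stop_ratio=0.15):
    rb = 1.5 * ra                       # neighborhood-suppression radius
    alpha, beta = 4 / ra ** 2, 4 / rb ** 2
    pot = [sum(math.exp(-alpha * (x - y) ** 2) for y in points) for x in points]
    first = max(pot)
    centers = []
    while True:
        i = max(range(len(points)), key=lambda k: pot[k])
        if pot[i] < stop_ratio * first:
            break
        centers.append(points[i])
        amp = pot[i]
        # subtract the chosen center's influence from all potentials
        pot = [p - amp * math.exp(-beta * (x - points[i]) ** 2)
               for p, x in zip(pot, points)]
    return centers

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(sorted(subtractive_clustering(data)))  # → [0.1, 5.1]
```

    On this toy series the method finds two clusters and their centers, which is exactly the information (rule count plus initial premise parameters) the fuzzy neural network needs.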

  4. Three-dimensional Simulation and Prediction of Solenoid Valve Failure Mechanism Based on Finite Element Model

    Science.gov (United States)

    Li, Jianfeng; Xiao, Mingqing; Liang, Yajun; Tang, Xilang; Li, Chao

    2018-01-01

    The solenoid valve is a basic automation component in wide use. Analyzing and predicting its degradation and failure mechanisms is important for improving solenoid valve reliability and extending its life. In this paper, a three-dimensional finite element analysis model of a solenoid valve is established based on the ANSYS Workbench software, and a sequential coupling method for calculating the temperature field and mechanical stress field of the solenoid valve is put forward. The simulation results show that the sequential coupling method can calculate and analyze the temperature and stress distributions of the solenoid valve accurately, which has been verified through an accelerated life test. The Kalman filtering algorithm is introduced into the data processing, which can effectively reduce measurement deviation and restore more accurate data. Based on different driving currents, a failure mechanism that can easily cause the degradation of coils is obtained, and an optimization design scheme for the electro-insulating rubbers is also proposed. The high temperature generated by the driving current and the thermal stress resulting from thermal expansion can easily cause the degradation of the coil wires, which lowers the electrical resistance of the coils and results in the eventual failure of the solenoid valve. The finite element analysis method can be applied to fault diagnosis and prognostics of various solenoid valves and can improve the reliability of solenoid valve health management.
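    The Kalman filtering step described for the data processing can be sketched as a one-dimensional, constant-state filter. The process/measurement noise values and the temperature readings below are assumed values for illustration, not the paper's test data.

```python
# 1-D Kalman filter sketch for smoothing noisy sensor readings
# (constant-state model; q, r, and the readings are assumed).
def kalman_smooth(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    out = []
    for z in measurements:
        p += q                # predict: state assumed constant, variance grows
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update estimate with measurement z
        p *= (1 - k)          # update estimate variance
        out.append(x)
    return out

readings = [24.8, 25.3, 24.9, 25.2, 25.0]
smoothed = kalman_smooth(readings, x0=25.0)
print([round(v, 2) for v in smoothed])
```

    Each estimate is a convex combination of the previous estimate and the new reading, so the smoothed series stays within the measurement range while suppressing jitter.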

  5. 3D structure prediction of lignolytic enzymes lignin peroxidase and manganese peroxidase based on homology modelling

    Directory of Open Access Journals (Sweden)

    SWAPNIL K. KALE

    2016-04-01

    Full Text Available Lignolytic enzymes have great biotechnological value in biopulping, biobleaching, and bioremediation. Manganese peroxidase (EC 1.11.1.13) and lignin peroxidase (EC 1.11.1.14) are extracellular, heme-containing peroxidases that catalyze the H2O2-dependent oxidation of lignin. Because of their ability to catalyse the oxidation of a wide range of organic and even some inorganic compounds, they are of tremendous industrial importance. In this study, the 3D structures of lignin peroxidase and manganese peroxidase were predicted by homology modeling using the Swiss PDB workspace. The physicochemical properties of the target enzymes, such as molecular weight, isoelectric point, grand average of hydropathy, and instability and aliphatic indices, were computed using ProtParam. The predicted secondary structure of MnP has 18 helices and 6 strands, while that of LiP has 20 helices and 4 strands. The generated 3D structures were visualized in PyMOL. The generated models for MnP and LiP have QMEAN Z-scores of 0.01 and -0.71, respectively. The predicted models were validated through the Ramachandran plot, which indicated that 96.1% and 95.5% of the residues are in the most favored regions for MnP and LiP, respectively. The quality of the predicted models was further assessed and confirmed by VERIFY 3D, PROCHECK and ERRAT. The modeled structures of MnP and LiP were submitted to the Protein Model Database.
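    One of the ProtParam quantities mentioned above, the grand average of hydropathy (GRAVY), is simply the mean Kyte-Doolittle hydropathy over the sequence. The short peptide in the example is made up, not an MnP or LiP fragment.

```python
# GRAVY = mean Kyte-Doolittle hydropathy of the residues in a sequence.
KD = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
    "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
    "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
    "Y": -1.3, "V": 4.2,
}

def gravy(seq):
    return sum(KD[aa] for aa in seq) / len(seq)

print(gravy("MKLV"))  # → 1.5
```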

  6. Genome-wide prediction, display and refinement of binding sites with information theory-based models

    Directory of Open Access Journals (Sweden)

    Leeder J Steven

    2003-09-01

    Full Text Available Abstract Background We present Delila-Genome, a software system for the identification, visualization and analysis of protein binding sites in complete genome sequences. Binding sites are predicted by scanning genomic sequences with information theory-based (or user-defined) weight matrices. Matrices are refined by adding experimentally-defined binding sites to published binding sites. Delila-Genome was used to examine the accuracy of individual information contents of binding sites detected with refined matrices as a measure of the strengths of the corresponding protein-nucleic acid interactions. The software can then be used to predict novel sites by rescanning the genome with the refined matrices. Results Parameters for genome scans are entered using a Java-based GUI and backend scripts in Perl. Multi-processor CPU load-sharing minimized the average response time for scans of different chromosomes. Scans of human genome assemblies required 4–6 hours for transcription factor binding sites and 10–19 hours for splice sites, respectively, on 24-node Mosix and 3-node Beowulf clusters. Individual binding sites are displayed either as high-resolution sequence walkers or in low-resolution custom tracks in the UCSC genome browser. For large datasets, we applied a data reduction strategy that limited displays of binding sites exceeding a threshold information content to specific chromosomal regions within or adjacent to genes. An HTML document is produced listing binding sites ranked by binding site strength or chromosomal location, hyperlinked to the UCSC custom track, other annotation databases and binding site sequences. Post-genome scan tools parse binding site annotations of selected chromosome intervals and compare the results of genome scans using different weight matrices. Comparisons of multiple genome scans can display binding sites that are unique to each scan and identify sites with significantly altered binding strengths
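    Scoring a candidate site with an information theory-based weight matrix, in the spirit of the individual information contents mentioned above, can be sketched as follows. The 3-position frequency matrix is invented, and the small-sample correction used in real Delila matrices is omitted.

```python
import math

# Each position contributes 2 + log2 f(base, position) bits; the site's
# individual information is the sum over positions. FREQ is invented.
FREQ = [  # position -> base frequencies
    {"A": 0.70, "C": 0.10, "G": 0.10, "T": 0.10},
    {"A": 0.05, "C": 0.05, "G": 0.85, "T": 0.05},
    {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
]

def individual_information(site):
    """Bits of individual information for a candidate site (uncorrected)."""
    return sum(2 + math.log2(FREQ[i][b]) for i, b in enumerate(site))

print(round(individual_information("AGA"), 2))  # → 3.25
```

    Sites scoring above a chosen information threshold would be reported in a genome scan; an uninformative position (uniform 0.25 frequencies) contributes zero bits.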

  7. Prediction of soft soil foundation settlement in Guangxi granite area based on fuzzy neural network model

    Science.gov (United States)

    Luo, Junhui; Wu, Chao; Liu, Xianlin; Mi, Decai; Zeng, Fuquan; Zeng, Yongjun

    2018-01-01

    At present, the prediction of soft foundation settlement mostly relies on exponential-curve and hyperbolic deferred approximation methods, whose results correlate poorly with observations. Applications of neural networks in this area also have limitations, and none of the existing cases adopted the Takagi-Sugeno (TS) fuzzy neural network, whose calculation combines the characteristics of fuzzy systems and neural networks so that the two methods complement each other. At the same time, the developed and optimized calculation program is convenient for engineering designers. Taking the prediction and analysis of the soft foundation settlement of gully soft soil in the granite area of the Guangxi Guihe road as an example, a fuzzy neural network model is established and verified to explore its applicability. The TS fuzzy neural network is used to construct the settlement and deformation prediction model, and the corresponding time response function is established to calculate and analyze the settlement of the soft foundation. The results show that the model's short-term settlement predictions are accurate and that its final settlement prediction has engineering reference value.
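    The hyperbolic deferred approximation that the paper compares against can be sketched as S(t) = t/(a + b·t), fitted by linearizing t/S = a + b·t; the final settlement as t grows is then 1/b. The observations below are synthetic.

```python
# Fit the hyperbolic settlement curve S(t) = t / (a + b*t) by ordinary
# least squares on the linearized form t/S = a + b*t.
def fit_hyperbola(ts, ss):
    ys = [t / s for t, s in zip(ts, ss)]
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    b = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
         / sum((t - mt) ** 2 for t in ts))
    a = my - b * mt
    return a, b

ts = [10, 20, 40, 80]                       # elapsed days (synthetic)
ss = [t / (5.0 + 0.02 * t) for t in ts]     # exact hyperbolic data
a, b = fit_hyperbola(ts, ss)
final = 1 / b                               # settlement as t -> infinity
print(round(a, 3), round(b, 3), round(final, 1))  # → 5.0 0.02 50.0
```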

  8. Nonlinear quantitative radiation sensitivity prediction model based on NCI-60 cancer cell lines.

    Science.gov (United States)

    Zhang, Chunying; Girard, Luc; Das, Amit; Chen, Sun; Zheng, Guangqiang; Song, Kai

    2014-01-01

    We proposed a nonlinear model to perform a novel quantitative radiation sensitivity prediction. We used the NCI-60 panel, which consists of nine different cancer types, as the platform to train our model. Important radiation therapy (RT) related genes were selected by significance analysis of microarrays (SAM). Orthogonal latent variables (LVs) were then extracted by the partial least squares (PLS) method as the new compressive input variables. Finally, a support vector machine (SVM) regression model was trained with these LVs to predict the SF2 (the surviving fraction of cells after a radiation dose of 2 Gy γ-ray) values of the cell lines. Comparison with the published results showed significant improvement of the new method in various ways: (a) reducing the root mean square error (RMSE) of the radiation sensitivity prediction model from 0.20 to 0.011; and (b) improving prediction accuracy from 62% to 91%. To test the predictive performance of the gene signature, three different types of cancer patient datasets were used. Survival analysis across these different types of cancer patients strongly confirmed the clinical potential utility of the signature genes as a general prognosis platform. The gene regulatory network analysis identified six hub genes that are involved in canonical cancer pathways.

  9. Nonlinear Quantitative Radiation Sensitivity Prediction Model Based on NCI-60 Cancer Cell Lines

    Directory of Open Access Journals (Sweden)

    Chunying Zhang

    2014-01-01

    Full Text Available We proposed a nonlinear model to perform a novel quantitative radiation sensitivity prediction. We used the NCI-60 panel, which consists of nine different cancer types, as the platform to train our model. Important radiation therapy (RT) related genes were selected by significance analysis of microarrays (SAM). Orthogonal latent variables (LVs) were then extracted by the partial least squares (PLS) method as the new compressive input variables. Finally, a support vector machine (SVM) regression model was trained with these LVs to predict the SF2 (the surviving fraction of cells after a radiation dose of 2 Gy γ-ray) values of the cell lines. Comparison with the published results showed significant improvement of the new method in various ways: (a) reducing the root mean square error (RMSE) of the radiation sensitivity prediction model from 0.20 to 0.011; and (b) improving prediction accuracy from 62% to 91%. To test the predictive performance of the gene signature, three different types of cancer patient datasets were used. Survival analysis across these different types of cancer patients strongly confirmed the clinical potential utility of the signature genes as a general prognosis platform. The gene regulatory network analysis identified six hub genes that are involved in canonical cancer pathways.
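    The dimension-reduction step of the pipeline can be illustrated with the first PLS latent variable (the NIPALS first component, with mean-centering omitted for brevity). This is only a stand-in for the paper's PLS-plus-SVM pipeline, and the data are synthetic.

```python
# First PLS component: weight vector w proportional to X^T y, normalized;
# the latent variable is t = X w. Centering is omitted for brevity.
def first_pls_component(X, y):
    p = len(X[0])
    w = [sum(row[j] * yi for row, yi in zip(X, y)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(xj * wj for xj, wj in zip(row, w)) for row in X]
    return w, t

X = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]   # synthetic expression features
y = [1.0, 2.0, 3.0]                        # synthetic SF2-like response
w, t = first_pls_component(X, y)
print([round(v, 3) for v in w])  # → [0.447, 0.894]
```

    In the full method, a handful of such orthogonal latent variables replaces thousands of gene-expression inputs before the SVM regression is trained.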

  10. Wind Power Prediction Based on LS-SVM Model with Error Correction

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2017-02-01

    Full Text Available As conventional energy sources are non-renewable, the world's major countries are investing heavily in renewable energy research. Wind power represents the development trend of future energy, but the intermittency and volatility of wind energy are the main reasons for the poor accuracy of wind power prediction. However, by analyzing the error levels at different time points, it can be found that the errors at adjacent times are often approximately the same, so a least squares support vector machine (LS-SVM) model with error correction is used to predict wind power in this paper. Simulations with wind power data from two wind farms show that the proposed method can effectively improve the prediction accuracy of wind power, with an error distribution that is concentrated and almost unbiased. The improved method proposed in this paper takes into account the error correction process of the model, which improves the prediction accuracy of the traditional models (RBF, Elman, LS-SVM). Compared with the single LS-SVM prediction model, the mean absolute error of the proposed method decreased by 52 percent. This work will be helpful for the reasonable arrangement of dispatching operation plans, the normal operation of wind farms, and the large-scale development and full utilization of renewable energy resources.
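    The error-correction idea, that errors at adjacent times are approximately equal, can be sketched independently of the LS-SVM itself: each new forecast is shifted by the error observed at the previous step. The base forecasts below are hypothetical, standing in for any base model.

```python
# Error-corrected forecasting: adjust each base forecast by the base
# model's error at the previous time step (assumed roughly persistent).
def corrected_forecasts(actuals, base_forecasts):
    out = []
    prev_error = 0.0
    for actual, fc in zip(actuals, base_forecasts):
        out.append(fc + prev_error)
        prev_error = actual - fc   # error of the *base* model at this step
    return out

actuals = [10.0, 12.0, 13.0, 15.0]
base = [9.0, 11.0, 12.0, 14.0]     # base model is consistently 1 low
print(corrected_forecasts(actuals, base))  # → [9.0, 12.0, 13.0, 15.0]
```

    With a persistent bias, the correction removes the error everywhere except the first step, which is why adjacent-error similarity translates into a lower mean absolute error.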

  11. Evaluation of an ARPS-based canopy flow modeling system for use in future operational smoke prediction efforts

    Science.gov (United States)

    M. T. Kiefer; S. Zhong; W. E. Heilman; J. J. Charney; X. Bian

    2013-01-01

    Efforts to develop a canopy flow modeling system based on the Advanced Regional Prediction System (ARPS) model are discussed. The standard version of ARPS is modified to account for the effect of drag forces on mean and turbulent flow through a vegetation canopy, via production and sink terms in the momentum and subgrid-scale turbulent kinetic energy (TKE) equations....

  12. Geoelectrical parameter-based multivariate regression borehole yield model for predicting aquifer yield in managing groundwater resource sustainability

    Directory of Open Access Journals (Sweden)

    Kehinde Anthony Mogaji

    2016-07-01

    Full Text Available This study developed a GIS-based multivariate regression (MVR) yield rate prediction model of groundwater resource sustainability in the hard-rock geology terrain of southwestern Nigeria. This model can economically manage the aquifer yield rate potential predictions that are often overlooked in groundwater resources development. The proposed model relates the borehole yield rate inventory of the area to geoelectrically derived parameters. Three sets of borehole yield rate conditioning geoelectrically derived parameters—aquifer unit resistivity (ρ), aquifer unit thickness (D) and coefficient of anisotropy (λ)—were determined from the acquired and interpreted geophysical data. The extracted borehole yield rate values and the geoelectrically derived parameter values were regressed to develop the MVR relationship model by applying linear regression and GIS techniques. The sensitivity analysis results of the MVR model evaluated at P ⩽ 0.05 for the predictors ρ, D and λ provided values of 2.68 × 10−05, 2 × 10−02 and 2.09 × 10−06, respectively. The accuracy and predictive power tests conducted on the MVR model using the Theil inequality coefficient measurement approach, coupled with the sensitivity analysis results, confirmed the model's yield rate estimation and prediction capability. The MVR borehole yield prediction model estimates were processed in a GIS environment to produce an aquifer yield potential prediction map of the area. The information on the prediction map can serve as a scientific basis for predicting aquifer yield potential rates relevant to groundwater resources sustainability management. The developed MVR borehole yield rate prediction model provides a good alternative to other methods used for this purpose.
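    The MVR step, regressing borehole yield on ρ, D and λ, amounts to ordinary least squares with three predictors. The sketch below solves the normal equations directly on synthetic records generated from known coefficients; the paper's actual field data and fitted coefficients are not reproduced here.

```python
# OLS via normal equations X^T X b = X^T y, solved by Gaussian elimination.
def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_mvr(rows, y):
    """Least-squares coefficients [b0, b_rho, b_D, b_lambda] with intercept."""
    X = [[1.0] + row for row in rows]
    p = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)

# synthetic (rho, D, lambda) records generated from known coefficients
rows = [[120, 20, 1.1], [300, 35, 1.3], [80, 15, 1.05],
        [500, 40, 1.6], [220, 25, 1.2], [150, 30, 1.4]]
true_b = [2.0, 0.01, 0.5, -3.0]
y = [true_b[0] + true_b[1] * r + true_b[2] * d + true_b[3] * l
     for r, d, l in rows]
b = fit_mvr(rows, y)
print([round(v, 3) for v in b])  # → [2.0, 0.01, 0.5, -3.0]
```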

  13. A Predictive Model to Identify Patients With Fecal Incontinence Based on High-Definition Anorectal Manometry.

    Science.gov (United States)

    Zifan, Ali; Ledgerwood-Lee, Melissa; Mittal, Ravinder K

    2016-12-01

    Three-dimensional high-definition anorectal manometry (3D-HDAM) is used to assess anal sphincter function; it determines profiles of regional pressure distribution along the length and circumference of the anal canal. There is no consensus, however, on the best way to analyze 3D-HDAM data to distinguish healthy individuals from persons with sphincter dysfunction. We developed a computer analysis system to analyze 3D-HDAM data and to aid in the diagnosis and assessment of patients with fecal incontinence (FI). In a prospective study, we performed 3D-HDAM analysis of 24 asymptomatic healthy subjects (control subjects; all women; mean age, 39 ± 10 years) and 24 patients with symptoms of FI (all women; mean age, 58 ± 13 years). Patients completed a standardized questionnaire (FI severity index) to score the severity of FI symptoms. We developed and evaluated a robust prediction model to distinguish patients with FI from control subjects using linear discriminant, quadratic discriminant, and logistic regression analyses. In addition to collecting pressure information from the HDAM data, we assessed regional features based on shape characteristics and the anal sphincter pressure symmetry index. The combination of pressure values, anal sphincter area, and reflective symmetry values distinguished patients with FI from control subjects with an area under the curve (AUC) value of 1.0. In logistic regression analyses using different predictors, the model identified patients with FI with an AUC value of 0.96 (interquartile range, 0.22). In discriminant analysis, results were classified with a minimum error of 0.02, calculated using 10-fold cross-validation; different combinations of predictors produced median classification errors of 0.16 in linear discriminant analysis (interquartile range, 0.25) and 0.08 in quadratic discriminant analysis (interquartile range, 0.25). We developed and validated a novel prediction model to analyze 3D-HDAM data. This
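    A reflective symmetry index of the kind mentioned above can be sketched for a circumferential pressure profile; the exact definition used in the paper may differ, so treat this as a hypothetical variant in which each sector is compared with its mirror image.

```python
# Hypothetical reflective symmetry index for a ring of pressure sectors:
# 1.0 means the profile is perfectly mirror-symmetric.
def symmetry_index(pressures):
    n = len(pressures)
    half = n // 2
    diffs = [abs(pressures[i] - pressures[-1 - i]) for i in range(half)]
    total = sum(pressures)
    return 1.0 - sum(diffs) / total if total else 1.0

print(symmetry_index([50, 60, 70, 70, 60, 50]))           # → 1.0
print(round(symmetry_index([80, 60, 70, 70, 60, 40]), 3))  # → 0.895
```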

  14. Multiscale modeling of interwoven Kevlar fibers based on random walk to predict yarn structural response

    Science.gov (United States)

    Recchia, Stephen

    Kevlar is the most common high-end plastic filament yarn used in body armor, tire reinforcement, and wear resistant applications. Kevlar is a trade name for an aramid fiber. These are fibers in which the chain molecules are highly oriented along the fiber axis, so the strength of the chemical bond can be exploited. The bulk material is extruded into filaments that are bound together into yarn, which may be corded with other materials as in car tires, woven into a fabric, or layered in an epoxy to make composite panels. The high tensile strength to low weight ratio makes this material ideal for designs that decrease weight and inertia, such as automobile tires, body panels, and body armor. For designs that use Kevlar, increasing the strength, or tenacity, to weight ratio would improve performance or reduce the cost of all products based on this material. This thesis computationally and experimentally investigates the tenacity and stiffness of Kevlar yarns with varying twist ratios. The test boundary conditions were replicated with a geometrically accurate finite element model, and customized code that can reproduce tortuous filaments in a yarn was developed. The solid model geometry capturing filament tortuosity was implemented through a random walk method of axial geometry creation. A finite element analysis successfully recreated the yarn strength and stiffness dependency observed during the tests. The physics applied in the finite element model was reproduced in an analytical equation able to predict the dependency of failure strength and strain on twist ratio. The analytical solution can be employed to optimize yarn design for high strength applications.
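    The random-walk axial geometry creation can be sketched as follows: each axial step advances the filament centerline by a fixed increment and perturbs its transverse position by a small random offset. The step size and jitter magnitude are assumed values, not the thesis's parameters.

```python
import math
import random

# Generate a tortuous filament centerline by a random walk in (x, y)
# while marching along the yarn axis z; parameters are assumed.
def filament_centerline(n_steps=100, dz=1.0, jitter=0.05, seed=1):
    rng = random.Random(seed)
    pts = [(0.0, 0.0, 0.0)]
    for _ in range(n_steps):
        x, y, z = pts[-1]
        pts.append((x + rng.uniform(-jitter, jitter),
                    y + rng.uniform(-jitter, jitter),
                    z + dz))
    return pts

def tortuosity(pts):
    """Path length divided by end-to-end axial length (>= 1 by construction)."""
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    return length / (pts[-1][2] - pts[0][2])

pts = filament_centerline()
print(len(pts), tortuosity(pts) > 1.0)  # → 101 True
```

    Repeating this for every filament in the bundle yields the tortuous solid geometry that the finite element mesh is built from.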

  15. Predictive model of nicotine dependence based on mental health indicators and self-concept

    Directory of Open Access Journals (Sweden)

    Hamid Kazemi Zahrani

    2014-12-01

    Full Text Available Background: The purpose of this research was to investigate the predictive power of anxiety, depression, stress and self-concept dimensions (mental ability, job efficiency, physical attractiveness, social skills, and deficiencies and merits) as predictors of nicotine dependency among university students in Isfahan. Methods: In this correlational study, 110 male nicotine-dependent students at Isfahan University were selected by convenience sampling. All samples were assessed by the Depression Anxiety Stress Scale (DASS), a self-concept test and the Nicotine Dependence Syndrome Scale. Data were analyzed by Pearson correlation and stepwise regression. Results: The results showed that anxiety had the highest power to predict nicotine dependence. In addition, the self-concept and its dimensions predicted only 12% of the variance in nicotine dependence, which was not significant. Conclusion: Emotional processing variables involved in mental health play an important role in presenting a model to predict students’ dependence on nicotine, more so than identity variables such as different dimensions of self-concept.

  16. A Comparison of Energy Consumption Prediction Models Based on Neural Networks of a Bioclimatic Building

    Directory of Open Access Journals (Sweden)

    Hamid R. Khosravani

    2016-01-01

    Full Text Available Energy consumption has been increasing steadily due to globalization and industrialization. Studies have shown that buildings are responsible for the biggest proportion of energy consumption; for example in European Union countries, energy consumption in buildings represents around 40% of the total energy consumption. In order to control energy consumption in buildings, different policies have been proposed, from utilizing bioclimatic architectures to the use of predictive models within control approaches. There are mainly three groups of predictive models including engineering, statistical and artificial intelligence models. Nowadays, artificial intelligence models such as neural networks and support vector machines have also been proposed because of their high potential capabilities of performing accurate nonlinear mappings between inputs and outputs in real environments which are not free of noise. The main objective of this paper is to compare a neural network model which was designed utilizing statistical and analytical methods, with a group of neural network models designed benefiting from a multi objective genetic algorithm. Moreover, the neural network models were compared to a naïve autoregressive baseline model. The models are intended to predict electric power demand at the Solar Energy Research Center (Centro de Investigación en Energía Solar, or CIESOL in Spanish) bioclimatic building located at the University of Almeria, Spain. Experimental results show that the models obtained from the multi objective genetic algorithm (MOGA) perform comparably to the model obtained through a statistical and analytical approach, but they use only 0.8% of data samples and have lower model complexity.
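    The comparison above (a neural-network demand predictor against a naïve autoregressive baseline) can be sketched as follows. This is a minimal sketch on a synthetic load series, not CIESOL data; the lag count, network size, and series shape are invented.

```python
# Sketch: MLP electric-demand predictor vs. a naive persistence baseline,
# on a synthetic daily-cycle load series (all parameters invented).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
t = np.arange(2000)
load = 50 + 20 * np.sin(2 * np.pi * t / 96) + rng.normal(scale=2.0, size=t.size)

# lag features: predict load[t] from the previous four samples
lags = 4
X = np.column_stack([load[i:i - lags] for i in range(lags)])
y = load[lags:]
split = 1500
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X[:split], y[:split])

mlp_mae = np.abs(mlp.predict(X[split:]) - y[split:]).mean()
naive_mae = np.abs(X[split:, -1] - y[split:]).mean()   # persistence baseline
print(f"MLP MAE = {mlp_mae:.2f}, naive MAE = {naive_mae:.2f}")
```

    A MOGA-designed network would search over architectures and input lags instead of fixing them by hand as done here.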

  17. Prediction of protein continuum secondary structure with probabilistic models based on NMR solved structures

    Directory of Open Access Journals (Sweden)

    Bailey Timothy L

    2006-02-01

    Full Text Available Abstract Background The structure of proteins may change as a result of the inherent flexibility of some protein regions. We develop and explore probabilistic machine learning methods for predicting a continuum secondary structure, i.e. assigning probabilities to the conformational states of a residue. We train our methods using data derived from high-quality NMR models. Results Several probabilistic models not only successfully estimate the continuum secondary structure, but also provide a categorical output on par with models directly trained on categorical data. Importantly, models trained on the continuum secondary structure are also better than their categorical counterparts at identifying the conformational state for structurally ambivalent residues. Conclusion Cascaded probabilistic neural networks trained on the continuum secondary structure exhibit better accuracy in structurally ambivalent regions of proteins, while sustaining an overall classification accuracy on par with standard, categorical prediction methods.

  18. Computer based prognosis model with dimensionality reduction and validation of attributes for prolonged survival prediction

    Directory of Open Access Journals (Sweden)

    C.G. Raji

    2017-01-01

    Full Text Available Medical databases contain large volumes of data about patients and their clinical information. For extracting features and their relationships from a huge database, various data mining techniques need to be employed. As liver transplantation is the curative surgical procedure for patients suffering from end-stage liver disease, predicting the survival rate after liver transplantation has a big impact. Appropriate selection of attributes and methods is necessary for survival prediction. Liver transplantation data with 256 attributes were collected from the 389 attributes of the United Network for Organ Sharing registry for survival prediction. Initially, 59 attributes were filtered manually, and then Principal Component Analysis (PCA) was applied to reduce the dimensionality of the data. After performing PCA, 197 attributes were obtained and they were ranked into 27 strong/relevant attributes. Using association rule mining techniques, the associations between the selected attributes were identified and verified. A comparison of the rules generated by various association rule mining algorithms before and after PCA was also carried out to confirm the results. The rule mining algorithms used were the Apriori, Treap mining and Tertius algorithms. Among these algorithms, the Treap mining algorithm generated the rules with the highest accuracy. A Multilayer Perceptron model was built for predicting the long-term survival of patients after liver transplantation, which produced highly accurate prediction results. The model performance was compared with a Radial Basis Function model to confirm the accuracy of the survival predictions. The top-ranked attributes obtained from rule mining were fed to the models for effective training. This ensures that Treap mining generated associations of high-impact attributes, which in turn made the survival prediction flawless.
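    The PCA dimensionality-reduction step described above can be sketched with scikit-learn. The matrix here is synthetic and much smaller than the registry's 256 attributes; only the record count (389) is borrowed from the abstract for illustration.

```python
# Hedged sketch of the PCA step: standardize a wide clinical feature
# matrix, then keep the components explaining 95% of the variance.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(389, 60))        # 389 records x 60 attributes (made up)
X_std = StandardScaler().fit_transform(X)

# float n_components keeps the smallest set of components whose
# cumulative explained variance exceeds the threshold
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_std)
print(X_reduced.shape, round(pca.explained_variance_ratio_.sum(), 3))
```

    The reduced components (or the original attributes that load most heavily on them) would then feed the association-rule mining and MLP stages.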

  19. An Analytical Model for Fatigue Life Prediction Based on Fracture Mechanics and Crack Closure

    DEFF Research Database (Denmark)

    Ibsø, Jan Behrend; Agerskov, Henning

    1996-01-01

    test specimens are compared with fatigue life predictions using a fracture mechanics approach. In the calculation of the fatigue life, the influence of the welding residual stresses and crack closure on the fatigue crack growth is considered. A description of the crack closure model for analytical...

  20. Predicting speech intelligibility in adverse conditions: evaluation of the speech-based envelope power spectrum model

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2011-01-01

    conditions by comparing predictions to measured data from [Kjems et al. (2009). J. Acoust. Soc. Am. 126 (3), 1415-1426] where speech is mixed with four different interferers, including speech-shaped noise, bottle noise, car noise, and cafe noise. The model accounts well for the differences in intelligibility...

  1. Model-Based Predictive Control Scheme for Cost Optimization and Balancing Services for Supermarket Refrigeration Systems

    DEFF Research Database (Denmark)

    Weerts, Hermanus H. M.; Shafiei, Seyed Ehsan; Stoustrup, Jakob

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate the regulatory power services as well as energy cost optimization of such systems in the smart grid. Nonlinear dynamics existing in large-scale refrigeration plants challenge the predicti...

  2. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple (one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions) to the complex (multidimensional models that are constrained by several types of data and result in more accurate predictions). While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  3. Physics-based Models for Aeroservoelasticity Prediction and Control, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. proposes to develop and demonstrate computational fluid dynamics (CFD)-based, reduced-order aeroservoelasticity modeling and simulation...

  4. Physics-Based Models for Aeroservoelasticity Prediction and Control, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. proposes to develop and demonstrate computational fluid dynamics (CFD)-based, reduced-order aeroservoelasticity modeling and simulation...

  5. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

    Full Text Available Assessing the human placental barrier permeability of drugs is very important to guarantee drug safety during pregnancy. The quantitative structure–activity relationship (QSAR) method was used as an effective assessing tool for the placental transfer study of drugs, while in vitro human placental perfusion is the most widely used method. In this study, the partial least squares (PLS) variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by the clearance indices (CI). The model was subjected to internal validation by cross-validation and y-randomization and to external validation by predicting CI values of 19 compounds. It was shown that the model developed is robust and has good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSP = 0.14). The mechanistic interpretation of the final model was given by the high variable importance in projection values of descriptors. Using the PLS procedure, we can rapidly and effectively select optimal descriptors and thus construct a model with good stability and predictability. This analysis can provide an effective tool for the high-throughput screening of the placental barrier permeability of drugs.

  6. Predictive modelling for startup and investor relationship based on crowdfunding platform data

    Science.gov (United States)

    Alamsyah, Andry; Buono Asto Nugroho, Tri

    2018-03-01

    A crowdfunding platform is a place where a startup publicly shows off its idea in order to get the project funded. Crowdfunding platforms such as Kickstarter are becoming popular today: they provide an efficient way for startups to get funded without liabilities, and they offer a variety of project categories to participate in. A safety procedure is in place to ensure a low-risk environment: the promoted project must reach its funding goal, and if it fails to reach the target, no investment activity takes place. This motivates startups to actively promote and disseminate their project ideas, and it also protects investors from losing money. The study objective is to predict the success of proposed projects and to map investor trends using a data mining framework. To achieve this objective, we propose three models. The first model predicts whether a project will be successful or fail, using K-Nearest Neighbour (KNN). The second model predicts the number of successful projects, using an Artificial Neural Network (ANN). The third model maps investor trends in project investment, using the K-Means clustering algorithm. KNN gives 99.04% model accuracy, the best ANN configuration uses 16-14-1 neuron layers and a 0.2 learning rate, and K-Means gives 6 best-separated clusters. The results of these models can help startups and investors make decisions regarding startup investment.
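    Two of the three models (KNN classification of project success, K-Means segmentation of investors) can be sketched with scikit-learn; the ANN regression step is omitted for brevity. All data and the "funded" rule are synthetic, not Kickstarter's.

```python
# Illustrative sketch of the KNN + K-Means parts of the framework,
# on synthetic campaign features (invented, not real Kickstarter data).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# features: [goal (log $), backers (log), campaign days]
X = rng.normal([8.0, 4.0, 30.0], [1.5, 1.0, 8.0], size=(500, 3))
y = (X[:, 1] - 0.3 * X[:, 0] > 1.5).astype(int)   # toy "funded" rule

knn_acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean()
clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
print(f"KNN accuracy = {knn_acc:.3f}, clusters found = {len(set(clusters))}")
```

    The six-cluster choice mirrors the study's reported best separation; in practice it would be selected with a criterion such as silhouette score.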

  7. Response surface and neural network based predictive models of cutting temperature in hard turning

    Directory of Open Access Journals (Sweden)

    Mozammel Mia

    2016-11-01

    Full Text Available The present study aimed to develop predictive models of the average tool-workpiece interface temperature in hard turning of AISI 1060 steels by coated carbide insert. The Response Surface Methodology (RSM) and Artificial Neural Network (ANN) were employed to predict the temperature in respect of cutting speed, feed rate and material hardness. The number and orientation of the experimental trials, conducted in both dry and high pressure coolant (HPC) environments, were planned using a full factorial design. The temperature was measured by using the tool-work thermocouple. In the RSM model, two quadratic equations of temperature were derived from experimental data. Analysis of variance (ANOVA) and mean absolute percentage error (MAPE) analyses were performed to assess the adequacy of the models. In the ANN model, 80% of the data were used for training and 20% for testing. As with RSM, the error analysis was also conducted. The accuracy of the RSM and ANN models was found to be ⩾99%. The ANN models exhibit an error of ∼5% MAE for the testing data. The regression coefficient was found to be greater than 99.9% for both dry and HPC environments. Both these models are acceptable, although the ANN model demonstrated a higher accuracy. These models, if employed, are expected to provide a better control of cutting temperature in turning of hardened steel.

  8. Development and validation of a prediction model for long-term sickness absence based on occupational health survey variables.

    Science.gov (United States)

    Roelen, Corné; Thorsen, Sannie; Heymans, Martijn; Twisk, Jos; Bültmann, Ute; Bjørner, Jakob

    2018-01-01

    The purpose of this study is to develop and validate a prediction model for identifying employees at increased risk of long-term sickness absence (LTSA), by using variables commonly measured in occupational health surveys. Based on the literature, 15 predictor variables were retrieved from the DAnish National working Environment Survey (DANES) and included in a model predicting incident LTSA (≥4 consecutive weeks) during 1-year follow-up in a sample of 4000 DANES participants. The 15-predictor model was reduced by backward stepwise statistical techniques and then validated in a sample of 2524 DANES participants not included in the development sample. Identification of employees at increased LTSA risk was investigated by receiver operating characteristic (ROC) analysis; the area under the ROC curve (AUC) reflected discrimination between employees with and without LTSA during follow-up. The 15-predictor model was reduced to a 9-predictor model including age, gender, education, self-rated health, mental health, prior LTSA, work ability, emotional job demands, and recognition by the management. Discrimination by the 9-predictor model was significant (AUC = 0.68; 95% CI 0.61-0.76), but not practically useful. A prediction model based on occupational health survey variables identified employees with an increased LTSA risk, but should be further developed into a practically useful tool to predict the risk of LTSA in the general working population. Implications for rehabilitation: Long-term sickness absence risk predictions would enable healthcare providers to refer high-risk employees to rehabilitation programs aimed at preventing or reducing work disability. A prediction model based on health survey variables discriminates between employees at high and low risk of long-term sickness absence, but discrimination was not practically useful. Health survey variables provide insufficient information to determine long-term sickness absence risk profiles. There is a need for
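    The develop-then-validate pattern above (fit on one sample, compute ROC AUC on a separate sample) can be sketched as follows. The predictors, coefficients, and outcome process are simulated stand-ins; only the sample sizes (4000 development, 2524 validation) and the 9-predictor count are borrowed from the abstract.

```python
# Hedged sketch: fit a prediction model on a development sample and
# evaluate discrimination (ROC AUC) on a separate validation sample.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n_dev, n_val, p = 4000, 2524, 9        # sample sizes from the abstract
X_dev, X_val = rng.normal(size=(n_dev, p)), rng.normal(size=(n_val, p))
beta = rng.normal(scale=0.3, size=p)   # invented "true" effects

def sample_ltsa(X):
    """Draw incident LTSA labels from a logistic model (simulation only)."""
    prob = 1.0 / (1.0 + np.exp(-(X @ beta - 2.0)))  # low base rate
    return (rng.random(len(X)) < prob).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_dev, sample_ltsa(X_dev))
auc = roc_auc_score(sample_ltsa(X_val), model.predict_proba(X_val)[:, 1])
print(f"validation AUC = {auc:.2f}")
```

    A backward stepwise reduction from 15 to 9 predictors would sit between the two steps shown here, dropping variables that do not improve model fit.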

  9. NRFixer: Sentiment Based Model for Predicting the Fixability of Non-Reproducible Bugs

    Directory of Open Access Journals (Sweden)

    Anjali Goyal

    2017-08-01

    Full Text Available Software maintenance is an essential step in the software development life cycle. Nowadays, software companies spend approximately 45% of total cost on maintenance activities. Large software projects maintain bug repositories to collect, organize and resolve bug reports. Sometimes it is difficult to reproduce the reported bug with the information present in a bug report, and thus the bug is marked with the resolution non-reproducible (NR). When NR bugs are reconsidered, a few of them might get fixed (NR-to-fix), leaving the others with the same resolution (NR). To analyse the behaviour of developers towards NR-to-fix and NR bugs, sentiment analysis of NR bug report textual contents has been conducted. The sentiment analysis of bug reports shows that NR bugs' sentiments incline towards more negativity than reproducible bugs. Also, there is a noticeable opinion drift found in the sentiments of NR-to-fix bug reports. Observations driven from this analysis were an inspiration to develop a model that can judge the fixability of NR bugs. Thus a framework, NRFixer, which predicts the probability of NR bug fixation, is proposed. NRFixer was evaluated along two dimensions. The first dimension considers meta-fields of bug reports (model-1) and the other dimension additionally incorporates the sentiments of developers (model-2) for prediction. Both models were compared using various machine learning classifiers (Zero-R, naive Bayes, J48, random tree and random forest). The bug reports of the Firefox and Eclipse projects were used to test NRFixer. In the Firefox and Eclipse projects, the J48 and naive Bayes classifiers achieve the best prediction accuracy, respectively. It was observed that the inclusion of sentiments in the prediction model shows a rise in prediction accuracy ranging from 2 to 5% for various classifiers.

  10. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    Science.gov (United States)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. 
Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.

  11. An empirical model to predict road dust emissions based on pavement and traffic characteristics.

    Science.gov (United States)

    Padoan, Elio; Ajmone-Marsan, Franco; Querol, Xavier; Amato, Fulvio

    2017-11-08

    The relative impact of non-exhaust sources (i.e. road dust, tire wear, road wear and brake wear particles) on urban air quality is increasing. Among them, road dust resuspension has generally the highest impact on PM concentrations but its spatio-temporal variability has been rarely studied and modeled. Some recent studies attempted to observe and describe the time-variability but, as it is driven by traffic and meteorology, uncertainty remains on the seasonality of emissions. The knowledge gap on spatial variability is much wider, as several factors have been pointed out as responsible for road dust build-up: pavement characteristics, traffic intensity and speed, fleet composition, proximity to traffic lights, but also the presence of external sources. However, no parameterization is available as a function of these variables. We investigated mobile road dust smaller than 10 μm (MF10) in two cities with different climatic and traffic conditions (Barcelona and Turin), to explore MF10 seasonal variability and the relationship between MF10 and site characteristics (pavement macrotexture, traffic intensity and proximity to braking zone). Moreover, we provide the first estimates of emission factors in the Po Valley both in summer and winter conditions. Our results showed a good inverse relationship between MF10 and macro-texture, traffic intensity and distance from the nearest braking zone. We also found a clear seasonal effect of road dust emissions, with higher emission in summer, likely due to the lower pavement moisture. These results allowed the building of a simple empirical model predicting maximal dust loadings and, consequently, emission potential, based on the aforementioned data. This model will need to be scaled for meteorological effects, using methods accounting for weather and pavement moisture. 
This can significantly improve bottom-up emission inventory for spatial allocation of emissions and air quality management, to select those roads with higher emissions

  12. Equivalent Alkane Carbon Number of Live Crude Oil: A Predictive Model Based on Thermodynamics

    Directory of Open Access Journals (Sweden)

    Creton Benoit

    2016-09-01

    Full Text Available We took advantage of recently published works and new experimental data to propose a model for the prediction of the Equivalent Alkane Carbon Number of live crude oil (EACNlo) for EOR processes. The model requires a priori knowledge of reservoir pressure and temperature conditions as well as the initial gas to oil ratio. Additionally, some required volumetric properties for hydrocarbons were predicted using an equation of state. The model has been validated both on our own experimental data and on data from the literature. These various case studies cover broad ranges of conditions in terms of API gravity index, gas to oil ratio, reservoir pressure and temperature, and composition of representative gas. The predicted EACNlo values reasonably agree with experimental EACN values, i.e. values determined by comparison with salinity scans for a series of n-alkanes from nC8 to nC18. The model has been used to generate high pressure high temperature data, showing competing effects of the gas to oil ratio, pressure and temperature. The proposed model makes it possible to strongly narrow down the spectrum of possibilities in terms of EACNlo values, and thus allows a more rational use of equipment.

  13. A meteo-hydrological prediction system based on a multi-model approach for precipitation forecasting

    Directory of Open Access Journals (Sweden)

    S. Davolio

    2008-02-01

    Full Text Available The precipitation forecasted by a numerical weather prediction model, even at high resolution, suffers from errors which can be considerable at the scales of interest for hydrological purposes. In the present study, a fraction of the uncertainty related to meteorological prediction is taken into account by implementing a multi-model forecasting approach, aimed at providing multiple precipitation scenarios driving the same hydrological model. Therefore, the estimation of the uncertainty associated with the quantitative precipitation forecast (QPF), conveyed by the multi-model ensemble, can be exploited by the hydrological model, propagating the error into the hydrological forecast.

    The proposed meteo-hydrological forecasting system is implemented and tested in a real-time configuration for several episodes of intense precipitation affecting the Reno river basin, a medium-sized basin located in northern Italy (Apennines. These episodes are associated with flood events of different intensity and are representative of different meteorological configurations responsible for severe weather affecting northern Apennines.

    The simulation results show that the coupled system is promising in the prediction of discharge peaks (both in terms of amount and timing) for warning purposes. The ensemble hydrological forecasts provide a range of possible flood scenarios that proved to be useful for the support of civil protection authorities in their decisions.

  14. Well-log based prediction of temperature models in the exploration of sedimentary settings

    DEFF Research Database (Denmark)

    Fuchs, Sven; Förster, Andrea; Wonik, Thomas

    … these measurements are not available or only measured to a certain depth so that a temperature model needs to be developed. A prerequisite for such a model is the knowledge of the regional heat flow and the geological conditions translated into lithology and thermal rock properties. For the determination of continuous borehole temperature profiles we propose a two-step procedure: (1) the use of standard petrophysical well logs and (2) the inversion of predicted TC to temperature gradients by applying Fourier’s law of heat conduction. The prediction of TC is solved by using a set of equations (Fuchs & Förster, 2014) … porosity. TC vs. depth profiles corrected for in situ (p, T) conditions finally were used in conjunction with a published site-specific heat-flow value to model a temperature profile. The methodology is shown on the example of a 4-km deep borehole at Hannover in the North German Basin. This borehole …

  15. New mechanistically based model for predicting reduction of biosolids waste by ozonation of return activated sludge.

    Science.gov (United States)

    Isazadeh, Siavash; Feng, Min; Urbina Rivas, Luis Enrique; Frigon, Dominic

    2014-04-15

    Two pilot-scale activated sludge reactors were operated for 98 days to provide the necessary data to develop and validate a new mathematical model predicting the reduction of biosolids production by ozonation of the return activated sludge (RAS). Three ozone doses were tested during the study. In addition to the pilot-scale study, laboratory-scale experiments were conducted with mixed liquor suspended solids and with pure cultures to parameterize the biomass inactivation process during exposure to ozone. The experiments revealed that biomass inactivation occurred even at the lowest doses, but that it was not associated with extensive COD solubilization. For validation, the model was used to simulate the temporal dynamics of the pilot-scale operational data. Increasing the description accuracy of the inactivation process improved the precision of the model in predicting the operational data. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Modeling and simulation of adaptive Neuro-fuzzy based intelligent system for predictive stabilization in structured overlay networks

    Directory of Open Access Journals (Sweden)

    Ramanpreet Kaur

    2017-02-01

    Full Text Available Intelligent prediction of neighboring-node dynamism (the k well-defined neighbors specified by the DHT protocol) is helpful to improve resilience and can reduce the overhead associated with topology maintenance of structured overlay networks. The dynamic behavior of overlay nodes depends on many factors such as the underlying users' online behavior, geographical position, time of the day, day of the week etc., as reported in many applications. We can exploit these characteristics for efficient maintenance of structured overlay networks by implementing an intelligent predictive framework for setting stabilization parameters appropriately. Considering the fact that human-driven behavior usually goes beyond intermittent availability patterns, we use a hybrid Neuro-fuzzy based predictor to enhance the accuracy of the predictions. In this paper, we discuss our predictive stabilization approach, implement Neuro-fuzzy based prediction in MATLAB simulation and apply this predictive stabilization model in a Chord-based overlay network using OverSim as a simulation tool. The MATLAB simulation results show that the behavior of neighboring nodes is predictable to a large extent, as indicated by the very small RMSE. The OverSim based simulation results also show significant improvements in the performance of the Chord-based overlay network in terms of lookup success ratio, lookup hop count and maintenance overhead as compared to the periodic stabilization approach.

  17. A New Predictive Model Based on the ABC Optimized Multivariate Adaptive Regression Splines Approach for Predicting the Remaining Useful Life in Aircraft Engines

    Directory of Open Access Journals (Sweden)

    Paulino José García Nieto

    2016-05-01

    Full Text Available Remaining useful life (RUL) estimation is considered one of the most central points in prognostics and health management (PHM). The present paper describes a nonlinear hybrid ABC–MARS-based model for the prediction of the remaining useful life of aircraft engines. Indeed, it is well known that an accurate RUL estimation allows failure prevention in a more controllable way, so that effective maintenance can be carried out in appropriate time to correct impending faults. The proposed hybrid model combines multivariate adaptive regression splines (MARS), which have been successfully adopted for regression problems, with the artificial bee colony (ABC) technique. This optimization technique involves parameter setting in the MARS training procedure, which significantly influences the regression accuracy. However, its use in reliability applications has not yet been widely explored. Bearing this in mind, remaining useful life values have been predicted here by using the hybrid ABC–MARS-based model from the remaining measured parameters (input variables) for aircraft engines with success. A correlation coefficient equal to 0.92 was obtained when this hybrid ABC–MARS-based model was applied to experimental data. The agreement of this model with experimental data confirmed its good performance. The main advantage of this predictive model is that it does not require information about the previous operation states of the aircraft engine.

  18. Molecular surface area based predictive models for the adsorption and diffusion of disperse dyes in polylactic acid matrix.

    Science.gov (United States)

    Xu, Suxin; Chen, Jiangang; Wang, Bijia; Yang, Yiqi

    2015-11-15

    Two predictive models were presented for the adsorption affinities and diffusion coefficients of disperse dyes in a polylactic acid matrix. A quantitative structure-sorption behavior relationship would not only provide insights into the sorption process but also enable rational engineering for desired properties. The thermodynamic and kinetic parameters for three disperse dyes were measured. The predictive model for adsorption affinity was based on two linear relationships derived by interpreting the experimental measurements with molecular structural parameters and the compensation effect: ΔH° vs. dye size and ΔS° vs. ΔH°. Similarly, the predictive model for the diffusion coefficient was based on two derived linear relationships: activation energy of diffusion vs. dye size and logarithm of the pre-exponential factor vs. activation energy of diffusion. The only required parameters for both models are the temperature and the solvent-accessible surface area of the dye molecule. These two predictive models were validated by testing the adsorption and diffusion properties of new disperse dyes, and they offer fairly good predictive ability. The linkage between the structural parameters of disperse dyes and their sorption behaviors might be generalized and extended to other similar polymer-penetrant systems. Copyright © 2015 Elsevier Inc. All rights reserved.
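Both predictive models reduce to pairs of linear relationships plus standard thermodynamic/Arrhenius expressions, so they can be sketched directly. Every slope and intercept below is a hypothetical placeholder (the abstract does not report the fitted values); as in the paper, only temperature and the dye's solvent-accessible surface area (SASA) enter:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def enthalpy(sasa):
    """dH (J/mol) vs. dye size (SASA, A^2); slope/intercept are illustrative."""
    return -50.0 * sasa - 5000.0

def entropy(dH):
    """Compensation effect: dS (J/(mol*K)) linear in dH; coefficients illustrative."""
    return 2.0e-3 * dH + 10.0

def affinity(sasa, T):
    """Standard affinity -dG = -(dH - T*dS)."""
    dH = enthalpy(sasa)
    return -(dH - T * entropy(dH))

def activation_energy(sasa):
    """Ea of diffusion (J/mol) linear in dye size; coefficients illustrative."""
    return 80.0 * sasa + 20000.0

def diffusion_coefficient(sasa, T):
    """Arrhenius form D = D0 * exp(-Ea/(R*T)), with ln(D0) linear in Ea."""
    Ea = activation_energy(sasa)
    lnD0 = 1.0e-4 * Ea - 20.0
    return math.exp(lnD0) * math.exp(-Ea / (R * T))
```

With these placeholder coefficients a larger dye has a higher activation energy and, at a fixed temperature, a smaller diffusion coefficient, mirroring the qualitative trends the models encode.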

  19. Risk-Predicting Model for Incident of Essential Hypertension Based on Environmental and Genetic Factors with Support Vector Machine.

    Science.gov (United States)

    Pei, Zhiyong; Liu, Jielin; Liu, Manjiao; Zhou, Wenchao; Yan, Pengcheng; Wen, Shaojun; Chen, Yubao

    2018-03-01

    Essential hypertension (EH) has become a major chronic disease around the world. Building a risk-predicting model for EH can help guide interventions in people's lifestyle and dietary habits to decrease the risk of developing EH. In this study, we constructed an EH risk-predicting model considering both environmental and genetic factors with a support vector machine (SVM). The data were collected through an epidemiological investigation questionnaire from a Beijing Chinese Han population. After data cleaning, we selected 9 environmental factors and 12 genetic factors to construct the predicting model based on 1200 samples, including 559 essential hypertension patients and 641 controls. Using the radial basis kernel function, the predictive accuracy of the SVM with only environmental factors and with only genetic factors was 72.8 and 54.4%, respectively; after considering both environmental and genetic factors, the accuracy improved to 76.3%. Using the SVM model with the Laplacian kernel function, the accuracy with only environmental factors and only genetic factors was 76.9 and 57.7%, respectively; after combining environmental and genetic factors, the accuracy improved to 80.1%. The predictive accuracy of the SVM model constructed with the Laplacian function was higher than that with the radial basis kernel function, as were the sensitivity and specificity (63.3 and 86.7%, respectively). In conclusion, the model based on the SVM with the Laplacian kernel function had better performance in predicting the risk of hypertension, and the SVM model considering both environmental and genetic factors performed better than the model with environmental or genetic factors only.
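The Laplacian kernel the study found superior differs from the RBF kernel only in using an L1 rather than a squared L2 distance. A numpy sketch of the two kernels (the SVM training itself is omitted; the feature values are toy numbers, not the study's risk factors):

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    """K(x, z) = exp(-gamma * ||x - z||_2^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def laplacian_kernel(X, Z, gamma=0.5):
    """K(x, z) = exp(-gamma * ||x - z||_1) - heavier tails than the RBF."""
    d1 = np.abs(X[:, None, :] - Z[None, :, :]).sum(-1)
    return np.exp(-gamma * d1)

X = np.array([[0.0, 1.0], [2.0, 3.0]])   # two toy samples, two features
K_lap = laplacian_kernel(X, X)
K_rbf = rbf_kernel(X, X)
```

Either Gram matrix can be passed to an SVM implementation that accepts precomputed kernels; the choice of distance changes how quickly similarity decays with feature differences.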

  20. Developing a support vector machine based QSPR model for prediction of half-life of some herbicides.

    Science.gov (United States)

    Samghani, Kobra; HosseinFatemi, Mohammad

    2016-07-01

    The half-life (t1/2) of 58 herbicides was modeled by a quantitative structure-property relationship (QSPR) based on molecular structure descriptors. After calculation and screening of a large number of molecular descriptors, the most relevant ones, selected by stepwise multiple linear regression, were used to develop linear and nonlinear models with multiple linear regression (MLR) and a support vector machine (SVM), respectively. Comparison of the statistical parameters of the linear and nonlinear models indicates the superiority of the SVM over the MLR model for predicting the half-life of herbicides. The statistical parameters R(2) and standard error for the training set of the SVM model were 0.96 and 0.087, respectively, and 0.93 and 0.092 for the test set. The SVM model was evaluated by a leave-one-out cross-validation test, whose results indicate the robustness and predictability of the model. The established SVM model was used to predict the half-life of other herbicides located within the applicability domain of the model, which was determined via the leverage approach. The results of this study indicate that the relationship between the selected molecular descriptors and herbicide half-life is nonlinear. This emphasizes that the degradation of herbicides in the environment is a very complex process affected by various environmental and structural features, and therefore a simple linear model cannot successfully predict it. Copyright © 2016. Published by Elsevier Inc.
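The applicability domain mentioned above is determined by the leverage approach: a query compound with descriptor row x lies inside the domain when its leverage h = x(XᵀX)⁻¹xᵀ does not exceed the warning value h* = 3p/n. A numpy sketch with synthetic descriptors (the real descriptor matrix is not given in the abstract):

```python
import numpy as np

def leverages(X):
    """Diagonal of the hat matrix H = X (X^T X)^{-1} X^T."""
    XtX_inv = np.linalg.inv(X.T @ X)
    return np.einsum('ij,jk,ik->i', X, XtX_inv, X)

rng = np.random.default_rng(0)
n, p = 20, 4   # 20 training compounds, 4 model parameters (incl. intercept)
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

h = leverages(X)
h_star = 3.0 * p / n                       # warning leverage
x_centroid = X.mean(axis=0)                # query at the training centroid
h_centroid = x_centroid @ np.linalg.inv(X.T @ X) @ x_centroid
in_domain = h_centroid <= h_star
```

Two standard sanity checks hold: the leverages sum to the number of parameters (the trace of the hat matrix), and the centroid of the training set has leverage exactly 1/n, well inside the domain.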

  1. Model predictive controller-based multi-model control system for longitudinal stability of distributed drive electric vehicle.

    Science.gov (United States)

    Shi, Ke; Yuan, Xiaofang; Liu, Liang

    2018-01-01

    The distributed drive electric vehicle (DDEV) has been widely researched recently, and its longitudinal stability is a very important research topic. Conventional wheel slip ratio control strategies are usually designed for one special operating mode, and optimal performance cannot be obtained as the DDEV works under various operating modes. In this paper, a novel model predictive controller-based multi-model control system (MPC-MMCS) is proposed to solve the longitudinal stability problem of the DDEV. Firstly, the operation of the DDEV is summarized as three kinds of typical operating modes, and a submodel set is established to accurately represent the state values of the corresponding operating mode. Secondly, the matching degree between the state of the actual DDEV and each submodel is analyzed; the matching degree is expressed as a weight coefficient and calculated by a modified recursive Bayes theorem. Thirdly, a nonlinear MPC is designed to achieve the optimal wheel slip ratio for each submodel. The optimal design of the MPC is realized by a parallel chaos optimization algorithm (PCOA) with computational accuracy and efficiency. Finally, the control output of the MPC-MMCS is computed as the weighted output of each MPC to achieve smooth switching between operating modes. The proposed MPC-MMCS is evaluated on an eight-degrees-of-freedom (8DOF) DDEV model simulation platform, and simulation results under different conditions show the benefits of the proposed control system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
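The weighting scheme can be sketched compactly: each submodel's weight is updated from its prediction residual, and the control output is the weight-blended output of the per-submodel MPCs. The Gaussian-likelihood update below is a generic recursive-Bayes form, not the paper's exact modified rule, and the residuals and outputs are toy numbers:

```python
import numpy as np

def update_weights(w, residuals, sigma=1.0):
    """Recursive Bayes-style update: scale each submodel's weight by a
    Gaussian likelihood of its one-step prediction residual, renormalize."""
    like = np.exp(-0.5 * (np.asarray(residuals) / sigma) ** 2)
    w = np.asarray(w) * like
    return w / w.sum()

def blended_control(w, submodel_mpc_outputs):
    """MPC-MMCS output: weighted sum of per-submodel MPC outputs,
    giving smooth switching between operating modes."""
    return float(np.dot(w, submodel_mpc_outputs))

w = np.ones(3) / 3.0               # three typical operating modes, uniform prior
for _ in range(5):                 # submodel 0 matches the plant best here
    w = update_weights(w, residuals=[0.1, 1.0, 2.0])
u = blended_control(w, [0.05, 0.10, 0.20])
```

After a few updates the best-matching submodel dominates the blend, while the weights always stay normalized, so the control output varies continuously as the operating mode changes.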

  2. Prediction of the Pharmacokinetics, Pharmacodynamics, and Efficacy of a Monoclonal Antibody, Using a Physiologically Based Pharmacokinetic FcRn Model

    Science.gov (United States)

    Chetty, Manoranjenni; Li, Linzhong; Rose, Rachel; Machavaram, Krishna; Jamei, Masoud; Rostami-Hodjegan, Amin; Gardner, Iain

    2015-01-01

    Although the advantages of physiologically based pharmacokinetic (PBPK) models are now well established, PBPK models that are linked to pharmacodynamic (PD) models to predict the pharmacokinetics (PK), PD, and efficacy of monoclonal antibodies (mAbs) in humans are uncommon. The aim of this study was to develop a PD model that could be linked to a physiologically based mechanistic FcRn model to predict the PK, PD, and efficacy of efalizumab. The mechanistic FcRn model for mAbs with target-mediated drug disposition within the Simcyp population-based simulator was used to simulate the pharmacokinetic profiles for three different single doses and two multiple doses of efalizumab administered to virtual Caucasian healthy volunteers. The elimination of efalizumab was modeled with both a target-mediated component (specific) and catabolism in the endosome (non-specific). This model accounted for the binding between the neonatal Fc receptor (FcRn) and efalizumab (protective against elimination) and for changes in CD11a target concentration. An integrated response model was then developed to predict the changes in mean Psoriasis Area and Severity Index (PASI) scores that were measured in a clinical study as an efficacy marker for efalizumab treatment. PASI scores were approximated as continuous and following a first-order asymptotic progression model. The reported steady-state asymptote (Yss) and baseline score [Y(0)] were applied, and parameter estimation was used to determine the half-life of progression (Tp) of psoriasis. The results suggested that simulations using this model were able to recover the changes in PASI scores (indicating efficacy) observed during clinical studies. Simulations of both single-dose and multiple-dose efalizumab concentration-time profiles, as well as suppression of CD11a concentrations, recovered clinical data reasonably well. It can be concluded that the developed PBPK FcRn model linked to a PD model adequately predicted the PK, PD, and efficacy of efalizumab.
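The efficacy model described above is a first-order asymptotic progression of the PASI score with baseline Y(0), asymptote Yss, and progression half-life Tp. One plausible closed-form reading of that description (the exact parameterization used in the study is not given in the abstract, and the numbers below are illustrative):

```python
import math

def pasi(t, y0, y_ss, tp):
    """First-order asymptotic progression of the PASI score:
    PASI(t) = Yss + (Y0 - Yss) * exp(-ln(2) * t / Tp),
    i.e. the score moves halfway from the baseline Y0 toward the
    asymptote Yss every progression half-life Tp."""
    return y_ss + (y0 - y_ss) * math.exp(-math.log(2.0) * t / tp)

# Illustrative numbers only: baseline 20, asymptote 4, half-life 8 weeks.
score_start = pasi(0.0, 20.0, 4.0, 8.0)     # = Y0
score_halfway = pasi(8.0, 20.0, 4.0, 8.0)   # midpoint of Y0 and Yss
```

Under this reading, parameter estimation against observed PASI trajectories reduces to fitting the single rate constant ln(2)/Tp once Y(0) and Yss are fixed from the reported values.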

  3. A diffusivity model for predicting VOC diffusion in porous building materials based on fractal theory.

    Science.gov (United States)

    Liu, Yanfeng; Zhou, Xiaojun; Wang, Dengjia; Song, Cong; Liu, Jiaping

    2015-12-15

    Most building materials are porous media, and the internal diffusion coefficients of such materials have an important influence on the emission characteristics of volatile organic compounds (VOCs). The pore structure of porous building materials has a significant impact on the diffusion coefficient; however, its complex structural characteristics make model development difficult, and the existing prediction models of the diffusion coefficient are flawed and need to be improved. Using scanning electron microscope (SEM) observations and mercury intrusion porosimetry (MIP) tests of typical porous building materials, this study developed a new diffusivity model: the multistage series-connection fractal capillary-bundle (MSFC) model. The model considers the variable-diameter capillaries formed by macropores connected in series as the main mass transfer paths, with the diameter distribution of the capillary bundles obeying a fractal power law in the cross section. In addition, the tortuosity of the macrocapillary segments with different diameters is obtained from fractal theory. Mesopores serve as connections between the macrocapillary segments rather than as main mass transfer paths. The theoretical results obtained using the MSFC model yielded highly accurate predictions of the diffusion coefficients and were in good agreement with the VOC concentration measurements in the environmental test chamber. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Improved prediction of higher heating value of biomass using an artificial neural network model based on proximate analysis.

    Science.gov (United States)

    Uzun, Harun; Yıldız, Zeynep; Goldfarb, Jillian L; Ceylan, Selim

    2017-06-01

    As biomass becomes more integrated into our energy feedstocks, the ability to predict its combustion enthalpy from routine data such as carbon, ash, and moisture content enables rapid decisions about utilization. The present work constructs a novel artificial neural network model with a 3-3-1 tangent sigmoid architecture to predict biomasses' higher heating values from only their proximate analyses, requiring minimal specificity compared to models based on elemental composition. The model presented has a considerably higher correlation coefficient (0.963) and lower root mean square error (0.375), mean absolute error (0.328), and mean bias error (0.010) than other models presented in the literature which, at least when applied to the present data set, tend to under-predict the combustion enthalpy. Copyright © 2017 Elsevier Ltd. All rights reserved.
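The 3-3-1 tangent-sigmoid architecture is small enough to write out as a forward pass. The weights below are random placeholders rather than the paper's trained values, and the input row stands in for the three proximate-analysis quantities:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 3)), np.zeros(3)   # 3 inputs -> 3 hidden units
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # 3 hidden -> 1 output (HHV)

def predict_hhv(x):
    """Forward pass of the 3-3-1 network: tangent-sigmoid (tanh) hidden
    layer, linear output. Rows of x hold the three proximate-analysis
    inputs (e.g. fixed carbon, volatile matter, ash fractions)."""
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2).ravel()

y = predict_hhv(np.array([[0.17, 0.75, 0.08]]))  # one hypothetical biomass
```

Training (not shown) would fit W1, b1, W2, b2 against measured higher heating values; the architecture itself has only 16 free parameters, which is why proximate analysis alone suffices as input.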

  5. Mortality Prediction Model of Septic Shock Patients Based on Routinely Recorded Data

    Directory of Open Access Journals (Sweden)

    Marta Carrara

    2015-01-01

    Full Text Available We studied the problem of mortality prediction in two datasets, the first composed of 23 septic shock patients and the second composed of 73 septic subjects selected from the public database MIMIC-II. For each patient we derived hemodynamic variables, laboratory results, and clinical information of the first 48 hours after shock onset and we performed univariate and multivariate analyses to predict mortality in the following 7 days. The results show interesting features that individually identify significant differences between survivors and nonsurvivors and features which gain importance only when considered together with the others in a multivariate regression model. This preliminary study on two small septic shock populations represents a novel contribution towards new personalized models for an integration of multiparameter patient information to improve critical care management of shock patients.

  6. Prediction model of energy consumption in Jiangsu Province based on constraint condition of carbon emission

    Science.gov (United States)

    Chang, Z. G.; Xue, T. T.; Chen, Y. J.; Chao, X. H.

    2017-11-01

    In order to achieve the energy conservation targets and economic development goals of Jiangsu Province under the constraint of carbon emission, this paper uses the grey GM(1,1) model to predict and optimize the consumption structure of major energy sources (coal, oil, natural gas, etc.) in Jiangsu Province in the "13th Five-Year" period and the next seven years. The predictions meet the requirement of reducing China's carbon dioxide emissions per unit GDP by 50%. The results show that the proposed approach and model are effective. Finally, we put forward opinions and suggestions on energy saving and emission reduction, the adjustment of the energy structure, and the policy of coal consumption in Jiangsu Province.

  7. An Analytical Model for Fatigue Life Prediction Based on Fracture Mechanics and Crack Closure

    DEFF Research Database (Denmark)

    Ibsø, Jan Behrend; Agerskov, Henning

    1996-01-01

    test specimens are compared with fatigue life predictions using a fracture mechanics approach. In the calculation of the fatigue life, the influence of the welding residual stresses and crack closure on the fatigue crack growth is considered. A description of the crack closure model for analytical...... of the analytical fatigue lives. Both the analytical and experimental results obtained show that the Miner rule may give quite unconservative predictions of the fatigue life for the types of stochastic loading studied....... determination of the fatigue life is included. Furthermore, the results obtained in studies of the various parameters that have an influence on the fatigue life, are given. A very good agreement between experimental and analytical results is obtained, when the crack closure model is used in determination...
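The fatigue-life calculation described above integrates fatigue crack growth within a fracture-mechanics model. The excerpt does not name the growth law, so this sketch uses the standard Paris relation da/dN = C·ΔK^m with ΔK = Y·Δσ·√(πa), and all constants are illustrative rather than the paper's values:

```python
import math

def cycles_to_failure(a_i, a_f, d_sigma, C=1e-12, m=3.0, Y=1.0, steps=20000):
    """Trapezoidal integration of the Paris law: N = integral from a_i to
    a_f of da / (C * dK^m), with dK = Y * d_sigma * sqrt(pi * a)."""
    total, prev = 0.0, None
    for j in range(steps + 1):
        a = a_i + (a_f - a_i) * j / steps
        g = 1.0 / (C * (Y * d_sigma * math.sqrt(math.pi * a)) ** m)
        if prev is not None:
            total += 0.5 * (g + prev) * (a_f - a_i) / steps
        prev = g
    return total

# Illustrative constants only: crack from 1 mm to 10 mm, stress range 100.
N_numeric = cycles_to_failure(a_i=0.001, a_f=0.01, d_sigma=100.0)

# Closed form for m != 2, used as a consistency check on the integration:
N_closed = (0.01 ** -0.5 - 0.001 ** -0.5) / (
    1e-12 * (100.0 * math.sqrt(math.pi)) ** 3 * (1.0 - 1.5))
```

Effects such as welding residual stresses and crack closure, central to the paper, would enter by replacing ΔK with an effective range ΔK_eff inside the integrand.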

  8. Application of new methods based on ECMWF ensemble model for predicting severe convective weather situations

    Science.gov (United States)

    Lazar, Dora; Ihasz, Istvan

    2013-04-01

    Short- and medium-range operational forecasts and warnings of severe weather are among the most important activities of the Hungarian Meteorological Service. Our study provides a comprehensive summary of newly developed methods based on ECMWF ensemble forecasts that support successful prediction of convective weather situations. The first part of the study gives a brief overview of the components of atmospheric convection: atmospheric lifting, convergence, and vertical wind shear. Atmospheric instability is often characterized by so-called instability indices; one of the most popular and frequently used is the convective available potential energy. Heavy convective events, such as intense storms, supercells, and tornadoes, require vertical instability, adequate moisture, and vertical wind shear. As a first step, various statistical analyses of these three parameters were performed on a nine-year time series of the 51-member ensemble forecasting model for the convective summer period. The relationship between the ratio of convective to total precipitation and the above three parameters was studied by different statistical methods. Four new visualization methods were applied to support successful forecasts of severe weather. Two of the four methods, the ensemble meteogram and the ensemble vertical profiles, had been available at the beginning of our work; both show the probability of meteorological parameters for a selected location. Additionally, two new methods have been developed. The first provides a probability map of the event exceeding predefined values, so the spatial uncertainty of the event is well defined; since convective weather events often occur sporadically in space, such maps give very good support for selecting the expected event area.
Another new visualization tool shows time
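One of the new visualization methods, the probability map of an event exceeding predefined values, reduces to a simple per-gridpoint computation over the ensemble members. A toy numpy sketch (4 members on a 2x2 grid with invented "CAPE" values; the operational ECMWF ensemble has 51 members):

```python
import numpy as np

def exceedance_probability(ensemble, threshold):
    """Probability map from an ensemble forecast: at every grid point, the
    fraction of members whose field (e.g. CAPE) exceeds the threshold."""
    return (np.asarray(ensemble) > threshold).mean(axis=0)

# Toy ensemble, shape (members, ny, nx); values play the role of CAPE (J/kg).
ens = np.array([[[ 200., 1500.], [ 800.,  100.]],
                [[1200., 1600.], [ 900.,  300.]],
                [[1300.,  400.], [ 700., 2500.]],
                [[1400., 1800.], [ 600.,  200.]]])
p_map = exceedance_probability(ens, 1000.0)
```

Plotted over the model domain, such a map shows directly where the ensemble agrees that the instability threshold will be exceeded, which is the spatial-uncertainty information the method is designed to convey.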

  9. A discriminant analysis prediction model of non-syndromic cleft lip with or without cleft palate based on risk factors.

    Science.gov (United States)

    Li, Huixia; Luo, Miyang; Luo, Jiayou; Zheng, Jianfei; Zeng, Rong; Du, Qiyun; Fang, Junqun; Ouyang, Na

    2016-11-23

    A risk prediction model of non-syndromic cleft lip with or without cleft palate (NSCL/P) was established by discriminant analysis to predict the individual risk of NSCL/P in pregnant women. A hospital-based case-control study was conducted with 113 cases of NSCL/P and 226 controls without NSCL/P. The cases and controls were obtained from 52 birth-defect surveillance hospitals in Hunan Province, China. A questionnaire was administered in face-to-face interviews to collect the variables relevant to NSCL/P. Logistic regression models were used to analyze the influencing factors of NSCL/P, and a stepwise Fisher discriminant analysis was subsequently used to construct the prediction model. In the univariate analysis, 13 influencing factors were related to NSCL/P, of which the following 8 entered the discriminant prediction model as predictors: family income, maternal occupational hazards exposure, premarital medical examination, housing renovation, milk/soymilk intake in the first trimester of pregnancy, paternal occupational hazards exposure, paternal strong tea drinking, and family history of NSCL/P. The model was statistically significant (lambda = 0.772, chi-square = 86.044, df = 8), and individuals were predicted to be NSCL/P cases or controls with a sensitivity of 74.3 % and a specificity of 88.5 %. The area under the receiver operating characteristic curve (AUC) was 0.846. The prediction model established using the risk factors of NSCL/P can be useful for predicting the risk of NSCL/P. Further research is needed to improve the model and to confirm its validity and reliability.
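The Fisher discriminant analysis at the core of the model has a compact two-class form: project onto w = Sw⁻¹(μ1 − μ0) and threshold at the projected midpoint of the class means. A numpy sketch on synthetic risk-factor scores (equal priors assumed; the stepwise predictor selection and the study's actual variables are omitted):

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Two-class Fisher discriminant: direction w = Sw^{-1}(mu1 - mu0),
    with Sw the pooled within-class scatter, and the decision threshold
    at the projected midpoint of the class means."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    c = 0.5 * w @ (mu0 + mu1)
    return w, c

# Synthetic controls (class 0) and cases (class 1) on 3 risk-factor scores.
rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(50, 3))
X1 = rng.normal(3.0, 1.0, size=(50, 3))
w, c = fisher_discriminant(X0, X1)
pred0 = (X0 @ w > c).astype(int)   # 1 = predicted case, 0 = predicted control
pred1 = (X1 @ w > c).astype(int)
```

Sensitivity and specificity, the quantities the paper reports, fall out directly as the fraction of cases with pred = 1 and the fraction of controls with pred = 0.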

  10. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    Science.gov (United States)

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

    Model Predictive Control is a valuable tool for the process control engineer in a wide variety of applications. Because of this the structure of an MPC can vary dramatically from application to application. There have been a number of works dedicated to MPC tuning for specific cases. Since MPCs can differ significantly, this means that these tuning methods become inapplicable and a trial and error tuning approach must be used. This can be quite time consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. This approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages to this approach are that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC. As well, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, in addition to being able to use multiple inputs to determine tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study will be presented in order to illustrate the use of the tuning algorithm. This will include how different definitions of "optimum" control can arise, and how they are accounted for in the multi-objective decision making algorithm. The resulting tuning parameters from each of the definition sets will be compared, and in doing so show that the tuning parameters vary in order to meet each definition of optimum control, thus showing the generalized automated tuning algorithm approach for tuning MPCs is feasible.
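The tuning loop the abstract describes, candidate tuning parameters evolved by a GA and scored against a definition of optimum control, can be sketched in miniature. This is a generic stand-in rather than the paper's algorithm: the objective below is a hypothetical scalar surface (imagining an ideal horizon weight of 2.0 and move suppression of 0.5), and the fuzzy multi-objective scoring is not reproduced:

```python
import random

def genetic_tune(objective, bounds, pop_size=30, generations=60, seed=0):
    """Bare-bones real-coded GA: truncation selection, averaging crossover,
    single-gene Gaussian mutation clipped to the parameter bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=objective)[: pop_size // 3]   # keep the best third
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            child = [(a + b) / 2.0 for a, b in zip(p1, p2)]   # crossover
            i = rng.randrange(dim)                            # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

best = genetic_tune(lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2,
                    bounds=[(0.0, 10.0), (0.0, 1.0)])
```

In the paper's setting, the lambda would be replaced by a closed-loop simulation of the MPC scored through the fuzzy decision-making layer, which is exactly what makes the approach independent of any particular MPC structure.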

  11. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screening of populations at risk. Identifying individuals at high risk should allow targeted screening and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for the identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identifying patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer-Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy, and AUC for both models were calculated. A melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those who sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi: OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (1 to 10 dysplastic naevi: OR = 2.672; 95% CI 1.572-4.540; more than 10: OR = 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype, and the presence of congenital naevi.
Red hair, phototype I and large congenital naevi were

  12. Prediction of interindividual variation in drug plasma levels in vivo from individual enzyme kinetic data and physiologically based pharmacokinetic modeling

    NARCIS (Netherlands)

    Bogaards, J.J.P.; Hissink, E.M.; Briggs, M.; Weaver, R.; Jochemsen, R.; Jackson, P.; Bertrand, M.; Bladeren, P. van

    2000-01-01

    A strategy is presented to predict interindividual variation in drug plasma levels in vivo by the use of physiologically based pharmacokinetic modeling and human in vitro metabolic parameters, obtained through the combined use of microsomes containing single cytochrome P450 enzymes and a human liver

  13. The Relevance Voxel Machine (RVoxM): A Self-Tuning Bayesian Model for Informative Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2012-01-01

    This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatiall...

  14. Development and validation of a prediction model for long-term sickness absence based on occupational health survey variables

    NARCIS (Netherlands)

    Roelen, Corne; Thorsen, Sannie; Heymans, Martijn; Twisk, Jos; Bultmann, Ute; Bjorner, Jakob

    2018-01-01

    Purpose: The purpose of this study is to develop and validate a prediction model for identifying employees at increased risk of long-term sickness absence (LTSA), by using variables commonly measured in occupational health surveys. Materials and methods: Based on the literature, 15 predictor

  15. Predictions on the Development Dimensions of Provincial Tourism Discipline Based on the Artificial Neural Network BP Model

    Science.gov (United States)

    Yang, Yang; Hu, Jun; Lv, Yingchun; Zhang, Mu

    2013-01-01

    As the tourism industry has gradually become the strategic mainstay industry of the national economy, the scope of the tourism discipline has developed rigorously. This paper makes a predictive study on the development of the scope of Guangdong provincial tourism discipline based on the artificial neural network BP model in order to find out how…

  16. Wave Disturbance Reduction of a Floating Wind Turbine Using a Reference Model-based Predictive Control

    DEFF Research Database (Denmark)

    Christiansen, Søren; Tabatabaeipour, Seyed Mojtaba; Bak, Thomas

    2013-01-01

    a controller designed for an onshore wind turbine yields instability in the fore-aft rotation. In this paper, we propose a general framework, where a reference model models the desired closed-loop behavior of the system. Model predictive control combined with a state estimator finds the optimal rotor blade...... pitch such that the state trajectories of the controlled system tracks the reference trajectories. The framework is demonstrated with a reference model of the desired closed-loop system undisturbed by the incident waves. This allows the wave-induced motion of the platform to be damped significantly...... compared to a baseline floating wind turbine controller at the cost of more pitch action....

  17. A Simple Physics-Based Model Predicts Oil Production from Thousands of Horizontal Wells in Shales

    KAUST Repository

    Patzek, Tadeusz

    2017-10-18

    Over the last six years, crude oil production from shales and ultra-deep GOM in the United States has accounted for most of the net increase of global oil production. Therefore, it is important to have a good predictive model of oil production and ultimate recovery in shale wells. Here we introduce a simple model of producing oil and solution gas from horizontal hydrofractured wells. This model is consistent with the basic physics and geometry of the extraction process. We then apply our model to thousands of wells in the Eagle Ford shale. Given well geometry, we obtain a one-dimensional nonlinear pressure diffusion equation that governs flow of mostly oil and solution gas. In principle, solutions of this equation depend on many parameters, but in practice and within a given oil shale, all but three can be fixed at typical values, leading to a nonlinear diffusion problem we linearize and solve exactly with a scaling

  18. Template-based and free modeling of I-TASSER and QUARK pipelines using predicted contact maps in CASP12.

    Science.gov (United States)

    Zhang, Chengxin; Mortuza, S M; He, Baoji; Wang, Yanting; Zhang, Yang

    2018-03-01

    We develop two complementary pipelines, "Zhang-Server" and "QUARK", based on the I-TASSER and QUARK pipelines for template-based modeling (TBM) and free modeling (FM), and test them in the CASP12 experiment. The combination of I-TASSER and QUARK successfully folds three medium-size FM targets that have more than 150 residues, even though the interplay between the two pipelines still awaits further optimization. Newly developed sequence-based contact prediction by NeBcon plays a critical role in enhancing the quality of the models produced by the new pipelines, particularly for FM targets. The inclusion of NeBcon predicted contacts as restraints in the QUARK simulations results in an average TM-score of 0.41 for the best in top five predicted models, which is 37% higher than that by the QUARK simulations without contacts. In particular, there are seven targets that are converted from non-foldable to foldable (TM-score >0.5) due to the use of contact restraints in the simulations. Another additional feature in the current pipelines is the local structure quality prediction by ResQ, which provides a robust residue-level modeling error estimation. Despite the success, significant challenges still remain in ab initio modeling of multi-domain proteins and folding of β-proteins with complicated topologies bound by long-range strand-strand interactions. Improvements in domain boundary and long-range contact prediction, as well as optimal use of the predicted contacts and multiple threading alignments, are critical to address these issues seen in the CASP12 experiment. © 2017 Wiley Periodicals, Inc.

  19. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J. D. (Prostat, Mesa, AZ); Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
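The evidence-theory quantities propagated by such a sampling strategy are belief and plausibility, computed from a body of evidence (focal elements with assigned masses). A minimal sketch with a toy body of evidence over an epistemically uncertain input (the focal elements below are invented for illustration):

```python
def belief_plausibility(masses, A):
    """Belief and plausibility of a set A from a body of evidence given as
    {focal element (frozenset): mass}. Bel sums the mass committed entirely
    to subsets of A; Pl sums the mass of all focal elements consistent with
    A; always Bel(A) <= Pl(A)."""
    bel = sum(m for B, m in masses.items() if B <= A)
    pl = sum(m for B, m in masses.items() if B & A)
    return bel, pl

# Toy evidence about an uncertain model input taking values low/mid/high.
m = {frozenset({'low'}):         0.2,
     frozenset({'low', 'mid'}):  0.5,
     frozenset({'mid', 'high'}): 0.3}
bel, pl = belief_plausibility(m, frozenset({'low', 'mid'}))
```

The gap between Bel and Pl is exactly the less restrictive specification of uncertainty the abstract refers to: a single probability measure would commit to one number inside that interval.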

  20. Toward improving the reliability of hydrologic prediction: Model structure uncertainty and its quantification using ensemble-based genetic programming framework

    Science.gov (United States)

    Parasuraman, Kamban; Elshorbagy, Amin

    2008-12-01

    Uncertainty analysis is starting to be widely acknowledged as an integral part of hydrological modeling. The conventional treatment of uncertainty analysis in hydrologic modeling is to assume a deterministic model structure and treat its associated parameters as imperfectly known, thereby neglecting the uncertainty associated with the model structure. In this paper, a modeling framework that can explicitly account for the effect of model structure uncertainty is proposed. The framework is based on initially generating different realizations of the original data set using a non-parametric bootstrap method, and then exploiting the ability of self-organizing algorithms, namely genetic programming, to evolve their own model structure for each of the resampled data sets. The resulting ensemble of models is then used to quantify the uncertainty associated with the model structure. The performance of the proposed modeling framework is analyzed with regard to its ability to characterize the evapotranspiration process at the Southwest Sand Storage facility, located near Ft. McMurray, Alberta. Eddy-covariance-measured actual evapotranspiration is modeled as a function of net radiation, air temperature, ground temperature, relative humidity, and wind speed. To investigate the relation between model complexity, prediction accuracy, and uncertainty, two sets of experiments were carried out by varying the level of mathematical operators that can be used to define the predictand-predictor relationship. While the first set uses just the additive operators, the second set uses both the additive and the multiplicative operators to define the predictand-predictor relationship. The results suggest that increasing model complexity may lead to better prediction accuracy but at the expense of increased uncertainty. Compared to the model parameter uncertainty, the relative contribution of model structure uncertainty to the predictive uncertainty of a model is
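The bootstrap-plus-ensemble idea can be shown in miniature. In this sketch each ensemble member is a fixed-form polynomial refit to a bootstrap resample; the paper instead lets genetic programming evolve a different model structure per resample, which is what makes the resulting spread reflect structure uncertainty. The data here are synthetic, not the evapotranspiration measurements:

```python
import numpy as np

def bootstrap_ensemble_prediction(x, y, x_new, n_models=200, seed=0):
    """Refit a model on each bootstrap resample of (x, y) and collect its
    prediction at x_new; return the ensemble mean and a 95% interval.
    Every member here is a degree-2 polynomial fit."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x), size=len(x))   # resample with replacement
        coeffs = np.polyfit(x[idx], y[idx], deg=2)
        preds.append(np.polyval(coeffs, x_new))
    preds = np.array(preds)
    return preds.mean(), np.percentile(preds, [2.5, 97.5])

x = np.linspace(0.0, 1.0, 50)
y = x ** 2 + np.random.default_rng(1).normal(0.0, 0.01, size=50)  # synthetic data
mean_pred, (lo, hi) = bootstrap_ensemble_prediction(x, y, x_new=0.5)
```

Replacing the fixed polynomial with a per-resample structure search widens the interval by the structural component of the uncertainty, which is the quantity the framework sets out to expose.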

  1. Learning MRI-based classification models for MGMT methylation status prediction in glioblastoma.

    Science.gov (United States)

    Kanas, Vasileios G; Zacharaki, Evangelia I; Thomas, Ginu A; Zinn, Pascal O; Megalooikonomou, Vasileios; Colen, Rivka R

    2017-03-01

    The O6-methylguanine-DNA-methyltransferase (MGMT) promoter methylation has been shown to be associated with improved outcomes in patients with glioblastoma (GBM) and may be a predictive marker of sensitivity to chemotherapy. However, determination of the MGMT promoter methylation status requires tissue obtained via surgical resection or biopsy. The aim of this study was to assess the ability of quantitative and qualitative imaging variables to predict MGMT methylation status noninvasively. A retrospective analysis of MR images from GBM patients was conducted. Multivariate prediction models were obtained by machine-learning methods and tested on data from The Cancer Genome Atlas (TCGA) database. The status of MGMT promoter methylation was predicted with an accuracy of up to 73.6%. Experimental analysis showed that the edema/necrosis volume ratio, tumor/necrosis volume ratio, edema volume, and tumor location and enhancement characteristics were the most significant variables with respect to the status of MGMT promoter methylation in GBM. The obtained results provide further evidence of an association between standard preoperative MRI variables and MGMT methylation status in GBM. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Early Prediction of Disease Progression in Small Cell Lung Cancer: Toward Model-Based Personalized Medicine in Oncology.

    Science.gov (United States)

    Buil-Bruna, Núria; Sahota, Tarjinder; López-Picazo, José-María; Moreno-Jiménez, Marta; Martín-Algarra, Salvador; Ribba, Benjamin; Trocóniz, Iñaki F

    2015-06-15

    Predictive biomarkers can play a key role in individualized disease monitoring. Unfortunately, the use of biomarkers in clinical settings has thus far been limited. We have previously shown that mechanism-based pharmacokinetic/pharmacodynamic modeling enables integration of nonvalidated biomarker data to provide predictive model-based biomarkers for response classification. The biomarker model we developed incorporates an underlying latent variable (disease) representing (unobserved) tumor size dynamics, which is assumed to drive biomarker production and to be influenced by exposure to treatment. Here, we show that by integrating CT scan data, the population model can be expanded to include patient outcome. Moreover, we show that in conjunction with routine medical monitoring data, the population model can support accurate individual predictions of outcome. Our combined model predicts that a change in disease of 29.2% (relative standard error 20%) between two consecutive CT scans (i.e., 6-8 weeks) gives a probability of disease progression of 50%. We apply this framework to an external dataset containing biomarker data from 22 small cell lung cancer patients (four patients progressing during follow-up). Using only data up until the end of treatment (a total of 137 lactate dehydrogenase and 77 neuron-specific enolase observations), the statistical framework prospectively identified 75% of the individuals as having a predictable outcome in follow-up visits. This included two of the four patients who eventually progressed. In all identified individuals, the model-predicted outcomes matched the observed outcomes. This framework allows at risk patients to be identified early and therapeutic intervention/monitoring to be adjusted individually, which may improve overall patient survival. ©2015 American Association for Cancer Research.

  3. Predicting corrosion product transport in nuclear power stations using a solubility-based model for flow-accelerated corrosion

    International Nuclear Information System (INIS)

    Burrill, K.A.; Cheluget, E.L.

    1995-01-01

    A general model of solubility-driven flow-accelerated corrosion of carbon steel was derived based on the assumption that the solubilities of ferric oxyhydroxide and magnetite control the rate of film dissolution. This process involves the dissolution of an oxide film due to fast-flowing coolant unsaturated in iron. The soluble iron is produced by (i) the corrosion of base metal under a porous oxide film and (ii) the dissolution of the oxide film at the fluid-oxide film interface. The iron released at the pipe wall is transferred into the bulk flow by turbulent mass transfer. The model is suitable for calculating concentrations of dissolved iron in feedtrain lines. These iron levels were used to calculate sludge transport rates around the feedtrain. The model was used to predict sludge transport rates due to flow accelerated corrosion of major feedtrain piping in a CANDU reactor. The predictions of the model compare well with plant measurements
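The solubility-driven mechanism described above reduces, at its core, to a mass-transfer-limited flux of dissolved iron from the oxide-water interface into unsaturated coolant. A minimal numeric sketch, with illustrative round numbers rather than actual CANDU feedtrain values:

```python
# Illustrative mass-transfer-limited iron release at a pipe wall.
# The coefficient and concentrations below are made-up round values,
# not plant or model data from the paper.
k_m = 1.0e-4      # turbulent mass-transfer coefficient, m/s
c_sat = 5.0e-3    # oxide solubility at the oxide-water interface, mol/m^3
c_bulk = 1.0e-3   # dissolved iron in the bulk coolant, mol/m^3

# Iron flux into the flow; positive while the coolant is unsaturated in iron.
flux = k_m * (c_sat - c_bulk)   # mol m^-2 s^-1
print(flux)
```

Summing such fluxes over each feedtrain section, scaled by pipe surface area, is what yields section-by-section sludge transport rates.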

  4. Coarsening of the Sn-Pb Solder Microstructure in Constitutive Model-Based Predictions of Solder Joint Thermal Mechanical Fatigue

    Energy Technology Data Exchange (ETDEWEB)

    Vianco, P.T.; Burchett, S.N.; Neilsen, M.K.; Rejent, J.A.; Frear, D.R.

    1999-04-12

    Thermal mechanical fatigue (TMF) is an important damage mechanism for solder joints exposed to cyclic temperature environments. Predicting the service reliability of solder joints exposed to such conditions requires two knowledge bases: first, the extent of fatigue damage incurred by the solder microstructure leading up to fatigue crack initiation must be quantified in both the time and space domains; second, fatigue crack initiation and growth must be predicted, since this metric determines, explicitly, the loss of solder joint functionality in both its mechanical fastening and electrical continuity roles. This paper describes recent progress in a research effort to establish a microstructurally based constitutive model that predicts TMF deformation of 63Sn-37Pb solder in electronic solder joints up to the crack initiation step. The model is implemented in a finite element setting; therefore, the effects of both global and local thermal expansion mismatch conditions in the joint that would arise from temperature cycling can be captured.

  5. Evaluating crown fire rate of spread predictions from physics-based models

    Science.gov (United States)

    C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont

    2015-01-01

    Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...

  6. A Model to Predict Nitrogen Losses in Advanced Soil-Based Wastewater Treatment Systems

    Science.gov (United States)

    Morales, I.; Cooper, J.; Loomis, G.; Kalen, D.; Amador, J.; Boving, T. B.

    2014-12-01

    Most of the non-point-source nitrogen (N) load in rural areas is attributed to onsite wastewater treatment systems (OWTS). Nitrogen compounds are considered environmental pollutants because they deplete oxygen availability in water bodies and produce eutrophication. The objective of this study was to simulate the fate and transport of nitrogen in OWTS. The commercially available 2D/3D HYDRUS software was used to develop a fate-and-transport model. Experimental data from a laboratory mesocosm study, including soil moisture content, NH4+, and NO3- measurements, were used to calibrate the model. Three types of OWTS were simulated: (1) pipe-and-stone (P&S); (2) pressurized shallow narrow drainfield (SND), an advanced soil drainfield; and (3) Geomat (GEO), a variation of SND. To better understand the nitrogen removal mechanism and the performance of OWTS technologies, replicate (n = 3) intact soil mesocosms were used with 15N-labelled nitrogen inputs. As a result, it was estimated that N removal by denitrification was predominant in P&S, whereas N appeared to be removed mainly by nitrification in SND and GEO. The calibrated model was used to estimate nitrogen fluxes for both conventional and advanced OWTS, and it predicted the N losses from nitrification and denitrification in all OWTS. These findings help provide practitioners with guidelines to estimate N removal efficiencies for OWTS and to predict N loads and spatial distribution for identifying non-point sources.

  7. Developing predictive models for toxicity of organic chemicals to green algae based on mode of action.

    Science.gov (United States)

    Bakire, Serge; Yang, Xinya; Ma, Guangcai; Wei, Xiaoxuan; Yu, Haiying; Chen, Jianrong; Lin, Hongjun

    2018-01-01

    Organic chemicals in the aquatic ecosystem may inhibit algae growth and subsequently lead to the decline of primary productivity. Growth inhibition tests are required for ecotoxicological assessments for regulatory purposes. In silico study is playing an important role in replacing or reducing animal tests and decreasing experimental expense due to its efficiency. In this work, a series of theoretical models was developed for predicting algal growth inhibition (log EC50) after 72 h exposure to diverse chemicals. In total, 348 organic compounds were classified into five modes of toxic action using the Verhaar scheme. Each model was established by using molecular descriptors that characterize electronic and structural properties. External validation and leave-one-out cross-validation proved the statistical robustness of the derived models. Thus they can be used to predict log EC50 values of chemicals that lack authorized algal growth inhibition values (72 h). This work systematically studied algal growth inhibition according to toxic mode, and the developed model suite covers all five toxic modes. The outcome of this research will promote toxic mechanism analysis and is applicable to structurally diverse chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
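The workflow in this abstract — fit a descriptor-based regression per mode of action, then check robustness by leave-one-out cross-validation — can be sketched with scikit-learn on synthetic data (the two descriptors and their coefficients below are hypothetical, not the paper's):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)

# Hypothetical descriptors (e.g. a hydrophobicity-like and an electronic term)
# and log EC50 responses; a real suite would fit one model per mode of action.
X = rng.normal(size=(30, 2))
y = 1.2 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, size=30)

model = LinearRegression().fit(X, y)

# Leave-one-out cross-validation, the robustness check named in the abstract.
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(float(q2), 3))
```

A Q² close to 1 indicates the fitted relationship survives removing each compound in turn.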

  8. Predicting the potential geographical distribution of Rhodnius neglectus (Hemiptera, Reduviidae) based on ecological niche modeling.

    Science.gov (United States)

    Gurgel-Goncalves, Rodrigo; Cuba, César Augusto Cuba

    2009-07-01

    Rhodnius neglectus is frequently found in palm trees and bird nests in sylvatic environments. However, adult specimens infected by Trypanosoma cruzi have been invading houses in central Brazil. Analyzing and predicting the geographical distribution of this species may improve vector surveillance strategies for Chagas disease. Ecological niche modeling using the genetic algorithm for rule-set production (GARP) was applied to predict the geographical distribution of R. neglectus from occurrence records and a set of 23 predictor variables (e.g., temperature, precipitation, altitude, and vegetation). Additionally, the geographical distribution of R. neglectus was compared with the geographical distribution of four species of palm trees and two species of birds from the study region. The models were able to predict, with high probability, the occurrence of R. neglectus as a regular (although nonendemic) species of the Cerrado biome in central Brazil. Caatinga, Amazonian savanna, Pantanal, and the Bolivian Chaco appear as areas with lower probabilities of potential occurrence for the species. A great overlap was observed between the distribution of R. neglectus, palm trees (Acrocomia aculeata and Syagrus oleracea), and birds (Phacellodomus ruber and Pseudoseisura cristata). By including new records for R. neglectus (from both sylvatic and domestic environments), our study showed a distribution increase toward the west and northeast areas of Brazil in the "diagonal of open/dry ecoregions of South America". These results should aid Chagas disease vector surveillance programs, given that household invasion by Rhodnius species maintains the risk of disease transmission and limits control strategies.

  9. Survival prediction based on compound covariate under Cox proportional hazard models.

    Directory of Open Access Journals (Sweden)

    Takeshi Emura

    Survival prediction from a large number of covariates is a current focus of statistical and medical research. In this paper, we study a methodology known as the compound covariate prediction performed under univariate Cox proportional hazard models. We demonstrate via simulations and real data analysis that the compound covariate method generally competes well with ridge regression and Lasso methods, both already well-studied methods for predicting survival outcomes with a large number of covariates. Furthermore, we develop a refinement of the compound covariate method by incorporating likelihood information from multivariate Cox models. The new proposal is an adaptive method that borrows information contained in both the univariate and multivariate Cox regression estimators. We show that the new proposal has a theoretical justification from a statistical large sample theory and is naturally interpreted as a shrinkage-type estimator, a popular class of estimators in statistical literature. Two datasets, the primary biliary cirrhosis of the liver data and the non-small-cell lung cancer data, are used for illustration. The proposed method is implemented in R package "compound.Cox" available in CRAN at http://cran.r-project.org/.

  10. Survival prediction based on compound covariate under Cox proportional hazard models.

    Science.gov (United States)

    Emura, Takeshi; Chen, Yi-Hau; Chen, Hsuan-Yu

    2012-01-01

    Survival prediction from a large number of covariates is a current focus of statistical and medical research. In this paper, we study a methodology known as the compound covariate prediction performed under univariate Cox proportional hazard models. We demonstrate via simulations and real data analysis that the compound covariate method generally competes well with ridge regression and Lasso methods, both already well-studied methods for predicting survival outcomes with a large number of covariates. Furthermore, we develop a refinement of the compound covariate method by incorporating likelihood information from multivariate Cox models. The new proposal is an adaptive method that borrows information contained in both the univariate and multivariate Cox regression estimators. We show that the new proposal has a theoretical justification from a statistical large sample theory and is naturally interpreted as a shrinkage-type estimator, a popular class of estimators in statistical literature. Two datasets, the primary biliary cirrhosis of the liver data and the non-small-cell lung cancer data, are used for illustration. The proposed method is implemented in R package "compound.Cox" available in CRAN at http://cran.r-project.org/.
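The compound covariate idea — fit each covariate in its own univariate Cox model, then combine the estimates into a single risk score — can be sketched with a small hand-rolled Newton solver (no ties, no censoring, synthetic data); the paper's R package "compound.Cox" is the reference implementation:

```python
import numpy as np

def univariate_cox_beta(x, time, event, iters=25):
    """Newton-Raphson for a single-covariate Cox partial likelihood (no ties)."""
    order = np.argsort(time)
    x, event = x[order], event[order]
    beta = 0.0
    for _ in range(iters):
        grad, hess = 0.0, 0.0
        for i in np.flatnonzero(event):
            risk = x[i:]                          # subjects still at risk at t_i
            w = np.exp(beta * risk)
            m1 = np.sum(w * risk) / np.sum(w)
            m2 = np.sum(w * risk ** 2) / np.sum(w)
            grad += x[i] - m1
            hess -= m2 - m1 ** 2
        beta -= grad / hess
    return beta

# Synthetic survival data: 5 covariates, 2 truly prognostic, no censoring.
rng = np.random.default_rng(2)
n, p = 80, 5
X = rng.normal(size=(n, p))
true_beta = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
time = rng.exponential(np.exp(-X @ true_beta))
event = np.ones(n, dtype=bool)

# Compound covariate: one univariate fit per covariate, then a single score.
betas = np.array([univariate_cox_beta(X[:, j], time, event) for j in range(p)])
score = X @ betas
print(np.round(betas, 2))
```

The compound score can then be used as a single predictor of survival; the paper's refinement additionally shrinks it toward the multivariate Cox estimates.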

  11. A prediction model-based algorithm for computer-assisted database screening of adverse drug reactions in the Netherlands.

    Science.gov (United States)

    Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P

    2018-02-01

    The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
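A minimal sketch of this kind of screening model: logistic regression on five per-association features, evaluated by AUC, with the fitted probabilities used to rank associations for review. Feature names and data are hypothetical stand-ins for the paper's predictors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 1000

# Hypothetical features mirroring the paper's five candidate predictors:
# report count, disproportionality, HCP reports, MAH reports, Naranjo score.
X = rng.normal(size=(n, 5))
logit = 1.0 * X[:, 0] + 0.8 * X[:, 1] - 1.0
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))  # "in the SmPC" labels

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, prob)

# Priority-based listing: screen the highest-probability associations first.
priority = np.argsort(-prob)
print(round(float(auc), 3))
```

In practice the AUC would be estimated with internal validation (calibration curves and bootstrapping, as in the paper), not on the training data as shown here.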

  12. M5 model tree based predictive modeling of road accidents on non-urban sections of highways in India.

    Science.gov (United States)

    Singh, Gyanendra; Sachdeva, S N; Pal, Mahesh

    2016-11-01

    This work examines the application of the M5 model tree and the conventionally used fixed/random effect negative binomial (FENB/RENB) regression models for accident prediction on non-urban sections of highway in Haryana (India). Road accident data for a period of 2-6 years on different sections of 8 National and State Highways in Haryana were collected from police records. Data on road geometry, traffic, and road-environment variables were collected through field studies. In total, 222 data points were gathered by dividing the highways into sections with certain uniform geometric characteristics. Accident frequencies were predicted from fifteen input parameters using two modeling approaches: FENB/RENB regression and the M5 model tree. Results suggest that both models perform comparably well in terms of correlation coefficient and root mean square error values. The M5 model tree provides simple linear equations that are easy to interpret and provide better insight, indicating that this approach can effectively be used as an alternative to the RENB approach if the sole purpose is to predict motor vehicle crashes. Sensitivity analysis using the M5 model tree also suggests that its results reflect the physical conditions. Both models clearly indicate that, to improve safety on Indian highways, minor accesses to the highways need to be properly designed and controlled, service roads need to be made functional, and the dispersion of speeds needs to be brought down. Copyright © 2016 Elsevier Ltd. All rights reserved.
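scikit-learn ships neither M5 model trees nor negative binomial regression, so the sketch below uses a shallow regression tree as a rough stand-in for the piecewise structure of M5 (a true M5 additionally fits a linear model in each leaf), with synthetic Poisson accident counts:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)

# Hypothetical section-level data: two normalized predictors (say, traffic
# volume and access density) driving synthetic accident counts.
X = rng.uniform(0, 1, size=(222, 2))
lam = np.exp(0.5 + 1.5 * X[:, 0] + 1.0 * X[:, 1])
y = rng.poisson(lam).astype(float)

# Shallow regression tree as a rough M5 stand-in (M5 adds per-leaf linear fits).
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20).fit(X, y)
corr = float(np.corrcoef(y, tree.predict(X))[0, 1])
rmse = float(np.sqrt(np.mean((y - tree.predict(X)) ** 2)))
print(round(corr, 2), round(rmse, 2))
```

Correlation coefficient and RMSE are the same two comparison criteria used in the abstract.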

  13. A Model-Based Temperature-Prediction Method by Temperature-Induced Spectral Variation and Correction of the Temperature Effect.

    Science.gov (United States)

    Yang, Qhi-xiao; Peng, Si-long; Shan, Peng; Bi, Yi-ming; Tang, Liang; Xie, Qiong

    2015-05-01

    In the present paper, a new model-based method is proposed for temperature prediction and correction. First, a temperature prediction model is obtained from the training samples; then, the temperatures of the test samples are predicted; and finally, the correction model is used to reduce the nonlinear effects of temperature variations on the spectra. Two experiments were used to verify the proposed method: a water-ethanol mixture experiment and a ternary mixture experiment. The results show that, compared with classic methods such as continuous piecewise direct standardization (CPDS), the method is efficient for temperature correction. Furthermore, the temperatures of the test samples are not required by the proposed method, making it easier to use in real applications.

  14. Real-time prediction of respiratory motion based on a local dynamic model in an augmented space.

    Science.gov (United States)

    Hong, S-M; Jung, B-H; Ruan, D

    2011-03-21

    Motion-adaptive radiotherapy aims to deliver an ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for the system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. A first-order extended Kalman filter is used to propagate and update the state estimate, and the target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; and (3) it relies on a parametric model and is much less data-hungry than typical adaptive semiparametric or nonparametric methods. We tested the performance of the proposed method on ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, on interacting multiple linear models, and on the kernel density estimator, for various combinations of prediction lengths and observation rates. The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively
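The idea of a local dynamic model with the angular velocity in the state can be illustrated with a toy extended Kalman filter: state [phase, angular velocity], observation z = sin(phase). This is a simplified stand-in for the paper's augmented-space model, not a reimplementation of it:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # phase advances by omega * dt
Q = np.diag([1e-5, 1e-5])               # small process noise
R = np.array([[1e-3]])                  # measurement noise

s = np.array([0.0, 0.8])                # initial guess: phase 0, omega 0.8
P = np.eye(2)
true_omega = 1.0
rng = np.random.default_rng(5)

for k in range(300):
    z = np.sin(true_omega * (k + 1) * dt) + rng.normal(0, 0.01)
    s, P = F @ s, F @ P @ F.T + Q                     # predict
    H = np.array([[np.cos(s[0]), 0.0]])               # Jacobian of sin(theta)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    s = s + K.ravel() * (z - np.sin(s[0]))            # update
    P = (np.eye(2) - K @ H) @ P

ahead = 5                               # prediction length, in samples
pred = np.sin(s[0] + s[1] * ahead * dt)
print(round(float(s[1]), 2))
```

Longer prediction lengths are obtained the same way, by evaluating the model equations further ahead of the current state.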

  15. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    Science.gov (United States)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model is a combination of a grey prediction model and a Markov chain, and it shows clear advantages for non-stationary and volatile data sequences. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the research values lying in each state, which reflects the preference degrees of the different states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with the GM(1,1) model based on background value optimization and with the traditional Grey-Markov forecasting model.
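The grey half of the method is the classic GM(1,1) model, which is compact enough to show in full: accumulate the series (AGO), form background values, estimate the development coefficient and grey input by least squares, then difference the fitted response back. The sketch uses the traditional background value z(k) = 0.5(x1(k) + x1(k-1)), i.e. without the paper's optimization, and omits the Markov correction step:

```python
import numpy as np

def gm11(x0, steps=1):
    """Classic GM(1,1): AGO, background values, least squares for (a, b)."""
    x1 = np.cumsum(x0)                         # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])               # traditional background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)[len(x0):]   # restore by differencing

# A near-exponential series is fitted almost exactly by GM(1,1).
series = np.array([100.0, 110.0, 121.0, 133.1, 146.41])
pred = gm11(series, steps=1)
print(np.round(pred, 1))
```

The improved model in the abstract replaces the 0.5 weighting with an optimized background value and post-corrects the residual state by the Markov chain.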

  16. Predicting quality of life after breast cancer surgery using ANN-based models: performance comparison with MR.

    Science.gov (United States)

    Tsai, Jinn-Tsong; Hou, Ming-Feng; Chen, Yao-Mei; Wan, Thomas T H; Kao, Hao-Yun; Shi, Hon-Yi

    2013-05-01

    The goal was to develop models for predicting long-term quality of life (QOL) after breast cancer surgery. Data were obtained from 203 breast cancer patients who completed the SF-36 health survey before and 2 years after surgery. Two of the models used to predict QOL after surgery were artificial neural networks (ANNs): one multilayer perceptron (MLP) network and one radial basis function (RBF) network. The third model was a multiple regression (MR) model. The criteria for evaluating the accuracy of the models were mean square error (MSE) and mean absolute percentage error (MAPE). Compared to the MR model, the ANN-based models generally had smaller MSE and MAPE values in the test data set; the one exception was the second-year MSE in the test set. Most MAPE values for the ANN models ranged from 10 to 20%. The one exception was the 6-month physical component summary score (PCS), which ranged from 23.19 to 26.86%. Comparison of the evaluation criteria showed that the ANN-based systems outperformed the MR system in terms of prediction accuracy. In both the MLP and RBF networks, surgical procedure type was the most sensitive parameter affecting PCS, and preoperative functional status was the most sensitive parameter affecting the mental component summary score. The three systems can be combined to obtain a conservative prediction, and this combined approach is a potential supplemental tool for predicting long-term QOL after surgical treatment for breast cancer. Patients should also be advised that their postoperative QOL may depend not only on the success of their operations but also on their preoperative functional status.
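The two evaluation criteria used here are simple to state precisely; a small sketch with hypothetical score values, not patient data:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean square error."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Hypothetical SF-36 summary scores vs. model predictions.
observed = np.array([45.0, 50.0, 38.0, 42.0])
predicted = np.array([44.0, 53.0, 40.0, 41.0])
print(mse(observed, predicted), round(mape(observed, predicted), 2))
```

Lower values of both criteria indicate a more accurate predictor, which is how the ANN and MR models were compared.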

  17. A Theoretical Model to Predict Both Horizontal Displacement and Vertical Displacement for Electromagnetic Induction-Based Deep Displacement Sensors

    OpenAIRE

    Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong

    2011-01-01

    Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement and a theoretical model called equation-based equivalent loop approach (EELA) to describe its sensing characters. However in many landslide and related geological engineering cases, both h...

  18. Spatial characterization and prediction of Neanderthal sites based on environmental information and stochastic modelling

    Science.gov (United States)

    Maerker, Michael; Bolus, Michael

    2014-05-01

    We present a unique spatial dataset of Neanderthal sites in Europe that was used to train a set of stochastic models to reveal the correlations between site locations and environmental indices. To assess the relations between the Neanderthal sites and the environmental variables, we applied a boosted regression tree approach (TREENET), a statistical mechanics approach (MAXENT), and support vector machines. The stochastic models employ a learning algorithm to identify the model that best fits the relationship between the attribute set (the predictor variables, i.e. the environmental variables) and the classified response variable, which in this case is the type of Neanderthal site. A quantitative evaluation of model performance was done by determining the suitability of the models for geo-archaeological applications and by helping to identify those aspects of the methodology that need improvement. The models' predictive performances were assessed by constructing receiver operating characteristic (ROC) curves for each Neanderthal class, for both training and test data. In a ROC curve, the sensitivity is plotted over the false positive rate (1-specificity) for all possible cut-off points, and the quality of the curve is quantified by the area under it. The dependent (target) variable in this study is the location of Neanderthal sites described by latitude and longitude. The information on site locations was collected from the literature and our own research, and all sites were checked for locational accuracy using high-resolution maps and Google Earth. The study illustrates that the models show a distinct ranking in performance, with TREENET outperforming the other approaches. Moreover, Pre-Neanderthals, Early Neanderthals and Classic Neanderthals show specific spatial distributions. However, all models show wide correspondence in the selection of the most important predictor variables, generally showing less

  19. SRMDAP: SimRank and Density-Based Clustering Recommender Model for miRNA-Disease Association Prediction

    Directory of Open Access Journals (Sweden)

    Xiaoying Li

    2018-01-01

    Aberrant expression of microRNAs (miRNAs) can be applied for the diagnosis, prognosis, and treatment of human diseases. Identifying the relationships between miRNAs and human diseases is important for further investigating the pathogenesis of human diseases. However, experimental identification of the associations between diseases and miRNAs is time-consuming and expensive; computational methods are efficient approaches to determine the potential associations. This paper presents a new computational method based on SimRank and a density-based clustering recommender model for miRNA-disease association prediction (SRMDAP). An AUC of 0.8838 under leave-one-out cross-validation, together with case studies, suggested the excellent performance of SRMDAP in predicting miRNA-disease associations. SRMDAP can also predict diseases without any related miRNAs and miRNAs without any related diseases.
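The SimRank half of SRMDAP rests on the recursion "two nodes are similar if their neighbors are similar." A basic fixed-point implementation on a toy undirected graph (not the paper's miRNA-disease network):

```python
import numpy as np

def simrank(adj, c=0.8, iters=10):
    """Basic SimRank on an undirected graph given as an adjacency list."""
    n = len(adj)
    s = np.eye(n)
    for _ in range(iters):
        s_new = np.eye(n)
        for a in range(n):
            for b in range(a + 1, n):
                if adj[a] and adj[b]:
                    total = sum(s[i, j] for i in adj[a] for j in adj[b])
                    s_new[a, b] = s_new[b, a] = c * total / (len(adj[a]) * len(adj[b]))
        s = s_new
    return s

# Toy miRNA/disease-style graph: nodes 0 and 1 both link only to node 2.
adj = [[2], [2], [0, 1]]
sim = simrank(adj)
print(round(float(sim[0, 1]), 3))
```

Here nodes 0 and 1 share their only neighbor, so their SimRank score equals the decay factor c = 0.8; SRMDAP combines such scores with density-based clustering to rank candidate associations.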

  20. BIOACCUMULATION DYNAMICS OF HEAVY METALS IN Oreochromis nilotycus: PREDICTED THROUGH A BIOACCUMULATION MODEL CONSTRUCTED BASED ON BIOTIC LIGAND MODEL (BLM)

    Directory of Open Access Journals (Sweden)

    Sri Noegrohati

    2010-06-01

    In estuarine ecosystems, sediments not only function as heavy metal scavengers but are also one of the potential sources of heavy metals to the ecosystem. Given the capability of aquatic organisms to accumulate heavy metals, the metals may exert their toxic effects on these organisms and on organisms at higher trophic levels, such as fish, and further on human beings. To understand the different processes of heavy metal bioaccumulation in a dynamic manner, a bioaccumulation model is required. Since bioaccumulation starts with the uptake of a chemical across a biological membrane, the bioaccumulation model was constructed based on the Biotic Ligand Model (BLM). The input for the model was determined from a laboratory-scale simulated estuarine ecosystem of sediment-brackish water (seawater:Aqua™ 1:1) for determining the heavy metal fractions in sediments; a simulated Oreochromis nilotycus - brackish water (fish-water) ecosystem for determining the rate constants; and a simulated fish-water-sediment ecosystem for evaluating the closeness between model-predicted and measured concentrations, routes, and distribution within specific internal organs. From these bioaccumulation studies, it was confirmed that the internalization of metals into the cells of the gills and internal epithelia follows similar mechanisms and is governed mostly by the waterborne or hydrophilic heavy metals. The levels of hydrophilic heavy metals are determined by the desorption equilibrium coefficients, 1/KD, and influenced by salinity. Physiologically, the essential Cu and Zn body burdens in O. nilotycus are under tight homeostatic regulation, reflected in a decreasing uptake efficiency factor, EW, at higher exposure concentrations, while the non-essential Cd and Hg are less regulated or not regulated at all. From the distribution within specific internal organs, it was revealed that the carcass was more relevant than the liver in describing the bioaccumulation condition. It is clear that every heavy

  1. A Gaussian mixture copula model based localized Gaussian process regression approach for long-term wind speed prediction

    International Nuclear Information System (INIS)

    Yu, Jie; Chen, Kuilin; Mori, Junichi; Rashid, Mudassir M.

    2013-01-01

    Optimizing wind power generation and controlling the operation of wind turbines to efficiently harness the renewable wind energy is a challenging task due to the intermittency and unpredictable nature of wind speed, which has significant influence on wind power production. A new approach for long-term wind speed forecasting is developed in this study by integrating GMCM (Gaussian mixture copula model) and localized GPR (Gaussian process regression). The time series of wind speed is first classified into multiple non-Gaussian components through the Gaussian mixture copula model and then Bayesian inference strategy is employed to incorporate the various non-Gaussian components using the posterior probabilities. Further, the localized Gaussian process regression models corresponding to different non-Gaussian components are built to characterize the stochastic uncertainty and non-stationary seasonality of the wind speed data. The various localized GPR models are integrated through the posterior probabilities as the weightings so that a global predictive model is developed for the prediction of wind speed. The proposed GMCM–GPR approach is demonstrated using wind speed data from various wind farm locations and compared against the GMCM-based ARIMA (auto-regressive integrated moving average) and SVR (support vector regression) methods. In contrast to GMCM–ARIMA and GMCM–SVR methods, the proposed GMCM–GPR model is able to well characterize the multi-seasonality and uncertainty of wind speed series for accurate long-term prediction. - Highlights: • A novel predictive modeling method is proposed for long-term wind speed forecasting. • Gaussian mixture copula model is estimated to characterize the multi-seasonality. • Localized Gaussian process regression models can deal with the random uncertainty. • Multiple GPR models are integrated through Bayesian inference strategy. • The proposed approach shows higher prediction accuracy and reliability
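The integration step can be approximated with standard scikit-learn pieces: a Gaussian mixture splits the series into components, one Gaussian process regressor is fitted per component, and predictions are combined with the posterior probabilities as weights. A plain Gaussian mixture stands in for the copula-based mixture of the paper, and the two-regime wind series is synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)

# Synthetic wind speed with a calm and a windy regime — a crude stand-in
# for the non-Gaussian components a Gaussian mixture copula would separate.
speed = np.concatenate([rng.normal(4, 0.5, 100), rng.normal(12, 1.0, 100)])
X, y = speed[:-1].reshape(-1, 1), speed[1:]

# 1) Split the series into components; 2) fit one localized GPR per component.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
gprs = []
for k in range(2):
    mask = gmm.predict(X) == k
    gprs.append(GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                         normalize_y=True).fit(X[mask], y[mask]))

# 3) Global prediction: posterior-probability-weighted mix of local models.
x_new = np.array([[11.0]])
w = gmm.predict_proba(x_new)[0]
pred = sum(w[k] * gprs[k].predict(x_new)[0] for k in range(2))
print(round(float(pred), 1))
```

At a windy input the windy component dominates the posterior weights, so the global prediction is driven by the corresponding localized model, which is the mechanism the abstract describes.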

  2. A Novel Approach to Predict 24-Hour Energy Expenditure Based on Hematologic Volumes: Development and Validation of Models Comparable to Mifflin-St Jeor and Body Composition Models.

    Science.gov (United States)

    Chang, Douglas C; Piaggi, Paolo; Krakoff, Jonathan

    2017-08-01

Accurate prediction of 24-hour energy expenditure (24EE) relies on knowing body composition, in particular fat-free mass (FFM), the largest determinant of 24EE. FFM is closely correlated with hematologic volumes: blood volume (BV), red cell mass (RCM), and plasma volume (PV). However, it is unknown whether predicted hematologic volumes, based on easily collected variables, can improve 24EE prediction. The aim was to develop and validate equations to predict 24EE based on predicted BV, RCM, and PV and to compare their accuracy and agreement with models developed from FFM and with the Mifflin-St Jeor equation, which is recommended for clinical use by the Academy of Nutrition and Dietetics. Participants had body composition measured by underwater weighing and 24EE measured by respiratory chamber. BV, RCM, and PV were calculated from five published equations. Native American and white men and women were studied (n=351); participants were healthy adults aged 18 to 49 years from the Phoenix, AZ, metropolitan area. Main outcome measures were accuracy to within ±10% of measured 24EE and agreement by Bland-Altman analysis. Regression models to predict 24EE from hematologic and body composition variables were developed in half the dataset and validated in the other half. Hematologic volumes were all strongly correlated with FFM in both men and women (r≥0.94). Whereas the accuracy of FFM alone was 69%, the four hematologic volumes were individually more accurate (75% to 78%) in predicting 24EE. Equations based on hematologic volumes plus demographics had mean prediction errors comparable to those based on body composition plus demographics; although the Mifflin-St Jeor equation had a modestly better mean prediction error, the body composition, hematologic, and Mifflin-St Jeor models all had similar accuracy (approximately 80%). Prediction equations based on hematologic volumes were developed, validated, and found to be comparable to the Mifflin-St Jeor and body composition models in this population of healthy adults.

  3. Modeling and Prediction of Coal Ash Fusion Temperature based on BP Neural Network

    Directory of Open Access Journals (Sweden)

    Miao Suzhen

    2016-01-01

Full Text Available Coal ash is the residue generated from the combustion of coal. The ash fusion temperature (AFT) of coal gives detailed information on the suitability of a coal source for gasification procedures, and specifically on the extent to which ash agglomeration or clinkering is likely to occur within the gasifier. To investigate the contribution of oxides in coal ash to the AFT, data on coal ash chemical compositions and softening temperature (ST) from different regions of China were collected in this work, and a BP neural network model was established on the XD-APC PLATFORM. In the BP model, the inputs were the ash compositions and the output was the ST. In addition, an ash fusion temperature prediction model was obtained from industrial data, and its generalization was tested with different industrial data. Compared to empirical formulas, the BP neural network obtained better results. Through a series of tests, the best result and the best configuration for the model were obtained: the number of hidden-layer nodes of the BP network was set to three, the component contents (SiO2, Al2O3, Fe2O3, CaO, MgO) were used as inputs, and the ST was used as the output of the model.
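The reported best configuration (five oxide contents in, softening temperature out, three hidden nodes) can be sketched with scikit-learn. The training data and the assumed oxide-ST trend below are synthetic illustrations, not the paper's industrial data.

```python
# Minimal sketch of the BP-network idea: a small MLP with three hidden
# nodes mapping five oxide contents to softening temperature (ST).
# All data and the oxide-ST trend are synthetic and illustrative.
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Columns: SiO2, Al2O3, Fe2O3, CaO, MgO contents (mass %), one row per sample.
X = rng.uniform([30, 15, 2, 1, 0.5], [60, 35, 20, 15, 5], size=(120, 5))
# Assumed trend: SiO2/Al2O3 raise ST, fluxing oxides (Fe2O3, CaO) lower it.
st = (1100 + 8 * X[:, 1] + 3 * X[:, 0] - 6 * X[:, 2] - 5 * X[:, 3]
      + rng.normal(0, 15, 120))

# Three hidden nodes, as in the reported best configuration; inputs and
# target are standardized so the small network trains reliably.
model = TransformedTargetRegressor(
    regressor=make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(3,), max_iter=3000, random_state=0),
    ),
    transformer=StandardScaler(),
)
model.fit(X, st)
pred = model.predict(X[:1])[0]   # predicted softening temperature, degrees C
```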

  4. General fugacity-based model to predict the environmental fate of multiple chemical species.

    Science.gov (United States)

    Cahill, Thomas M; Cousins, Ian; Mackay, Donald

    2003-03-01

A general multimedia environmental fate model is presented that is capable of simulating the fate of up to four interconverting chemical species. It is an extension of the existing equilibrium criterion (EQC) fugacity model, which is limited to single-species assessments. It is suggested that multispecies chemical assessments are warranted when a degradation product of a released chemical is either more toxic or more persistent than the parent chemical, or where there is cycling between species, as occurs with association, dissociation, or ionization. The model is illustratively applied to three chemicals, namely chlorpyrifos, pentachlorophenol, and perfluorooctane sulfonate, for which multispecies assessments are advisable. The model results compare favorably with field data for chlorpyrifos and pentachlorophenol, while the perfluorooctane sulfonate simulation is more speculative due to uncertainty in input parameters and the paucity of field data to validate the predictions. The model thus provides a tool for assessing the environmental fate and behavior of a group of chemicals that hitherto have not been addressed by evaluative models such as EQC.
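The core bookkeeping of such a multispecies fugacity model is a coupled steady-state mass balance, which can be illustrated with a minimal two-species example. All D-values, the yield fraction, and the single-compartment setup are assumptions for illustration, not parameters of the EQC extension.

```python
# Hedged sketch of the steady-state fugacity bookkeeping behind a
# multi-species model: a parent chemical degrades into one product in a
# single well-mixed compartment. D-values (mol/(Pa*h)) are illustrative.
import numpy as np

E_parent = 10.0                  # emission rate of the parent, mol/h
D_adv_p, D_rxn_p = 50.0, 30.0    # advection + reaction D-values, parent
D_adv_d, D_rxn_d = 40.0, 10.0    # advection + reaction D-values, product
yield_frac = 0.8                 # fraction of parent reaction forming product

# Steady state: inputs equal outputs for each species; the unknowns are
# the fugacities f = [f_parent, f_product].
A = np.array([
    [D_adv_p + D_rxn_p,            0.0          ],
    [-yield_frac * D_rxn_p, D_adv_d + D_rxn_d   ],
])
b = np.array([E_parent, 0.0])
f = np.linalg.solve(A, b)
# f[0] = 10/80 = 0.125 Pa; the product is fed by 0.8*30*0.125 = 3 mol/h,
# so f[1] = 3/50 = 0.06 Pa.
```

Adding more species or compartments only enlarges this linear system; the interconversion terms appear as off-diagonal D-values, which is what distinguishes a multispecies model from running EQC once per chemical.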

  5. Boundary-layer transition prediction using a simplified correlation-based model

    Directory of Open Access Journals (Sweden)

    Xia Chenchao

    2016-02-01

Full Text Available This paper describes a simplified transition model based on the recently developed correlation-based γ-Reθt transition model. The transport equation for the transition momentum thickness Reynolds number is eliminated for simplicity, and a new transition length function and critical Reynolds number correlation are proposed. The new model is implemented into an in-house computational fluid dynamics (CFD) code and validated for low- and high-speed flow cases, including the zero-pressure-gradient flat plate, airfoils, a hypersonic flat plate and a double wedge. Comparisons between the simulation results and experimental data show that boundary-layer transition phenomena can be reasonably captured by the new model, which gives significant improvements over the fully laminar and fully turbulent results. Moreover, the new model is comparable in accuracy and applicability to the original γ-Reθt model. Meanwhile, the newly proposed model solves only one transport equation, for the intermittency factor, and requires fewer correlations, which simplifies the original model greatly. Further studies, especially on separation-induced transition flows, are required to improve the new model.

  6. Model predictive control of PMSG-based wind turbines for frequency regulation in an isolated grid

    DEFF Research Database (Denmark)

    Wang, Haixin; Yang, Junyou; Ma, Yiming

    2017-01-01

... in different speed regions and provide WTGs a certain capacity of power reserves. Considering the torque compensation may bring about power oscillation, the speed reference of the conventional pitch control system should be reset. Moreover, to suppress disturbances of load and wind speed as well as overcome dependence on system parameters, a model predictive controller (MPC) of the wind farm is designed to generate torque compensation for each deloaded WTG. The key feature of this strategy is that each WTG reacts to grid disturbances in different ways, depending on generator speeds. Hardware-in-the-loop simulation ...

  7. Selection and Validation of Predictive Models of Radiation Effects on Tumor Growth Based on Noninvasive Imaging Data.

    Science.gov (United States)

    Lima, E A B F; Oden, J T; Wohlmuth, B; Shahmoradi, A; Hormuth, D A; Yankeelov, T E; Scarabosio, L; Horger, T

    2017-12-01

The use of mathematical and computational models for reliable predictions of tumor growth and decline in living organisms is one of the foremost challenges in modern predictive science, as it must cope with uncertainties in observational data, model selection, model parameters, and model inadequacy, all for very complex physical and biological systems. In this paper, large classes of parametric models of tumor growth in vascular tissue are discussed, including models for radiation therapy. Observational data are obtained from MRI of a murine model of glioma monitored over a period of about three weeks, with X-ray radiation administered 14.5 days into the experimental program. Parametric models of tumor proliferation and decline are presented based on the balance laws of continuum mixture theory, particularly mass balance, and on accepted biological hypotheses on tumor growth. Among these are new model classes that include characterizations of the effects of radiation and simple models of mechanical deformation of tumors. The Occam Plausibility Algorithm (OPAL) is implemented to provide a Bayesian statistical calibration of the model classes, 39 models in all, to determine the most plausible models in these classes relative to the observational data, and to assess model inadequacy through statistical validation processes. Discussions are provided of the numerical analysis of finite element approximations of the system of stochastic, nonlinear partial differential equations characterizing the model classes, as well as of the Monte Carlo and Markov chain Monte Carlo (MCMC) sampling algorithms employed in solving the forward stochastic problem and in computing posterior distributions of parameters and model plausibilities. The results of the analyses described suggest that the general framework developed can provide a useful approach for predicting tumor growth and the effects of radiation.
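The model-plausibility step at the heart of OPAL-style selection can be illustrated with a toy example: for each candidate model class, the likelihood is integrated over the prior to obtain the model evidence, and the evidences are normalized into posterior model plausibilities. The one-parameter growth models, noise level, and data below are assumptions for illustration, not the paper's 39 model classes.

```python
# Toy model-plausibility computation: marginal likelihood (evidence) of
# two candidate one-parameter growth models, normalized into posterior
# model probabilities. Data and noise level are synthetic.
import numpy as np

t = np.array([0.0, 5.0, 10.0, 15.0, 20.0])    # days
v_obs = np.array([1.0, 1.6, 2.7, 4.4, 7.3])   # tumor volume (a.u.)
sigma = 0.3                                    # assumed observation noise

models = {
    "exponential": lambda k: np.exp(k * t),
    "linear":      lambda k: 1.0 + k * t,
}

ks = np.linspace(0.001, 0.5, 2000)             # uniform prior grid over k
evidence = {}
for name, f in models.items():
    # Gaussian likelihood of the data for each parameter value.
    resid = v_obs[None, :] - np.array([f(k) for k in ks])
    loglik = -0.5 * np.sum(resid**2, axis=1) / sigma**2
    # Riemann-sum approximation of the marginal likelihood over the prior.
    evidence[name] = np.sum(np.exp(loglik)) * (ks[1] - ks[0])

total = sum(evidence.values())
plausibility = {m: e / total for m, e in evidence.items()}
```

Because the synthetic data grow roughly exponentially, the exponential class should dominate the posterior model probabilities; in OPAL this comparison is iterated over many classes together with validation tests for model inadequacy.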

  8. Prediction of Reservoir Sediment Quality Based on Erosion Processes in Watershed Using Mathematical Modelling

    Directory of Open Access Journals (Sweden)

    Natalia Junakova

    2017-12-01

Full Text Available Soil erosion, as a significant contributor to nonpoint-source pollution, ranks first among the sources of sediment, of pollutants attached to sediment, and of pollutants in solution in surface water. This paper focuses on the design of a mathematical model intended to predict the total content of nitrogen (N), phosphorus (P), and potassium (K) in the bottom sediments of small water reservoirs depending on water erosion processes, together with its application and validation in the small agricultural watershed of the Tisovec River, Slovakia. The designed model calculates the total N, P, and K content adsorbed on detached and transported soil particles by supplementing the soil loss calculation with a determination of the average nutrient content in topsoils; the dissolved forms of these elements are neglected in this model. Validation of the model was carried out by statistical comparison of calculated and measured concentrations in Kľušov, a small water reservoir in Slovakia, using the t-test and F-test at a 0.05 significance level. Calculated concentrations of total N, P, and K in reservoir sediments ranged from 0.188 to 0.236% for total N, from 0.065 to 0.078% for total P, and from 1.94 to 2.47% for total K. Measured nutrient concentrations in composite sediment samples ranged from 0.16 to 0.26% for total N, from 0.049 to 0.113% for total P, and from 1.71 to 2.42% for total K. The statistical assessment indicates the applicability of the model for predicting the quality of reservoir sediment detached through erosion processes in the catchment.
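The validation step (t-test on means, F-test on variances, at the 0.05 level) can be sketched as follows. The concentration values are illustrative stand-ins within the reported ranges, not the study's raw data.

```python
# Sketch of the validation step: compare calculated vs. measured nutrient
# concentrations with a two-sample t-test (means) and an F-test
# (variances) at the 0.05 level. Values are illustrative total-N
# concentrations (%), not the study's measurements.
import numpy as np
from scipy import stats

calculated = np.array([0.188, 0.201, 0.214, 0.223, 0.236])
measured = np.array([0.16, 0.19, 0.21, 0.24, 0.26])

# t-test for equality of means (Welch variant, no equal-variance assumption).
t_stat, t_p = stats.ttest_ind(calculated, measured, equal_var=False)

# Two-sided F-test for equality of variances.
F = np.var(calculated, ddof=1) / np.var(measured, ddof=1)
f_p = 2 * min(stats.f.cdf(F, 4, 4), 1 - stats.f.cdf(F, 4, 4))

# Failing to reject either null hypothesis supports model applicability.
model_ok = (t_p > 0.05) and (f_p > 0.05)
```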

  9. A predictive ligand-based Bayesian model for human drug-induced liver injury.

    Science.gov (United States)

    Ekins, Sean; Williams, Antony J; Xu, Jinghai J

    2010-12-01

Drug-induced liver injury (DILI) is one of the most important reasons for drug development failure at both preapproval and postapproval stages. There has been increased interest in developing predictive in vivo, in vitro, and in silico models to identify compounds that cause idiosyncratic hepatotoxicity. In the current study, we applied machine learning, namely a Bayesian modeling method with extended connectivity fingerprints and other interpretable descriptors. The model that was developed and internally validated (using a training set of 295 compounds) was then applied to a test set that was large relative to the training set (237 compounds) for external validation. The resulting concordance of 60%, sensitivity of 56%, and specificity of 67% were comparable to the results for internal validation. The Bayesian model with extended connectivity functional class fingerprints of maximum diameter 6 (ECFC_6) and interpretable descriptors suggested several substructures that are chemically reactive and may also be important for DILI-causing compounds, e.g., ketones, diols, and α-methyl styrene type structures. Using Smiles Arbitrary Target Specification (SMARTS) filters published by several pharmaceutical companies, we evaluated whether such reactive substructures could be readily detected by any of the published filters. It was apparent that the most stringent filters used in this study, such as the Abbott alerts, which capture thiol traps and other compounds, may be of use in identifying DILI-causing compounds (sensitivity 67%). A significant outcome of the present study is that we provide predictions for many compounds that cause DILI by using the knowledge available from previous studies. These computational models may represent cost-effective selection criteria before in vitro or in vivo experimental studies.
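The modeling idea, a Bayesian classifier over binary substructure-fingerprint bits, can be sketched as below. Random bits stand in for the ECFC_6 features, the DILI labels are synthetic, and scikit-learn's `BernoulliNB` stands in for the Bayesian implementation used in the study.

```python
# Sketch of a ligand-based Bayesian classifier: Bernoulli naive Bayes
# over binary fingerprint bits. Random bits stand in for ECFC_6
# features; the DILI labels are synthetic, driven by a few "reactive
# substructure" bits.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n_train, n_bits = 295, 64            # training-set size mirrors the study
X = rng.integers(0, 2, size=(n_train, n_bits))
# Make three bits informative for the label (majority of three present).
y = ((X[:, 0] + X[:, 1] + X[:, 2]) >= 2).astype(int)

clf = BernoulliNB().fit(X, y)

# External test set of 237 compounds, as in the study's validation split.
X_test = rng.integers(0, 2, size=(237, n_bits))
y_test = ((X_test[:, 0] + X_test[:, 1] + X_test[:, 2]) >= 2).astype(int)
pred = clf.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

The naive Bayes log-odds are a sum of per-bit contributions, which is what makes such models interpretable: the highest-weight bits point directly at the substructures the abstract discusses.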

  10. Urban climate model MUKLIMO_3 in prediction mode - evaluation of model performance based on the case study of Vienna

    Science.gov (United States)

    Hollosi, Brigitta; Zuvela-Aloise, Maja

    2017-04-01

To reduce the negative health impacts of extreme heat load in urban areas, the application of early warning systems that use weather forecast models to predict forthcoming heat events is of utmost importance. In state-of-the-art operational heat warning systems, the meteorological information relies on weather forecasts from regional numerical models and on monitoring stations that do not include details of the urban structure. In this study, the dynamical urban climate model MUKLIMO_3 (horizontal resolution of 100-200 m) is initialized with vertical profiles from the archived daily forecast data of ZAMG's hydrostatic ALARO numerical weather prediction model run at 0600 UTC, to simulate the development of the urban heat island in Vienna on a daily basis. The aim is to evaluate the performance of the urban climate model, so far applied only in climatological studies, in a weather prediction mode, using the summer period 2011-2015 as a test period. The focus of the investigation is on assessment of the urban heat load during the daytime. The model output has been evaluated against monitoring data at the weather stations in the area of the city. The model results for daily maximum temperature show good agreement with the observations, especially at the urban and suburban stations, where the mean bias is low. The results are highly dependent on the input data from the meso-scale model, which leads to larger deviations from observations when the forecast is not representative of the given day. This study can be used to support urban planning strategies and to improve existing practices to alert decision-makers and the public to impending dangers of excessive heat.

  11. Forestry trial data can be used to evaluate climate-based species distribution models in predicting tree invasions

    Directory of Open Access Journals (Sweden)

    Rethabile Motloung

    2014-01-01

Full Text Available Climate is frequently used to predict the outcome of species introductions based on the results of species distribution models (SDMs). However, despite the widespread use of SDMs for pre- and post-border risk assessments, data that can be used to validate predictions are often not available until after an invasion has occurred. Here we explore the potential for using historical forestry trials to assess the performance of climate-based SDMs. SDMs were parameterized based on the native-range distribution of 36 Australian acacias, and predictions were compared against both the results of 150 years of government forestry trials and the current invasive distribution in southern Africa, using the true skill statistic, sensitivity and specificity. Classification tree analysis was used to evaluate why some Australian acacias failed in trials while others were successful. Predicted suitability was significantly related to the invaded range (sensitivity = 0.87) and to success in forestry trials (sensitivity = 0.80), but forestry trial failures were under-predicted (specificity = 0.35). Notably, success in forestry trials was greater for species that are invasive somewhere in the world. SDM predictions also indicate a considerable invasion potential for eight species that are currently naturalized but not yet widespread. Forestry trial data clearly provide a useful additional source of data for validating and refining SDMs in the context of risk assessment. Our study identified the climatic factors required for successful invasion of acacias, and underscores the importance of integrating a species' invasion status elsewhere into risk assessment.
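The evaluation statistics quoted above reduce to simple confusion-matrix algebra. The toy counts below are chosen only to reproduce the reported forestry-trial sensitivity (0.80) and specificity (0.35); they are not the study's actual counts.

```python
# Confusion-matrix algebra behind the reported statistics: sensitivity,
# specificity, and the true skill statistic (TSS = sens + spec - 1).
# Toy counts, chosen to match the reported 0.80 / 0.35 values.

tp, fn = 40, 10   # trial successes predicted suitable / unsuitable
tn, fp = 7, 13    # trial failures predicted unsuitable / suitable

sensitivity = tp / (tp + fn)         # 0.80, successes mostly recovered
specificity = tn / (tn + fp)         # 0.35, failures under-predicted
tss = sensitivity + specificity - 1  # 0.15, skill above random guessing
```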

  12. Development and validation of an MRI-based model to predict response to chemoradiotherapy for rectal cancer.

    Science.gov (United States)

    Bulens, Philippe; Couwenberg, Alice; Haustermans, Karin; Debucquoy, Annelies; Vandecaveye, Vincent; Philippens, Marielle; Zhou, Mu; Gevaert, Olivier; Intven, Martijn

    2018-03-01

To safely implement organ-preserving treatment strategies for patients with rectal cancer, well-considered selection of patients with a favourable response is needed. In this study, we develop and validate an MRI-based response prediction model. A multivariate model using T2-volumetric and DWI parameters before and 6 weeks after chemoradiation (CRT) was developed using a cohort of 85 rectal cancer patients and validated in an external cohort of 55 patients who underwent preoperative CRT. Twenty-two patients (26%) achieved ypT0-1N0 response in the development cohort versus 13 patients (24%) in the validation cohort. Two T2-volumetric parameters (ΔVolume% and Sphere_post) and two DWI parameters (ADC_avg_post and ADCratio_avg) were retained in a model predicting (near-)complete response (ypT0-1N0). In the development cohort, this model had a good predictive performance (AUC = 0.89; 95% CI 0.80-0.98). Validation of the model in the external cohort resulted in a similar performance (AUC = 0.88; 95% CI 0.79-0.98). An MRI-based model predicting (near-)complete pathological response following CRT in rectal cancer patients thus shows high predictive performance in an external validation cohort. The clinically relevant features in the model make it an interesting tool for the implementation of organ-preserving strategies in rectal cancer. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Scale-up of a physiologically-based pharmacokinetic model to predict the disposition of monoclonal antibodies in monkeys.

    Science.gov (United States)

    Glassman, Patrick M; Chen, Yang; Balthasar, Joseph P

    2015-10-01

Preclinical assessment of monoclonal antibody (mAb) disposition during drug development often includes investigations in non-human primate models. In many cases, mAb exhibit non-linear disposition that relates to mAb-target binding [i.e., target-mediated disposition (TMD)]. The goal of this work was to develop a physiologically-based pharmacokinetic (PBPK) model to predict non-linear mAb disposition in plasma and in tissues in monkeys. Physiological parameters for monkeys were collected from several sources, and plasma data for several mAbs associated with linear pharmacokinetics were digitized from prior literature reports. The digitized data displayed great variability; therefore, parameters describing inter-antibody variability in the rates of pinocytosis and convection were estimated. For prediction of the disposition of individual antibodies, we incorporated tissue concentrations of target proteins, where concentrations were estimated based on categorical immunohistochemistry scores, and with assumed localization of target within the interstitial space of each organ. Kinetics of target-mAb binding and target turnover, in the presence or absence of mAb, were implemented. The model was then employed to predict concentration versus time data, via Monte Carlo simulation, for two mAb that have been shown to exhibit TMD (2F8 and tocilizumab). Model predictions, performed a priori with no parameter fitting, were found to provide good prediction of dose-dependencies in plasma clearance, the areas under plasma concentration versus time curves, and the time-course of plasma concentration data. This PBPK model may find utility in predicting plasma and tissue concentration versus time data and, potentially, the time-course of receptor occupancy (i.e., mAb-target binding) to support the design and interpretation of preclinical pharmacokinetic-pharmacodynamic investigations in non-human primates.
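The hallmark of target-mediated disposition, clearance that is dose-dependent because a saturable target-binding pathway adds to linear elimination, can be sketched with a one-compartment toy model. All parameter values and the simplified structure are assumptions for illustration, not the paper's PBPK model.

```python
# Toy target-mediated disposition: one plasma compartment with linear
# clearance plus a saturable (Michaelis-Menten-like) target-mediated
# pathway. All parameters are illustrative.

def simulate_auc(dose_mg_per_kg, days=60.0, dt=0.005):
    """Euler integration of plasma concentration; returns AUC (ug*day/mL)."""
    c = dose_mg_per_kg / 0.05           # initial conc. (ug/mL), Vd ~ 0.05 L/kg
    kel, vmax, km = 0.05, 5.0, 1.0      # linear rate (1/day), saturable pathway
    auc = 0.0
    for _ in range(int(days / dt)):
        auc += c * dt
        dcdt = -kel * c - vmax * c / (km + c)   # linear + target-mediated loss
        c = max(c + dcdt * dt, 0.0)
    return auc

# TMD hallmark: apparent clearance (dose/AUC) is higher at low doses,
# where the target-mediated pathway is far from saturation.
cl_low = 0.1 / simulate_auc(0.1)
cl_high = 10.0 / simulate_auc(10.0)
```

This dose-dependence of `dose/AUC` is exactly the non-linearity the full PBPK model reproduces mechanistically, with the saturable term replaced by explicit target binding and turnover in each tissue.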

  14. State-Space Modeling and Performance Analysis of Variable-Speed Wind Turbine Based on a Model Predictive Control Approach

    Directory of Open Access Journals (Sweden)

    H. Bassi

    2017-04-01

Full Text Available Advancements in wind energy technologies have led wind turbines from fixed-speed to variable-speed operation. This paper introduces an innovative version of a variable-speed wind turbine based on a model predictive control (MPC) approach. The proposed approach provides maximum power point tracking (MPPT), whose main objective is to capture the maximum wind energy in spite of the variable nature of the wind's speed. The proposed MPC approach also addresses the constraints of the two main operating regions of the wind turbine: the full-load and partial-load segments. The pitch angle for full load and the rotating force for partial load are set concurrently in order to balance power generation and to reduce pitch-angle actuation. A mathematical analysis of the proposed system using the state-space approach is introduced. The simulation results using MATLAB/SIMULINK show that the performance of the wind turbine with the MPC approach is improved compared to the traditional PID controller at both low and high wind speeds.
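The receding-horizon idea behind MPC can be sketched for a generic discrete linear state-space model x+ = A x + B u: at each step, solve a finite-horizon quadratic program, apply only the first input, then repeat. The 2-state model, horizon, and weights below are illustrative, not the paper's turbine model.

```python
# Generic unconstrained linear MPC sketch: build the horizon prediction
# matrices, solve the quadratic program in closed form, apply the first
# input, and re-plan (receding horizon). Model and weights are toy values.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
N, R = 10, 0.01                   # horizon length and input weight (state weight = I)
x = np.array([[1.0], [0.0]])      # initial state, to be regulated to the origin

for _ in range(50):
    # Stack the prediction matrices so that X = F x + G U over the horizon.
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    G = np.zeros((2 * N, N))
    for i in range(N):
        for j in range(i + 1):
            G[2 * i:2 * i + 2, j:j + 1] = np.linalg.matrix_power(A, i - j) @ B
    # Unconstrained QP: minimize ||F x + G U||^2 + R ||U||^2 over U.
    H = G.T @ G + R * np.eye(N)
    U = np.linalg.solve(H, -G.T @ F @ x)
    u = U[0, 0]                   # receding horizon: apply only the first input
    x = A @ x + B * u
```

In a real turbine application the QP would carry input and input-rate constraints (pitch and torque limits), which is the main practical reason for preferring MPC over a fixed PID loop.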

  15. Predictability and interpretability of hybrid link-level crash frequency models for urban arterials compared to cluster-based and general negative binomial regression models.

    Science.gov (United States)

    Najaf, Pooya; Duddu, Venkata R; Pulugurtha, Srinivas S

    2018-03-01

Machine learning (ML) techniques have higher prediction accuracy than conventional statistical methods for crash frequency modelling. However, their black-box nature limits their interpretability. The objective of this research is to combine ML and statistical methods to develop hybrid link-level crash frequency models with high predictability and interpretability. For this purpose, the M5' model trees method (M5') is introduced and applied to classify the crash data, and a model is then calibrated for each homogeneous class. Data for 1,134 and 345 randomly selected links on urban arterials in the city of Charlotte, North Carolina were used to develop and validate the models, respectively. The outputs from the hybrid approach are compared with the outputs from cluster-based negative binomial regression (NBR) and general NBR models. Findings indicate that M5' has high predictability and is very reliable for interpreting the role of different attributes in crash frequency, compared to the other developed models.
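The hybrid idea, partition first and then calibrate a regression per homogeneous class, can be sketched as follows. A shallow decision tree stands in for the M5' classification step, ordinary linear regression stands in for the per-class NBR models, and the crash data is synthetic.

```python
# Sketch of a hybrid link-level model: a shallow tree partitions links
# into homogeneous classes, then one regression is fit per leaf. Stand-ins
# are used for M5' (DecisionTreeRegressor) and NBR (LinearRegression).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 600
X = np.column_stack([rng.uniform(0.1, 2.0, n),   # link length (mi), assumed
                     rng.uniform(5, 60, n)])     # AADT (thousands), assumed
crashes = rng.poisson(0.5 + 1.5 * X[:, 0] + 0.05 * X[:, 1])

# 1) Partition the links with a shallow tree.
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=50, random_state=0)
tree.fit(X, crashes)
leaf = tree.apply(X)

# 2) Calibrate one regression model per homogeneous class (leaf).
models = {k: LinearRegression().fit(X[leaf == k], crashes[leaf == k])
          for k in np.unique(leaf)}

def predict(x):
    """Route a link to its leaf, then use that leaf's calibrated model."""
    k = tree.apply(x.reshape(1, -1))[0]
    return models[k].predict(x.reshape(1, -1))[0]

est = predict(np.array([1.0, 30.0]))   # expected crash frequency for one link
```

The interpretability comes from the split rules (a handful of readable thresholds) plus the per-class regression coefficients, rather than from one opaque global model.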

  16. Predictive accuracy of the PanCan lung cancer risk prediction model - external validation based on CT from the Danish Lung Cancer Screening Trial

    International Nuclear Information System (INIS)

    Winkler Wille, Mathilde M.; Dirksen, Asger; Riel, Sarah J. van; Jacobs, Colin; Scholten, Ernst T.; Ginneken, Bram van; Saghir, Zaigham; Pedersen, Jesper Holst; Hohwue Thomsen, Laura; Skovgaard, Lene T.

    2015-01-01

Lung cancer risk models should be externally validated to test generalizability and clinical usefulness. The Danish Lung Cancer Screening Trial (DLCST) is a population-based prospective cohort study, used to assess the discriminative performances of the PanCan models. From the DLCST database, 1,152 nodules from 718 participants were included. Parsimonious and full PanCan risk prediction models were applied to DLCST data, and also coefficients of the model were recalculated using DLCST data. Receiver operating characteristic (ROC) curves and area under the curve (AUC) were used to evaluate risk discrimination. AUCs of 0.826-0.870 were found for DLCST data based on PanCan risk prediction models. In the DLCST, age and family history were significant predictors (p = 0.001 and p = 0.013). Female sex was not confirmed to be associated with higher risk of lung cancer; in fact opposing effects of sex were observed in the two cohorts. Thus, female sex appeared to lower the risk (p = 0.047 and p = 0.040) in the DLCST. High risk discrimination was validated in the DLCST cohort, mainly determined by nodule size. Age and family history of lung cancer were significant predictors and could be included in the parsimonious model. Sex appears to be a less useful predictor. (orig.)

  17. Predictive accuracy of the PanCan lung cancer risk prediction model - external validation based on CT from the Danish Lung Cancer Screening Trial

    Energy Technology Data Exchange (ETDEWEB)

    Winkler Wille, Mathilde M.; Dirksen, Asger [Gentofte Hospital, Department of Respiratory Medicine, Hellerup (Denmark); Riel, Sarah J. van; Jacobs, Colin; Scholten, Ernst T.; Ginneken, Bram van [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Saghir, Zaigham [Herlev Hospital, Department of Respiratory Medicine, Herlev (Denmark); Pedersen, Jesper Holst [Copenhagen University Hospital, Department of Thoracic Surgery, Rigshospitalet, Koebenhavn Oe (Denmark); Hohwue Thomsen, Laura [Hvidovre Hospital, Department of Respiratory Medicine, Hvidovre (Denmark); Skovgaard, Lene T. [University of Copenhagen, Department of Biostatistics, Koebenhavn Oe (Denmark)

    2015-10-15

Lung cancer risk models should be externally validated to test generalizability and clinical usefulness. The Danish Lung Cancer Screening Trial (DLCST) is a population-based prospective cohort study, used to assess the discriminative performances of the PanCan models. From the DLCST database, 1,152 nodules from 718 participants were included. Parsimonious and full PanCan risk prediction models were applied to DLCST data, and also coefficients of the model were recalculated using DLCST data. Receiver operating characteristic (ROC) curves and area under the curve (AUC) were used to evaluate risk discrimination. AUCs of 0.826-0.870 were found for DLCST data based on PanCan risk prediction models. In the DLCST, age and family history were significant predictors (p = 0.001 and p = 0.013). Female sex was not confirmed to be associated with higher risk of lung cancer; in fact opposing effects of sex were observed in the two cohorts. Thus, female sex appeared to lower the risk (p = 0.047 and p = 0.040) in the DLCST. High risk discrimination was validated in the DLCST cohort, mainly determined by nodule size. Age and family history of lung cancer were significant predictors and could be included in the parsimonious model. Sex appears to be a less useful predictor. (orig.)

  18. Damage based constitutive model for predicting the performance degradation of concrete

    Directory of Open Access Journals (Sweden)

    Zhi Wang

Full Text Available An anisotropic elastic-damage coupled constitutive model for plain concrete is developed, which describes the degradation of concrete performance. The damage variable, related to the surface density of micro-cracks and micro-voids and represented by a second-order tensor, is governed by the principal tension strain components. To adequately describe the partial crack opening/closure effect under tension and compression for concrete, a new thermodynamic potential is proposed to express the state equations for modeling the mechanical behavior. Within the framework of this thermodynamic potential, concrete strain mechanisms are identified in the proposed anisotropic damage model, while each state variable is physically explained and justified. The strain equivalence hypothesis is used for deriving the constitutive equations, which leads to the development of a decoupled algorithm for effective stress computation and damage evolution. Additionally, a detailed numerical algorithm is described, and simulations are shown for uni-axial compression, tension and multi-axial loadings. To verify the numerical results, a series of experiments on concrete was carried out. Reasonably good agreement between the experimental results and the predicted values was observed. The proposed constitutive model can be used to accurately model concrete behavior under uni-axial compression, tension and multi-axial loadings. Additionally, the presented work is expected to be very useful in the nonlinear finite element analysis of large-scale concrete structures.

  19. [Simulation and prediction of water environmental carrying capacity in Liaoning Province based on system dynamics model].

    Science.gov (United States)

    Wang, Jian; Li, Xue-liang; Li, Fa-yun; Bao, Hong-xu

    2009-09-01

Using the methods of system dynamics, a water environmental carrying capacity (WECC) model was constructed, and the dynamic trend of the WECC in Liaoning Province was simulated with this model, in combination with the analytic hierarchy process (AHP) and the vector norm method. It was predicted that, under present development schemes, the WECC of the province would decrease year after year over 2000-2050. Merely increasing the water resources supply, without scientific and rational management of the water environment, could not improve the regional WECC; only by combining the search for new water resources and water saving with the control of wastewater pollution and the reduction of sewage discharge could the WECC, and the coordinated development of economy, society, and environment in Liaoning, be effectively improved.

  20. MLP based models to predict PM10, O3 concentrations, in Sines industrial area

    Science.gov (United States)

    Durao, R.; Pereira, M. J.

    2012-04-01

Sines is an important Portuguese industrial area located on the southwest coast of Portugal, with important protected natural areas nearby. The main economic activities are related to this industrial area: the deep-water port, and the petrochemical and thermo-electric industries. Nevertheless, tourism is also an important economic activity, especially in summer, with potential to grow. The aim of this study is to develop models that predict pollutant concentration categories (e.g. low concentration and high concentration) in order to provide early warnings to the competent authorities responsible for air quality management. Knowing in advance that high pollutant concentrations will occur allows the implementation of mitigation actions and the release of precautionary alerts to the population. The regional air quality monitoring network consists of three monitoring stations where a set of pollutant concentrations is registered on a continuous basis. From this set, tropospheric ozone (O3) and particulate matter (PM10) stand out, due to the high concentrations occurring in the region and their adverse effects on human health. Moreover, the major industrial plants of the region also monitor, on a continuous basis, the SO2, NO2 and particle flows emitted at the principal chimneys (point sources). Artificial neural networks (ANNs) were therefore the methodology applied to predict next-day pollutant concentrations; owing to their structure, ANNs have the ability to capture non-linear relationships between predictor variables. Hence, the first step of this study was to apply multivariate exploratory techniques to select the best predictor variables; the classification trees methodology (CART) was revealed to be the most appropriate in this case. Results show that atmospheric pollutant concentrations depend mainly on industrial emissions and on a complex combination of meteorological factors and the time of the year. In the second step, the Multi
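The two-step pipeline described above (CART to screen predictors, then an MLP classifying low/high concentration categories) can be sketched with scikit-learn. All data, variable choices, and thresholds below are synthetic illustrations, not the Sines monitoring data.

```python
# Sketch of the two-step pipeline: a classification tree ranks candidate
# predictors, then an MLP classifies next-day PM10 into low/high
# categories. Data and variable names are synthetic and illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 500
# Candidate predictors: SO2 emissions, NO2 emissions, wind speed, temperature.
X = np.column_stack([rng.uniform(0, 1, n), rng.uniform(0, 1, n),
                     rng.uniform(0, 10, n), rng.uniform(5, 35, n)])
high_pm10 = (0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.05 * X[:, 2]
             + rng.normal(0, 0.1, n) > 0.5).astype(int)

# Step 1: CART ranks the candidate predictors; keep the top three.
cart = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, high_pm10)
top = np.argsort(cart.feature_importances_)[-3:]

# Step 2: MLP trained on the selected predictors only.
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0))
mlp.fit(X[:, top], high_pm10)
accuracy = mlp.score(X[:, top], high_pm10)   # training accuracy, illustrative
```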

  1. Development and External Validation of a Melanoma Risk Prediction Model Based on Self-assessed Risk Factors.

    Science.gov (United States)

    Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin

    2016-08-01

    Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. To develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors. We used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. A risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), Leeds Melanoma Case-Control Study (960 cases and 513 controls), Epigene-QSkin Study (44 544 participants, of whom 766 had melanoma), and Swedish Women's Lifestyle and Health Cohort Study (49 259 women, of whom 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating characteristic curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk. In the external validation setting, there was higher net benefit when using the risk prediction

  2. Warranty optimisation based on the prediction of costs to the manufacturer using neural network model and Monte Carlo simulation

    Science.gov (United States)

    Stamenkovic, Dragan D.; Popovic, Vladimir M.

    2015-02-01

    Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master techniques for warranty cost prediction based on the reliability characteristics of the product. In this paper, a combined free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs under various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. The reliability parameters obtained in this way are then used in a Monte Carlo simulation to generate the times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower costs and increase profit.
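The Monte Carlo step described above can be sketched in a few lines: draw times to failure from an assumed life distribution and price each failure under a combined free-replacement/pro-rata policy. All parameters below (Weibull shape/scale, warranty limits, unit cost) are illustrative assumptions, not figures from the paper:

```python
# Monte Carlo sketch of expected warranty cost per unit under a combined
# free-replacement (FRW) / pro-rata (PRW) policy. Parameters are assumptions.
import random

random.seed(42)

SHAPE, SCALE = 1.5, 2000.0        # assumed Weibull life distribution (hours)
W_FREE, W_TOTAL = 500.0, 1500.0   # FRW up to 500 h, then PRW up to 1500 h
UNIT_COST = 10.0                  # manufacturer's cost of one replacement bulb

def warranty_cost(t):
    """Cost to the manufacturer for a unit failing at time t."""
    if t < W_FREE:                # free replacement
        return UNIT_COST
    if t < W_TOTAL:               # pro-rata refund, decreasing linearly with age
        return UNIT_COST * (W_TOTAL - t) / (W_TOTAL - W_FREE)
    return 0.0                    # failure outside the warranty period

N = 100_000
total = sum(warranty_cost(random.weibullvariate(SCALE, SHAPE)) for _ in range(N))
print(f"expected warranty cost per unit: {total / N:.3f}")
```

Comparing this expectation across candidate (W_FREE, W_TOTAL) pairs is what lets the manufacturer trade warranty attractiveness against cost.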

  3. Combining process-based and correlative models improves predictions of climate change effects on Schistosoma mansoni transmission in eastern Africa

    Directory of Open Access Journals (Sweden)

    Anna-Sofie Stensgaard

    2016-03-01

    Full Text Available Currently, two broad types of approach for predicting the impact of climate change on vector-borne diseases can be distinguished: (i) empirical-statistical (correlative) approaches that use statistical models of relationships between vector and/or pathogen presence and environmental factors; and (ii) process-based (mechanistic) approaches that seek to simulate detailed biological or epidemiological processes that explicitly describe system behavior. Both have advantages and disadvantages, but it is generally acknowledged that both approaches have value in assessing the response of species to climate change. Here, we combine a previously developed dynamic, agent-based model of the temperature-sensitive stages of the Schistosoma mansoni and intermediate host snail lifecycles with a statistical model of snail habitat suitability for eastern Africa. Baseline model output compared to empirical prevalence data suggests that the combined model performs better than a temperature-driven model alone, and highlights the importance of including snail habitat suitability when modeling schistosomiasis risk. There was general agreement among models in predicting changes in risk, with 24-36% of the eastern Africa region predicted to experience an increase in risk of up to 20% as a result of increasing temperatures over the next 50 years. Conversely, the models predicted a general decrease in risk in 30-37% of the study area. The snail habitat suitability models also suggest that anthropogenically altered habitats play a vital role in the current distribution of the intermediate snail host, and hence we stress the importance of accounting for land use changes in models of future changes in schistosomiasis risk.

  4. Advancing hydrometeorological prediction capabilities through standards-based cyberinfrastructure development: The community WRF-Hydro modeling system

    Science.gov (United States)

    gochis, David; Parodi, Antonio; Hooper, Rick; Jha, Shantenu; Zaslavsky, Ilya

    2013-04-01

    The need for improved assessments and predictions of many key environmental variables is driving a multitude of model development efforts in the geosciences. The proliferation of weather and climate impacts research is driving a host of new environmental prediction model development efforts as society seeks to understand how climate does and will impact key societal activities and resources and, in turn, how human activities influence climate and the environment. This surge in model development has highlighted model coupling as a fundamental activity in itself and, at times, a significant bottleneck in weather and climate impacts research. This talk explores some of the recent activities and the progress made in assessing approaches to the coupling of physics-based process models for hydrometeorology. One example modeling system emerging from these efforts is the community 'WRF-Hydro' modeling system, which is based on the architecture of the Weather Research and Forecasting (WRF) model. An overview of the structural components of WRF-Hydro will be presented, as will results from several recent applications, including the prediction of flash flooding events in the Rocky Mountain Front Range region of the U.S. and along the Ligurian coastline in the northern Mediterranean. Efficient integration of the coupled modeling system with distributed infrastructure for collecting and sharing hydrometeorological observations is one of the core themes of the work. Specifically, we aim to demonstrate how data management infrastructures used in the US and Europe, in particular data sharing technologies developed within the CUAHSI Hydrologic Information System and UNIDATA, can interoperate based on international standards for data discovery and exchange, such as standards developed by the Open Geospatial Consortium and adopted by GEOSS. The data system we envision will help manage WRF-Hydro prediction model data flows, enabling

  5. New sunshine-based models for predicting global solar radiation using PSO (particle swarm optimization) technique

    International Nuclear Information System (INIS)

    Behrang, M.A.; Assareh, E.; Noghrehabadi, A.R.; Ghanbarzadeh, A.

    2011-01-01

    PSO (particle swarm optimization) is applied to estimate monthly average daily GSR (global solar radiation) on a horizontal surface for different regions of Iran. To achieve this, five new models were developed and six models were chosen from the literature. First, for each city, the empirical coefficients of all models were determined separately using the PSO technique. The results indicate that the new models presented in this study perform better than existing models in the literature for 10 of the 17 cities considered. It is also shown that the empirical coefficients found for a given latitude can be generalized to estimate solar radiation in cities at similar latitudes. Some case studies are presented to demonstrate this generalization, with results showing good agreement with the measurements. More importantly, these case studies further validate the models developed and demonstrate their general applicability. Finally, the results of the PSO technique were compared with those of SRTs (statistical regression techniques) applied to the Angstrom model for all 17 cities. The comparison showed that the empirical coefficients obtained for the Angstrom model with PSO are more accurate than those obtained with SRTs for all 17 cities. -- Highlights: → The first study to apply an intelligent optimization technique to more accurately determine empirical coefficients in solar radiation models. → The new models presented in this study perform better than existing models. → The empirical coefficients found for a given latitude can be generalized to estimate solar radiation in cities at similar latitudes. → A fair comparison between the performance of PSO and SRTs on GSR modeling.
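Fitting the empirical coefficients of a sunshine-based model with PSO can be sketched on the classic Angstrom relation H/H0 = a + b·(S/S0). The sunshine-fraction data below are synthetic (generated from a = 0.25, b = 0.50), so this only demonstrates that a small swarm recovers the coefficients, not the paper's actual datasets or swarm settings:

```python
# Minimal particle swarm optimisation (PSO) fitting the Angstrom coefficients
# a, b in H/H0 = a + b*(S/S0). Data are synthetic; swarm parameters are the
# usual textbook inertia/acceleration values, not the paper's.
import random

random.seed(1)

s = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8]      # S/S0, relative sunshine duration
h = [0.25 + 0.50 * x for x in s]        # H/H0, clearness index (synthetic)

def sse(a, b):
    return sum((a + b * x - y) ** 2 for x, y in zip(s, h))

# swarm of 20 particles over (a, b)
pos = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(20)]
vel = [[0.0, 0.0] for _ in range(20)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: sse(*p))[:]

for _ in range(200):
    for i, p in enumerate(pos):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]                      # inertia
                         + 1.5 * r1 * (pbest[i][d] - p[d])    # cognitive pull
                         + 1.5 * r2 * (gbest[d] - p[d]))      # social pull
            p[d] += vel[i][d]
        if sse(*p) < sse(*pbest[i]):
            pbest[i] = p[:]
            if sse(*p) < sse(*gbest):
                gbest = p[:]

print(f"a = {gbest[0]:.3f}, b = {gbest[1]:.3f}")  # generating values: 0.25, 0.50
```

The same loop applies unchanged to the five new model forms; only `sse` changes.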

  6. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate because it often contains multiple analysis steps and may involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data, or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can help with many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss our experiences from using the approach when modelling a large set of biochemical interactions on a shared computer cluster.
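The flow-based idea behind such systems is that tasks are wired together through their inputs and outputs rather than hard-coded call order, so the engine can execute them in dependency order. A minimal sketch of that pattern (the task names and API below are illustrative, not SciLuigi's actual interface):

```python
# Toy dataflow engine: tasks declare which other tasks feed them, and the
# runner executes whatever becomes ready, mimicking a modelling pipeline
# (load -> split -> train -> evaluate). Not the SciLuigi API.

class Task:
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, list(inputs)

    def run(self, results):
        args = [results[t.name] for t in self.inputs]
        results[self.name] = self.func(*args)

def run_workflow(tasks):
    """Execute tasks in dependency order (simple readiness loop)."""
    results, done = {}, set()
    while len(done) < len(tasks):
        for t in tasks:
            if t.name not in done and all(i.name in done for i in t.inputs):
                t.run(results)
                done.add(t.name)
    return results

load = Task("load", lambda: [1.0, 2.0, 3.0, 4.0])
split = Task("split", lambda d: (d[:2], d[2:]), [load])
train = Task("train", lambda s: sum(s[0]) / len(s[0]), [split])          # mean model
evaluate = Task("eval", lambda s, m: sum(abs(x - m) for x in s[1]), [split, train])

print(run_workflow([load, split, train, evaluate])["eval"])
```

Because dependencies are data, swapping in a cross-validation step or a different model only changes the wiring, which is the agility the abstract argues for.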

  7. Multi-gene genetic programming based predictive models for municipal solid waste gasification in a fluidized bed gasifier.

    Science.gov (United States)

    Pandey, Daya Shankar; Pan, Indranil; Das, Saptarshi; Leahy, James J; Kwapinski, Witold

    2015-03-01

    A multi-gene genetic programming technique is proposed as a new method to predict syngas yield and lower heating value for municipal solid waste gasification in a fluidized bed gasifier. The study shows that the predicted outputs of the municipal solid waste gasification process are in good agreement with the experimental dataset and also generalise well to validation (untrained) data. Published experimental datasets are used for model training and validation purposes. The results show the effectiveness of the genetic programming technique for solving complex nonlinear regression problems. The multi-gene genetic programming model is also compared with a single-gene genetic programming model to show the relative merits and demerits of the technique. This study demonstrates that the genetic programming based data-driven modelling strategy can be a good candidate for developing models for other types of fuels as well. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Application of physiologically based pharmacokinetic modeling in predicting drug–drug interactions for sarpogrelate hydrochloride in humans

    Directory of Open Access Journals (Sweden)

    Min JS

    2016-09-01

    Full Text Available Jee Sun Min,1 Doyun Kim,1 Jung Bae Park,1 Hyunjin Heo,1 Soo Hyeon Bae,2 Jae Hong Seo,1 Euichaul Oh,1 Soo Kyung Bae1 1Integrated Research Institute of Pharmaceutical Sciences, College of Pharmacy, The Catholic University of Korea, Bucheon, 2Department of Pharmacology, College of Medicine, The Catholic University of Korea, Seocho-gu, Seoul, South Korea Background: Evaluating the potential risk of metabolic drug–drug interactions (DDIs) is clinically important. Objective: To develop a physiologically based pharmacokinetic (PBPK) model for sarpogrelate hydrochloride and its active metabolite, (R,S)-1-{2-[2-(3-methoxyphenyl)ethyl]phenoxy}-3-(dimethylamino)-2-propanol (M-1), in order to predict DDIs between sarpogrelate and the clinically relevant cytochrome P450 (CYP) 2D6 substrates metoprolol, desipramine, dextromethorphan, imipramine, and tolterodine. Methods: The PBPK model was developed incorporating the physicochemical and pharmacokinetic properties of sarpogrelate hydrochloride and M-1, based on the findings from in vitro and in vivo studies. Subsequently, the model was verified by comparing the predicted concentration-time profiles and pharmacokinetic parameters of sarpogrelate and M-1 to the observed clinical data. Finally, the verified model was used to simulate clinical DDIs between sarpogrelate hydrochloride and sensitive CYP2D6 substrates. The predictive performance of the model was assessed by comparing predicted results to observed data after coadministering sarpogrelate hydrochloride and metoprolol. Results: The developed PBPK model accurately predicted sarpogrelate and M-1 plasma concentration profiles after single or multiple doses of sarpogrelate hydrochloride. The simulated ratios of area under the curve and maximum plasma concentration of metoprolol in the presence of sarpogrelate hydrochloride to baseline were in good agreement with the observed ratios. The predicted fold-increases in the area under the curve ratios of metoprolol

  9. Mechanics-Based Model for Predicting In-Plane Needle Deflection with Multiple Bends

    NARCIS (Netherlands)

    Roesthuis, Roy; Abayazid, Momen; Misra, Sarthak

    2012-01-01

    Bevel-tipped flexible needles naturally bend when inserted into soft tissue. Steering such needles along curved paths allows one to avoid anatomical obstacles and reach locations inside the human body which are unreachable with rigid needles. In this study, a mechanics-based model is presented which

  10. Prediction of human CNS pharmacokinetics using a physiologically-based pharmacokinetic modeling approach

    NARCIS (Netherlands)

    Yamamoto, Yumi; Valitalo, Pyry A.; Wong, Yin Cheong; Huntjens, Dymphy R.; Proost, Johannes H.; Vermeulen, An; Krauwinkel, Walter; Beukers, Margot W.; Kokki, Hannu; Kokki, Merja; Danhof, Meindert; van Hasselt, Johan G. C.; de Lange, Elizabeth C. M.

    2017-01-01

    Knowledge of drug concentration-time profiles at the central nervous system (CNS) target-site is critically important for rational development of CNS targeted drugs. Our aim was to translate a recently published comprehensive CNS physiologically-based pharmacokinetic (PBPK) model from rat to human,

  11. Dynamic Prediction of Power Storage and Delivery by Data-Based Fractional Differential Models of a Lithium Iron Phosphate Battery

    Directory of Open Access Journals (Sweden)

    Yunfeng Jiang

    2016-07-01

    Full Text Available A fractional derivative system identification approach for modeling battery dynamics is presented in this paper, where fractional derivatives are applied to approximate the non-linear dynamic behavior of a battery system. The least squares-based state-variable filter (LSSVF) method commonly used in the identification of continuous-time models is extended to allow the estimation of fractional derivative coefficients and parameters of the battery models by monitoring a charge/discharge demand signal and a power storage/delivery signal. In particular, the model combines individual fractional differential models (FDMs), whose parameters can be estimated by a least-squares algorithm. Based on experimental data, it is illustrated how the fractional derivative model can be utilized to predict the dynamics of the energy storage and delivery of a lithium iron phosphate (LiFePO4) battery in real time. The results indicate that an FDM can accurately capture the dynamics of the energy storage and delivery of the battery over a large operating range. It is also shown that the fractional derivative model improves prediction performance compared to a standard integer-order derivative model, which is beneficial for a battery management system.
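The fractional derivatives in such models are typically discretised with the Grünwald-Letnikov definition, whose binomial weights obey a simple recursion. A sketch on a synthetic signal, checked against the known analytic fractional derivative of t² (this illustrates the numerical basis of FDMs, not the paper's identification code):

```python
# Grünwald-Letnikov (GL) approximation of a fractional derivative of order
# alpha, the usual discrete basis for fractional differential models.
import math

def gl_weights(alpha, n):
    """Recursive GL binomial weights w_k = (-1)^k * C(alpha, k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1 - (alpha + 1) / k))
    return w

def frac_derivative(signal, alpha, dt):
    """Approximate D^alpha of a uniformly sampled signal at each sample."""
    w = gl_weights(alpha, len(signal))
    return [sum(w[k] * signal[j - k] for k in range(j + 1)) / dt ** alpha
            for j in range(len(signal))]

dt = 0.01
t = [i * dt for i in range(200)]
x = [ti ** 2 for ti in t]                # test signal x(t) = t^2
d_half = frac_derivative(x, 0.5, dt)     # half-order derivative
# analytic result: D^0.5 t^2 = Gamma(3)/Gamma(2.5) * t^1.5
expected = 2 / math.gamma(2.5) * t[-1] ** 1.5
print(d_half[-1], expected)
```

In an FDM of a battery, the same weighted history sum replaces each integer-order derivative in the model equations, which is why past charge/discharge history influences the predicted response.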

  12. Predictive Accuracy of the PanCan Lung Cancer Risk Prediction Model -External Validation based on CT from the Danish Lung Cancer Screening Trial

    DEFF Research Database (Denmark)

    Winkler Wille, Mathilde M.; van Riel, Sarah J.; Saghir, Zaigham

    2015-01-01

    Objectives: Lung cancer risk models should be externally validated to test generalizability and clinical usefulness. The Danish Lung Cancer Screening Trial (DLCST) is a population-based prospective cohort study, used to assess the discriminative performances of the PanCan models. Methods: From … used to evaluate risk discrimination. Results: AUCs of 0.826–0.870 were found for DLCST data based on PanCan risk prediction models. In the DLCST, age and family history were significant predictors (p = 0.001 and p = 0.013). Female sex was not confirmed to be associated with higher risk of lung cancer; in fact opposing effects of sex were observed in the two cohorts. Thus, female sex appeared to lower the risk (p = 0.047 and p = 0.040) in the DLCST. Conclusions: High risk discrimination was validated in the DLCST cohort, mainly determined by nodule size. Age and family history of lung cancer were …

  13. A Model Predictive Control-Based Power Converter System for Oscillating Water Column Wave Energy Converters

    Directory of Open Access Journals (Sweden)

    Gimara Rajapakse

    2017-10-01

    Full Text Available Despite its predictability and availability at large scale, wave energy conversion (WEC) has still not become a mainstream renewable energy technology. One of the main reasons is the large variation in the extracted power, which could lead to instabilities in the power grid. In addition, maintaining the speed of the turbine within its optimal range under changing wave conditions is another control challenge, especially in oscillating water column (OWC) type WEC systems. As a solution to the first issue, this paper proposes the direct connection of a battery bank to the dc-link of the back-to-back power converter system, thereby smoothing the power delivered to the grid. For the second issue, model predictive controllers (MPCs) are developed for the rectifier and the inverter of the back-to-back converter system, aiming to maintain the turbine speed within its optimum range. In addition, MPC controllers are designed to control the battery current as well, in both charging and discharging conditions. Operation of the proposed battery direct integration scheme and control solutions is verified through computer simulations. Simulation results show that the proposed integrated energy storage and control solutions are capable of delivering smooth power to the grid while maintaining the turbine speed within its optimum range under varying wave conditions.
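The receding-horizon idea behind MPC can be shown on a toy scalar speed model: at each step, search the admissible inputs over a short horizon, apply only the first move, and repeat. Everything below (model coefficients, input grid, disturbance profile) is an illustrative assumption, far simpler than the converter models in the paper:

```python
# Minimal receding-horizon (MPC-style) sketch for a scalar turbine-speed model
# x[k+1] = a*x[k] + b*u[k] + d[k]. Coefficients and the wave disturbance are
# illustrative, not from the paper.

A, B = 0.95, 0.4                            # assumed discrete-time dynamics
U_GRID = [u / 10 for u in range(-10, 11)]   # admissible inputs in [-1, 1]
HORIZON = 3
REF = 1.0                                   # optimum turbine speed (normalised)

def cost(x, seq, dist):
    c = 0.0
    for u in seq:
        x = A * x + B * u + dist
        c += (x - REF) ** 2 + 0.01 * u ** 2  # track reference, penalise effort
    return c

def mpc_step(x, dist):
    """Grid-search the first input, held over the horizon (move blocking)."""
    best_u, best_c = 0.0, float("inf")
    for u0 in U_GRID:
        c = cost(x, [u0] * HORIZON, dist)
        if c < best_c:
            best_u, best_c = u0, c
    return best_u

x, trace = 0.0, []
for k in range(30):
    dist = 0.05 if k < 15 else -0.05        # step change in wave-induced torque
    x = A * x + B * mpc_step(x, dist) + dist
    trace.append(x)
print(f"final speed: {trace[-1]:.3f}")
```

Real MPC solves a constrained quadratic program instead of a grid search, but the receding-horizon structure (predict, optimise, apply first input) is the same.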

  14. Prognostic models based on patient snapshots and time windows: Predicting disease progression to assisted ventilation in Amyotrophic Lateral Sclerosis.

    Science.gov (United States)

    Carreiro, André V; Amaral, Pedro M T; Pinto, Susana; Tomás, Pedro; de Carvalho, Mamede; Madeira, Sara C

    2015-12-01

    Amyotrophic Lateral Sclerosis (ALS) is a devastating disease and the most common neurodegenerative disorder of young adults. ALS patients present a rapidly progressive motor weakness. This usually leads to death within a few years by respiratory failure. The correct prediction of respiratory insufficiency is thus key for patient management. In this context, we propose an innovative approach for prognostic prediction based on patient snapshots and time windows. We first cluster temporally-related tests to obtain snapshots of the patient's condition at a given time (patient snapshots). Then we use the snapshots to predict the probability of an ALS patient requiring assisted ventilation within k days of the time of clinical evaluation (time window). This probability is based on the patient's current condition, evaluated using clinical features, including functional impairment assessments and a complete set of respiratory tests. The prognostic models include three temporal windows, allowing short-, medium- and long-term prognosis of progression to assisted ventilation. Experimental results show an area under the receiver operating characteristic curve (AUC) in the test set of approximately 79% for time windows of 90, 180 and 365 days. Creating patient snapshots using hierarchical clustering with constraints outperforms the state of the art, and the proposed prognostic model becomes the first non population-based approach for prognostic prediction in ALS. The results are promising and should enhance the current clinical practice, largely supported by non-standardized tests and clinicians' experience. Copyright © 2015 Elsevier Inc. All rights reserved.
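The time-window labelling described above reduces to a simple rule: a snapshot is a positive example for window k if assisted ventilation began within k days of the evaluation. A small sketch with hypothetical dates (the helper name and data are illustrative, not the paper's code):

```python
# Time-window labelling: a snapshot taken at eval_date is labelled 1 for a
# window of k days if non-invasive ventilation (NIV) started within k days
# after it. Dates below are hypothetical.
from datetime import date

def label_snapshot(eval_date, niv_date, window_days):
    """1 if assisted ventilation started within the window, else 0."""
    if niv_date is None:                      # patient never progressed to NIV
        return 0
    return 1 if 0 <= (niv_date - eval_date).days <= window_days else 0

snapshot = date(2014, 3, 1)
niv_start = date(2014, 7, 15)                 # 136 days after the snapshot
for k in (90, 180, 365):
    print(k, label_snapshot(snapshot, niv_start, k))
```

One snapshot thus yields different labels per window, which is why separate short-, medium- and long-term models are trained.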

  15. Decentralized model predictive based load frequency control in an interconnected power system

    Energy Technology Data Exchange (ETDEWEB)

    Mohamed, T.H., E-mail: tarekhie@yahoo.co [High Institute of Energy, South Valley University (Egypt); Bevrani, H., E-mail: bevrani@ieee.or [Dept. of Electrical Engineering and Computer Science, University of Kurdistan (Iran, Islamic Republic of); Hassan, A.A., E-mail: aahsn@yahoo.co [Faculty of Engineering, Dept. of Electrical Engineering, Minia University, Minia (Egypt); Hiyama, T., E-mail: hiyama@cs.kumamoto-u.ac.j [Dept. of Electrical Engineering and Computer Science, Kumamoto University, Kumamoto (Japan)

    2011-02-15

    This paper presents a new load frequency control (LFC) design using the model predictive control (MPC) technique in a multi-area power system. The MPC technique has been designed such that the effect of uncertainty due to variation of governor and turbine parameters and to load disturbances is reduced. Each local area controller is designed independently such that stability of the overall closed-loop system is guaranteed. A frequency response model of the multi-area power system is introduced, and physical constraints of the governors and turbines are considered. The model was employed in the MPC structures. Digital simulations for both two- and three-area power systems are provided to validate the effectiveness of the proposed scheme. The results show that, with the proposed MPC technique, the overall closed-loop system remains robust in the face of uncertainties due to variation of governor and turbine parameters and to load disturbances. A performance comparison between the proposed controller and a classical integral control scheme confirms the superiority of the proposed MPC technique.

  16. Nonlinear Model-Based Predictive Control applied to Large Scale Cryogenic Facilities

    CERN Document Server

    Blanco Vinuela, Enrique; de Prada Moraga, Cesar

    2001-01-01

    The thesis addresses the study, analysis, development and, finally, the real implementation of an advanced control system for the 1.8 K cooling loop of the LHC (Large Hadron Collider) accelerator. The LHC is the next accelerator being built at CERN (European Center for Nuclear Research); it will use superconducting magnets operating below a temperature of 1.9 K along a circumference of 27 kilometers. The temperature of these magnets is a control parameter with strict operating constraints. The first control implementations applied a procedure that included linear identification, modelling and regulation using a linear predictive controller. This largely improved the overall performance of the plant with respect to a classical PID regulator, but the nature of cryogenic processes pointed out the need for a more adequate technique, such as a nonlinear methodology. This thesis is a first step towards a global regulation strategy for the overall control of the LHC cells when they will operate simultaneously....

  17. A theoretical model to predict both horizontal displacement and vertical displacement for electromagnetic induction-based deep displacement sensors.

    Science.gov (United States)

    Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong

    2012-01-01

    Deep displacement observation is a basic means of landslide dynamics study and early warning monitoring, and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary apparently and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) has been proposed to quantitatively depict II-type sensors' mutual inductance properties with respect to predicted horizontal and vertical displacements. After detailed examinations and comparative studies between measured mutual inductance voltage, NIELA-based mutual inductance and EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytical model for the characterization of II-type sensors. The NIELA model is widely applicable to II-type sensors' monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency.

  18. A Theoretical Model to Predict Both Horizontal Displacement and Vertical Displacement for Electromagnetic Induction-Based Deep Displacement Sensors

    Directory of Open Access Journals (Sweden)

    Xiong Li

    2011-12-01

    Full Text Available Deep displacement observation is a basic means of landslide dynamics study and early warning monitoring, and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary apparently and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) has been proposed to quantitatively depict II-type sensors' mutual inductance properties with respect to predicted horizontal and vertical displacements. After detailed examinations and comparative studies between measured mutual inductance voltage, NIELA-based mutual inductance and EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytical model for the characterization of II-type sensors. The NIELA model is widely applicable to II-type sensors' monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency.

  19. Motivational cues predict the defensive system in team handball: A model based on regulatory focus theory.

    Science.gov (United States)

    Debanne, T; Laffaye, G

    2015-08-01

    This study was based on the naturalistic decision-making paradigm and regulatory focus theory. Its aim was to model coaches' decision-making processes for handball teams' defensive systems based on relevant cues of the reward structure, and to determine the weight of each of these cues. We collected raw data by video-recording 41 games selected using a simple random method. We considered the defensive strategy (DEF: aligned or staged) to be the dependent variable, and the three independent variables were (a) the numerical difference between the teams; (b) the score difference between the teams; and (c) the game period. We used a logistic regression design (logit model) and a multivariate logistic model to explain the link between DEF and the three categorical independent variables. Each factor was weighted differently in the decision-making process to select the defensive system, and combining these variables increased their impact on this process; for instance, a staged defense is 43 times more likely to be chosen during the final period in an unfavorable situation with a man advantage. Finally, this shows that the coach's decision-making process can be based on a simple match or can require a diagnosis of the situation based on the relevant cues. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
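In a logit model, a figure like "43 times more likely" is an odds ratio, and odds ratios of jointly active cues multiply (their log-odds coefficients add). A small sketch with hypothetical coefficients, chosen only to show the mechanics (they are not the paper's estimates):

```python
# How odds from a logit model combine multiplicatively: the odds ratio of a
# cue combination versus baseline is exp(sum of the active coefficients).
# Coefficients below are hypothetical.
import math

INTERCEPT = -2.0
COEF = {"final_period": 1.2, "unfavourable_score": 1.5, "man_advantage": 1.1}

def prob_staged(cues):
    """Predicted probability of choosing a staged defense given active cues."""
    z = INTERCEPT + sum(COEF[c] for c in cues)
    return 1 / (1 + math.exp(-z))

def odds(p):
    return p / (1 - p)

base = prob_staged([])
full = prob_staged(["final_period", "unfavourable_score", "man_advantage"])
print(f"odds ratio vs baseline: {odds(full) / odds(base):.1f}")
```

With these made-up coefficients the combined odds ratio is exp(1.2 + 1.5 + 1.1) ≈ 45, illustrating how three moderate cues can jointly produce a large multiplier of the kind reported.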

  20. Discovering urban mobility patterns with PageRank based traffic modeling and prediction

    Science.gov (United States)

    Wang, Minjie; Yang, Su; Sun, Yi; Gao, Jun

    2017-11-01

    Urban transportation systems can be viewed as complex networks with time-varying traffic flows as links connecting adjacent regions as networked nodes. By computing urban traffic evolution on such a temporal complex network with PageRank, it is found that, for most regions, there exists a linear relation between the traffic congestion measure at the present time and the PageRank value at the previous time. Since the PageRank measure of a region results from the mutual interactions of the whole network, this implies that the traffic state of a local region does not evolve independently but is affected by the evolution of the whole network. As a result, PageRank values can act as signatures in predicting upcoming traffic congestion. We observe these laws experimentally based on the trajectory data of 12,000 taxis in Beijing over one month.
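The PageRank computation on such a region graph can be sketched with plain power iteration, using edge weights in place of uniform link probabilities. The three-region graph and its flow weights below are illustrative, not Beijing data:

```python
# Weighted PageRank by power iteration on a tiny directed "region" graph,
# with edge weights standing in for traffic flows between adjacent regions.

def pagerank(out_edges, damping=0.85, iters=100):
    nodes = list(out_edges)
    n = len(nodes)
    rank = {v: 1 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}  # teleportation term
        for v, targets in out_edges.items():
            total = sum(w for _, w in targets)
            for t, w in targets:                      # distribute rank by flow share
                new[t] += damping * rank[v] * w / total
        rank = new
    return rank

# region -> [(neighbouring region, traffic flow)]
flows = {
    "A": [("B", 30), ("C", 10)],
    "B": [("C", 40)],
    "C": [("A", 20), ("B", 20)],
}
pr = pagerank(flows)
print({k: round(v, 3) for k, v in sorted(pr.items())})
```

In the study's setting, recomputing these values on each time slice of the temporal network gives the PageRank series whose previous value correlates linearly with a region's current congestion measure.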

  1. Assimilating Citizen-Based Observations from Low-Cost Sensors in Hydrological Models to Improve Flood Prediction

    Science.gov (United States)

    Mazzoleni, M.; Alfonso, L.; Solomatine, D.

    2015-12-01

    The main goal of this study is to demonstrate how the integration of citizen-based observations from low-cost sensors (having variable uncertainty and intermittent characteristics) into hydrological models can improve flood prediction. The methodology is applied in the Brue basin, located in the south-west of the UK. In order to estimate the response of the catchment to a given flood event, a conceptual hydrological model is implemented. The measured precipitation values are used as perfect forecast input in the hydrological models. Then, a Kalman filter is implemented and adapted to account for asynchronous streamflow observations arriving at irregular time steps with random uncertainty. Synthetic streamflow values are used in this study because citizen-based observations are not available. The results show how streamflow observations with variable uncertainty can improve flood prediction. In particular, increasing the number of observations from low-cost sensors within two model time steps can improve the model accuracy, leading to a better flood forecast. Observation uncertainty influences the model accuracy more than the irregular moments at which the streamflow observations are assimilated into the hydrological model. This study is part of the FP7 European Project WeSenseIt Citizen Water Observatory (http://wesenseit.eu/).
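
The assimilation step can be illustrated with a one-dimensional Kalman update blending a model forecast with an uncertain citizen observation; the forecast and observation numbers below are made up for illustration, not the Brue basin setup.

```python
# One-dimensional Kalman-filter update: a noisy, intermittent citizen
# streamflow observation corrects the model forecast in proportion to
# the relative confidence of the two sources.
def kalman_update(forecast, forecast_var, obs, obs_var):
    """Return the analysis state and its (reduced) variance."""
    gain = forecast_var / (forecast_var + obs_var)
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var
    return analysis, analysis_var

# Model predicts 52 m3/s with variance 16; a low-cost sensor reports
# 60 m3/s with variance 9 (citizen observations are relatively noisy).
state, var = kalman_update(52.0, 16.0, 60.0, 9.0)
# The analysis moves toward the observation and is more certain than
# either source alone; a noisier observation would move it less.
```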

  2. Predictive modelling for shelf life determination of nutricereal based fermented baby food

    OpenAIRE

    Rasane, Prasad; Jha, Alok; Sharma, Nitya

    2014-01-01

    A shelf life model based on storage temperatures was developed for a nutricereal based fermented baby food formulation. The formulated baby food samples were packaged and stored at 10, 25, 37 and 45 °C for a test storage period of 180 days. A shelf life study was conducted using consumer and semi-trained panels, along with chemical analysis (moisture and acidity). The chemical parameters (moisture and titratable acidity) were found inadequate in determining the shelf life of the formulated pr...

  3. A novel model to combine clinical and pathway-based transcriptomic information for the prognosis prediction of breast cancer.

    Directory of Open Access Journals (Sweden)

    Sijia Huang

    2014-09-01

    Full Text Available Breast cancer is the most common malignancy in women worldwide. With the increasing awareness of heterogeneity in breast cancers, better prediction of breast cancer prognosis is much needed for more personalized treatment and disease management. Towards this goal, we have developed a novel computational model for breast cancer prognosis by combining the Pathway Deregulation Score (PDS)-based Pathifier algorithm, Cox regression and the L1-LASSO penalization method. We trained the model on a set of 236 patients with gene expression data and clinical information, and validated its performance on three diversified testing data sets of 606 patients. To evaluate the performance of the model, we conducted survival analysis of the dichotomized groups, and compared the areas under the curve based on the binary classification. The resulting prognosis genomic model is composed of fifteen pathways (e.g., the P53 pathway) with previously reported cancer relevance, and it successfully differentiated relapse in the training set (log-rank p-value = 6.25e-12) and the three testing data sets (log-rank p-value < 0.0005). Moreover, the pathway-based genomic models consistently performed better than gene-based models on all four data sets. We also found strong evidence that combining genomic information with clinical information improved the p-values of prognosis prediction by at least three orders of magnitude in comparison to using either genomic or clinical information alone. In summary, we propose a novel prognosis model that harnesses pathway-based dysregulation as well as valuable clinical information. The selected pathways in our prognosis model are promising targets for therapeutic intervention.

  4. Development of a QTL-environment-based predictive model for node addition rate in common bean.

    Science.gov (United States)

    Zhang, Li; Gezan, Salvador A; Eduardo Vallejos, C; Jones, James W; Boote, Kenneth J; Clavijo-Michelangeli, Jose A; Bhakta, Mehul; Osorno, Juan M; Rao, Idupulapati; Beebe, Stephen; Roman-Paoli, Elvin; Gonzalez, Abiezer; Beaver, James; Ricaurte, Jaumer; Colbert, Raphael; Correll, Melanie J

    2017-05-01

    This work reports the effects of the genetic makeup, the environment and the genotype-by-environment interactions on node addition rate in an RIL population of common bean. This information was used to build a predictive model for node addition rate. To select a plant genotype that will thrive in targeted environments it is critical to understand the genotype-by-environment interaction (GEI). In this study, multi-environment QTL analysis was used to characterize node addition rate (NAR, nodes per day) on the main stem of the common bean (Phaseolus vulgaris L.). This analysis was carried out with field data on 171 recombinant inbred lines that were grown at five sites (Florida, Puerto Rico, two sites in Colombia, and North Dakota). Four QTLs (Nar1, Nar2, Nar3 and Nar4) were identified, one of which (Nar2) had significant QTL-by-environment interactions (QEI) with temperature. Temperature was identified as the main environmental factor affecting NAR, while day length and solar radiation played a minor role. Integration of sites as covariates into a QTL mixed site-effect model, and further replacing the site component with explanatory environmental covariates (i.e., temperature, day length and solar radiation), yielded a model that explained 73% of the phenotypic variation for NAR with a root mean square error of 16.25% of the mean. The QTL consistency and stability were examined through a tenfold cross-validation with different sets of genotypes, and these four QTLs were always detected with 50-90% probability. The final model was evaluated using a leave-one-site-out method to assess the influence of site on node addition rate. These analyses provided a quantitative measure of the effects on NAR of common beans exerted by the genetic makeup, the environment and their interactions.

  5. A CN-Based Ensembled Hydrological Model for Enhanced Watershed Runoff Prediction

    Directory of Open Access Journals (Sweden)

    Muhammad Ajmal

    2016-01-01

    Full Text Available A major structural inconsistency of the traditional curve number (CN) model is its dependence on an unstable fixed initial abstraction, which normally results in sudden jumps in runoff estimation. Likewise, the lack of a pre-storm soil moisture accounting (PSMA) procedure is another inherent limitation of the model. To circumvent these problems, we used a variable initial abstraction after ensembling the traditional CN model and the French four-parameter (GR4J) model to better quantify direct runoff from ungauged watersheds. To mimic the natural rainfall-runoff transformation at the watershed scale, our new parameterization designates intrinsic parameters and uses a simple structure. It exhibited more accurate and consistent results than earlier methods in evaluating data from 39 forest-dominated watersheds, both for small and large watersheds. In addition, based on different performance evaluation indicators, the runoff reproduction results show that the proposed model produced more consistent results for dry, normal, and wet watershed conditions than the other models used in this study.
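
The underlying curve-number runoff equation with a variable initial abstraction can be sketched as follows; the initial-abstraction ratio and storm inputs are illustrative assumptions, not the fitted parameters of the ensembled model.

```python
# SCS curve-number direct runoff with a variable initial abstraction
# ratio `lam`, the quantity a variable-Ia formulation replaces the
# classic fixed Ia = 0.2*S assumption with.
def cn_runoff(p_mm, cn, lam=0.05):
    """Direct runoff Q (mm) from storm rainfall P (mm) via the SCS-CN method."""
    s = 25400.0 / cn - 254.0   # potential maximum retention S (mm)
    ia = lam * s               # initial abstraction, variable through lam
    if p_mm <= ia:
        return 0.0             # all rainfall abstracted, no direct runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = cn_runoff(80.0, 75)        # 80 mm storm on a CN = 75 watershed
```

Because `lam` enters smoothly, lowering it removes the sudden jumps in estimated runoff that a fixed initial abstraction produces at the Ia threshold.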

  6. Model-Based Load Estimation for Predictive Condition Monitoring of Wind Turbines

    DEFF Research Database (Denmark)

    Perisic, Nevena; Pederen, Bo Juul; Grunnet, Jacob Deleuran

    The main objective of this paper is to present a Load Observer Tool (LOT) for condition monitoring of structural extreme and fatigue loads on the main wind turbine (WTG) components. LOT uses well-known methods from system identification, state estimation and fatigue analysis in a novel approach...... for application in condition monitoring. Fatigue loads are estimated online using a load observer and grey box models which include relevant WTG dynamics. Identification of model parameters and calibration of observer are performed offline using measurements from WTG prototype. Signal processing of estimated load...... signal is performed online, and a Load Indicator Signal (LIS) is formulated as a ratio between current estimated accumulated fatigue loads and its expected value based only on a priori knowledge (WTG dynamics and wind climate). LOT initialisation is based on a priori knowledge and can be obtained using...

  7. A biological network-based regularized artificial neural network model for robust phenotype prediction from gene expression data.

    Science.gov (United States)

    Kang, Tianyu; Ding, Wei; Zhang, Luoyan; Ziemek, Daniel; Zarringhalam, Kourosh

    2017-12-19

    Stratification of patient subpopulations that respond favorably to treatment or experience an adverse reaction is an essential step toward the development of new personalized therapies and diagnostics. It is currently feasible to generate omic-scale biological measurements for all patients in a study, providing an opportunity for machine learning models to identify molecular markers for disease diagnosis and progression. However, the high variability of genetic background in human populations hampers the reproducibility of omic-scale markers. In this paper, we develop a biological network-based regularized artificial neural network model for the prediction of phenotype from transcriptomic measurements in clinical trials. To improve model sparsity and the overall reproducibility of the model, we incorporate regularization for simultaneous shrinkage of gene sets, based on active upstream regulatory mechanisms, into the model. We benchmark our method against various regression, support vector machine and artificial neural network models and demonstrate its ability to predict clinical outcomes using clinical trial data on acute rejection in kidney transplantation and response to Infliximab in ulcerative colitis. We show that the integration of prior biological knowledge into the classification, as developed in this paper, significantly improves the robustness and generalizability of predictions to independent datasets. We provide Java code of our algorithm along with a parsed version of the STRING DB database. In summary, we present a method for the prediction of clinical phenotypes using baseline genome-wide expression data that makes use of prior biological knowledge on gene-regulatory interactions in order to increase the robustness and reproducibility of omic-scale markers. The integrated group-wise regularization method increases the interpretability of biological signatures and gives stable performance estimates across independent test sets.

  8. MicroRNA prediction using a fixed-order Markov model based on the secondary structure pattern.

    Directory of Open Access Journals (Sweden)

    Wei Shen

    Full Text Available Predicting miRNAs is an arduous task, due to the diversity of the precursors and the complexity of enzyme processes. Although several prediction approaches have reached impressive performances, few of them can achieve full-function recognition of mature miRNAs directly from candidate hairpins across species. Therefore, researchers continue to seek a more powerful model closer to the biological recognition of miRNA structure. In this report, we describe a novel miRNA prediction algorithm, known as FOMmiR, using a fixed-order Markov model based on the secondary structural pattern. For a training dataset containing 809 human pre-miRNAs and 6441 human pseudo-miRNA hairpins, the model's parameters were defined and evaluated. The results showed that FOMmiR reached 91% accuracy on the human dataset through 5-fold cross-validation. Moreover, on the independent test datasets, FOMmiR presented outstanding prediction in human and other species including vertebrates, Drosophila, worms and viruses, and even plants, in contrast to well-known algorithms and models. In particular, FOMmiR was not only able to distinguish the miRNA precursors from the hairpins, but also to locate the position and strand of the mature miRNA. Therefore, this study provides a new generation of miRNA prediction algorithm which successfully realizes full-function recognition of mature miRNAs directly from hairpin sequences. It also presents a new understanding of the biological recognition based on the strongest signal's location detected by FOMmiR, which might be closely associated with the enzyme cleavage mechanism during miRNA maturation.
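
A fixed-order Markov model over a secondary-structure alphabet can be sketched as follows; the training strings, order, and alphabet are illustrative stand-ins for FOMmiR's actual parameterization.

```python
from collections import defaultdict
import math

# Train a fixed-order Markov model over structure symbols, e.g. '(' / ')'
# paired and '.' unpaired positions of a hairpin, with add-one smoothing.
def train_markov(strings, order=2, alphabet="()."):
    counts = defaultdict(lambda: defaultdict(lambda: 1.0))  # add-one smoothing
    for s in strings:
        for i in range(order, len(s)):
            counts[s[i - order:i]][s[i]] += 1.0
    model = {}
    for ctx, nxt in counts.items():
        # Unseen symbols in this context keep the smoothed pseudo-count 1.
        total = sum(nxt.values()) + len(alphabet) - len(nxt)
        model[ctx] = {a: nxt.get(a, 1.0) / total for a in alphabet}
    return model, order

def log_likelihood(model, order, s, alphabet="()."):
    ll = 0.0
    for i in range(order, len(s)):
        probs = model.get(s[i - order:i])
        ll += math.log(probs[s[i]]) if probs else math.log(1.0 / len(alphabet))
    return ll

model, k = train_markov(["(((...)))", "((....))", "(((....)))"])
# A candidate hairpin would be scored under a real-miRNA model versus a
# pseudo-hairpin model, classifying by the log-likelihood difference.
```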

  9. Determination of fruit maturity and its prediction model based on the pericarp index of absorbance difference (IAD) for peaches.

    Directory of Open Access Journals (Sweden)

    Binbin Zhang

    Full Text Available Harvest maturity is closely related to peach fruit quality and has a very important effect on the fresh fruit market. Unfortunately, at present, it is difficult to determine the maturity level of peach fruits by artificial methods. The objectives of this study were to develop quadratic polynomial regression models using near-infrared spectroscopy that could determine the peel color difference, fruit firmness, soluble solids content (SSC), soluble sugar, organic acid components, and their relationships with the absorbance of chlorophyll (index of absorbance difference, IAD) in late-maturing 'Xiahui 8' peach and 'Xiaguang' nectarine fruits. The analysis was based on data for fruits at veraison, fruits at harvesting maturity, and all fruits. The results showed that firmness has the highest correlation coefficient with IAD. Prediction models for fruit maturity were established between firmness and the IAD of the two cultivars using the quadratic polynomial regression method. Further variance analysis on the first-degree and quadratic terms of each equation showed that every partial regression coefficient reached a significant or extremely significant level. No significant difference was observed between estimated and observed values after regression prediction, so the regression equations appear to fit well. Other peach and nectarine varieties were used to test the feasibility of maturity prediction by this method, and maturity was successfully predicted in all the samples. The results indicated that the IAD can be used as an index to predict peach fruit maturity.
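
The quadratic regression of firmness on IAD can be sketched with a direct normal-equations fit; the (IAD, firmness) pairs below are synthetic illustration data generated from an exact quadratic, not measurements from the studied cultivars.

```python
# Least-squares fit of firmness = a*IAD**2 + b*IAD + c by solving the
# 3x3 normal equations with Gauss-Jordan elimination.
def quad_fit(xs, ys):
    n = len(xs)
    sx = [sum(x ** k for x in xs) for k in range(5)]              # power sums
    sy = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[sx[4], sx[3], sx[2], sy[2]],
         [sx[3], sx[2], sx[1], sy[1]],
         [sx[2], sx[1], n,     sy[0]]]
    for i in range(3):                                             # eliminate
        piv = m[i][i]
        m[i] = [v / piv for v in m[i]]
        for j in range(3):
            if j != i:
                m[j] = [vj - m[j][i] * vi for vj, vi in zip(m[j], m[i])]
    return m[0][3], m[1][3], m[2][3]                               # a, b, c

iad = [0.2, 0.6, 1.0, 1.4, 1.8, 2.2]
firm = [7.4, 18.6, 33.0, 50.6, 71.4, 95.4]     # synthetic: 10*x**2 + 20*x + 3
a, b, c = quad_fit(iad, firm)
predicted = a * 1.2 ** 2 + b * 1.2 + c          # firmness predicted at IAD = 1.2
```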

  10. Additive SMILES-Based Carcinogenicity Models: Probabilistic Principles in the Search for Robust Predictions

    Directory of Open Access Journals (Sweden)

    Emilio Benfenati

    2009-07-01

    Full Text Available Optimal descriptors calculated with the simplified molecular input line entry system (SMILES) have been utilized in modeling carcinogenicity as continuous values (logTD50). These descriptors can be calculated using correlation weights of SMILES attributes obtained by the Monte Carlo method. A considerable subset of these attributes consists of rare attributes, whose use can lead to overtraining. One can avoid the influence of the rare attributes if their correlation weights are fixed to zero. A function, limS, has been defined to identify rare attributes: limS defines the minimum number of occurrences in the set of structures of the training (subtraining) set required to accept an attribute as usable. If an attribute is present fewer than limS times, it is considered "rare" and thus not used. Two systems of building models were examined: (1) the classic training-test system; (2) balance of correlations for the subtraining and calibration sets (together, they form the original training set; the function of the calibration set is to imitate a preliminary test set). Three random splits into subtraining, calibration, and test sets were analysed. Comparison of the abovementioned systems has shown that balance of correlations gives a more robust prediction of carcinogenicity for all three splits (split 1: r²test = 0.7514, stest = 0.684; split 2: r²test = 0.7998, stest = 0.600; split 3: r²test = 0.7192, stest = 0.728).

  11. Predictive Modeling for Blood Transfusion Following Adult Spinal Deformity Surgery: A Tree-Based Machine Learning Approach.

    Science.gov (United States)

    Durand, Wesley M; DePasse, J Mason; Daniels, Alan H

    2017-12-05

    Retrospective cohort study. Blood transfusion is frequently necessary following adult spinal deformity (ASD) surgery. We sought to develop predictive models for blood transfusion following ASD surgery, utilizing both classification tree and random forest machine-learning approaches. Past models for transfusion risk among spine surgery patients are disadvantaged by their use of single-institution data, potentially limiting generalizability. This investigation was conducted utilizing the ACS NSQIP dataset, years 2012-2015. Patients undergoing surgery for ASD were identified using primary-listed CPT codes. In total, 1,029 patients were analyzed. The primary outcome measure was intra-/post-operative blood transfusion. Patients were divided into training (n = 824) and validation (n = 205) datasets. Single classification tree and random forest models were developed. Both models were tested on the validation dataset using AUC, which was compared between models. Overall, 46.5% (n = 479) of patients received a transfusion intraoperatively or within 72 h postoperatively. The final classification tree model utilized operative duration, hematocrit, and weight, exhibiting AUC = 0.79 (95%CI 0.73-0.85) on the validation set. The most influential variables in the random forest model were operative duration, surgical invasiveness, hematocrit, weight, and age. The random forest model exhibited AUC = 0.85 (95%CI 0.80-0.90). The difference between the classification tree and random forest AUCs was non-significant at the validation cohort size of 205 patients (p = 0.1551). This investigation produced tree-based machine-learning models of blood transfusion risk following ASD surgery. The random forest model offered very good predictive capability as measured by AUC. Our single classification tree model offered superior ease of implementation, but a lower AUC as compared to the random forest approach, though this difference was not statistically significant at this sample size.
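
The shape of such a classification tree can be sketched with hand-coded splits on the three variables the final tree used; the thresholds below are hypothetical placeholders, not those learned from the NSQIP data.

```python
# A hand-coded stand-in for a learned classification tree predicting
# transfusion from operative duration, hematocrit, and weight.
# All thresholds are hypothetical illustrations.
def predicts_transfusion(duration_min, hematocrit_pct, weight_kg):
    """Walk the splits; a real tree would be grown from training data."""
    if duration_min >= 300:                   # long operations: highest risk
        return True
    if hematocrit_pct < 36.0:                 # low preoperative hematocrit
        return duration_min >= 180
    return weight_kg < 60.0 and duration_min >= 240

# A random forest averages many such trees, each grown on a bootstrap
# sample with random feature subsets, which is why it edged out the
# single tree (AUC 0.85 vs. 0.79) at the cost of interpretability.
```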

  12. A Non-linear Predictive Model of Borderline Personality Disorder Based on Multilayer Perceptron

    Directory of Open Access Journals (Sweden)

    Nelson M. Maldonato

    2018-04-01

    Full Text Available Borderline Personality Disorder is a serious mental disease, classified in Cluster B of the DSM IV-TR personality disorders. People with this syndrome present an anamnesis of traumatic experiences and show dissociative symptoms. Since not all subjects who have been victims of trauma develop Borderline Personality Disorder, the emergence of this serious disease seems to have fragility of character as a predisposing condition. In fact, numerous studies show that subjects positive for a diagnosis of Borderline Personality Disorder had scores extremely high or extremely low on some temperamental dimensions (Harm Avoidance and Reward Dependence) and character dimensions (Cooperativeness and Self-Directedness). In a sample of 602 subjects with consecutive access to an Outpatient Mental Health Service, the presence of Borderline Personality Disorder was evaluated using the semi-structured interview for the DSM IV-TR personality disorders. In this population we assessed the presence of dissociative symptoms with the Dissociative Experiences Scale and the personality traits with the Temperament and Character Inventory developed by Cloninger. To assess the weight and the predictive value of these psychopathological dimensions in relation to the Borderline Personality Disorder diagnosis, a neural network statistical model called a "multilayer perceptron" was implemented. This model was developed with a dichotomous dependent variable, consisting of the presence or absence of the diagnosis of Borderline Personality Disorder, and with five covariates. The first is the taxonomic subscale of the Dissociative Experiences Scale; the others are temperamental and character traits: Novelty-Seeking, Harm-Avoidance, Self-Directedness and Cooperativeness. The statistical model, which proved satisfactory, showed a significant capacity (89%) to predict the presence of Borderline Personality Disorder. Furthermore, the dissociative symptoms seem to have a

  13. A Non-linear Predictive Model of Borderline Personality Disorder Based on Multilayer Perceptron.

    Science.gov (United States)

    Maldonato, Nelson M; Sperandeo, Raffaele; Moretto, Enrico; Dell'Orco, Silvia

    2018-01-01

    Borderline Personality Disorder is a serious mental disease, classified in Cluster B of the DSM IV-TR personality disorders. People with this syndrome present an anamnesis of traumatic experiences and show dissociative symptoms. Since not all subjects who have been victims of trauma develop Borderline Personality Disorder, the emergence of this serious disease seems to have fragility of character as a predisposing condition. In fact, numerous studies show that subjects positive for a diagnosis of Borderline Personality Disorder had scores extremely high or extremely low on some temperamental dimensions (Harm Avoidance and Reward Dependence) and character dimensions (Cooperativeness and Self-Directedness). In a sample of 602 subjects with consecutive access to an Outpatient Mental Health Service, the presence of Borderline Personality Disorder was evaluated using the semi-structured interview for the DSM IV-TR personality disorders. In this population we assessed the presence of dissociative symptoms with the Dissociative Experiences Scale and the personality traits with the Temperament and Character Inventory developed by Cloninger. To assess the weight and the predictive value of these psychopathological dimensions in relation to the Borderline Personality Disorder diagnosis, a neural network statistical model called a "multilayer perceptron" was implemented. This model was developed with a dichotomous dependent variable, consisting of the presence or absence of the diagnosis of Borderline Personality Disorder, and with five covariates. The first is the taxonomic subscale of the Dissociative Experiences Scale; the others are temperamental and character traits: Novelty-Seeking, Harm-Avoidance, Self-Directedness and Cooperativeness. The statistical model, which proved satisfactory, showed a significant capacity (89%) to predict the presence of Borderline Personality Disorder. Furthermore, the dissociative symptoms seem to have a greater influence than

  14. Neuro-Fuzzy Prediction of Cooperation Interaction Profile of Flexible Road Train Based on Hybrid Automaton Modeling

    Directory of Open Access Journals (Sweden)

    Banjanovic-Mehmedovic Lejla

    2016-01-01

    Full Text Available Accurate prediction of traffic information is important in many applications related to Intelligent Transport Systems (ITS), since it reduces the uncertainty of future traffic states and improves traffic mobility. Much research has been done on the prediction of traffic information such as speed, flow and travel time, with the most important work in the domain of cooperative intelligent transport systems (C-ITS). The goal of this paper is to introduce novel cooperation behaviour profile prediction through the example of the flexible Road Train's useful road-cooperation parameter, which contributes to the improvement of traffic mobility in Intelligent Transport Systems. This paper presents an approach towards the control and cooperation behaviour modelling of vehicles in the flexible Road Train based on a hybrid automaton and neuro-fuzzy (ANFIS) prediction of the cooperation profile of the flexible Road Train. The hybrid automaton takes into account the complex dynamics of each vehicle as well as the discrete cooperation approach. ANFIS is a particular class of the ANN family with attractive estimation and learning potential. To provide statistical analysis, the RMSE (root mean square error), the coefficient of determination (R2) and the Pearson coefficient (r) were utilized. The study results suggest that ANFIS is an efficient soft-computing methodology that can offer precise predictions of cooperative interactions between vehicles in a Road Train, which is useful for predicting mobility in Intelligent Transport Systems.

  15. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soil waters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  16. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    Directory of Open Access Journals (Sweden)

    WenBo Xiao

    Full Text Available In this article, we introduce an artificial neural network (ANN)-based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline silicon. The prediction results are very close to the experimental data, and are also influenced by the number of hidden neurons. The order of solar power output as influenced by external conditions, from smallest to biggest, is: multi-, mono-, and amor-crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For the multi- and amorphous crystalline cells, three or four hidden-layer units resulted in high correlation coefficients and low MSEs. For the mono-crystalline cell, the best results were achieved with eight hidden-layer units.

  17. A novel miRNA-based predictive model for biochemical failure following post-prostatectomy salvage radiation therapy.

    Directory of Open Access Journals (Sweden)

    Erica Hlavin Bell

    Full Text Available To develop microRNA (miRNA)-based predictive models for prostate cancer patients of (1) time to biochemical recurrence after radical prostatectomy and (2) biochemical recurrence after salvage radiation therapy following documented biochemical disease progression post-radical prostatectomy. Forty-three patients who had undergone salvage radiation therapy following biochemical failure after radical prostatectomy, with greater than 4 years of follow-up data, were identified. Formalin-fixed, paraffin-embedded tissue blocks were collected for all patients and total RNA was isolated from 1 mm cores enriched for tumor (>70%). Eight hundred miRNAs were analyzed simultaneously using the nCounter human miRNA v2 assay (NanoString Technologies; Seattle, WA). Univariate and multivariate Cox proportional hazards regression models as well as receiver operating characteristics were used to identify statistically significant miRNAs that were predictive of biochemical recurrence. Eighty-eight miRNAs were identified as significantly (p<0.05) associated with biochemical failure ≤36 months versus >36 months. Nine miRNAs were identified as significantly (p<0.05) associated by multivariate analysis with biochemical failure after salvage radiation therapy. A new predictive model for biochemical recurrence after salvage radiation therapy was developed, consisting of miR-4516 and miR-601 together with Gleason score and lymph node status. The area under the ROC curve (AUC) was improved to 0.83, compared to 0.66 for Gleason score and lymph node status alone. miRNA signatures can distinguish patients who fail soon after radical prostatectomy versus late failures, giving insight into which patients may need adjuvant therapy. Notably, two novel miRNAs (miR-4516 and miR-601) were identified that significantly improve prediction of biochemical failure post-salvage radiation therapy compared to clinico-histopathological factors, supporting the use of miRNAs within clinically used predictive models. Both findings warrant further

  18. Global sensitivity analysis for model-based prediction of oxidative micropollutant transformation during drinking water treatment.

    Science.gov (United States)

    Neumann, Marc B; Gujer, Willi; von Gunten, Urs

    2009-03-01

    This study quantifies the uncertainty involved in predicting micropollutant oxidation during drinking water ozonation in a pilot plant reactor. The analysis is conducted for geosmin, methyl tert-butyl ether (MTBE), isopropylmethoxypyrazine (IPMP), bezafibrate, beta-cyclocitral and ciprofloxacin. These compounds are representative of a wide range of substances, with second-order rate constants between 0.1 and 1.9×10⁴ M⁻¹ s⁻¹ for the reaction with ozone and between 2×10⁹ and 8×10⁹ M⁻¹ s⁻¹ for the reaction with OH radicals. Uncertainty ranges are derived for second-order rate constants, hydraulic parameters, flow and ozone concentration data, and water characteristic parameters. The uncertain model factors are propagated via Monte Carlo simulation and the resulting probability distributions of the relative residual micropollutant concentrations are assessed. The importance of factors in determining model output variance is quantified using Extended Fourier Amplitude Sensitivity Testing (Extended-FAST). For substances that react slowly with ozone (MTBE, IPMP, geosmin), the water-characteristic Rct value (ratio of ozone to OH-radical concentration) is the most influential factor, explaining 80% of the output variance. In the case of bezafibrate, the Rct value and the second-order rate constant for the reaction with ozone each contribute about 30% to the output variance. For beta-cyclocitral and ciprofloxacin (fast-reacting with ozone), the second-order rate constant for the reaction with ozone and the hydraulic model structure become the dominating sources of uncertainty.
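
The Monte Carlo propagation step can be sketched for a slow-reacting compound, where the residual fraction follows c/c0 = exp(-(kO3 + kOH·Rct)·CT) for ozone exposure CT; the kinetic values and lognormal uncertainty ranges below are illustrative assumptions, not the pilot plant parameters.

```python
import math
import random

random.seed(1)  # reproducible draws

def residual_fraction(k_o3, k_oh, rct, ct):
    """Relative residual concentration c/c0 after ozone exposure ct (M s)."""
    return math.exp(-(k_o3 + k_oh * rct) * ct)

# Propagate uncertainty in the slow ozone rate constant and the Rct value
# (the dominant uncertainty source for slow-reacting compounds) by sampling.
samples = []
for _ in range(10000):
    k_o3 = random.lognormvariate(math.log(10.0), 0.2)  # M^-1 s^-1, slow reactor
    rct = random.lognormvariate(math.log(1e-8), 0.4)   # couples OH pathway to O3
    k_oh = 5e9                                         # M^-1 s^-1, well known
    ct = 1e-2                                          # M s ozone exposure
    samples.append(residual_fraction(k_o3, k_oh, rct, ct))

samples.sort()
p5, p95 = samples[500], samples[9500]   # 90% uncertainty band on c/c0
```

The spread between `p5` and `p95` is the kind of output-distribution summary the study's sensitivity analysis then attributes to individual input factors.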

  19. Stochastic Model Predictive Fault Tolerant Control Based on Conditional Value at Risk for Wind Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Yun-Tao Shi

    2018-01-01

    Full Text Available Wind energy has been drawing considerable attention in recent years. However, due to the random nature of wind and the high failure rate of wind energy conversion systems (WECSs), how to implement fault-tolerant WECS control is becoming a significant issue. This paper addresses the fault-tolerant control problem of a WECS with a probable actuator fault. A new stochastic model predictive control (SMPC) fault-tolerant controller with a Conditional Value at Risk (CVaR) objective function is proposed in this paper. First, the Markov jump linear model is used to describe the WECS dynamics, which are affected by many stochastic factors, like the wind; the Markov jump linear model can precisely capture these random WECS properties. Second, scenario-based SMPC is used as the controller to address the control problem of the WECS. With this controller, all possible realizations of the disturbance in the prediction horizon are enumerated by scenario trees, so that an uncertain SMPC problem can be transformed into a deterministic model predictive control (MPC) problem. Finally, the CVaR objective function is adopted to improve the fault-tolerant control performance of the SMPC controller; CVaR provides a balance between performance and random failure risks of the system. The Min-Max performance index is introduced to compare fault-tolerant control performance with the proposed controller. The comparison results show that the proposed method has better fault-tolerant control performance.
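
The CVaR objective over scenario costs can be sketched directly: CVaR at level alpha is the mean cost of the worst (1 - alpha) fraction of scenarios. The scenario costs below are illustrative.

```python
# Conditional Value at Risk over a finite set of scenario costs, the kind
# of objective a scenario-tree SMPC controller would minimize.
def cvar(costs, alpha=0.9):
    """Average of the worst (1 - alpha) tail of the scenario costs."""
    ordered = sorted(costs, reverse=True)
    tail = max(1, int(round(len(ordered) * (1.0 - alpha))))
    return sum(ordered[:tail]) / tail

scenario_costs = [1.0, 1.2, 0.8, 5.0, 1.1, 0.9, 4.2, 1.0, 1.3, 0.7]
risk = cvar(scenario_costs, alpha=0.9)   # mean of the single worst scenario here
```

Minimizing CVaR rather than the plain scenario average penalizes rare but severe actuator-fault scenarios, trading some nominal performance for robustness.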

  20. Benzene patterns in different urban environments and a prediction model for benzene rates based on NOx values

    Science.gov (United States)

    Paz, Shlomit; Goldstein, Pavel; Kordova-Biezuner, Levana; Adler, Lea

    2017-04-01

Exposure to benzene has been associated with multiple severe impacts on health. This notwithstanding, at most monitoring stations, benzene is not monitored on a regular basis. The aims of the study were to compare benzene rates in different urban environments (a region with heavy traffic and an industrial region), to analyse the relationship between benzene and meteorological parameters in a Mediterranean climate type, to estimate the linkages between benzene and NOx and to suggest a prediction model for benzene rates based on NOx levels in order to contribute to a better estimation of benzene. Data were used from two different monitoring stations, located on the eastern Mediterranean coast: 1) a traffic monitoring station in Tel Aviv, Israel (TLV), located in an urban region with heavy traffic; 2) a general air quality monitoring station in Haifa Bay (HIB), located in Israel's main industrial region. At each station, hourly, daily, monthly, seasonal, and annual data of benzene, NOx, mean temperature, relative humidity, inversion level, and temperature gradient were analysed over three years: 2008, 2009, and 2010. A prediction model for benzene rates based on NOx levels (which are monitored regularly) was developed to contribute to a better estimation of benzene. The severity of benzene pollution was found to be considerably higher at the traffic monitoring station (TLV) than at the general air quality station (HIB), despite the location of the latter in an industrial area. Hourly, daily, monthly, seasonal, and annual patterns have been shown to coincide with anthropogenic activities (traffic), the day of the week, and atmospheric conditions. A strong correlation between NOx and benzene allowed the development of a prediction model for benzene rates based on NOx, the day of the week, and the month. The model succeeded in predicting the benzene values throughout the year (except for September).
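A benzene-from-NOx predictor of the kind described can be sketched as a least-squares regression on NOx plus month and weekday indicators; the synthetic data and the exact model form below are assumptions, not the station records:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic hourly records standing in for the monitoring-station data:
nox = rng.gamma(3.0, 20.0, size=n)                 # ppb
month = rng.integers(1, 13, size=n)
weekday = rng.integers(0, 7, size=n)
benzene = 0.02 * nox + 0.1 * np.sin(2 * np.pi * month / 12) + rng.normal(0, 0.1, n)

# Design matrix: NOx plus one-hot month and weekday terms:
X = np.column_stack([
    np.ones(n), nox,
    *(month == m for m in range(1, 13)),
    *(weekday == d for d in range(7)),
]).astype(float)

coef, *_ = np.linalg.lstsq(X, benzene, rcond=None)  # least-squares fit
pred = X @ coef
r2 = 1 - np.sum((benzene - pred) ** 2) / np.sum((benzene - benzene.mean()) ** 2)
print(f"in-sample R^2: {r2:.2f}")
```

In practice the fit would be validated on held-out months, mirroring the study's finding that September was poorly predicted.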

  1. Prediction of oral pharmacokinetics of cMet kinase inhibitors in humans: physiologically based pharmacokinetic model versus traditional one-compartment model.

    Science.gov (United States)

    Yamazaki, Shinji; Skaptason, Judith; Romero, David; Vekich, Sylvia; Jones, Hannah M; Tan, Weiwei; Wilner, Keith D; Koudriakova, Tatiana

    2011-03-01

    The objective of this study was to assess the physiologically based pharmacokinetic (PBPK) model for predicting plasma concentration-time profiles of orally available cMet kinase inhibitors, (R)-3-[1-(2,6-dichloro-3-fluoro-phenyl)-ethoxy]-5-(1-piperidin-4-yl-1H-pyrazol-4-yl)-pyridin-2-ylamine (PF02341066) and 2-[4-(3-quinolin-6-ylmethyl-3H-[1,2,3]triazolo[4,5-b]pyrazin-5-yl)-pyrazol-1-yl]-ethanol (PF04217903), in humans. The prediction accuracy of pharmacokinetics (PK) by PBPK modeling was compared with that of a traditional one-compartment PK model based on allometric scaling. The predicted clearance values from allometric scaling with the correction for the interspecies differences in protein binding were used as a representative comparison, which showed more accurate PK prediction in humans than the other methods. Overall PBPK modeling provided better prediction of the area under the plasma concentration-time curves for both PF02341066 (1.2-fold error) and PF04217903 (1.3-fold error) compared with the one-compartment PK model (1.8- and 1.9-fold errors, respectively). Of more importance, the simulated plasma concentration-time profiles of PF02341066 and PF04217903 by PBPK modeling seemed to be consistent with the observed profiles showing multiexponential declines, resulting in more accurate prediction of the apparent half-lives (t(1/2)): the observed and predicted t(1/2) values were, respectively, 10 and 12 h for PF02341066 and 6.6 and 6.3 h for PF04217903. The predicted t(1/2) values by the one-compartment PK model were 17 h for PF02341066 and 1.9 h for PF04217903. Therefore, PBPK modeling has the potential to be more useful and reliable for the PK prediction of PF02341066 and PF04217903 in humans than the traditional one-compartment PK model. In summary, the present study has shown examples to indicate that the PBPK model can be used to predict PK profiles in humans.
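A one-compartment oral PK model of the kind used as the comparison baseline can be sketched with the Bateman equation; the dose and rate constants below are hypothetical, not the paper's fitted parameters for PF02341066 or PF04217903:

```python
import numpy as np

def one_compartment_oral(t, dose, F, ka, ke, V):
    """Plasma concentration under first-order absorption and elimination
    (Bateman equation): C(t) = F*Dose*ka / (V*(ka-ke)) * (e^-ke*t - e^-ka*t)."""
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Hypothetical parameters for illustration:
t = np.linspace(0, 48, 481)             # h
conc = one_compartment_oral(t, dose=100.0, F=0.5, ka=1.0, ke=0.07, V=50.0)

t_half = np.log(2) / 0.07               # terminal half-life from ke
print(f"Cmax at t = {t[conc.argmax()]:.1f} h")
print(f"terminal t1/2 = {t_half:.1f} h")
```

A single elimination rate gives one exponential tail, which is exactly why this model misses the multiexponential declines that the PBPK simulations capture.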

  2. A model-based approach to predict muscle synergies using optimization: application to feedback control

    Directory of Open Access Journals (Sweden)

    Reza eSharif Razavian

    2015-10-01

Full Text Available This paper presents a new model-based method to define muscle synergies. Unlike the conventional factorization approach, which extracts synergies from electromyographic data, the proposed method employs a biomechanical model and formally defines the synergies as the solution of an optimal control problem. As a result, the number of required synergies is directly related to the dimensions of the operational space. The estimated synergies are posture-dependent, which correlate well with the results of standard factorization methods. Two examples are used to showcase this method: a two-dimensional forearm model, and a three-dimensional driver arm model. It has been shown here that the synergies need to be task-specific (i.e. they are defined for the specific operational spaces: the elbow angle and the steering wheel angle in the two systems). This functional definition of synergies results in a low-dimensional control space, in which every force in the operational space is accurately created by a unique combination of synergies. As such, there is no need for extra criteria (e.g., minimizing effort) in the process of motion control. This approach is motivated by the need for fast and bio-plausible feedback control of musculoskeletal systems, and can have important implications in engineering, motor control, and biomechanics.

  3. A model-based approach to predict muscle synergies using optimization: application to feedback control.

    Science.gov (United States)

    Sharif Razavian, Reza; Mehrabi, Naser; McPhee, John

    2015-01-01

    This paper presents a new model-based method to define muscle synergies. Unlike the conventional factorization approach, which extracts synergies from electromyographic data, the proposed method employs a biomechanical model and formally defines the synergies as the solution of an optimal control problem. As a result, the number of required synergies is directly related to the dimensions of the operational space. The estimated synergies are posture-dependent, which correlate well with the results of standard factorization methods. Two examples are used to showcase this method: a two-dimensional forearm model, and a three-dimensional driver arm model. It has been shown here that the synergies need to be task-specific (i.e., they are defined for the specific operational spaces: the elbow angle and the steering wheel angle in the two systems). This functional definition of synergies results in a low-dimensional control space, in which every force in the operational space is accurately created by a unique combination of synergies. As such, there is no need for extra criteria (e.g., minimizing effort) in the process of motion control. This approach is motivated by the need for fast and bio-plausible feedback control of musculoskeletal systems, and can have important implications in engineering, motor control, and biomechanics.
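The claim that every operational-space force corresponds to a unique synergy combination (with no extra optimality criteria) reduces, at a fixed posture, to solving a small square linear system; the synergy force basis below is invented purely for illustration:

```python
import numpy as np

# Hypothetical synergy force basis: each column is the operational-space force
# produced by one synergy at the current posture. A 2-D task space needs only
# two synergies, so the map is square and invertible:
S = np.array([[1.0, 0.2],
              [0.1, 0.9]])            # columns: synergy -> task-space force

target = np.array([0.5, 0.7])         # desired operational-space force
activation = np.linalg.solve(S, target)  # unique combination, no extra criteria
print("synergy activations:", np.round(activation, 3))
```

Contrast this with controlling individual muscles, where the map is wide (many muscles, few task dimensions) and an effort-minimization criterion is needed to pick one of infinitely many solutions.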

  4. A genetic-algorithm-based remnant grey prediction model for energy demand forecasting.

    Directory of Open Access Journals (Sweden)

    Yi-Chung Hu

Full Text Available Energy demand is an important economic index, and demand forecasting has played a significant role in drawing up energy development plans for cities or countries. As the use of large datasets and statistical assumptions is often impractical to forecast energy demand, the GM(1,1) model is commonly used because of its simplicity and ability to characterize an unknown system by using a limited number of data points to construct a time series model. This paper proposes a genetic-algorithm-based remnant GM(1,1) (GARGM(1,1)) with sign estimation to further improve the forecasting accuracy of the original GM(1,1) model. The distinctive feature of GARGM(1,1) is that it simultaneously optimizes the parameter specifications of the original and its residual models by using the GA. The results of experiments pertaining to a real case of energy demand in China showed that the proposed GARGM(1,1) outperforms other remnant GM(1,1) variants.

  5. A genetic-algorithm-based remnant grey prediction model for energy demand forecasting.

    Science.gov (United States)

    Hu, Yi-Chung

    2017-01-01

    Energy demand is an important economic index, and demand forecasting has played a significant role in drawing up energy development plans for cities or countries. As the use of large datasets and statistical assumptions is often impractical to forecast energy demand, the GM(1,1) model is commonly used because of its simplicity and ability to characterize an unknown system by using a limited number of data points to construct a time series model. This paper proposes a genetic-algorithm-based remnant GM(1,1) (GARGM(1,1)) with sign estimation to further improve the forecasting accuracy of the original GM(1,1) model. The distinctive feature of GARGM(1,1) is that it simultaneously optimizes the parameter specifications of the original and its residual models by using the GA. The results of experiments pertaining to a real case of energy demand in China showed that the proposed GARGM(1,1) outperforms other remnant GM(1,1) variants.
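The base GM(1,1) construction (accumulated series, background values, least-squares estimation of the development coefficients) can be sketched in a few lines; the demand series below is illustrative, and the GA-optimized remnant model of GARGM(1,1) is omitted:

```python
import numpy as np

def gm11_forecast(x0, horizon=1):
    """Classic GM(1,1) grey forecast from a short time series x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)        # back to the original series scale

# Hypothetical energy-demand series (near-exponential, as grey models assume):
demand = [102.0, 110.5, 119.8, 130.1, 141.2]
fit = gm11_forecast(demand, horizon=2)
print("fitted values + 2-step forecast:", np.round(fit, 1))
```

A remnant variant would then fit a second GM(1,1) to the residuals `fit[:5] - demand` (with estimated signs) and add it back; that is the part GARGM(1,1) tunes with the genetic algorithm.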

  6. A Coupled Transport and Chemical Model for Durability Predictions of Cement Based Materials

    DEFF Research Database (Denmark)

    Jensen, Mads Mønster; Johannesson, Björn; Geiker, Mette Rica

for the multi-physics durability model, established in this work, is an extended version of the Poisson-Nernst-Planck system of equations. The extension of the Poisson-Nernst-Planck system includes a two-phase description of the moisture transport as well as chemical interactions. The vapor and liquid contents...... are conducted. The theoretical background for the model is to a large extent based on the hybrid mixture theory, which is a modern continuum approach. The hybrid mixture theory description considers the individual phases and species, building up the whole mixture, with individual differential equations....... The differential equations include exchange terms between the phases and species, accounting for the exchange of physical quantities which are essential for a stringent physical description of concrete. Balance postulates for mass, momentum and energy, together with an entropy inequality, are studied within...

  7. Prediction of Adequate Prenatal Care Utilization Based on the Extended Parallel Process Model.

    Science.gov (United States)

    Hajian, Sepideh; Imani, Fatemeh; Riazi, Hedyeh; Salmani, Fatemeh

    2017-10-01

    Pregnancy complications are one of the major public health concerns. One of the main causes of preventable complications is the absence of or inadequate provision of prenatal care. The present study was conducted to investigate whether Extended Parallel Process Model's constructs can predict the utilization of prenatal care services. The present longitudinal prospective study was conducted on 192 pregnant women selected through the multi-stage sampling of health facilities in Qeshm, Hormozgan province, from April to June 2015. Participants were followed up from the first half of pregnancy until their childbirth to assess adequate or inadequate/non-utilization of prenatal care services. Data were collected using the structured Risk Behavior Diagnosis Scale. The analysis of the data was carried out in SPSS-22 using one-way ANOVA, linear regression and logistic regression analysis. The level of significance was set at 0.05. Totally, 178 pregnant women with a mean age of 25.31±5.42 completed the study. Perceived self-efficacy (OR=25.23; Pprenatal care. Husband's occupation in the labor market (OR=0.43; P=0.02), unwanted pregnancy (OR=0.352; Pcare for the minors or elderly at home (OR=0.35; P=0.045) were associated with lower odds of receiving prenatal care. The model showed that when perceived efficacy of the prenatal care services overcame the perceived threat, the likelihood of prenatal care usage will increase. This study identified some modifiable factors associated with prenatal care usage by women, providing key targets for appropriate clinical interventions.

  8. Effects of lightning on trees: A predictive model based on in situ electrical resistivity.

    Science.gov (United States)

    Gora, Evan M; Bitzer, Phillip M; Burchfield, Jeffrey C; Schnitzer, Stefan A; Yanoviak, Stephen P

    2017-10-01

The effects of lightning on trees range from catastrophic death to the absence of observable damage. Such differences may be predictable among tree species, and more generally among plant life history strategies and growth forms. We used field-collected electrical resistivity data in temperate and tropical forests to model how the distribution of power from a lightning discharge varies with tree size and identity, and with the presence of lianas. Estimated heating density (heat generated per volume of tree tissue) and maximum power (maximum rate of heating) from a standardized lightning discharge differed by 300% among tree species. Tree size and morphology also were important; the heating density of a hypothetical 10 m tall Alseis blackiana was 49 times greater than for a 30 m tall conspecific, and 127 times greater than for a 30 m tall Dipteryx panamensis. Lianas may protect trees from lightning by conducting electric current; estimated heating and maximum power were reduced by 60% (±7.1%) for trees with one liana and by 87% (±4.0%) for trees with three lianas. This study provides the first quantitative mechanism describing how differences among trees can influence lightning-tree interactions, and how lianas can serve as natural lightning rods for trees.
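Electrically, the liana-as-lightning-rod mechanism is a current divider: conductors in parallel with the trunk carry part of the discharge, and trunk power falls with the square of its current share (P = I²R). The resistance values below are hypothetical, chosen only to illustrate the trend, not taken from the field resistivity data:

```python
def trunk_power_ratio(r_tree, r_liana, n_lianas):
    """Power dissipated in the trunk relative to the no-liana case, for a
    fixed injected discharge current. Conductances of parallel paths add;
    the trunk carries a current share proportional to its conductance."""
    g_tree = 1.0 / r_tree
    g_total = g_tree + n_lianas / r_liana
    i_frac = g_tree / g_total        # current divider
    return i_frac ** 2               # P = I^2 * R, same trunk R, reduced I

# Hypothetical resistances (ohms): trunk 1000, each liana 2000.
base = trunk_power_ratio(1000.0, 2000.0, 0)
one = trunk_power_ratio(1000.0, 2000.0, 1)
three = trunk_power_ratio(1000.0, 2000.0, 3)
print(f"relative trunk power: 0 lianas {base:.2f}, 1 liana {one:.2f}, 3 lianas {three:.2f}")
```

With these invented values, one liana cuts trunk power to about 44% and three lianas to 16%, qualitatively matching the reductions reported in the abstract.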

  9. A Predictive Model for Guillain-Barré Syndrome Based on Single Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Juana Canul-Reich

    2017-01-01

Full Text Available Background. Guillain-Barré Syndrome (GBS) is a potentially fatal autoimmune neurological disorder. The severity varies among the four main subtypes, named as Acute Inflammatory Demyelinating Polyneuropathy (AIDP), Acute Motor Axonal Neuropathy (AMAN), Acute Motor Sensory Axonal Neuropathy (AMSAN), and Miller-Fisher Syndrome (MF). A proper subtype identification may help to promptly carry out adequate treatment in patients. Method. We perform experiments with 15 single classifiers in two scenarios: four subtypes’ classification and One versus All (OvA) classification. We used a dataset with the 16 relevant features identified in a previous phase. Performance evaluation is made by 10-fold cross validation (10-FCV). Typical classification performance measures are used. A statistical test is conducted in order to identify the top five classifiers for each case. Results. In four GBS subtypes’ classification, half of the classifiers investigated in this study obtained an average accuracy above 0.90. In OvA classification, the two subtypes with the largest number of instances resulted in the best classification results. Conclusions. This study represents a comprehensive effort on creating a predictive model for Guillain-Barré Syndrome subtypes. Also, the analysis performed in this work provides insight about the best single classifiers for each classification case.
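The 10-fold cross-validation protocol described above can be sketched with a simple stand-in classifier (nearest centroid) on synthetic 16-feature, 4-class data; none of this reproduces the GBS dataset or the 15 classifiers compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for a 16-feature, 4-class dataset (30 samples per class):
n_per, d = 30, 16
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per, d)) for c in range(4)])
y = np.repeat(np.arange(4), n_per)

def nearest_centroid_predict(Xtr, ytr, Xte):
    """Assign each test sample to the class with the closest training centroid."""
    centroids = np.array([Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)])
    d2 = ((Xte[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# 10-fold cross-validation: shuffle, split into 10 folds, hold each one out:
idx = rng.permutation(len(y))
folds = np.array_split(idx, 10)
accs = []
for f in folds:
    mask = np.ones(len(y), bool)
    mask[f] = False
    pred = nearest_centroid_predict(X[mask], y[mask], X[~mask])
    accs.append((pred == y[~mask]).mean())
print(f"10-FCV accuracy: {np.mean(accs):.2f}")
```

The OvA scenario would simply relabel `y` as one-vs-rest before the same loop, one binary problem per subtype.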

  10. Predictive mapping of soil organic carbon in wet cultivated lands using classification-tree based models

    DEFF Research Database (Denmark)

    Kheir, Rania Bou; Greve, Mogens Humlekrog; Bøcher, Peder Klith

    2010-01-01

) selected pairs of parameters, (vi) soil type, parent material and landscape type only, and (vii) the parameters having a high impact on SOC distribution in built pruned trees. The best constructed classification tree models (three in number) with the lowest misclassification error (ME......) and the lowest number of nodes (N) as well are: (i) the tree (T1) combining all of the parameters (ME = 29.5%; N = 54); (ii) the tree (T2) based on the parent material, soil type and landscape type (ME = 31.5%; N = 14); and (iii) the tree (T3) constructed using parent material, soil type, landscape type, elevation...

  11. Validation of prediction model for successful vaginal birth after Cesarean delivery based on sonographic assessment of hysterotomy scar.

    Science.gov (United States)

    Baranov, A; Salvesen, K Å; Vikhareva, O

    2018-02-01

To validate a prediction model for successful vaginal birth after Cesarean delivery (VBAC) based on sonographic assessment of the hysterotomy scar, in a Swedish population. Data were collected from a prospective cohort study. We recruited non-pregnant women aged 18-35 years who had undergone one previous low-transverse Cesarean delivery at ≥ 37 gestational weeks and had had no other uterine surgery. Participants who subsequently became pregnant underwent transvaginal ultrasound examination of the Cesarean hysterotomy scar at 11 + 0 to 13 + 6 and at 19 + 0 to 21 + 6 gestational weeks. Thickness of the myometrium at the thinnest part of the scar area was measured. After delivery, information on pregnancy outcome was retrieved from hospital records. Individual probabilities of successful VBAC were calculated using a previously published model. Predicted individual probabilities were divided into deciles. For each decile, observed VBAC rates were calculated. To assess the accuracy of the prediction model, receiver-operating characteristics curves were constructed and the areas under the curves (AUC) were calculated. Complete sonographic data were available for 120 women. Eighty (67%) women underwent trial of labor after Cesarean delivery (TOLAC) with VBAC occurring in 70 (88%) cases. The scar was visible in all 80 women at the first-trimester scan and in 54 (68%) women at the second-trimester scan. AUC was 0.44 (95% CI, 0.28-0.60) among all women who underwent TOLAC and 0.51 (95% CI, 0.32-0.71) among those with the scar visible sonographically at both ultrasound examinations. The prediction model demonstrated poor accuracy for prediction of successful VBAC in our Swedish population. Copyright © 2017 ISUOG. Published by John Wiley & Sons Ltd.
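The AUC used to assess the model can be computed directly from predicted probabilities via the Mann-Whitney statistic (the probability that a random positive outranks a random negative); the outcome and probability vectors below are invented for illustration, not the study's data:

```python
import numpy as np

def auc_score(y_true, y_prob):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = y_prob[y_true == 1]
    neg = y_prob[y_true == 0]
    # fraction of (positive, negative) pairs ranked correctly, ties count half:
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical VBAC outcomes vs. model probabilities:
y = np.array([1, 1, 1, 1, 0, 1, 0, 1, 0, 1])
p = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.45, 0.4, 0.35, 0.3])
print(f"AUC = {auc_score(y, p):.2f}")
```

An AUC near 0.5, as reported in the abstract (0.44-0.51), means the model ranks successful and failed TOLAC cases no better than chance.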

  12. Data-Based Predictive Control with Multirate Prediction Step

    Science.gov (United States)

    Barlow, Jonathan S.

    2010-01-01

Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes the current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
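The core step of data-based predictive control, deriving a predictor from input-output data rather than from a known model, can be sketched as a least-squares fit of an ARX-style predictor; the second-order plant and window length below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Generate input-output data from an unknown (to the controller) stable plant:
n = 600
u = rng.normal(size=n)
y = np.zeros(n)
for k in range(2, n):
    y[k] = 1.2 * y[k - 1] - 0.4 * y[k - 2] + 0.5 * u[k - 1]

# Fit a one-step predictor from windows of past inputs and outputs:
p = 4  # window length (past samples used for prediction)
rows = [np.concatenate([y[k - p:k], u[k - p:k]]) for k in range(p, n)]
Phi = np.array(rows)
theta, *_ = np.linalg.lstsq(Phi, y[p:n], rcond=None)
pred = Phi @ theta
err = np.max(np.abs(pred - y[p:n]))
print(f"max one-step prediction error: {err:.2e}")
```

A multi-step receding-horizon predictor is built by iterating or stacking such maps; the paper's multirate variant additionally samples parts of the prediction window at different rates.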

  13. Comprehensive model-based prediction of micropollutants from diffuse sources in the Swiss river network

    Science.gov (United States)

    Strahm, Ivo; Munz, Nicole; Braun, Christian; Gälli, René; Leu, Christian; Stamm, Christian

    2014-05-01

Water quality in the Swiss river network is affected by many micropollutants from a variety of diffuse sources. This study compares, for the first time, in a comprehensive manner the diffuse sources and the substance groups that contribute the most to water contamination in Swiss streams, and highlights the major regions for water pollution. For this, a simple but comprehensive model was developed to estimate emission from diffuse sources for the entire Swiss river network of 65 000 km. Based on emission factors, the model calculates catchment-specific losses to streams for more than 15 diffuse sources (such as crop lands, grassland, vineyards, fruit orchards, roads, railways, facades, roofs, green space in urban areas, landfills, etc.) and more than 130 different substances from 5 different substance groups (pesticides, biocides, heavy metals, human drugs, animal drugs). For more than 180 000 stream sections, estimates of mean annual pollutant loads and mean annual concentration levels were modeled. These data were validated against a set of monitoring data and evaluated based on annual average environmental quality standards (AA-EQS). Model validation showed that the estimated mean annual concentration levels are within the range of measured data. Therefore, the simulations were considered adequately robust for identifying the major sources of diffuse pollution. The analysis depicted that in Switzerland widespread pollution of streams can be expected. Along more than 18 000 km of the river network, one or more simulated substances has a concentration exceeding the AA-EQS. In some single stream sections, more than 50 different substances exceed their AA-EQS. Moreover, the simulations showed that in two-thirds of small streams (Strahler order 1 and 2) at least one AA-EQS is always exceeded. The highest numbers of substances exceeding the AA-EQS are in areas with large fractions of arable cropping, vineyards and fruit orchards. Urban areas are also of concern even without considering

  14. Prediction models in the design of neural network based ECG classifiers: A neural network and genetic programming approach

    Directory of Open Access Journals (Sweden)

    Smith Ann E

    2002-01-01

Full Text Available Abstract Background Classification of the electrocardiogram using Neural Networks has become a widely used method in recent years. The efficiency of these classifiers depends upon a number of factors including network training. Unfortunately, there is a shortage of evidence available to enable specific design choices to be made and as a consequence, many designs are made on the basis of trial and error. In this study we develop prediction models to indicate the point at which training should stop for Neural Network based Electrocardiogram classifiers in order to ensure maximum generalisation. Methods Two prediction models have been presented; one based on Neural Networks and the other on Genetic Programming. The inputs to the models were 5 variable training parameters and the output indicated the point at which training should stop. Training and testing of the models was based on the results from 44 previously developed bi-group Neural Network classifiers, discriminating between Anterior Myocardial Infarction and normal patients. Results Our results show that both approaches provide close fits to the training data; p = 0.627 and p = 0.304 for the Neural Network and Genetic Programming methods respectively. For unseen data, the Neural Network exhibited no significant differences between actual and predicted outputs (p = 0.306), while the Genetic Programming method showed a marginally significant difference (p = 0.047). Conclusions The approaches provide reverse engineering solutions to the development of Neural Network based Electrocardiogram classifiers. That is, given the network design and architecture, an indication can be given as to when training should stop to obtain maximum network generalisation.

  15. L70 life prediction for solid state lighting using Kalman Filter and Extended Kalman Filter based models

    Energy Technology Data Exchange (ETDEWEB)

    Lall, Pradeep; Wei, Junchao; Davis, Lynn

    2013-08-08

Solid-state lighting (SSL) luminaires containing light emitting diodes (LEDs) have the potential of seeing excessive temperatures when being transported across country or being stored in non-climate controlled warehouses. They are also being used in outdoor applications in desert environments that see little or no humidity but will experience extremely high temperatures during the day. This makes it important to increase our understanding of what effects high temperature exposure for a prolonged period of time will have on the usability and survivability of these devices. Traditional light sources “burn out” at end-of-life. For an incandescent bulb, the lamp life is defined by B50 life. However, the LEDs have no filament to “burn”. The LEDs continually degrade and the light output decreases eventually below useful levels causing failure. Presently, the TM-21 test standard is used to predict the L70 life of LEDs from LM-80 test data. Several failure mechanisms may be active in a LED at a single time causing lumen depreciation. The underlying TM-21 Model may not capture the failure physics in presence of multiple failure mechanisms. Correlation of lumen maintenance with underlying physics of degradation at system-level is needed. In this paper, Kalman Filter (KF) and Extended Kalman Filters (EKF) have been used to develop a 70-percent Lumen Maintenance Life Prediction Model for LEDs used in SSL luminaires. Ten-thousand hour LM-80 test data for various LEDs have been used for model development. System state at each future time has been computed based on the state space at preceding time step, system dynamics matrix, control vector, control matrix, measurement matrix, measured vector, process noise and measurement noise. The future state of the lumen depreciation has been estimated based on a second order Kalman Filter model and a Bayesian Framework. The measured state variable has been related to the underlying damage using physics-based models.
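A Kalman-filter lumen-maintenance estimate of the kind described can be sketched with a two-state (level, degradation rate) model, extrapolating the filtered trend to the 70% threshold. The synthetic decay data and noise covariances below are assumptions, not LM-80 measurements:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic LM-80-style lumen maintenance data (exponential decay + noise):
dt = 500.0                                          # h between measurements
t = np.arange(0, 10_000, dt)
meas = 100.0 * np.exp(-3.0e-5 * t) + rng.normal(0, 0.3, len(t))

# Constant-rate state-space model: state = [lumen level, slope per step]
F = np.array([[1.0, 1.0], [0.0, 1.0]])              # dynamics
H = np.array([[1.0, 0.0]])                          # we observe the level only
Q = np.diag([1e-3, 1e-4])                           # assumed process noise
R = np.array([[0.09]])                              # assumed measurement noise
x = np.array([100.0, 0.0])
P = np.eye(2)

for z in meas:
    x = F @ x                                       # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)             # update
    P = (np.eye(2) - K @ H) @ P

# Extrapolate the filtered trend to the 70% threshold (L70):
steps_to_L70 = (70.0 - x[0]) / x[1]
L70_hours = t[-1] + steps_to_L70 * dt
print(f"estimated L70 = {L70_hours:,.0f} h")
```

The paper's EKF and physics-of-failure mapping replace this linear extrapolation with nonlinear damage models; the filtering structure is the same.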

  16. Predicting the planform configuration of the braided Toklat River, AK with a suite of rule-based models

    Science.gov (United States)

    Podolak, Charles J.

    2013-01-01

    An ensemble of rule-based models was constructed to assess possible future braided river planform configurations for the Toklat River in Denali National Park and Preserve, Alaska. This approach combined an analysis of large-scale influences on stability with several reduced-complexity models to produce the predictions at a practical level for managers concerned about the persistence of bank erosion while acknowledging the great uncertainty in any landscape prediction. First, a model of confluence angles reproduced observed angles of a major confluence, but showed limited susceptibility to a major rearrangement of the channel planform downstream. Second, a probabilistic map of channel locations was created with a two-parameter channel avulsion model. The predicted channel belt location was concentrated in the same area as the current channel belt. Finally, a suite of valley-scale channel and braid plain characteristics were extracted from a light detection and ranging (LiDAR)-derived surface. The characteristics demonstrated large-scale stabilizing topographic influences on channel planform. The combination of independent analyses increased confidence in the conclusion that the Toklat River braided planform is a dynamically stable system due to large and persistent valley-scale influences, and that a range of avulsive perturbations are likely to result in a relatively unchanged planform configuration in the short term.

  17. Day of the year-based prediction of horizontal global solar radiation by a neural network auto-regressive model

    Science.gov (United States)

    Gani, Abdullah; Mohammadi, Kasra; Shamshirband, Shahaboddin; Khorasanizadeh, Hossein; Seyed Danesh, Amir; Piri, Jamshid; Ismail, Zuraini; Zamani, Mazdak

    2016-08-01

The availability of accurate solar radiation data is essential for designing as well as simulating solar energy systems. In this study, by employing long-term daily measured solar data, a neural network auto-regressive model with exogenous inputs (NN-ARX) is applied to predict daily horizontal global solar radiation using day of the year as the sole input. The prime aim is to provide a convenient and precise way to rapidly predict daily global solar radiation for stations with such observations, and their immediate surroundings, without utilizing any meteorological inputs. To fulfill this, seven Iranian cities with different geographical locations and solar radiation characteristics are considered as case studies. The performance of NN-ARX is compared against the adaptive neuro-fuzzy inference system (ANFIS). The achieved results show that day of the year-based prediction of daily global solar radiation by both NN-ARX and ANFIS models would be highly feasible owing to the accurate predictions attained. Nevertheless, the statistical analysis indicates the superiority of NN-ARX over ANFIS. In fact, the NN-ARX model represents high potential to follow the measured data favorably for all cities. For the considered cities, the attained statistical indicators of mean absolute bias error, root mean square error, and coefficient of determination for the NN-ARX models are in the ranges of 0.44-0.61 kWh/m², 0.50-0.71 kWh/m², and 0.78-0.91, respectively.
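A day-of-the-year-only predictor can be sketched as harmonic regression (sine/cosine of day of year) standing in for the NN-ARX model; the radiation series below is synthetic, not a measured Iranian station record:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic daily global radiation with an annual cycle (kWh/m^2):
doy = np.arange(1, 366)
radiation = 5.0 + 2.0 * np.sin(2 * np.pi * (doy - 80) / 365) + rng.normal(0, 0.4, len(doy))

def features(d):
    """Day of the year as the sole input, encoded with two harmonics."""
    w = 2 * np.pi * d / 365
    return np.column_stack([np.ones_like(d, dtype=float),
                            np.sin(w), np.cos(w), np.sin(2 * w), np.cos(2 * w)])

coef, *_ = np.linalg.lstsq(features(doy), radiation, rcond=None)
pred = features(doy) @ coef
rmse = np.sqrt(np.mean((radiation - pred) ** 2))
print(f"RMSE = {rmse:.2f} kWh/m^2")
```

The appeal of the approach in the abstract is exactly this: once fitted to a station's history, prediction needs only the calendar date.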

  18. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2017-04-01

The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training on days with similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset has been selected by matching the present day's conditions to the archived dataset; the days with the most similar conditions were identified and used for training the model. The coefficients thus generated were used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. European Centre for Medium-Range Weather Forecasts (ECMWF), United Kingdom Meteorological Office (UKMO), National Centers for Environmental Prediction (NCEP) and China Meteorological Administration (CMA), have been used for developing the SMME forecasts. The forecasts of 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all the pentads, viz. 1-5, 6-10 and 11-15 days.
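The similarity-based training idea can be sketched directly: find the k archived days whose model forecasts most resemble today's, fit combination weights on those days only, and apply the weights to today's forecasts. Everything below (the archive, noise levels, and k) is illustrative, not real GCM output:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic archive: forecasts from 4 models plus the observed rainfall (mm):
n_days, n_models = 400, 4
truth = rng.gamma(2.0, 5.0, n_days)
forecasts = truth[:, None] + rng.normal(0, [2.0, 3.0, 4.0, 5.0], (n_days, n_models))

def smme_forecast(today_fc, archive_fc, archive_obs, k=50):
    """Similarity-based MME: train combination weights only on the k archived
    days whose forecasts most resemble today's, then combine today's forecasts."""
    dist = np.linalg.norm(archive_fc - today_fc, axis=1)
    nearest = np.argsort(dist)[:k]                   # most similar conditions
    X = np.column_stack([np.ones(k), archive_fc[nearest]])
    w, *_ = np.linalg.lstsq(X, archive_obs[nearest], rcond=None)
    return w[0] + today_fc @ w[1:]

today = forecasts[-1]
pred = smme_forecast(today, forecasts[:-1], truth[:-1])
print(f"SMME forecast: {pred:.1f} mm (raw ensemble mean {today.mean():.1f} mm)")
```

The conventional MME differs only in the training set: it would fit the weights on the most recent days in sequence rather than the most similar ones.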

  20. Predicting geographic distributions of Phacellodomus species (Aves: Furnariidae) in South America based on ecological niche modeling

    Directory of Open Access Journals (Sweden)

    Maria da Salete Gurgel Costa

    2014-08-01

    Phacellodomus Reichenbach, 1853, comprises nine species of furnariids that occur in South America, in open and generally dry areas. This study estimated the geographic distributions of Phacellodomus species in South America by ecological niche modeling. Applying the maximum entropy method, models were produced for eight species based on six climatic variables and 949 occurrence records. Since the highest climatic suitability for Phacellodomus species was estimated in open and dry areas, the Amazon rainforest is not very suitable for these species. Annual precipitation and the minimum temperature of the coldest month are the variables that most influence the models. Phacellodomus species occurred in 35 ecoregions of South America; the Chaco and the Uruguayan savannas were the ecoregions with the highest number of species. Despite the overall association of Phacellodomus species with dry areas, species such as P. ruber, P. rufifrons, P. ferrugineigula and P. erythrophthalmus occurred in wet forest and wetland ecoregions.
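
    A toy presence-background classifier in the spirit of niche modeling, not the MaxEnt software the study used: logistic regression on two climate variables, fitted by plain gradient ascent. The data, preferences, and coefficients are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
precip = rng.uniform(0, 1, n)      # scaled annual precipitation
tmin = rng.uniform(0, 1, n)        # scaled min temperature of coldest month
# Hypothetical species "prefers" dry sites -> suitability drops with precip.
p_true = 1 / (1 + np.exp(-(3.0 - 6.0 * precip + 2.0 * tmin)))
y = rng.random(n) < p_true         # presence (True) vs background (False)

X = np.column_stack([np.ones(n), precip, tmin])
w = np.zeros(3)
for _ in range(2000):              # gradient ascent on the log-likelihood
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / n

# Predicted suitability at a dry site vs a wet site (same temperature).
suit_dry = 1 / (1 + np.exp(-np.array([1.0, 0.1, 0.5]) @ w))
suit_wet = 1 / (1 + np.exp(-np.array([1.0, 0.9, 0.5]) @ w))
print(suit_dry > suit_wet)         # dry sites should score higher
```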

  1. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  2. Operational modal analysis based prediction of actual stress in an offshore structural model

    DEFF Research Database (Denmark)

    Glindtvad Tarpø, Marius; Silva Nabuco, Bruna; Skafte, Anders

    2017-01-01

    In this paper the accuracy of predicting stresses directly from the operational responses is investigated. The basic approach to the stress prediction is to perform an operational modal analysis (OMA) and then applying a modal filtering to the operating response, so that the modal coordinates...

  4. Predictive Model Based Battery Constraints for Electric Motor Control within EV Powertrains

    NARCIS (Netherlands)

    Roşca, B.; Wilkins, S.; Jacob, J.; Hoedemaekers, E.R.G.; Hoek, S.P. van den

    2014-01-01

    This paper presents a method of predicting the maximum power capability of a Li-Ion battery, to be used for electric motor control within automotive powertrains. As maximum power is highly dependent on battery state, the method consists of a pack level state observer coupled with a predictive

  5. A model-based analysis of the predictive performance of different renal function markers for cefepime clearance in the ICU.

    Science.gov (United States)

    Jonckheere, Stijn; De Neve, Nikolaas; De Beenhouwer, Hans; Berth, Mario; Vermeulen, An; Van Bocxlaer, Jan; Colin, Pieter

    2016-09-01

    Several population pharmacokinetic models for cefepime in critically ill patients have been described, which all indicate that variability in renal clearance is the main determinant of the observed variability in exposure. The main objective of this study was to determine which renal marker best predicts cefepime clearance. A pharmacokinetic model was developed using NONMEM based on 208 plasma and 51 urine samples from 20 ICU patients during a median follow-up of 3 days. Four serum-based kidney markers (creatinine, cystatin C, urea and uromodulin) and two urinary markers [measured creatinine clearance (CLCR) and kidney injury molecule-1] were evaluated as covariates in the model. A two-compartment model incorporating a renal and non-renal clearance component along with an additional term describing haemodialysis clearance provided an adequate description of the data. The Cockcroft-Gault formula was the best predictor for renal cefepime clearance. Compared with the base model without covariates, the objective function value decreased from 1971.7 to 1948.1, the median absolute prediction error from 42.4% to 29.9% and the between-subject variability in renal cefepime clearance from 135% to 50%. Other creatinine- and cystatin C-based formulae and measured CLCR performed similarly. Monte Carlo simulations using the Sanford guide dose recommendations indicated an insufficient dose reduction in patients with a decreased kidney function, leading to potentially toxic levels. The Cockcroft-Gault formula was the best predictor for cefepime clearance in critically ill patients, although other creatinine- and cystatin C-based formulae and measured CLCR performed similarly. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
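
    The Cockcroft-Gault estimate singled out above has a simple closed form; a minimal helper (age in years, weight in kg, serum creatinine in mg/dL, result in mL/min):

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl  # 0.85 correction factor for women

# 60-year-old, 70 kg male with serum creatinine 1.0 mg/dL:
print(round(cockcroft_gault(60, 70, 1.0), 1))  # -> 77.8
```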

  6. Prediction of Above-elbow Motions in Amputees, based on Electromyographic (EMG) Signals, Using Nonlinear Autoregressive Exogenous (NARX) Model

    Directory of Open Access Journals (Sweden)

    Ali Akbar Akbari

    2014-08-01

    Introduction: In order to improve the quality of life of amputees, biomechatronic researchers and biomedical engineers have been trying to use a combination of various techniques to provide suitable rehabilitation systems. Diverse biomedical signals, acquired from a specialized organ or cell system, e.g., the nervous system, are the driving force for the whole system. Electromyography (EMG), as an experimental technique, is concerned with the development, recording, and analysis of myoelectric signals. EMG-based research is making progress in the development of simple, robust, user-friendly, and efficient interface devices for amputees. Materials and Methods: Prediction of muscular activity and motion patterns is a common, practical problem in prosthetic organs. Recurrent neural network (RNN) models are not only applicable for the prediction of time series, but are also commonly used for the control of dynamical systems. The prediction can be assimilated to identification of a dynamic process. An architectural approach to RNNs with embedded memory is the Nonlinear Autoregressive Exogenous (NARX) model, which seems to be suitable for dynamic system applications. Results: The performance of the NARX model is verified for several chaotic time series, which are applied as input for the neural network. The results showed that NARX has the potential to capture the model of nonlinear dynamic systems. The R-value and MSE are  and  , respectively. Conclusion: EMG signals of the deltoid and pectoralis major muscles are the inputs of the NARX network. It is possible to obtain EMG signals of muscles in other arm motions to predict the lost functions of the absent arm in above-elbow amputees using the NARX model.
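
    A linear ARX stand-in for the NARX structure, on surrogate signals: the next motion sample is predicted from lagged motion (autoregressive) and lagged EMG (exogenous) values. A NARX network replaces this linear map with a recurrent neural net; the signals and lag order here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(500)
emg = np.abs(np.sin(0.05 * t)) + 0.05 * rng.normal(size=t.size)  # surrogate EMG envelope
motion = np.convolve(emg, np.ones(10) / 10, mode="same")          # surrogate joint angle

lags = 3
rows = []
for i in range(lags, t.size - 1):
    # regressors: last (lags+1) motion samples and last (lags+1) EMG samples
    rows.append(np.r_[motion[i - lags:i + 1], emg[i - lags:i + 1]])
X = np.array(rows)
y = motion[lags + 1:]                      # one-step-ahead motion target
w, *_ = np.linalg.lstsq(X, y, rcond=None)

rmse = np.sqrt(np.mean((y - X @ w) ** 2))
print(round(float(rmse), 4))
```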

  7. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
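
    The single-step maximum likelihood criterion can be sketched concretely: a Kalman filter's one-step prediction errors (innovations) define the Gaussian log-likelihood that a prediction-error method maximises over model parameters. The scalar model and numbers below are illustrative only.

```python
import numpy as np

def kalman_pe_loglik(y, a, c, q, r):
    """Gaussian log-likelihood from scalar Kalman one-step prediction errors."""
    x, P = 0.0, 1.0                       # prior state estimate and variance
    ll = 0.0
    for yk in y:
        e = yk - c * x                    # one-step prediction error (innovation)
        s = c * P * c + r                 # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + e * e / s)
        k = P * c / s                     # measurement update
        x, P = x + k * e, (1 - k * c) * P
        x, P = a * x, a * P * a + q       # time update
    return ll

# Simulate x(t+1) = 0.9 x(t) + w, y(t) = x(t) + v, and compare likelihoods.
rng = np.random.default_rng(4)
x, ys = 0.0, []
for _ in range(300):
    x = 0.9 * x + rng.normal(0, 0.5)
    ys.append(x + rng.normal(0, 0.5))
ll_true = kalman_pe_loglik(np.array(ys), 0.9, 1.0, 0.25, 0.25)
ll_bad = kalman_pe_loglik(np.array(ys), 0.1, 1.0, 0.25, 0.25)
print(ll_true > ll_bad)                   # the true dynamics should score higher
```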

  8. Applying Regression Models with Mixed Frequency Data in Modeling and Prediction of Iran's Wheat Import Value (Generalized OLS-based ARDL Approach)

    Directory of Open Access Journals (Sweden)

    mitra jalerajabi

    2014-10-01

    Due to the importance of import management, this study applies a generalized ARDL approach to estimate a MIDAS regression for wheat import value and to compare the accuracy of its forecasts with those computed by the regression with frequency-adjusted data. Mixed-frequency sampling models aim to extract information from high-frequency indicators so that a dependent variable observed at a lower frequency can be modeled and forecasted. Due to a more precise identification of the relationships among the variables, more accurate prediction is expected. Based on the results of both the estimated regression with frequency-adjusted data and the MIDAS model for the years 1978-2003 as a training period, wheat import value was positively related to internal production and the exchange rate, while the relative price variable had an inverse relation with Iran's wheat import value. Based on conventional statistics such as RMSE, MAD, MAPE and statistical significance, MIDAS models using annual wheat import value, internal production and relative price together with the seasonal exchange rate significantly improve the prediction of annual wheat import value for the years 2004-2008 as a testing period. Hence, it is recommended that prediction approaches with mixed-frequency data be applied to improve the modeling and prediction of agricultural import values, especially for strategic import products.
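
    A minimal MIDAS-style sketch, on synthetic data and much simpler than the paper's generalized ARDL setup: annual import value is regressed on quarterly exchange rates through a one-parameter exponential Almon lag weight, chosen here by grid search. Variable names and all values are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
years, quarters = 30, 4
fx = rng.normal(0, 1, (years, quarters))             # quarterly exchange rate
w_true = np.exp(0.8 * np.arange(quarters))
w_true /= w_true.sum()                               # true Almon weights (theta=0.8)
imports = fx @ w_true + rng.normal(0, 0.1, years)    # annual import value

def fit_mse(theta):
    # Exponential Almon weights aggregate quarterly data to annual frequency.
    w = np.exp(theta * np.arange(quarters))
    w /= w.sum()
    x = fx @ w
    beta = np.dot(x, imports) / np.dot(x, x)         # OLS slope, no intercept
    return np.mean((imports - beta * x) ** 2)

grid = np.linspace(-2, 2, 81)
best = grid[np.argmin([fit_mse(th) for th in grid])]
print(round(float(best), 2))                         # should land near theta=0.8
```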

  9. An insulin infusion advisory system for type 1 diabetes patients based on non-linear model predictive control methods.

    Science.gov (United States)

    Zarkogianni, Konstantia; Mougiakakou, Stavroula G; Prountzou, Aikaterini; Vazeou, Andriani; Bartsocas, Christos S; Nikita, Konstantina S

    2007-01-01

    In this paper, an Insulin Infusion Advisory System (IIAS) for Type 1 diabetes patients who use insulin pumps for Continuous Subcutaneous Insulin Infusion (CSII) is presented. The purpose of the system is to estimate the appropriate insulin infusion rates. The system is based on a Non-Linear Model Predictive Controller (NMPC) which uses a hybrid model. The model comprises a Compartmental Model (CM), which simulates the absorption of glucose into the blood due to meal intakes, and a Neural Network (NN), which simulates the glucose-insulin kinetics. The NN is a Recurrent NN (RNN) trained with the Real Time Recurrent Learning (RTRL) algorithm. The output of the model consists of short-term glucose predictions and provides input to the NMPC, in order for the latter to estimate the optimum insulin infusion rates. For the development and evaluation of the IIAS, data generated from a Mathematical Model (MM) of a Type 1 diabetes patient have been used. The proposed control strategy is evaluated under multiple meal disturbances, various noise levels and additional time delays. The results indicate that the implemented IIAS is capable of handling multiple meals corresponding to realistic meal profiles, large noise levels and time delays.
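
    The receding-horizon logic of an NMPC can be sketched with a deliberately crude stand-in: a made-up one-step linear glucose model is rolled out over a short horizon for a grid of candidate infusion rates, and the best first move is applied. The real controller uses the hybrid CM+RNN model and a proper optimiser; everything below is illustrative.

```python
import numpy as np

def glucose_step(g, u, meal):
    # Toy dynamics: meals raise glucose, insulin lowers it (mg/dL per step).
    return g + 0.5 * meal - 2.0 * u

def mpc_step(g, meals, target=100.0, horizon=5):
    candidates = np.linspace(0.0, 5.0, 26)        # candidate infusion rates
    best_u, best_cost = 0.0, np.inf
    for u in candidates:                          # constant-input rollouts
        gg, cost = g, 0.0
        for k in range(horizon):
            gg = glucose_step(gg, u, meals[k])
            cost += (gg - target) ** 2            # track the glucose target
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u                                 # apply only the first move

# High glucose after a meal -> a positive rate; low glucose -> no insulin.
u_high = mpc_step(180.0, meals=[10, 0, 0, 0, 0])
u_low = mpc_step(90.0, meals=[0, 0, 0, 0, 0])
print(u_high > u_low)
```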

  10. Influence of delayed muscle reflexes on spinal stability: model-based predictions allow alternative interpretations of experimental data.

    Science.gov (United States)

    Liebetrau, Anne; Puta, Christian; Anders, Christoph; de Lussanet, Marc H E; Wagner, Heiko

    2013-10-01

    Model-based calculations indicate that reflex delay and reflex gain are both important for spinal stability. Experimental results demonstrate that chronic low back pain is associated with delayed muscle reflex responses of the trunk muscles. The aim of the present study was to analyze the influence of such time-delayed reflexes on stability using a simple biomechanical model. Additionally, we compared the model-based predictions with experimental data from chronic low back pain patients and healthy controls obtained using surface electromyography. Linear stability methods were applied to the musculoskeletal model, which was extended with a time-delayed reflex model. Lateral external perturbations were simulated around equilibrium to investigate the effects of reflex delay and gain on the stability of the human lumbar spine. The model simulations predicted that increased reflex delays require a reduction of the reflex gain to avoid spinal instability. The experimental data support this dependence for the investigated abdominal muscles in chronic low back pain patients and healthy control subjects. The delay and gain dependence showed that a delayed reflex latency can have a relevant influence on spinal stability if subjects do not adapt their reflex amplitudes. Based on the model and the experimental results, the relationship between muscle reflex response latency and the maximum reflex amplitude should be considered when evaluating (patho)physiological data. We recommend that training procedures focus on speeding up the delayed reflex response as well as on increasing the amplitude of these reflexes. Copyright © 2013 Elsevier B.V. All rights reserved.
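
    The predicted delay/gain trade-off can be demonstrated on a toy system, not the paper's musculoskeletal model: an unstable first-order "trunk" is stabilised by delayed proportional feedback, and for a fixed delay a gain that is too high destabilises the loop. All parameter values are made up.

```python
import numpy as np

def max_excursion(gain, delay_steps, dt=0.001, steps=4000):
    x = np.zeros(steps)
    x[0] = 0.01                                       # small lateral perturbation
    for t in range(1, steps):
        u = -gain * x[max(t - 1 - delay_steps, 0)]    # delayed "reflex" torque
        x[t] = x[t - 1] + dt * (5.0 * x[t - 1] + u)   # unstable plant (pole at +5)
        if abs(x[t]) > 1e6:                           # cap runaway trajectories
            return 1e6
    return float(np.max(np.abs(x)))

low = max_excursion(gain=10.0, delay_steps=50)        # 50 ms delay, moderate gain
high = max_excursion(gain=400.0, delay_steps=50)      # same delay, excessive gain
print(low < high)   # the high-gain loop should be the unstable one
```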

  11. Density prediction and dimensionality reduction of mid-term electricity demand in China: A new semiparametric-based additive model

    International Nuclear Information System (INIS)

    Shao, Zhen; Yang, Shan-Lin; Gao, Fei

    2014-01-01

    Highlights: • A new stationary-time-series smoothing-based semiparametric model is established. • A novel semiparametric additive model based on piecewise smoothing is proposed. • We model the uncertainty of the data distribution for mid-term electricity forecasting. • We provide efficient long-horizon simulation and extraction for external variables. • We provide stable and accurate density predictions for mid-term electricity demand. - Abstract: Accurate mid-term electricity demand forecasting is critical for efficient electricity planning, budgeting and operating decisions. Mid-term electricity demand forecasting is notoriously complicated, since demand is subject to a range of external drivers, such as climate change and economic development, and exhibits complex monthly, seasonal, and annual variations. Conventional models are based on the assumption that the original data are stationary and normally distributed, which generally falls short of explaining the actual demand pattern. This paper proposes a new semiparametric additive model that, in addition to considering the uncertainty of the data distribution, includes practical discussions covering the applications of the external variables. To effectively detach the multi-dimensional volatility of mid-term demand, a novel piecewise smoothing method which allows reduction of the data dimensionality is developed. In addition, a semiparametric procedure that makes use of a bootstrap algorithm for density forecasting and model estimation is presented. Two typical cases in China are presented to verify the effectiveness of the proposed methodology. The results suggest that both meteorological and economic variables play a critical role in mid-term electricity consumption prediction in China, while the extracted economic factor is adequate to reveal the potentially complex relationship between electricity consumption and economic fluctuation. Overall, the proposed model can be easily applied to mid-term demand forecasting, and

  12. A Physically-based Model for Predicting Soil Moisture Dynamics in Wetlands

    Science.gov (United States)

    Kalin, L.; Rezaeianzadeh, M.; Hantush, M. M.

    2017-12-01

    Wetlands are promoted as green infrastructure because of their capacity to retain and filter water. In wetlands going through wetting/drying cycles, simulation of nutrient processes and biogeochemical reactions in both ponded and unsaturated wetland zones is needed for an improved understanding of wetland functioning for water quality improvement. The physically-based WetQual model can simulate the hydrology and the nutrient and sediment cycles in natural and constructed wetlands. WetQual can be used in continuously flooded environments or in wetlands going through wetting/drying cycles. Currently, WetQual relies on the 1-D Richards equation (RE) to simulate soil moisture dynamics in unponded parts of wetlands. This is unnecessarily complex because, as a lumped model, WetQual only requires average moisture contents. In this paper, we present a depth-averaged solution of the 1-D RE, called DARE, to simulate the average moisture content of the root zone and the layer below it in unsaturated parts of wetlands. DARE converts the PDE of the RE into ODEs and is thus computationally more efficient. The method takes into account plant uptake and groundwater table fluctuations, which are commonly overlooked in hydrologic models dealing with wetlands undergoing wetting and drying cycles. For verification purposes, DARE solutions were compared to the Hydrus-1D model, which uses the full RE, both under the gravity-drainage-only assumption and with the full-term equations. Model verifications were carried out under various top boundary conditions: no ponding at all, ponding at some point, and no rain. Through hypothetical scenarios and actual atmospheric data, the utility of DARE was demonstrated. The gravity-drainage version of DARE agreed well with Hydrus-1D under all the assigned atmospheric boundary conditions of varying fluxes for all examined soil types (sandy loam, loam, sandy clay loam, and sand). The full-term version of DARE offers reasonable accuracy compared to the
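
    The gravity-drainage idea behind a depth-averaged solution can be sketched as a single "bucket" ODE for the mean root-zone moisture, with a Brooks-Corey-style drainage flux and a constant plant-uptake sink. This is an illustration of the concept, not DARE itself; all parameter values are invented.

```python
import numpy as np

def root_zone(theta0, rain, dt=0.1, days=50):
    theta_s, theta_r = 0.45, 0.05        # saturated / residual moisture content
    ks, b = 0.5, 4.0                     # sat. conductivity (m/day), pore-size index
    uptake, depth = 0.004, 0.5           # plant uptake (m/day), root-zone depth (m)
    theta = theta0
    out = []
    for _ in range(int(days / dt)):      # explicit Euler on the depth-averaged ODE
        se = (theta - theta_r) / (theta_s - theta_r)
        drain = ks * max(se, 0.0) ** (3 + 2 * b)     # gravity drainage flux (m/day)
        dtheta = (rain - drain - uptake) / depth
        theta = min(max(theta + dt * dtheta, theta_r), theta_s)
        out.append(theta)
    return np.array(out)

dry = root_zone(0.40, rain=0.0)          # drying cycle: drains toward residual
wet = root_zone(0.40, rain=0.01)         # steady rain: settles at a wetter balance
print(dry[-1] < wet[-1])                 # no-rain case should end up drier
```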

  13. A model to predict radon exhalation from walls to indoor air ba